On Thu, 19 May 2022 at 01:03, Vivek Goyal <vgoyal@redhat.com> wrote:
> On Wed, May 18, 2022 at 10:58:57AM -0500, Eric Sandeen wrote:
> On 5/18/22 10:45 AM, Mike Snitzer wrote:
> > On Tue, May 17 2022 at 2:34P -0400,
> > Tao Liu <ltao@redhat.com> wrote:
>
> ...
>
> >> I'm not an expert on fs and IO. When async IO fails in the 2nd kernel
> >> for kdump, the reason is mostly insufficient file system space, and it
> >> worked well for kdump in the past. However, in the case of lvm2
> >> thinp, I found the userspace program no longer gets informed
> >> asynchronously when thin pool autoextend fails. So I turned to the sync
> >> flag, which forces the userspace program to wait for the data to be
> >> synced to disk before exiting, and it works well according to my test.
> >> But it does cost more writing time than async...
> >
> > I've consulted Eric Sandeen (cc'd) and he agrees there is a more
> > generic problem in the kdump userspace if it isn't able to detect
> > write failures without using the "sync" mount option.
> >
> > kdump's job is to dump system memory as carefully as possible. Yet
> > you're saying kdump is using buffered IO. Buffered IO creates
> > additional memory use, and associated pages don't get written back
> > until writeback kicks in, hence the delayed nature of write failures.
>
> (cc: vivek, who may have thoughts on buffered IO vs direct IO)
> [ cc Bao and Dave ]
>
> > But those failures can happen with non-thinp block devices too.
>
> Exactly.
>
> > Seems logical that kdump should be using direct IO to write system
> > memory back (rather than buffered IO). Again, using buffered IO
> > creates more memory use -- so you're needlessly increasing memory
> > reserve for kdump's use by using buffered IO.
> >
> > Please take a closer look at how to properly detect write failures.
> > Doing so properly should make kdump work (as in, detect the write
> > failure) on any storage if it runs out of space.
> >
> > Please see:
> > https://lwn.net/Articles/457667/
> >
> > Anything short of that and you're papering over a general kdump
> > problem by making it seem like a thinp-specific problem.
>
> Yep. If you want to know if your buffered write succeeded or not, you
> have several options, including calling fsync() and handling any errors.
> The article above goes into much more detail.
>
> This should be done regardless of the storage type; there is no need to
> single out thinp here. IO can fail for any number of reasons, on any type
> of storage. You should always make the proper data integrity syscalls
> and do error handling if you care about the results of your buffered write()
> calls.
> Right. I think the key thing is to call fsync() after saving the vmcore
> is finished and, based on the result of fsync(), determine whether the
> file made it to disk or not.
>
> If fsync() is not reporting errors properly, then that's an issue we
> should try to fix. I remember Jeff Layton did fixes in this area to
> report errors if page writeback failed.
>
> I am not sure if the kdump scripts call fsync() or not. I think that's
> the first thing we should verify. And if we are not doing it, fix it.
> Bao/Dave, you are probably in the best position to answer that.
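
(For reference, a minimal sketch of the write()-then-fsync() pattern
described above, with the return values of write(), fsync() and close()
all checked. The function name, buffer size and error handling are
illustrative only, not actual makedumpfile or cp code.)

/* Illustrative only: buffered writes followed by fsync(), with the
 * error checked at each step, as discussed above. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int copy_and_check(int in_fd, const char *out_path)
{
        char buf[64 * 1024];
        ssize_t n;
        int out_fd = open(out_path, O_WRONLY | O_CREAT | O_TRUNC, 0600);

        if (out_fd < 0) {
                perror("open");
                return -1;
        }
        while ((n = read(in_fd, buf, sizeof(buf))) > 0) {
                if (write(out_fd, buf, n) != n) {
                        perror("write");        /* immediate write failure */
                        goto fail;
                }
        }
        if (n < 0) {
                perror("read");
                goto fail;
        }
        /*
         * The writes above may only have dirtied the page cache; a later
         * writeback failure (e.g. a thin pool that failed to autoextend)
         * is reported here.
         */
        if (fsync(out_fd) < 0) {
                perror("fsync");
                goto fail;
        }
        if (close(out_fd) < 0) {        /* close() can report errors too */
                perror("close");
                return -1;
        }
        return 0;
fail:
        close(out_fd);
        return -1;
}
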
Hi Vivek,

Checking the kdump scripts, we have the snippet below: a separate "sync"
command is run after saving the vmcore, which is good because it covers
all the core collectors. If we used fsync() we would need to patch both
makedumpfile and cp, so maybe that is not necessary:
$CORE_COLLECTOR /proc/vmcore "$_dump_fs_path/vmcore-incomplete"
_dump_exitcode=$?
if [ $_dump_exitcode -eq 0 ]; then
    mv "$_dump_fs_path/vmcore-incomplete" "$_dump_fs_path/vmcore"
    sync
    dinfo "saving vmcore complete"
> Direct I/O and O_SYNC I/O are all slow options. We also have a
> requirement to save the dump ASAP and reboot back into the original
> kernel so that we don't keep the machine down for a long duration.
>
> So if the problem is about error detection, fsync() should solve it.
> Using direct I/O or O_SYNC is an option users should be able to choose
> if they wish. I don't think kdump provides any mechanism to do direct
> I/O, but it might allow passing the "sync" mount option so that
> effectively every I/O will use O_SYNC. Bao, do I get it right?
Not sure if the sync mount option will slow down saving the vmcore;
since we are the only user in the kdump kernel I suspect it will be
fine, but it may need some actual testing. Another thing is that if we
use sync I/O, the saving progress indicator will be more accurate.
Let's see how others think about this :)
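
(For illustration, the per-open alternatives mentioned above differ only
in the open() flags: O_SYNC makes every write synchronous, while
O_DIRECT bypasses the page cache but requires aligned buffers. A rough
sketch, not existing collector code; the 4096-byte alignment is an
assumption, the real requirement depends on the underlying device.)

/* Sketch: opening the dump file with O_SYNC or O_DIRECT instead of
 * relying on a "sync" mount option. Illustrative only. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>

int open_dump_file(const char *path, int use_direct)
{
        int flags = O_WRONLY | O_CREAT | O_TRUNC;

        flags |= use_direct ? O_DIRECT : O_SYNC;
        return open(path, flags, 0600);
}

/* O_DIRECT additionally requires aligned buffers and I/O sizes. */
void *alloc_dio_buffer(size_t size)
{
        void *buf = NULL;

        if (posix_memalign(&buf, 4096, size))
                return NULL;
        return buf;
}
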
> Thanks
> Vivek
Thanks
Dave