Thin provisioning is a mechanism that lets you allocate an LVM volume with a large virtual size for file systems while actually occupying only a small physical size. The physical size can be autoextended in use once the thin pool reaches a threshold specified in /etc/lvm/lvm.conf.
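For example, a thin pool and a thin volume can be created like this (a minimal sketch; vg00, thinpool and thinlv are hypothetical names, not part of this series):

    # create a 100M physical pool in vg00, then a thin LV whose 1G
    # virtual size is far larger than the pool's physical size
    lvcreate -L 100M -T vg00/thinpool
    lvcreate -V 1G -T vg00/thinpool -n thinlv
    mkfs.ext4 /dev/vg00/thinlv    # the file system sees the 1G virtual size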
Three pieces of work need to be handled when enabling lvm2 thinp for kdump:
1) Check whether the dump target device or directory is a thinp device.
2) Monitor the thin pool and autoextend its size when it reaches the threshold during kdump.
3) If thin pool size-autoextend fails, user space programs will not notice due to buffered IO. So "sync -f vmcore" is used during kdump in the 2nd kernel to force vmcore data to disk.
According to my testing, the peak memory consumption for lvm2 thinp comes during the thin pool size-autoextend phase. For Fedora and RHEL 9, the default crashkernel value is enough, but for RHEL 8 the default crashkernel value 1G-4G:160M is not, so it should be handled specially.
v1 -> v2:
1) Modified the usage of the lvs cmd when checking whether the target is an lvm2 thinp device.
2) Removed the sync mount flag for lvm2 thinp targets during kdump; use "sync -f vmcore" to force a data sync instead, and handle the error if it fails.
Tao Liu (4):
  Add lvm2 thin provision dump target checker
  Add lvm2-monitor.service for kdump when lvm2 thinp enabled
  lvm.conf should be check modified if lvm2 thinp enabled
  Fix the sync issue for dump_fs
 dracut-kdump.sh             | 10 ++++++++--
 dracut-lvm2-monitor.service | 15 +++++++++++++++
 dracut-module-setup.sh      | 16 ++++++++++++++++
 kdump-lib-initramfs.sh      | 20 ++++++++++++++++++++
 kdumpctl                    |  1 +
 kexec-tools.spec            |  2 ++
 6 files changed, 62 insertions(+), 2 deletions(-)
 create mode 100644 dracut-lvm2-monitor.service
We need to check whether a directory or a device is an lvm2 thinp target.

First, we use get_block_dump_target() to convert the dump path into a block device, then we check whether the device is an lvm2 thinp target via the lvs command.
Signed-off-by: Tao Liu <ltao@redhat.com>
---
 kdump-lib-initramfs.sh | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
diff --git a/kdump-lib-initramfs.sh b/kdump-lib-initramfs.sh
index 84e6bf7..92404f4 100755
--- a/kdump-lib-initramfs.sh
+++ b/kdump-lib-initramfs.sh
@@ -131,3 +131,22 @@ is_fs_dump_target()
 {
 	[ -n "$(kdump_get_conf_val "ext[234]|xfs|btrfs|minix")" ]
 }
+
+is_lvm2_thinp_device()
+{
+	_device_path=$1
+	_lvm2_thin_device=$(lvs -S 'lv_layout=sparse && lv_layout=thin' \
+		--nosuffix --noheadings -o vg_name,lv_name "$_device_path" 2>/dev/null)
+
+	[ -n "$_lvm2_thin_device" ] && return $?
+}
+
+is_lvm2_thinp_dump_target()
+{
+	_target=$(get_block_dump_target)
+	if [ -n "$_target" ]; then
+		is_lvm2_thinp_device "$_target"
+	else
+		return 1
+	fi
+}
\ No newline at end of file
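As a usage sketch of the two new helpers (the device path /dev/vg00/thinlv is hypothetical, and the library is assumed to be sourced from the current directory):

    # check one specific block device...
    . ./kdump-lib-initramfs.sh
    if is_lvm2_thinp_device /dev/vg00/thinlv; then
        echo "device is an lvm2 thin volume"
    fi
    # ...or resolve and check the configured kdump dump target
    is_lvm2_thinp_dump_target && echo "dump target is thinly provisioned"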
If lvm2 thinp is enabled for kdump, lvm2-monitor.service is needed to monitor and autoextend the size of the thin pool. Otherwise a vmcore dumped to a target without enough space will be incomplete and unusable for further analysis.

In this patch, lvm2-monitor.service is started before kdump-capture.service in the 2nd kernel, then stopped in the kdump post.d phase, so thin pool monitoring and size-autoextend are ensured during kdump.
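As a rough sketch of what the unit does and how it could be verified by hand in the kdump environment (vg00 is a hypothetical VG name):

    # what ExecStart effectively runs: ask dmeventd to monitor all VGs
    /usr/sbin/lvm vgchange --monitor y
    # the seg_monitor column should now report "monitored" for the thin pool
    lvs -o lv_name,seg_monitor vg00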
Signed-off-by: Tao Liu <ltao@redhat.com>
---
 dracut-lvm2-monitor.service | 15 +++++++++++++++
 dracut-module-setup.sh      | 16 ++++++++++++++++
 kexec-tools.spec            |  2 ++
 3 files changed, 33 insertions(+)
 create mode 100644 dracut-lvm2-monitor.service
diff --git a/dracut-lvm2-monitor.service b/dracut-lvm2-monitor.service
new file mode 100644
index 0000000..88e79e1
--- /dev/null
+++ b/dracut-lvm2-monitor.service
@@ -0,0 +1,15 @@
+[Unit]
+Description=Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
+Documentation=man:dmeventd(8) man:lvcreate(8) man:lvchange(8) man:vgchange(8)
+After=initrd.target initrd-parse-etc.service sysroot.mount
+After=dracut-initqueue.service dracut-pre-mount.service dracut-mount.service dracut-pre-pivot.service
+Before=initrd-cleanup.service kdump-capture.service shutdown.target local-fs-pre.target
+DefaultDependencies=no
+Conflicts=shutdown.target
+
+[Service]
+Type=oneshot
+Environment=LVM_SUPPRESS_LOCKING_FAILURE_MESSAGES=1
+ExecStart=/usr/sbin/lvm vgchange --monitor y
+ExecStop=/usr/sbin/lvm vgchange --monitor n
+RemainAfterExit=yes
\ No newline at end of file
diff --git a/dracut-module-setup.sh b/dracut-module-setup.sh
index c319fc2..19c0f46 100755
--- a/dracut-module-setup.sh
+++ b/dracut-module-setup.sh
@@ -1016,6 +1016,20 @@ remove_cpu_online_rule() {
 	sed -i '/SUBSYSTEM=="cpu"/d' "$file"
 }
 
+kdump_install_lvm2_monitor_service()
+{
+	inst "$moddir/lvm2-monitor.service" "$systemdsystemunitdir/lvm2-monitor.service"
+	systemctl -q --root "$initdir" add-wants initrd.target lvm2-monitor.service
+
+	# We should stop lvm2-monitor service after kdump. SIGTERM is ignored
+	# by dmeventd when a device is monitored. So before stopping dmeventd,
+	# devices shall be unmonitored. This saves the waiting time between
+	# systemd-shutdown sending SIGTERM and SIGKILL to remaining processes.
+	mkdir -p "${initdir}/etc/kdump/post.d"
+	echo "systemctl stop lvm2-monitor" > "${initdir}/etc/kdump/post.d/stop-lvm2-monitor.sh"
+	chmod +x "${initdir}/etc/kdump/post.d/stop-lvm2-monitor.sh"
+}
+
 install() {
 	local arch
@@ -1058,6 +1072,8 @@ install() {
 	inst "$moddir/kdump.sh" "/usr/bin/kdump.sh"
 	inst "$moddir/kdump-capture.service" "$systemdsystemunitdir/kdump-capture.service"
 	systemctl -q --root "$initdir" add-wants initrd.target kdump-capture.service
+	is_lvm2_thinp_dump_target &&
+		kdump_install_lvm2_monitor_service
 	# Replace existing emergency service and emergency target
 	cp "$moddir/kdump-emergency.service" "$initdir/$systemdsystemunitdir/emergency.service"
 	cp "$moddir/kdump-emergency.target" "$initdir/$systemdsystemunitdir/emergency.target"
diff --git a/kexec-tools.spec b/kexec-tools.spec
index 6673000..5f4344d 100644
--- a/kexec-tools.spec
+++ b/kexec-tools.spec
@@ -60,6 +60,7 @@ Source109: dracut-early-kdump-module-setup.sh
 Source200: dracut-fadump-init-fadump.sh
 Source201: dracut-fadump-module-setup.sh
+Source202: dracut-lvm2-monitor.service
 
 %ifarch ppc64 ppc64le
 Requires(post): servicelog
@@ -240,6 +241,7 @@ cp %{SOURCE102} $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99kdumpb
 cp %{SOURCE104} $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99kdumpbase/%{remove_dracut_prefix %{SOURCE104}}
 cp %{SOURCE106} $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99kdumpbase/%{remove_dracut_prefix %{SOURCE106}}
 cp %{SOURCE107} $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99kdumpbase/%{remove_dracut_prefix %{SOURCE107}}
+cp %{SOURCE202} $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99kdumpbase/%{remove_dracut_prefix %{SOURCE202}}
 chmod 755 $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99kdumpbase/%{remove_dracut_prefix %{SOURCE100}}
 chmod 755 $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99kdumpbase/%{remove_dracut_prefix %{SOURCE101}}
 mkdir -p -m755 $RPM_BUILD_ROOT/etc/kdump-adv-conf/kdump_dracut_modules/99earlykdump
lvm2 relies on /etc/lvm/lvm.conf to determine its behaviour. Important configs such as thin_pool_autoextend_threshold and thin_pool_autoextend_percent will be used during kdump in the 2nd kernel. So if the file is modified, the initramfs should be rebuilt to include the latest version.
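For illustration, these settings live in the activation section of lvm.conf; a minimal sketch (the values are examples, not defaults shipped by this series):

    activation {
        # start extending the pool once it is 70% full...
        thin_pool_autoextend_threshold = 70
        # ...growing it by 20% of its current size each time
        thin_pool_autoextend_percent = 20
    }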
Signed-off-by: Tao Liu <ltao@redhat.com>
---
 kdump-lib-initramfs.sh | 1 +
 kdumpctl               | 1 +
 2 files changed, 2 insertions(+)
diff --git a/kdump-lib-initramfs.sh b/kdump-lib-initramfs.sh
index 92404f4..8ea2d66 100755
--- a/kdump-lib-initramfs.sh
+++ b/kdump-lib-initramfs.sh
@@ -8,6 +8,7 @@ DEFAULT_SSHKEY="/root/.ssh/kdump_id_rsa"
 KDUMP_CONFIG_FILE="/etc/kdump.conf"
 FENCE_KDUMP_CONFIG_FILE="/etc/sysconfig/fence_kdump"
 FENCE_KDUMP_SEND="/usr/libexec/fence_kdump_send"
+LVM_CONF="/etc/lvm/lvm.conf"
 
 # Read kdump config in well formated style
 kdump_read_conf()
diff --git a/kdumpctl b/kdumpctl
index 6188d47..b157eb8 100755
--- a/kdumpctl
+++ b/kdumpctl
@@ -383,6 +383,7 @@ check_files_modified()
 	# HOOKS is mandatory and need to check the modification time
 	files="$files $HOOKS"
+	is_lvm2_thinp_dump_target && files="$files $LVM_CONF"
 
 	check_exist "$files" && check_executable "$EXTRA_BINS" || return 2
 
 	for file in $files; do
Previously, the sync in dump_fs was problematic: it always returns success, according to man 2 sync. So it cannot detect the case where the dump target is full and not all vmcore data has been written back to disk, which leaves the vmcore incomplete while reporting the misleading log "saving vmcore complete".

In this patch, we use "sync -f vmcore" instead, which returns an error if syncfs on the dump target fails. In this way, vmcore sync related failures, such as a failed autoextend of the lvm2 thin pool, can be detected and handled properly.
Signed-off-by: Tao Liu <ltao@redhat.com>
---
 dracut-kdump.sh | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/dracut-kdump.sh b/dracut-kdump.sh
index f4456a1..343e2b0 100755
--- a/dracut-kdump.sh
+++ b/dracut-kdump.sh
@@ -175,8 +175,14 @@ dump_fs()
 	_dump_exitcode=$?
 	if [ $_dump_exitcode -eq 0 ]; then
 		mv "$_dump_fs_path/vmcore-incomplete" "$_dump_fs_path/vmcore"
-		sync
-		dinfo "saving vmcore complete"
+		sync -f "$_dump_fs_path/vmcore"
+		_sync_exitcode=$?
+		if [ $_sync_exitcode -eq 0 ]; then
+			dinfo "saving vmcore complete"
+		else
+			derror "sync vmcore failed, exitcode:$_sync_exitcode"
+			return 1
+		fi
 	else
 		derror "saving vmcore failed, exitcode:$_dump_exitcode"
 		return 1
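For reference, a minimal sketch of the behavior this patch relies on (the mount point /mnt/thin is hypothetical): plain sync(1) calls sync(2), which always succeeds, while "sync -f FILE" calls syncfs(2) on the filesystem containing FILE and propagates write-back errors through its exit status.

    # buffered writes can appear to succeed even when the pool is full
    cp /tmp/vmcore /mnt/thin/vmcore
    # force write-back and check whether the data really hit the disk
    if ! sync -f /mnt/thin/vmcore; then
        echo "syncfs failed, vmcore is likely incomplete" >&2
    fi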
On Tue, May 24, 2022 at 10:12:40PM +0800, Tao Liu wrote:
Thin provisioning is a mechanism that lets you allocate an LVM volume with a large virtual size for file systems while actually occupying only a small physical size. The physical size can be autoextended in use once the thin pool reaches a threshold specified in /etc/lvm/lvm.conf.

Three pieces of work need to be handled when enabling lvm2 thinp for kdump:

- Check whether the dump target device or directory is a thinp device.

- Monitor the thin pool and autoextend its size when it reaches the threshold during kdump.
Have you tested that the auto-extend logic is working fine?

Secondly, can you please also test what happens if the thin pool gets full and there is no more space for extension. Does the system hang? If yes, that's not a good situation; we want to reboot back automatically after saving the dump.

If it does hang, we need to add logic to configure xfs error handling so that it does not retry infinitely.
- If thin pool size-autoextend fails, user space programs will not notice due to buffered IO. So "sync -f vmcore" is used during kdump in the 2nd kernel to force vmcore data to disk.
It would be good if this "sync -f vmcore" fix is sent as a separate patch. This is needed anyway irrespective of thin pool support.
Thanks
Vivek
[..]
On Tue, May 24, 2022 at 10:24:32AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 10:12:40PM +0800, Tao Liu wrote:
[..]
Hi Vivek,
Have you tested that the auto-extend logic is working fine?

Secondly, can you please also test what happens if the thin pool gets full and there is no more space for extension. Does the system hang? If yes, that's not a good situation; we want to reboot back automatically after saving the dump.

If it does hang, we need to add logic to configure xfs error handling so that it does not retry infinitely.
I have tested the autoextend logic locally, and it works fine. If the thin pool gets full, it will first hit "device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode", then 60s later switch to "device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode", then continue. So it will hang for 60s at most; please see the dmesg log below.
As Zdenek suggested, we can use "lvchange --errorwhenfull y|n vgname/thinpoolname" to skip the waiting, but I think the wait is not harmful for now.
[ 3.627063] kdump[506]: saving vmcore-dmesg.txt complete
[ 3.635826] kdump[508]: saving vmcore
[ 4.248430] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 3.875066] lvm[440]: Insufficient free space: 3 extents needed, but only 2 available
[ 3.886365] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 3.896824] lvm[440]: WARNING: Thin pool vg00-thinpool-tpool data is now 95.12% full.
[ 4.436433] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 3.980617] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 3.992068] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 4.066058] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 4.083457] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 4.092540] lvm[440]: WARNING: Thin pool vg00-thinpool-tpool data is now 100.00% full.
[ 4.271764] kdump.sh[509]: ^MChecking for memory holes : [ 0.0 %] / ^MChecking for memory s
[ 4.295810] kdump.sh[509]: The dumpfile is saved to /kdumproot/mnt/var/crash/127.0.0.1-2022-05-24-14:42:53//vmcore-incomplete.
[ 4.304127] kdump.sh[509]: makedumpfile Completed.
[ 12.564731] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 12.581630] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 42.569627] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 42.587529] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 67.085196] device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode
[ 67.126206] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 28672)
[ 67.142081] Buffer I/O error on device dm-3, logical block 26625
[ 67.143062] Buffer I/O error on device dm-3, logical block 26626
[ 67.143062] Buffer I/O error on device dm-3, logical block 26627
[ 67.143062] Buffer I/O error on device dm-3, logical block 26628
[ 67.173719] Buffer I/O error on device dm-3, logical block 26629
[ 67.174703] Buffer I/O error on device dm-3, logical block 26630
[ 67.174703] Buffer I/O error on device dm-3, logical block 26631
[ 67.174703] Buffer I/O error on device dm-3, logical block 26632
[ 67.203393] Buffer I/O error on device dm-3, logical block 26633
[ 67.204375] Buffer I/O error on device dm-3, logical block 26634
[ 67.218086] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 29184)
[ 67.230461] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writ:
[ 67.243262] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 32768)
[ 67.257469] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 33280)
[ 67.270994] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 34816)
[ 67.283991] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 36864)
[ 67.296368] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 37376)
[ 67.310025] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 38912)
[ 67.323535] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 43520)
[ 67.338894] JBD2: Detected IO errors while flushing file data on dm-3-8
[ 66.856873] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 66.872656] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 66.884612] kdump.sh[512]: sync: error syncing '/kdumproot/mnt/var/crash/127.0.0.1-2022-05-24-14:42:53//vmcore': Input/output error
[ 66.902511] kdump[514]: sync vmcore failed, exitcode:1
[ 66.914007] kdump[516]: saving vmcore failed
[..]

It would be good if this "sync -f vmcore" fix is sent as a separate patch. This is needed anyway irrespective of thin pool support.
OK, I will split it into a separate patch.
Thanks,
Tao Liu
[..]
On Tue, May 24, 2022 at 11:02:12PM +0800, Tao Liu wrote:
On Tue, May 24, 2022 at 10:24:32AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 10:12:40PM +0800, Tao Liu wrote:
[..]

I have tested the autoextend logic locally, and it works fine. [..]
[ 67.126206] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 28672)
You are testing with the EXT4 filesystem. Please test with XFS as well. While using xfs on top of thin devices, docker had run into issues when the thin pool got full. EXT4 seemed to be fine; I think it remounted the file system read-only and continued.
Thanks
Vivek
[..]
On Tue, May 24, 2022 at 11:12:23AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 11:02:12PM +0800, Tao Liu wrote:
On Tue, May 24, 2022 at 10:24:32AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 10:12:40PM +0800, Tao Liu wrote:
[..]

[ 67.126206] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 28672)
You are testing with the EXT4 filesystem. Please test with XFS as well. While using xfs on top of thin devices, docker had run into issues when the thin pool got full. EXT4 seemed to be fine; I think it remounted the file system read-only and continued.
Yes, I have tested with xfs, and it works fine too. Please see the dmesg log:
[ 3.426025] XFS (dm-3): Mounting V5 Filesystem
[ 3.599421] XFS (dm-3): Starting recovery (logdev: internal)
[ 3.624443] XFS (dm-3): Ending recovery (logdev: internal)
[ 3.647705] xfs filesystem being mounted at /kdumproot/mnt supports timestamps until 2038 (0x7fffffff)
....
[ 3.552870] kdump[514]: saving vmcore
[ 4.082238] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 4.142192] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 3.679640] lvm[446]: WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the size of whole volume group (48.00 MiB).
[ 3.690712] lvm[446]: Size of logical volume vg00/thinpool_tdata changed from 12.00 MiB (3 extents) to 20.00 MiB (5 extents).
[ 4.211179] device-mapper: thin: 253:2: switching pool to write mode
[ 4.227481] device-mapper: thin: 253:2: growing the data device from 192 to 320 blocks
[ 3.789801] lvm[446]: Logical volume vg00/thinpool_tdata successfully resized.
[ 4.269802] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 4.335903] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 3.948400] lvm[446]: WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the size of whole volume group (48.00 MiB).
[ 3.972514] lvm[446]: Size of logical volume vg00/thinpool_tdata changed from 20.00 MiB (5 extents) to 32.00 MiB (8 extents).
[ 4.468128] device-mapper: thin: 253:2: switching pool to write mode
[ 4.485914] device-mapper: thin: 253:2: growing the data device from 320 to 512 blocks
[ 4.049409] lvm[446]: Logical volume vg00/thinpool_tdata successfully resized.
[ 4.528457] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 4.605303] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 4.224389] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 4.238011] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 4.251287] lvm[446]: WARNING: Thin pool vg00-thinpool-tpool data is now 100.00% full.
[ 4.272262] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 4.293131] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 4.339268] kdump.sh[515]: ^MChecking for memory holes : [ 0.0 %] / ^MChecking for memory holes : [100.0 %] | ^MExcluding unnecessary pages : [100.0 %] \ ^MCopying data : [ 97.8 %] - eta: 0s^MCopying data : [100.0 %] / eta: 0s^MCopying data : [100.0 %] | eta: 0s
[ 4.365706] kdump.sh[515]: The dumpfile is saved to /kdumproot/mnt/var/crash/127.0.0.1-2022-05-25-00:26:53//vmcore-incomplete.
[ 4.374532] kdump.sh[515]: makedumpfile Completed.
[ 23.344673] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 23.361064] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 53.344919] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 53.372743] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 67.034930] device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode
[ 67.049185] dm-3: writeback error on inode 134, offset 176128, sector 1664
[ 67.049388] dm-3: writeback error on inode 134, offset 21590016, sector 42448
[ 67.058358] dm-3: writeback error on inode 134, offset 25784320, sector 58752
[ 67.067440] dm-3: writeback error on inode 134, offset 29978624, sector 64128
[ 67.077099] dm-3: writeback error on inode 134, offset 34172928, sector 67328
[ 67.086122] dm-3: writeback error on inode 134, offset 34340864, sector 82816
[ 67.095074] dm-3: writeback error on inode 134, offset 37294080, sector 88704
[ 67.104907] dm-3: writeback error on inode 134, offset 40325120, sector 90368
[ 67.120230] dm-3: writeback error on inode 134, offset 770048, sector 1784
[ 66.670306] kdump.sh[562]: sync: error syncing '/kdumproot/mnt/var/crash/127.0.0.1-2022-05-25-00:26:53//vmcore': Input/output error
[ 66.694671] kdump[570]: sync vmcore failed, exitcode:1
[ 66.709217] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 66.757336] kdump[572]: saving vmcore failed
Thanks,
Tao Liu
[..]
On Wed, May 25, 2022 at 09:54:30AM +0800, Tao Liu wrote:
On Tue, May 24, 2022 at 11:12:23AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 11:02:12PM +0800, Tao Liu wrote:
On Tue, May 24, 2022 at 10:24:32AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 10:12:40PM +0800, Tao Liu wrote:
[..]

Yes, I have tested with xfs, and it works fine too. Please see the dmesg log:

[..]
[ 66.694671] kdump[570]: sync vmcore failed, exitcode:1
[ 66.757336] kdump[572]: saving vmcore failed
And system rebooted by itself after this?
This is strange. Has the xfs default behavior changed now? It used to retry infinitely by default if the thin pool got full.
Eric, would you have any idea what has changed?
Thanks
Vivek
[..]
On 5/25/22 6:42 AM, Vivek Goyal wrote:
Yes, I have tested with xfs, and it works fine too. Please see the dmesg log:

[..]

[ 67.034930] device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode
[ 67.049185] dm-3: writeback error on inode 134, offset 176128, sector 1664

[..]

[ 66.757336] kdump[572]: saving vmcore failed
And system rebooted by itself after this?

This is strange. Has the xfs default behavior changed now? It used to retry infinitely by default if the thin pool got full.

Eric, would you have any idea what has changed?
I don't actually see any XFS errors at all.... oh, ok - the "writeback error" messages are from iomap, presumably generated by xfs calls. Data IO errors won't be critical to xfs; the (now tunable) error retry behavior is only for metadata IO failures.
It does look like the sync properly reported the failure though, yes? Which was the goal of this exercise, I think.
-Eric
On Wed, May 25, 2022 at 09:05:11AM -0500, Eric Sandeen wrote:
On 5/25/22 6:42 AM, Vivek Goyal wrote:
Yes, I have tested with xfs, which works fine too. Please see the dmesg log:
[ 3.426025] XFS (dm-3): Mounting V5 Filesystem [ 3.599421] XFS (dm-3): Starting recovery (logdev: internal) [ 3.624443] XFS (dm-3): Ending recovery (logdev: internal) [ 3.647705] xfs filesystem being mounted at /kdumproot/mnt supports timestamps until 2038 (0x7fffffff) .... [ 3.552870] kdump[514]: saving vmcore [ 4.082238] device-mapper: thin: 253:2: reached low water mark for data device: sending event. [ 4.142192] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode [ 3.679640] lvm[446]: WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the size of whole volume group (48.00 MiB). [ 3.690712] lvm[446]: Size of logical volume vg00/thinpool_tdata changed from 12.00 MiB (3 extents) to 20.00 MiB (5 extents). [ 4.211179] device-mapper: thin: 253:2: switching pool to write mode [ 4.227481] device-mapper: thin: 253:2: growing the data device from 192 to 320 blocks [ 3.789801] lvm[446]: Logical volume vg00/thinpool_tdata successfully resized. [ 4.269802] device-mapper: thin: 253:2: reached low water mark for data device: sending event. [ 4.335903] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode [ 3.948400] lvm[446]: WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the size of whole volume group (48.00 MiB). [ 3.972514] lvm[446]: Size of logical volume vg00/thinpool_tdata changed from 20.00 MiB (5 extents) to 32.00 MiB (8 extents). [ 4.468128] device-mapper: thin: 253:2: switching pool to write mode [ 4.485914] device-mapper: thin: 253:2: growing the data device from 320 to 512 blocks [ 4.049409] lvm[446]: Logical volume vg00/thinpool_tdata successfully resized. [ 4.528457] device-mapper: thin: 253:2: reached low water mark for data device: sending event. [ 4.605303] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode [ 4.224389] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available [ 4.238011] lvm[446]: Failed command for vg00-thinpool-tpool. [ 4.251287] lvm[446]: WARNING: Thin pool vg00-thinpool-tpool data is now 100.00% full. [ 4.272262] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available [ 4.293131] lvm[446]: Failed command for vg00-thinpool-tpool. [ 4.339268] kdump.sh[515]: ^MChecking for memory holes : [ 0.0 %] / ^MChecking for memory holes : [100.0 %] | ^MExcluding unnecessary pages : [100.0 %] \ ^MCopying data : [ 97.8 %] - eta: 0s^MCopying data : [100.0 %] / eta: 0s^MCopying data : [100.0 %] | eta: 0s [ 4.365706] kdump.sh[515]: The dumpfile is saved to /kdumproot/mnt/var/crash/127.0.0.1-2022-05-25-00:26:53//vmcore-incomplete. [ 4.374532] kdump.sh[515]: makedumpfile Completed. [ 23.344673] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available [ 23.361064] lvm[446]: Failed command for vg00-thinpool-tpool. [ 53.344919] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available [ 53.372743] lvm[446]: Failed command for vg00-thinpool-tpool. 
[ 67.034930] device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode [ 67.049185] dm-3: writeback error on inode 134, offset 176128, sector 1664 [ 67.049388] dm-3: writeback error on inode 134, offset 21590016, sector 42448 [ 67.058358] dm-3: writeback error on inode 134, offset 25784320, sector 58752 [ 67.067440] dm-3: writeback error on inode 134, offset 29978624, sector 64128 [ 67.077099] dm-3: writeback error on inode 134, offset 34172928, sector 67328 [ 67.086122] dm-3: writeback error on inode 134, offset 34340864, sector 82816 [ 67.095074] dm-3: writeback error on inode 134, offset 37294080, sector 88704 [ 67.104907] dm-3: writeback error on inode 134, offset 40325120, sector 90368 [ 67.120230] dm-3: writeback error on inode 134, offset 770048, sector 1784 [ 66.670306] kdump.sh[562]: sync: error syncing '/kdumproot/mnt/var/crash/127.0.0.1-2022-05-25-00:26:53//vmcore': Input/output error [ 66.694671] kdump[570]: sync vmcore failed, exitcode:1 [ 66.709217] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available [ 66.757336] kdump[572]: saving vmcore failed
And the system rebooted by itself after this?
This is strange. Has xfs default behavior changed now? It used to retry infinitely by default if the thin pool got full.
Eric, would you have any idea what has changed?
I don't actually see any XFS errors at all.... oh, ok - the "writeback error" messages are from iomap, presumably generated by xfs calls. Data IO errors won't be critical to xfs; the (now tunable) error retry behavior is only for metadata IO failures.
It does look like the sync properly reported the failure though, yes? Which was the goal of this exercise, I think.
Yes that was one of the goals.
Won't xfs flush metadata as well as part of sync (if there is any)? So it is possible that the thin pool is full, the metadata cannot be flushed, and then xfs will hang? IOW, it looks like we might hang some times and not others.
So tweaking the XFS error knobs is still a good idea, IMHO, to make sure we do not hang while saving the dump. If we can't save the dump because the thin pool is full, we should give an error and reboot back.
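For reference, the knobs in question live under sysfs; a minimal sketch (the dm-3 device name mirrors the log above and is illustrative only):

    # Illustrative sketch: make XFS metadata IO errors fail fast instead
    # of retrying forever, so a full thin pool cannot hang the dump.
    # "dm-3" is a placeholder for the actual dump target device.
    dev=dm-3
    # Fail pending metadata retries at unmount rather than hanging:
    echo 1 > /sys/fs/xfs/$dev/error/fail_at_unmount
    # 0 retries = fail immediately on EIO/ENOSPC metadata errors:
    echo 0 > /sys/fs/xfs/$dev/error/metadata/EIO/max_retries
    echo 0 > /sys/fs/xfs/$dev/error/metadata/ENOSPC/max_retries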
Thanks Vivek
On Wed, May 25, 2022 at 7:43 PM Vivek Goyal vgoyal@redhat.com wrote:
On Wed, May 25, 2022 at 09:54:30AM +0800, Tao Liu wrote:
On Tue, May 24, 2022 at 11:12:23AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 11:02:12PM +0800, Tao Liu wrote:
On Tue, May 24, 2022 at 10:24:32AM -0400, Vivek Goyal wrote:
On Tue, May 24, 2022 at 10:12:40PM +0800, Tao Liu wrote:
Thin provisioning is a mechanism that lets you allocate an LVM volume with a large virtual size for file systems while actually occupying only a small physical size. The physical size can be auto-extended in use once the thin pool reaches a threshold specified in /etc/lvm/lvm.conf.
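For readers unfamiliar with the setup, a minimal sketch of such a volume (the vg00 name and sizes are illustrative, not from this series):

    # Create a thin pool with a small physical size, then a thin volume
    # whose virtual size far exceeds the pool backing it:
    lvcreate -L 100M -T vg00/thinpool
    lvcreate -V 1G -T vg00/thinpool -n thinvol

    # Autoextend is controlled in the activation section of /etc/lvm/lvm.conf:
    #   thin_pool_autoextend_threshold = 70   # extend once the pool is 70% full
    #   thin_pool_autoextend_percent = 20     # grow the pool by 20% each time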
Three tasks need to be handled when enabling lvm2 thinp for kdump:
- Check if the dump target device or directory is a thinp device.
- Monitor the thin pool and autoextend its size when it reaches the threshold during kdump.
Hi Vivek,
Have you tested that the auto-extend logic is working fine?
Secondly, can you please also test what happens if the thin pool gets full and there is no more space for extension? Does the system hang? If yes, that's not a good situation. We want to reboot back automatically after saving the dump.
If it does hang, we need to add logic to configure xfs error handling so that it does not retry infinitely.
I have tested the autoextend logic locally, and it works fine. If the thin pool gets full, it first reaches "device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode", then 60s later it switches to "device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode" and continues. So it will hang for at most about 60s; please see the dmesg log below.
As Zdenek suggested, we can use "lvchange --errorwhenfull y|n vgname/thinpoolname" to skip the waiting, but I think the wait is not harmful for now.
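For reference, a sketch of the knobs involved (the vg00/thinpool name is illustrative):

    # The 60s grace period is the dm-thin module parameter
    # no_space_timeout (in seconds, 0 = queue IO forever):
    cat /sys/module/dm_thin_pool/parameters/no_space_timeout

    # Zdenek's suggestion: error out immediately instead of queuing IO:
    lvchange --errorwhenfull y vg00/thinpool
    # lv_when_full reports the current setting ("queue" or "error"):
    lvs -o lv_name,lv_when_full vg00/thinpool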
[ 3.627063] kdump[506]: saving vmcore-dmesg.txt complete
[ 3.635826] kdump[508]: saving vmcore
[ 4.248430] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 3.875066] lvm[440]: Insufficient free space: 3 extents needed, but only 2 available
[ 3.886365] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 3.896824] lvm[440]: WARNING: Thin pool vg00-thinpool-tpool data is now 95.12% full.
[ 4.436433] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 3.980617] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 3.992068] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 4.066058] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 4.083457] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 4.092540] lvm[440]: WARNING: Thin pool vg00-thinpool-tpool data is now 100.00% full.
[ 4.271764] kdump.sh[509]: Checking for memory holes : [ 0.0 %] / Checking for memory s
[ 4.295810] kdump.sh[509]: The dumpfile is saved to /kdumproot/mnt/var/crash/127.0.0.1-2022-05-24-14:42:53//vmcore-incomplete.
[ 4.304127] kdump.sh[509]: makedumpfile Completed.
[ 12.564731] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 12.581630] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 42.569627] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 42.587529] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 67.085196] device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode
[..]
[ 67.126206] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 28672)
You are testing with the EXT4 filesystem. Please test with XFS as well. While using xfs on top of thin devices, docker ran into issues when the thin pool got full. EXT4 seemed to be fine; I think it remounted the file system read-only and continued.
Yes, I have tested with xfs, which works fine too. Please see the dmesg log:
[ 3.426025] XFS (dm-3): Mounting V5 Filesystem
[ 3.599421] XFS (dm-3): Starting recovery (logdev: internal)
[ 3.624443] XFS (dm-3): Ending recovery (logdev: internal)
[ 3.647705] xfs filesystem being mounted at /kdumproot/mnt supports timestamps until 2038 (0x7fffffff)
....
[ 3.552870] kdump[514]: saving vmcore
[ 4.082238] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 4.142192] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 3.679640] lvm[446]: WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the size of whole volume group (48.00 MiB).
[ 3.690712] lvm[446]: Size of logical volume vg00/thinpool_tdata changed from 12.00 MiB (3 extents) to 20.00 MiB (5 extents).
[ 4.211179] device-mapper: thin: 253:2: switching pool to write mode
[ 4.227481] device-mapper: thin: 253:2: growing the data device from 192 to 320 blocks
[ 3.789801] lvm[446]: Logical volume vg00/thinpool_tdata successfully resized.
[ 4.269802] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 4.335903] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 3.948400] lvm[446]: WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the size of whole volume group (48.00 MiB).
[ 3.972514] lvm[446]: Size of logical volume vg00/thinpool_tdata changed from 20.00 MiB (5 extents) to 32.00 MiB (8 extents).
[ 4.468128] device-mapper: thin: 253:2: switching pool to write mode
[ 4.485914] device-mapper: thin: 253:2: growing the data device from 320 to 512 blocks
[ 4.049409] lvm[446]: Logical volume vg00/thinpool_tdata successfully resized.
[ 4.528457] device-mapper: thin: 253:2: reached low water mark for data device: sending event.
[ 4.605303] device-mapper: thin: 253:2: switching pool to out-of-data-space (queue IO) mode
[ 4.224389] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 4.238011] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 4.251287] lvm[446]: WARNING: Thin pool vg00-thinpool-tpool data is now 100.00% full.
[ 4.272262] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 4.293131] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 4.339268] kdump.sh[515]: Checking for memory holes : [ 0.0 %] / Checking for memory holes : [100.0 %] | Excluding unnecessary pages : [100.0 %] \ Copying data : [ 97.8 %] - eta: 0s Copying data : [100.0 %] / eta: 0s Copying data : [100.0 %] | eta: 0s
[ 4.365706] kdump.sh[515]: The dumpfile is saved to /kdumproot/mnt/var/crash/127.0.0.1-2022-05-25-00:26:53//vmcore-incomplete.
[ 4.374532] kdump.sh[515]: makedumpfile Completed.
[ 23.344673] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 23.361064] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 53.344919] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 53.372743] lvm[446]: Failed command for vg00-thinpool-tpool.
[ 67.034930] device-mapper: thin: 253:2: switching pool to out-of-data-space (error IO) mode
[ 67.049185] dm-3: writeback error on inode 134, offset 176128, sector 1664
[ 67.049388] dm-3: writeback error on inode 134, offset 21590016, sector 42448
[ 67.058358] dm-3: writeback error on inode 134, offset 25784320, sector 58752
[ 67.067440] dm-3: writeback error on inode 134, offset 29978624, sector 64128
[ 67.077099] dm-3: writeback error on inode 134, offset 34172928, sector 67328
[ 67.086122] dm-3: writeback error on inode 134, offset 34340864, sector 82816
[ 67.095074] dm-3: writeback error on inode 134, offset 37294080, sector 88704
[ 67.104907] dm-3: writeback error on inode 134, offset 40325120, sector 90368
[ 67.120230] dm-3: writeback error on inode 134, offset 770048, sector 1784
[ 66.670306] kdump.sh[562]: sync: error syncing '/kdumproot/mnt/var/crash/127.0.0.1-2022-05-25-00:26:53//vmcore': Input/output error
[ 66.694671] kdump[570]: sync vmcore failed, exitcode:1
[ 66.709217] lvm[446]: Insufficient free space: 4 extents needed, but only 2 available
[ 66.757336] kdump[572]: saving vmcore failed
And the system rebooted by itself after this?
Yes, according to my test, it rebooted normally. BTW, the test system is Fedora-Cloud-Base-33-1.2.x86_64.raw.xz, whose kernel version is 5.8.15-301.fc33.x86_64. Please see the attachment for a complete dmesg log.
Thanks, Tao Liu
This is strange. Has xfs default behavior changed now? It used to retry infinitely by default if the thin pool got full.
Eric, would you have any idea what has changed?
Thanks Vivek
Thanks, Tao Liu
Thanks Vivek
[ 67.142081] Buffer I/O error on device dm-3, logical block 26625
[ 67.143062] Buffer I/O error on device dm-3, logical block 26626
[ 67.143062] Buffer I/O error on device dm-3, logical block 26627
[ 67.143062] Buffer I/O error on device dm-3, logical block 26628
[ 67.173719] Buffer I/O error on device dm-3, logical block 26629
[ 67.174703] Buffer I/O error on device dm-3, logical block 26630
[ 67.174703] Buffer I/O error on device dm-3, logical block 26631
[ 67.174703] Buffer I/O error on device dm-3, logical block 26632
[ 67.203393] Buffer I/O error on device dm-3, logical block 26633
[ 67.204375] Buffer I/O error on device dm-3, logical block 26634
[ 67.218086] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 29184)
[ 67.230461] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writ:
[ 67.243262] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 32768)
[ 67.257469] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 33280)
[ 67.270994] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 34816)
[ 67.283991] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 36864)
[ 67.296368] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 37376)
[ 67.310025] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 38912)
[ 67.323535] EXT4-fs warning (device dm-3): ext4_end_bio:345: I/O error 3 writing to inode 32389 starting block 43520)
[ 67.338894] JBD2: Detected IO errors while flushing file data on dm-3-8
[ 66.856873] lvm[440]: Insufficient free space: 4 extents needed, but only 2 available
[ 66.872656] lvm[440]: Failed command for vg00-thinpool-tpool.
[ 66.884612] kdump.sh[512]: sync: error syncing '/kdumproot/mnt/var/crash/127.0.0.1-2022-05-24-14:42:53//vmcore': Input/output error
[ 66.902511] kdump[514]: sync vmcore failed, exitcode:1
[ 66.914007] kdump[516]: saving vmcore failed
- If thin pool size-autoextend fails, the user-space program will not know due to buffered IO. So "sync -f vmcore" is used during kdump in the 2nd kernel to force-sync vmcore data to disk.
It would be good if this "sync -f vmcore" fix were sent as a separate patch. It is needed anyway, irrespective of thin pool support.
OK, I will split it into a separate patch.
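Roughly, the split-out part would be a fragment like this (a sketch only; the variable and helper names here are assumptions, not the final code):

    # In the 2nd kernel, after makedumpfile finishes: "sync -f FILE"
    # (coreutils) syncs the file system containing FILE, so buffered-IO
    # failures from a full thin pool surface as a nonzero exit code.
    sync -f "$_dump_path/vmcore"
    _ret=$?
    if [ $_ret -ne 0 ]; then
        echo "sync vmcore failed, exitcode:$_ret"
        return 1    # caller falls back to the configured failure action
    fi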
Thanks, Tao Liu
Thanks Vivek
According to my testing, the memory-consuming step for lvm2 thinp is the thin pool size-autoextend phase. For Fedora and RHEL 9, the default crashkernel value is enough. But for RHEL 8, the default crashkernel value 1G-4G:160M is not enough, so it needs to be handled specially.
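For example, on such a system one might raise the reservation like this (the 256M figure is a guess for illustration, not a tested value):

    # Illustrative only: enlarge the crashkernel reservation where the
    # default 1G-4G:160M is too small for the thin pool autoextend phase.
    grubby --update-kernel=ALL --args="crashkernel=1G-4G:256M"
    # The new reservation takes effect after a reboot.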
v1 -> v2:
- Modified the usage of the lvs cmd when checking if the target is an lvm2 thinp device.
- Removed the sync mount flag for lvm2 thinp targets during kdump; use "sync -f vmcore" to force-sync the data instead, and handle the error if it fails.
Tao Liu (4):
  Add lvm2 thin provision dump target checker
  Add lvm2-monitor.service for kdump when lvm2 thinp enabled
  lvm.conf should be check modified if lvm2 thinp enabled
  Fix the sync issue for dump_fs
 dracut-kdump.sh             | 10 ++++++++--
 dracut-lvm2-monitor.service | 15 +++++++++++++++
 dracut-module-setup.sh      | 16 ++++++++++++++++
 kdump-lib-initramfs.sh      | 20 ++++++++++++++++++++
 kdumpctl                    |  1 +
 kexec-tools.spec            |  2 ++
 6 files changed, 62 insertions(+), 2 deletions(-)
 create mode 100644 dracut-lvm2-monitor.service
--
2.33.1