Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=9e8dec2f387d8eaf48195…
Commit: 9e8dec2f387d8eaf48195ef38ab7699d4a8385ed
Parent: 50130328450d1f624d30438ca835d40e0d4f942d
Author: Jonathan Brassow <jbrassow(a)redhat.com>
AuthorDate: Thu Nov 2 09:49:35 2017 -0500
Committer: Jonathan Brassow <jbrassow(a)redhat.com>
CommitterDate: Thu Nov 2 09:49:35 2017 -0500
testsuite: Fix problem when checking RAID4/5/6 for mismatches.
The lvchange-raid[456].sh test checks that mismatches can be detected
properly. It does this by writing garbage to the back half of one of
the legs directly. When performing a "check" or "repair" of mismatches,
MD does a good job going directly to disk and bypassing any buffers that
may prevent it from seeing mismatches. However, in the case of RAID4/5/6
we have the stripe cache to contend with and this is not bypassed. Thus,
mismatches which have /just/ happened to an area that now populates the
stripe cache may be overlooked. This isn't a serious issue, however,
because the stripe cache is short-lived and reasonably small. So, while
there may be a small window of time between the disk changing underneath
the RAID array and when you run a "check"/"repair" - causing a mismatch
to be missed - that would be no worse than if a user had simply run a
"check" a few seconds before the disk changed. IOW, it simply isn't worth
making a fuss over dropping the stripe cache before beginning a "check" or
"repair" (which we actually did attempt to do a while back).
So, to get the test running smoothly, we simply deactivate and reactivate
the LV to force the stripe cache to be dropped, and then proceed. We could
just as easily have waited a few seconds for the stripe cache to empty.
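For reference, a minimal sketch of the resulting test sequence (not part of
the patch; $vg, $lv, $device, $size and $seek are placeholders as used
elsewhere in the test script):

    # Corrupt the back half of one leg directly, behind the LV's back
    dd if=/dev/urandom of="$device" bs=1k count=$size seek=$seek
    sync

    # Cycle the LV so the stripe cache is dropped before scrubbing
    lvchange -an $vg/$lv
    lvchange -ay $vg/$lv

    # "check" reports mismatches but does not correct them;
    # lvs can then show the mismatch count
    lvchange --syncaction check $vg/$lv
    lvs -o+raid_mismatch_count $vg/$lv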
---
test/shell/lvchange-raid.sh | 11 +++++++++++
1 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/test/shell/lvchange-raid.sh b/test/shell/lvchange-raid.sh
index 8c22481..604b7f7 100644
--- a/test/shell/lvchange-raid.sh
+++ b/test/shell/lvchange-raid.sh
@@ -43,6 +43,9 @@ run_writemostly_check() {
printf "#\n#\n#\n# %s/%s (%s): run_writemostly_check\n#\n#\n#\n" \
$vg $lv $segtype
+
+ # I've seen this sync fail. When it does, it looks like the sync
+ # thread has not been started... haven't repro'ed it yet.
aux wait_for_sync $vg $lv
# No writemostly flag should be there yet.
@@ -169,6 +172,14 @@ run_syncaction_check() {
dd if=/dev/urandom of="$device" bs=1k count=$size seek=$seek
sync
+ # Cycle the LV so we don't grab stripe cache buffers instead
+ # of reading disk. This can happen with RAID 4/5/6. You
+ # may think this is bad because those buffers could prevent
+ # us from seeing bad disk blocks; however, the stripe cache
+ # is not long-lived. (RAID1/10 are checked immediately.)
+ lvchange -an $vg/$lv
+ lvchange -ay $vg/$lv
+
# "check" should find discrepancies but not change them
# 'lvs' should show results
lvchange --syncaction check $vg/$lv
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=50130328450d1f624d304…
Commit: 50130328450d1f624d30438ca835d40e0d4f942d
Parent: 58b763c99cb7620b1cc2313a9f0dccd98def53db
Author: Jonathan Brassow <jbrassow(a)redhat.com>
AuthorDate: Thu Nov 2 08:53:48 2017 -0500
Committer: Jonathan Brassow <jbrassow(a)redhat.com>
CommitterDate: Thu Nov 2 08:53:48 2017 -0500
testsuite: Add and document a 'should' for "idle" -> "recover" RAID test
When a "recover" is just starting for a RAID LV, it is possible to get
"idle" for the sync action if the status is issued quickly enough. This
is fine, the MD thread just hasn't gotten things going yet. However,
the /need/ for a "recover" should be marked in md->recovery and it would
be simple enough to fix the kernel so this doesn't happen. May eventually
want a separate bug for this, but for now it fits with RHBZ 1507719.
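A minimal sketch of the check as it now stands (not part of the patch; the
array "a" holds the whitespace-split 'dmsetup status' fields for the RAID LV,
as in the surrounding script, and 'should' is the testsuite helper that
reports a failed check as a warning instead of aborting the test):

    a=( $(dmsetup status "$vg-$lv") )
    [ "${a[5]}" = "Aa" ]              # health characters must already match
    should [ "${a[7]}" = "recover" ]  # may still read "idle" right after start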
---
test/shell/lvconvert-raid-status-validation.sh | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/test/shell/lvconvert-raid-status-validation.sh b/test/shell/lvconvert-raid-status-validation.sh
index 9ffaaf3..3e91d23 100644
--- a/test/shell/lvconvert-raid-status-validation.sh
+++ b/test/shell/lvconvert-raid-status-validation.sh
@@ -83,7 +83,12 @@ while true; do
# If the sync operation ("recover" in this case) is not
# finished, then it better be as follows:
[ "${a[5]}" = "Aa" ]
- [ "${a[7]}" = "recover" ]
+
+ # Might be transitioning from "idle" to "recover".
+ # Kernel could check mddev->recovery for the intent to
+ # begin a "recover" and report that... probably would be
+ # better. RHBZ 1507719
+ should [ "${a[7]}" = "recover" ]
else
# Tough to tell the INVALID case,
# Before starting sync thread: "Aa X/X recover"
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=373372c8ab3749bc76ced…
Commit: 373372c8ab3749bc76ced37cec04b00aae6e5979
Parent: 0ba393954296ee9521fffc83fbf4507060f08ffd
Author: Zdenek Kabelac <zkabelac(a)redhat.com>
AuthorDate: Wed Nov 1 00:51:39 2017 +0100
Committer: Zdenek Kabelac <zkabelac(a)redhat.com>
CommitterDate: Wed Nov 1 00:55:24 2017 +0100
lv_manip: hide layered LV temporarily
Since vg_validate() now rejects LVs without segments, and
insert_layer_for_segments_on_pv() receives a freshly created
'layer_lv' that has no segments yet, the LV needs to be hidden
from vg->lvs while _align_segment_boundary_to_pe_range() runs,
because that path ends up in vg_validate() and now requires
the VG to be consistent. The LV is then put back into vg->lvs.
---
lib/metadata/lv_manip.c | 10 ++++++++++
1 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/lib/metadata/lv_manip.c b/lib/metadata/lv_manip.c
index f0e492b..c4c2fdf 100644
--- a/lib/metadata/lv_manip.c
+++ b/lib/metadata/lv_manip.c
@@ -7054,9 +7054,19 @@ int insert_layer_for_segments_on_pv(struct cmd_context *cmd,
layer_lv->name, lv_where->name,
pvl ? pv_dev_name(pvl->pv) : "any");
+ /* Temporarily hide layer_lv from the vg->lvs list
+ * so that lv_split_segment() passes vg_validate(),
+ * since layer_lv has an empty segment list here */
+ if (!(lvl = find_lv_in_vg(lv_where->vg, layer_lv->name)))
+ return_0;
+ dm_list_del(&lvl->list);
+
if (!_align_segment_boundary_to_pe_range(lv_where, pvl))
return_0;
+ /* Put layer_lv back into vg->lvs */
+ dm_list_add(&lv_where->vg->lvs, &lvl->list);
+
/* Work through all segments on the supplied PV */
dm_list_iterate_items(seg, &lv_where->segments) {
for (s = 0; s < seg->area_count; s++) {