Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=d64377b5bbd2d170f1d63…
Commit: d64377b5bbd2d170f1d6365157e2245d90a63c1a
Parent: 0000000000000000000000000000000000000000
Author: Marian Csontos <mcsontos@redhat.com>
AuthorDate: 2018-10-30 09:10 +0000
Committer: Marian Csontos <mcsontos@redhat.com>
CommitterDate: 2018-10-30 09:10 +0000
annotated tag: v2_02_182 has been created
at d64377b5bbd2d170f1d6365157e2245d90a63c1a (tag)
tagging b93aded0212a903f8a1e9e897dada1f34aa3de43 (commit)
replaces v2_02_180
Release 2.02.182
Important bugfix release.
This addresses an issue introduced in 2.02.178 where a concurrent write at
the start of the logical volume and at the end of the VG metadata could
result in overwriting the LV data. (A sketch of the implied bounds check
follows the signature block.)
Also, MD RAID version 1.0 devices are now correctly filtered out, preventing
them from being seen as duplicate PVs.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)
iQIcBAABAgAGBQJb2CAQAAoJELkRJDHlCQOf9pMP/0QfI9y8xYxzrtvnMhErb/us
FBdhgIxMEWkjcs48bY00DQLDWBgxTfngWeU2i01a64yNfo092JEa+UeOel/43KsV
cxE9pYK9JyQNak8sXjNXZm2ldUYlkgenEDdw4CF3PEkEmyukdbUicwFE8pAr1bdt
ntxG8qWD7mxekurO9ybCZ9Sjhhl6ZwCxuJxfGpYmSLQYmubL5JDGUGRnSo6Xjv9l
CxhJ1yhwR0oSBdleH4OHA7zdu+9+12fDt0JQRTXYAGUo0oZXOtzcrboudbp3G5RB
I2F4e1cmDShb7nommHF0gaWvzcw1gHuuSGdyVHn4EkEGk8XHVyKTbmadbJNru9pj
pmHmX6e3+8gIP3h14QNq+KP4poHQBCCqMsdjbCv9ay1+85v4i1xPUkA20Cgc/KLA
S1oWSOAKk6o44bETBwhYNmM03oMtnSdqYDo7rsIT9Qmtd6+/Aeda32Zj4RM9z+Eb
NZx21xQ6V+N7d02GmT1Z1XN7BgYBuEzm7iQtmm5TvZBBqcqjC1g3TF8oltUW/ALA
dt/H1V5VBZGqA4yu8vPMHwn4PSAW/gh9UFET5obfxWyyBRjFDkM4wBYe5LoovwW9
fG/Bty7u8+LIgyUb/GMUBPtoPOnXCyDHeZ3CBz4BmlASMHqxEi9aap4PBiwpY48I
Jq8hEGpeAjv/vVo80xXT
=Zp35
-----END PGP SIGNATURE-----
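For context on the first fix ("metadata: prevent writing beyond metadata
area"): the VG metadata area occupies a reserved region near the front of the
PV, with LV data following it, so a metadata write that runs past its area can
clobber the first LV extents. Below is a minimal C sketch of the kind of
bounds check the fix implies -- the struct and function names are illustrative,
not lvm2's actual API:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical descriptor of an on-disk metadata area on a PV. */
struct mda_region {
	uint64_t start;	/* byte offset of the metadata area */
	uint64_t size;	/* byte size of the metadata area */
};

/* Refuse any metadata write that would spill past the area into LV data.
 * Written to be overflow-safe: all subtractions are guarded. */
static int mda_write_in_bounds(const struct mda_region *mda,
			       uint64_t offset, uint64_t len)
{
	if (offset < mda->start || len > mda->size ||
	    offset - mda->start > mda->size - len) {
		fprintf(stderr, "Metadata write at %llu (+%llu) exceeds area.\n",
			(unsigned long long)offset, (unsigned long long)len);
		return 0;
	}
	return 1;
}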
Bryn M. Reeves (1):
dmsetup: fix error propagation in _display_info_cols()
David Teigland (12):
lvconvert: restrict command matching for no option variant
lvconvert: improve text about splitmirrors
vgcreate: close exclusive fd after pvcreate
mirrors: fix read_only_volume_list
bcache: reduce MAX_IO to 256
lvmetad: improve scan for pvscan all
lvmetad: fix pvs for many devices
WHATS_NEW: recent fixes
scan: use full md filter when md 1.0 devices are present
scan: enable full md filter when md 1.0 devices are present
tests: add new test for lvm on md devices
metadata: prevent writing beyond metadata area
Heinz Mauelshagen (8):
lvconvert: reject conversions of LVs under snapshot
lvconvert: reject conversions on raid1 split trackchanges SubLVs
lvconvert: reject conversions on raid1 split trackchanges LVs
lvconvert: fix regression preventing direct striped conversion
lvconvert: fix conversion attempts to linear
test: add striped -> raid0 test script
lvconvert: avoid superfluous interim raid type
lvconvert: fix interim segtype regression on raid6 conversions
Marian Csontos (15):
post-release
test: Check flavour is used and exists
Add BSD 2-Clause License
WHATS_NEW
build: make generate
pre-release
pre-release
post-release
Merge branch '2018-06-01-stable' of git://sourceware.org/git/lvm2 into 2018-06-01-stable
spec: Add vdo plugin for dmeventd
spec: Disable python bindings on newer versions
Update WHATS_NEW
spec: Fix python and applib interactions
Update WHATS_NEW
pre-release
Peter Rajnoha (1):
scripts: add After=rbdmap.service to {lvm2-activation-net,blk-availability}.service
Zdenek Kabelac (8):
dmeventd: base vdo plugin
dmeventd: rebase to stable branch
cache: drop metadata_format validation
mirror: fix splitmirrors for mirror type
tests: splitmirror for mirror type
tests: check policy mq can be used with format2
dmeventd: lvm2 plugin uses envvar registry
tests: check activation of many thin-pool
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=8df2dd66ce53817b250f5…
Commit: 8df2dd66ce53817b250f5dd6bd05fda3a38ac26e
Parent: 16ae968d24b4fe3264dc9b46063345ff2846957b
Author: Heinz Mauelshagen <heinzm@redhat.com>
AuthorDate: Thu Oct 25 14:30:32 2018 +0200
Committer: Heinz Mauelshagen <heinzm@redhat.com>
CommitterDate: Thu Oct 25 14:35:56 2018 +0200
Revert "raid: fix left behind SubLVs"
This reverts commit 16ae968d24b4fe3264dc9b46063345ff2846957b.
We need to come up with a better fix, because we fall short of
wiping all known signatures when not using the wipe_lv API.
(Both code paths are quoted below for contrast.)
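The distinction, visible in the hunks below: the restored wipe_lv() path
zeroes through lvm2's activation layer, which can wipe all known signatures,
while the reverted dev_set() path only wrote raw zeroes to the underlying PV
sectors. Both calls are quoted from the diff below, with added comments:

/* Restored path: zero the first sector through the activation layer,
 * which handles signature wiping properly. */
if (!wipe_lv(lvl->lv, (struct wipe_params) { .do_zero = 1, .zero_sectors = 1 })) {
	log_error("Failed to zero %s.", display_lvname(lvl->lv));
	r = 0;
	goto out;
}

/* Reverted path: raw zeroing of the first PE via dev_set(), which
 * knows nothing about other on-disk signatures. */
if (!dev_set(pv->dev, offset, cur_sectors << 9, DEV_IO_LOG, 0))
	return_0;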
---
lib/metadata/raid_manip.c | 140 +++++++++++++++++++++------------------------
1 files changed, 66 insertions(+), 74 deletions(-)
diff --git a/lib/metadata/raid_manip.c b/lib/metadata/raid_manip.c
index 25960a3..3944dc4 100644
--- a/lib/metadata/raid_manip.c
+++ b/lib/metadata/raid_manip.c
@@ -689,90 +689,86 @@ static int _lv_update_and_reload_list(struct logical_volume *lv, int origin_only
return r;
}
-/*
- * HM Helper
- *
- * clear first @sectors of @lv
- *
- * Presuming we are holding an exclusive lock, we can clear the first
- * @sectors of the (metadata) @lv directly on the respective PE(s) thus
- * avoiding write+commit+activation of @lv altogether and hence superfluous
- * latencies or left behind visible SubLVs on a command/system crash.
+/* Makes on-disk metadata changes
+ * If LV is active:
+ * clear first block of device
+ * otherwise:
+ * activate, clear, deactivate
*
* Returns: 1 on success, 0 on failure
- *
- * HM FIXME: share with lv_manip.c!
*/
-static int _clear_lv(struct logical_volume *lv, uint32_t sectors)
+static int _clear_lvs(struct dm_list *lv_list)
{
- struct lv_segment *seg;
- struct physical_volume *pv;
- uint64_t offset;
- uint32_t cur_sectors;
+ struct lv_list *lvl;
+ struct volume_group *vg = NULL;
+ unsigned i = 0, sz = dm_list_size(lv_list);
+ char *was_active;
+ int r = 1;
- if (test_mode())
+ if (!sz) {
+ log_debug_metadata(INTERNAL_ERROR "Empty list of LVs given for clearing.");
return 1;
+ }
- if (!sectors)
- return_0;
+ dm_list_iterate_items(lvl, lv_list) {
+ if (!lv_is_visible(lvl->lv)) {
+ log_error(INTERNAL_ERROR
+ "LVs must be set visible before clearing.");
+ return 0;
+ }
+ vg = lvl->lv->vg;
+ }
+
+ if (test_mode())
+ return 1;
/*
- * Rather than wiping lv->size, we can simply wipe the first 4KiB
- * to remove the superblock of any previous RAID devices. It is much
- * quicker than wiping a potentially larger metadata device completely.
+ * FIXME: only vg_[write|commit] if LVs are not already written
+ * as visible in the LVM metadata (which is never the case yet).
*/
- log_verbose("Clearing metadata area of %s.", display_lvname(lv));
-
- dm_list_iterate_items(seg, &lv->segments) {
- if (seg_type(seg, 0) != AREA_PV)
- return_0;
- if (seg->area_count != 1)
- return_0;
- if (!(pv = seg_pv(seg, 0)))
- return_0;
- if (!pv->pe_start) /* Be careful */
- return_0;
-
- offset = (pv->pe_start + seg_pe(seg, 0) * pv->pe_size) << 9;
- cur_sectors = min(sectors, pv->pe_size);
- sectors -= cur_sectors;
- if (!dev_set(pv->dev, offset, cur_sectors << 9, DEV_IO_LOG, 0))
- return_0;
-
- if (!sectors)
- break;
- }
-
- return 1;
-}
+ if (!vg || !vg_write(vg) || !vg_commit(vg))
+ return_0;
-/*
- * HM Helper:
- *
- * wipe all LVs first sector on @lv_list avoiding metadata commit/activation.
- *
- * Returns 1 on success or 0 on failure
- *
- * HM FIXME: share with lv_manip.c!
- */
-static int _clear_lvs(struct dm_list *lv_list)
-{
- struct lv_list *lvl;
+ was_active = alloca(sz);
- if (test_mode())
- return 1;
+ dm_list_iterate_items(lvl, lv_list)
+ if (!(was_active[i++] = lv_is_active(lvl->lv))) {
+ lvl->lv->status |= LV_TEMPORARY;
+ if (!activate_lv(vg->cmd, lvl->lv)) {
+ log_error("Failed to activate localy %s for clearing.",
+ display_lvname(lvl->lv));
+ r = 0;
+ goto out;
+ }
+ lvl->lv->status &= ~LV_TEMPORARY;
+ }
- if (!dm_list_size(lv_list)) {
- log_debug_metadata(INTERNAL_ERROR "Empty list of LVs given for clearing.");
- return 1;
+ dm_list_iterate_items(lvl, lv_list) {
+ log_verbose("Clearing metadata area %s.", display_lvname(lvl->lv));
+ /*
+ * Rather than wiping lv->size, we can simply
+ * wipe the first sector to remove the superblock of any previous
+ * RAID devices. It is much quicker.
+ */
+ if (!wipe_lv(lvl->lv, (struct wipe_params) { .do_zero = 1, .zero_sectors = 1 })) {
+ log_error("Failed to zero %s.", display_lvname(lvl->lv));
+ r = 0;
+ goto out;
+ }
}
-
- /* Walk list and clear first sector of each LV */
+out:
+ /* TODO: deactivation is only needed with clustered locking
+ * in normal case we should keep device active
+ */
+ sz = 0;
dm_list_iterate_items(lvl, lv_list)
- if (!_clear_lv(lvl->lv, 1))
- return 0;
+ if ((i > sz) && !was_active[sz++] &&
+ !deactivate_lv(vg->cmd, lvl->lv)) {
+ log_error("Failed to deactivate %s.", display_lvname(lvl->lv));
+ r = 0; /* continue deactivating */
+ }
- return 1;
+ return r;
}
/* raid0* <-> raid10_near area reorder helper: swap 2 LV segment areas @a1 and @a2 */
@@ -5507,12 +5503,8 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
if (segtype_is_striped_target(initial_segtype) &&
!_convert_raid0_to_striped(lv, 0, &removal_lvs))
return_0;
- if (!dm_list_empty(&removal_lvs)) {
- if (!vg_write(lv->vg) || !vg_commit(lv->vg))
- return_0;
- if (!_eliminate_extracted_lvs(lv->vg, &removal_lvs)) /* Updates vg */
- return_0;
- }
+ if (!_eliminate_extracted_lvs(lv->vg, &removal_lvs)) /* Updates vg */
+ return_0;
return_0;
}
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=16ae968d24b4fe3264dc9…
Commit: 16ae968d24b4fe3264dc9b46063345ff2846957b
Parent: fc35a9169e9f5804e38e4a6a6a7bf3555c49b636
Author: Heinz Mauelshagen <heinzm@redhat.com>
AuthorDate: Wed Oct 24 15:26:19 2018 +0200
Committer: Heinz Mauelshagen <heinzm@redhat.com>
CommitterDate: Wed Oct 24 16:35:30 2018 +0200
raid: fix left behind SubLVs
LVM metadata writes, commits and activations are performed
for (newly) allocated RAID metadata SubLVs to wipe any preexisting
data, thus avoiding false RAID superblock positives on RaidLV activation.
This process can be interrupted by command or system crashes,
leaving stale SubLVs behind in the LVM metadata as a problem.
Because we hold an exclusive lock during this metadata SubLV wiping
process, we can address this problem by avoiding the aforementioned
commits/writes/activations altogether, wiping the first
sector of the first physical extent allocated to any metadata SubLV
directly via the existing dev_set() API. (The key lines are annotated
below.)
All LVM RAID tests pass.
Related: rhbz1633167
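The core of the approach is computing the byte offset of the first physical
extent backing the metadata SubLV and zeroing it in place. The key lines are
quoted from the hunk below with added comments; pe_start and pe_size are in
512-byte sectors, so the shift by 9 converts sectors to bytes (the example
values in the comment are illustrative, not from the source):

/* Byte offset of this segment's first PE on its PV.
 * Example: pe_start = 2048 sectors (1 MiB), pe_size = 8192 sectors (4 MiB),
 * first PE index 3 -> (2048 + 3 * 8192) << 9 = 13631488 bytes = 13 MiB. */
offset = (pv->pe_start + seg_pe(seg, 0) * pv->pe_size) << 9;
cur_sectors = min(sectors, pv->pe_size);	/* stay within this PE */
if (!dev_set(pv->dev, offset, cur_sectors << 9, DEV_IO_LOG, 0))
	return_0;				/* write cur_sectors sectors of zeroes */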
---
lib/metadata/raid_manip.c | 140 ++++++++++++++++++++++++---------------------
1 files changed, 74 insertions(+), 66 deletions(-)
diff --git a/lib/metadata/raid_manip.c b/lib/metadata/raid_manip.c
index 3944dc4..25960a3 100644
--- a/lib/metadata/raid_manip.c
+++ b/lib/metadata/raid_manip.c
@@ -689,86 +689,90 @@ static int _lv_update_and_reload_list(struct logical_volume *lv, int origin_only
return r;
}
-/* Makes on-disk metadata changes
- * If LV is active:
- * clear first block of device
- * otherwise:
- * activate, clear, deactivate
+/*
+ * HM Helper
+ *
+ * clear first @sectors of @lv
+ *
+ * Presuming we are holding an exclusive lock, we can clear the first
+ * @sectors of the (metadata) @lv directly on the respective PE(s) thus
+ * avoiding write+commit+activation of @lv altogether and hence superfluous
+ * latencies or left behind visible SubLVs on a command/system crash.
*
* Returns: 1 on success, 0 on failure
+ *
+ * HM FIXME: share with lv_manip.c!
*/
-static int _clear_lvs(struct dm_list *lv_list)
+static int _clear_lv(struct logical_volume *lv, uint32_t sectors)
{
- struct lv_list *lvl;
- struct volume_group *vg = NULL;
- unsigned i = 0, sz = dm_list_size(lv_list);
- char *was_active;
- int r = 1;
-
- if (!sz) {
- log_debug_metadata(INTERNAL_ERROR "Empty list of LVs given for clearing.");
- return 1;
- }
-
- dm_list_iterate_items(lvl, lv_list) {
- if (!lv_is_visible(lvl->lv)) {
- log_error(INTERNAL_ERROR
- "LVs must be set visible before clearing.");
- return 0;
- }
- vg = lvl->lv->vg;
- }
+ struct lv_segment *seg;
+ struct physical_volume *pv;
+ uint64_t offset;
+ uint32_t cur_sectors;
if (test_mode())
return 1;
+ if (!sectors)
+ return_0;
+
/*
- * FIXME: only vg_[write|commit] if LVs are not already written
- * as visible in the LVM metadata (which is never the case yet).
+ * Rather than wiping lv->size, we can simply wipe the first 4KiB
+ * to remove the superblock of any previous RAID devices. It is much
+ * quicker than wiping a potentially larger metadata device completely.
*/
- if (!vg || !vg_write(vg) || !vg_commit(vg))
- return_0;
+ log_verbose("Clearing metadata area of %s.", display_lvname(lv));
- was_active = alloca(sz);
+ dm_list_iterate_items(seg, &lv->segments) {
+ if (seg_type(seg, 0) != AREA_PV)
+ return_0;
+ if (seg->area_count != 1)
+ return_0;
+ if (!(pv = seg_pv(seg, 0)))
+ return_0;
+ if (!pv->pe_start) /* Be careful */
+ return_0;
- dm_list_iterate_items(lvl, lv_list)
- if (!(was_active[i++] = lv_is_active(lvl->lv))) {
- lvl->lv->status |= LV_TEMPORARY;
- if (!activate_lv(vg->cmd, lvl->lv)) {
- log_error("Failed to activate localy %s for clearing.",
- display_lvname(lvl->lv));
- r = 0;
- goto out;
- }
- lvl->lv->status &= ~LV_TEMPORARY;
- }
+ offset = (pv->pe_start + seg_pe(seg, 0) * pv->pe_size) << 9;
+ cur_sectors = min(sectors, pv->pe_size);
+ sectors -= cur_sectors;
+ if (!dev_set(pv->dev, offset, cur_sectors << 9, DEV_IO_LOG, 0))
+ return_0;
- dm_list_iterate_items(lvl, lv_list) {
- log_verbose("Clearing metadata area %s.", display_lvname(lvl->lv));
- /*
- * Rather than wiping lv->size, we can simply
- * wipe the first sector to remove the superblock of any previous
- * RAID devices. It is much quicker.
- */
- if (!wipe_lv(lvl->lv, (struct wipe_params) { .do_zero = 1, .zero_sectors = 1 })) {
- log_error("Failed to zero %s.", display_lvname(lvl->lv));
- r = 0;
- goto out;
- }
+ if (!sectors)
+ break;
}
-out:
- /* TODO: deactivation is only needed with clustered locking
- * in normal case we should keep device active
- */
- sz = 0;
+
+ return 1;
+}
+
+/*
+ * HM Helper:
+ *
+ * wipe all LVs first sector on @lv_list avoiding metadata commit/activation.
+ *
+ * Returns 1 on success or 0 on failure
+ *
+ * HM FIXME: share with lv_manip.c!
+ */
+static int _clear_lvs(struct dm_list *lv_list)
+{
+ struct lv_list *lvl;
+
+ if (test_mode())
+ return 1;
+
+ if (!dm_list_size(lv_list)) {
+ log_debug_metadata(INTERNAL_ERROR "Empty list of LVs given for clearing.");
+ return 1;
+ }
+
+ /* Walk list and clear first sector of each LV */
dm_list_iterate_items(lvl, lv_list)
- if ((i > sz) && !was_active[sz++] &&
- !deactivate_lv(vg->cmd, lvl->lv)) {
- log_error("Failed to deactivate %s.", display_lvname(lvl->lv));
- r = 0; /* continue deactivating */
- }
+ if (!_clear_lv(lvl->lv, 1))
+ return 0;
- return r;
+ return 1;
}
/* raid0* <-> raid10_near area reorder helper: swap 2 LV segment areas @a1 and @a2 */
@@ -5503,8 +5507,12 @@ static int _takeover_upconvert_wrapper(TAKEOVER_FN_ARGS)
if (segtype_is_striped_target(initial_segtype) &&
!_convert_raid0_to_striped(lv, 0, &removal_lvs))
return_0;
- if (!_eliminate_extracted_lvs(lv->vg, &removal_lvs)) /* Updates vg */
- return_0;
+ if (!dm_list_empty(&removal_lvs)) {
+ if (!vg_write(lv->vg) || !vg_commit(lv->vg))
+ return_0;
+ if (!_eliminate_extracted_lvs(lv->vg, &removal_lvs)) /* Updates vg */
+ return_0;
+ }
return_0;
}
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=7498f8383397a93db9565…
Commit: 7498f8383397a93db95655ca227257836cbcac82
Parent: 9e1ee07d696fb0a1771f7dcd4490ed9ace0fa8d6
Author: David Teigland <teigland@redhat.com>
AuthorDate: Thu Oct 18 13:06:42 2018 -0500
Committer: David Teigland <teigland@redhat.com>
CommitterDate: Thu Oct 18 13:06:42 2018 -0500
tests: add new test for lvm on md devices
---
test/shell/lvm-on-md.sh | 87 +++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 87 insertions(+), 0 deletions(-)
diff --git a/test/shell/lvm-on-md.sh b/test/shell/lvm-on-md.sh
new file mode 100644
index 0000000..ec8cc23
--- /dev/null
+++ b/test/shell/lvm-on-md.sh
@@ -0,0 +1,87 @@
+#!/usr/bin/env bash
+
+# Copyright (C) 2018 Red Hat, Inc. All rights reserved.
+#
+# This copyrighted material is made available to anyone wishing to use,
+# modify, copy, or redistribute it subject to the terms and conditions
+# of the GNU General Public License v.2.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+
+SKIP_WITH_LVMPOLLD=1
+
+. lib/inittest
+
+test -f /proc/mdstat && grep -q raid1 /proc/mdstat || \
+ modprobe raid1 || skip
+
+aux lvmconf 'devices/md_component_detection = 1'
+aux extend_filter_LVMTEST "a|/dev/md|"
+
+aux prepare_devs 2
+
+# create 2 disk MD raid1 array
+# by default using metadata format 1.0 with data at the end of device
+aux prepare_md_dev 1 64 2 "$dev1" "$dev2"
+
+mddev=$(< MD_DEV)
+pvdev=$(< MD_DEV_PV)
+
+vgcreate $vg "$mddev"
+
+lvs $vg
+
+lvcreate -n $lv1 -l 2 $vg
+lvcreate -n $lv2 -l 2 -an $vg
+
+lvchange -ay $vg/$lv2
+
+lvs $vg
+
+pvs -vvvv 2>&1|tee pvs.out
+
+vgchange -an $vg
+
+vgchange -ay -vvvv $vg 2>&1| tee vgchange.out
+
+lvs $vg
+pvs
+
+vgchange -an $vg
+
+mdadm --stop "$mddev"
+
+# with md superblock 1.0 this pvs will report duplicates
+# for the two md legs since the md device itself is not
+# started
+pvs 2>&1 |tee out
+cat out
+grep "prefers device" out
+
+pvs -vvvv 2>&1| tee pvs2.out
+
+# should not activate from the md legs
+not vgchange -ay -vvvv $vg 2>&1|tee vgchange-fail.out
+
+# should not show an active lv
+lvs $vg
+
+# start the md dev
+mdadm --assemble "$mddev" "$dev1" "$dev2"
+
+# Now that the md dev is online, pvs can see it and
+# ignore the two legs, so there's no duplicate warning
+
+pvs 2>&1 |tee out
+cat out
+not grep "prefers device" out
+
+vgchange -ay $vg 2>&1 |tee out
+cat out
+not grep "prefers device" out
+
+vgchange -an $vg
+
+vgremove -f $vg
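Why the stopped array's legs show up as duplicate PVs in this test: with
superblock format 1.0, md keeps its metadata at the end of each member device,
so the first sectors of each leg are a byte-for-byte copy of the PV label, and
both legs present the same PV UUID until the array is reassembled. Detecting
this requires probing the end of the device, which is what the full md filter
enabled by this release does. A hedged C sketch of such an end-of-device probe
follows; per md(4), the 1.0 superblock sits at least 8 KiB, but less than
12 KiB, from the end of the device, 4 KiB aligned. This helper is illustrative,
not lvm2's actual md filter code:

#include <stdint.h>

/* Illustrative: byte offset where an md 1.0 superblock would start,
 * given the device size in bytes. Step back 8 KiB from the end, then
 * round down to a 4 KiB boundary -- a start-of-device scan never sees it. */
static uint64_t md_1_0_sb_offset(uint64_t dev_size_bytes)
{
	return (dev_size_bytes - 8 * 1024) & ~((uint64_t)4096 - 1);
}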