Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=aa68b898ff9c51dcbd87c…
Commit: aa68b898ff9c51dcbd87c6be34632e33f0299a18
Parent: b5be7420d947b9bfe52da73078955ed241765875
Author: Zdenek Kabelac <zkabelac@redhat.com>
AuthorDate: Fri Nov 24 13:57:22 2017 +0100
Committer: Zdenek Kabelac <zkabelac@redhat.com>
CommitterDate: Fri Nov 24 16:05:21 2017 +0100
libdm: preload propagates delayed resume
Propagate the delayed resume, at least for the preload case, in a simple way.
Currently PVMOVE depends on internal logic whereby a 'mirror' with a
corelog is treated as a possible PVMOVE; in such a case the resume of the
newly 'created' node is 'delayed'.
This is mostly an ugly internal hack, but for the time being, with
propagation added for preload, it works reasonably well.
TODO: provide a standard API and avoid this internal 'guessing'.
---
WHATS_NEW_DM | 1 +
libdm/libdm-deptree.c | 4 ++++
2 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/WHATS_NEW_DM b/WHATS_NEW_DM
index b7f71ca..9cec953 100644
--- a/WHATS_NEW_DM
+++ b/WHATS_NEW_DM
@@ -1,5 +1,6 @@
Version 1.02.146 -
====================================
+ Propagate delayed resume for pvmove subvolumes.
Suppress integrity encryption keys in 'table' output unless --showkeys supplied.
Version 1.02.145 - 3rd November 2017
diff --git a/libdm/libdm-deptree.c b/libdm/libdm-deptree.c
index b0a48f3..547904f 100644
--- a/libdm/libdm-deptree.c
+++ b/libdm/libdm-deptree.c
@@ -2962,6 +2962,10 @@ int dm_tree_preload_children(struct dm_tree_node *dnode,
/* Preload children first */
while ((child = dm_tree_next_child(&handle, dnode, 0))) {
+ /* Propagate delay of resume from parent node */
+ if (dnode->props.delay_resume_if_new)
+ child->props.delay_resume_if_new = 1;
+
/* Skip existing non-device-mapper devices */
if (!child->info.exists && child->info.major)
continue;
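The patch itself is only the four added lines; what makes them sufficient is
that dm_tree_preload_children() already walks the whole tree, so copying the
flag onto each child as it is visited carries the parent's delayed-resume
property down to every newly created subvolume. Below is a minimal,
self-contained C sketch of that propagation pattern; the dev_node structure,
preload_children(), and the printf placeholders are hypothetical stand-ins
for illustration, not the real libdm types or API:

/* Sketch: propagate a parent's delayed-resume flag to each child
 * during a depth-first preload walk, as the patch does in
 * dm_tree_preload_children(). All names here are hypothetical. */
#include <stdio.h>

#define MAX_CHILDREN 4

struct dev_node {
	const char *name;
	int delay_resume_if_new;	/* delay resume of a newly created node */
	int new_device;			/* node does not exist in the kernel yet */
	struct dev_node *children[MAX_CHILDREN];
	int nr_children;
};

static void preload_children(struct dev_node *dnode)
{
	int i;

	for (i = 0; i < dnode->nr_children; i++) {
		struct dev_node *child = dnode->children[i];

		/* Propagate delay of resume from parent node,
		 * before descending into the child's subtree. */
		if (dnode->delay_resume_if_new)
			child->delay_resume_if_new = 1;

		preload_children(child);	/* children first, depth-first */

		if (child->new_device && child->delay_resume_if_new)
			printf("%s: created, resume delayed\n", child->name);
		else
			printf("%s: loaded and resumed\n", child->name);
	}
}

int main(void)
{
	/* A pvmove-like top node whose resume must be delayed, with one
	 * newly created subvolume beneath it. */
	struct dev_node sub = { .name = "pvmove0_sub", .new_device = 1 };
	struct dev_node top = { .name = "pvmove0", .new_device = 1,
				.delay_resume_if_new = 1,
				.children = { &sub }, .nr_children = 1 };
	struct dev_node root = { .name = "root",
				 .children = { &top }, .nr_children = 1 };

	preload_children(&root);
	return 0;
}

The detail mirrored from the patch is the ordering: the flag is copied onto
the child before the recursive preload, so nodes created deeper in the tree
also skip the immediate resume.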
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=ea0463791dcc68082ecf0…
Commit: ea0463791dcc68082ecf0b6b6681e82becaffb40
Parent: bbaaf4f1d34a1a097c20dca1f36a0f6a50c5d066
Author: David Teigland <teigland@redhat.com>
AuthorDate: Tue Nov 21 10:37:00 2017 -0600
Committer: David Teigland <teigland@redhat.com>
CommitterDate: Tue Nov 21 10:37:00 2017 -0600
man: lvmlockd steps for changing lock type
The steps as previously documented were not quite correct.
---
man/lvmlockd.8_main | 45 +++++++++++++++++++++++++++++++++++++--------
1 files changed, 37 insertions(+), 8 deletions(-)
diff --git a/man/lvmlockd.8_main b/man/lvmlockd.8_main
index b7eba1a..fbcdc87 100644
--- a/man/lvmlockd.8_main
+++ b/man/lvmlockd.8_main
@@ -706,19 +706,24 @@ To change the dlm cluster name in the VG when the VG is still used by the
original cluster:
.IP \[bu] 2
-Stop the VG on all hosts:
+Start the VG on the host changing the lock type
+.br
+vgchange --lock-start <vgname>
+
+.IP \[bu] 2
+Stop the VG on all other hosts:
.br
vgchange --lock-stop <vgname>
.IP \[bu] 2
-Change the VG lock type to none:
+Change the VG lock type to none on the host where the VG is started:
.br
vgchange --lock-type none <vgname>
.IP \[bu] 2
-Change the dlm cluster name on the host or move the VG to the new cluster.
-The new dlm cluster must now be active on the host. Verify the new name
-by:
+Change the dlm cluster name on the hosts or move the VG to the new
+cluster. The new dlm cluster must now be running on the host. Verify the
+new name by:
.br
cat /sys/kernel/config/dlm/cluster/cluster_name
@@ -735,13 +740,14 @@ vgchange --lock-start <vgname>
.P
To change the dlm cluster name in the VG when the dlm cluster name has
-already changed, or the VG has already moved to a different cluster:
+already been changed on the hosts, or the VG has already moved to a
+different cluster:
.IP \[bu] 2
Ensure the VG is not being used by any hosts.
.IP \[bu] 2
-The new dlm cluster must be active on the host making the change.
+The new dlm cluster must be running on the host making the change.
The current dlm cluster name can be seen by:
.br
cat /sys/kernel/config/dlm/cluster/cluster_name
@@ -768,21 +774,44 @@ All LVs must be inactive to change the lock type.
lvmlockd must be configured and running as described in USAGE.
+.IP \[bu] 2
Change a local VG to a lockd VG with the command:
.br
vgchange --lock-type sanlock|dlm <vgname>
+.IP \[bu] 2
Start the VG on hosts to use it:
.br
vgchange --lock-start <vgname>
+.P
.SS changing a lockd VG to a local VG
-Stop the lockd VG on all hosts, then run:
+All LVs must be inactive to change the lock type.
+
+.IP \[bu] 2
+Start the VG on the host making the change:
+.br
+vgchange --lock-start <vgname>
+
+.IP \[bu] 2
+Stop the VG on all other hosts:
+.br
+vgchange --lock-stop <vgname>
+
+.IP \[bu] 2
+Change the VG lock type to none on the host where the VG is started:
.br
vgchange --lock-type none <vgname>
+.P
+
+If the VG cannot be started with the previous lock type, then the lock
+type can be forcibly changed to none with:
+
+vgchange --lock-type none --lock-opt force <vgname>
+
To change a VG from one lockd type to another (i.e. between sanlock and
dlm), first change it to a local VG, then to the new type.
Gitweb: https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=e52d2e3bd86a006bcc322…
Commit: e52d2e3bd86a006bcc322649ac9cb7c52117b787
Parent: 115e66e9bedaa5d6edfd436fb78aba2c753deeb7
Author: David Teigland <teigland@redhat.com>
AuthorDate: Wed Nov 15 15:34:42 2017 -0600
Committer: David Teigland <teigland@redhat.com>
CommitterDate: Fri Nov 17 10:59:12 2017 -0600
lvmlockd: retry on other sanlock errors
These less common errors returned from sanlock should
also cause lvmlockd to retry the lock acquire:
- An i/o timeout occurs during sanlock_acquire().
  Other i/o on the same disk as the leases can cause
  sanlock i/o timeouts.
- Low-level disk paxos contention between hosts naturally
  causes one host to fail to acquire the lease. There are a
  couple of special error numbers associated with these cases
  that should simply be recognized as a normal failure to
  acquire the lease.
---
daemons/lvmlockd/lvmlockd-sanlock.c | 20 ++++++++++++++++++++
1 files changed, 20 insertions(+), 0 deletions(-)
diff --git a/daemons/lvmlockd/lvmlockd-sanlock.c b/daemons/lvmlockd/lvmlockd-sanlock.c
index 0e81915..acec7dc 100644
--- a/daemons/lvmlockd/lvmlockd-sanlock.c
+++ b/daemons/lvmlockd/lvmlockd-sanlock.c
@@ -1528,6 +1528,26 @@ int lm_lock_sanlock(struct lockspace *ls, struct resource *r, int ld_mode,
return -EAGAIN;
}
+ if (rv == SANLK_AIO_TIMEOUT) {
+ /*
+ * sanlock got an i/o timeout when trying to acquire the
+ * lease on disk.
+ */
+ log_debug("S %s R %s lock_san acquire mode %d rv %d", ls->name, r->name, ld_mode, rv);
+ *retry = 0;
+ return -EAGAIN;
+ }
+
+ if (rv == SANLK_DBLOCK_LVER || rv == SANLK_DBLOCK_MBAL) {
+ /*
+ * There was contention with another host for the lease,
+ * and we lost.
+ */
+ log_debug("S %s R %s lock_san acquire mode %d rv %d", ls->name, r->name, ld_mode, rv);
+ *retry = 0;
+ return -EAGAIN;
+ }
+
if (rv == SANLK_ACQUIRE_OWNED_RETRY) {
/*
* The lock is held by a failed host, and will eventually
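Both new cases follow the same convention as the existing -EAGAIN paths in
the hunk: log the failure, set *retry = 0 so the sanlock-specific code does
not loop in place, and return -EAGAIN so the failure is reported upward as
"transient, may succeed later" rather than as a hard error. The sketch below
shows what a caller-side loop over such a backend could look like; the
backend_lock() name and its simplified signature are assumptions for
illustration, not lvmlockd's actual lm_lock_sanlock() interface:

/* Hypothetical caller of a lock backend that reports transient
 * failures as -EAGAIN and uses a 'retry' out-parameter to say
 * whether an immediate retry is worthwhile. Illustrative only. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_RETRIES 3

/* Stand-in backend: fails twice, the first time retryably. */
static int backend_lock(int attempt, int *retry)
{
	if (attempt < 2) {
		*retry = (attempt == 0);
		return -EAGAIN;
	}
	return 0;
}

int main(void)
{
	int rv, retry, attempt = 0;

	for (;;) {
		retry = 0;
		rv = backend_lock(attempt++, &retry);

		/* -EAGAIN means "transient, may succeed later"; only
		 * retry immediately when the backend asks for it. */
		if (rv == -EAGAIN && retry && attempt < MAX_RETRIES) {
			sleep(1);	/* brief pause before retrying */
			continue;
		}
		break;
	}

	printf("lock result %d after %d attempt(s)\n", rv, attempt);
	return rv ? 1 : 0;
}

Here the second failure sets retry = 0, mirroring the new cases in the
patch: for an i/o timeout or paxos contention an immediate in-place retry is
unlikely to fare better, so the -EAGAIN is passed straight back and any
retry presumably happens at a higher level.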