Changes to 'io_timeout4'
by David Teigland
New branch 'io_timeout4' available with the following commits:
commit 4e6e204bf29b63a17bfa421c377bf7003287198a
Author: David Teigland <teigland@redhat.com>
Date: Mon Jul 23 08:58:57 2012 -0500
sanlock: adjustable io timeouts
New sanlock_add_lockspace_timeout() api to allow
the timeout to be specified per lockspace.
Also correctly handle nodes that may be using
different timeouts in the same lockspace.
Signed-off-by: David Teigland <teigland@redhat.com>
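For reference, a minimal caller sketch of the new api (assuming the standard sanlock client headers sanlock.h and sanlock_admin.h; the lockspace name, path, and host_id are illustrative, and per the python binding later in this digest an io timeout of 0 means the daemon default):

#include <string.h>
#include "sanlock.h"
#include "sanlock_admin.h"

static int add_ls_with_io_timeout(void)
{
	struct sanlk_lockspace ls;

	memset(&ls, 0, sizeof(ls));
	strncpy(ls.name, "LS1", SANLK_NAME_LEN - 1);
	strncpy(ls.host_id_disk.path, "/dev/vg/leases", SANLK_PATH_LEN - 1);
	ls.host_id_disk.offset = 0;
	ls.host_id = 1;

	/* 20 sec io timeout for this lockspace only */
	return sanlock_add_lockspace_timeout(&ls, 0, 20);
}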
4 commits - src/cmd.c src/lockspace.c src/main.c src/sanlock_internal.h src/sanlock_rv.h src/timeouts.h src/watchdog.c src/watchdog.h wdmd/main.c
by David Teigland
src/cmd.c | 2
src/lockspace.c | 4
src/main.c | 37 ++----
src/sanlock_internal.h | 3
src/sanlock_rv.h | 1
src/timeouts.h | 291 ++++++++++++++++++++++++++++---------------------
src/watchdog.c | 123 +++++++-------------
src/watchdog.h | 3
wdmd/main.c | 129 +++++++++++++++++++--
9 files changed, 352 insertions(+), 241 deletions(-)
New commits:
commit f24325cf8505e810329178c7c4575ede4f256a7f
Author: David Teigland <teigland@redhat.com>
Date: Mon Aug 6 17:18:56 2012 -0500
wdmd: preemptive close before test fails
Instead of closing the device when a test fails, close
it TEST_INTERVAL (10 sec) before the test fails. This
is done so that the watchdog will fire at most 60 sec
after the expire time (between 50 and 60 seconds instead
of between 60 and 70 seconds, which would be the case
if we close at the expiration time; see previous commit).
The timeouts in sanlock have been based on the assumption
that the watchdog device fires at most 60 seconds after
the expiration time, so it's best to maintain that
expectation.
The pre-emptive close and re-open generate pings, so
they are used in place of ordinary pings.
If the expire time is at T45, and is renewed/extended
at T46, then the sequence of pings would be:
T10 - ping from ioctl
T20 - ping from ioctl
T30 - ping from ioctl
T40 - ping from close
T50 - ping from re-open
T60 - ping from ioctl
...
If the expire time was *not* renewed, then the watchdog
would fire at T100, which is 55 seconds after the
expiration time, within the desired 50-60 second
interval.
Signed-off-by: David Teigland <teigland@redhat.com>
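In miniature, the timing change looks like this (a sketch, not the actual wdmd code; TEST_INTERVAL and the names here stand in for the real ones in wdmd/main.c below):

#include <time.h>
#include <unistd.h>

#define TEST_INTERVAL 10

/* Close the fd one TEST_INTERVAL before the client's expire time so
 * the close-generated ping is the final ping; the device then fires
 * at most 60 sec (the fire timeout) after the expire time. */
static void maybe_preemptive_close(time_t now, time_t expire, int *dev_fd)
{
	if (*dev_fd == -1)
		return;
	if (now >= expire - TEST_INTERVAL) {
		close(*dev_fd);
		*dev_fd = -1;	/* reopened later if the lease renews */
	}
}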
diff --git a/src/timeouts.h b/src/timeouts.h
index 92034b3..f62bb6f 100644
--- a/src/timeouts.h
+++ b/src/timeouts.h
@@ -58,14 +58,17 @@
*
* 100: sanlock fails to renew host_id on disk -> no wdmd_test_live
* wdmd test_client sees now 100 < expire 120 ok -> keepalive
+ * messages: check_our_lease warning (sanlock)
*
* 110: sanlock fails to renew host_id on disk -> no wdmd_test_live
- * wdmd test_client sees now 110 < expire 120 ok -> keepalive
+ * wdmd test_client sees now 110 < expire 120 ok -> keepalive (from dev close)
+ * messages: watchdog closed unclean (wdmd), test warning (wdmd)
*
* 120: sanlock fails to renew host_id on disk -> no wdmd_test_live
- * sanlock enters recovery mode and starts killing pids
+ * sanlock enters recovery mode and starts killing pids because
+ * now (120) is id_renewal_fail_seconds (80) after the last renewal (T40)
* wdmd test_client sees now 120 >= expire 120 fail -> no keepalive
- * wdmd starts logging error messages every 10 sec
+ * messages: check_our_lease failed (sanlock), test failed (wdmd)
*
* . /dev/watchdog will fire at last keepalive + watchdog_fire_timeout =
* T110 + 60 = T170
diff --git a/wdmd/main.c b/wdmd/main.c
index eafbf03..e289f44 100644
--- a/wdmd/main.c
+++ b/wdmd/main.c
@@ -407,6 +407,38 @@ static int test_clients(void)
(unsigned long long)client[i].expire,
client[i].name);
fail_count++;
+ continue;
+ }
+
+ /*
+ * If we can patch the kernel to avoid a close-ping,
+ * then we can remove this early/preemptive fail/close
+ * of the device, but instead just not pet the device
+ * when the expiration time is reached. Also see
+ * close_watchdog_unclean() below.
+ *
+ * We do this fail/close (which generates a ping)
+ * TEST_INTERVAL before the expire time because we want
+ * the device to fire at most 60 seconds after the
+ * expiration time. That means we need the last ping
+ * (from close) to be TEST_INTERVAL before the
+ * expiration time.
+ *
+ * If we did the close at/after the expiration time,
+ * then the ping from the close would mean that the
+ * device would fire between 60 and 70 seconds after the
+ * expiration time.
+ */
+
+ if (t >= client[i].expire - DEFAULT_TEST_INTERVAL) {
+ log_error("test warning pid %d now %llu keepalive %llu renewal %llu expire %llu",
+ client[i].pid,
+ (unsigned long long)t,
+ (unsigned long long)last_keepalive,
+ (unsigned long long)client[i].renewal,
+ (unsigned long long)client[i].expire);
+ fail_count++;
+ continue;
}
}
@@ -890,7 +922,7 @@ static int test_loop(void)
/* If we can patch the kernel so that close
does not generate a ping, then we can skip
this close, and just not pet the device in
- this case. */
+ this case. Also see test_client above. */
close_watchdog_unclean();
}
}
commit 148e37e0f0d71abd4d0060959a1e8a9323eb173d
Author: David Teigland <teigland@redhat.com>
Date: Mon Aug 6 15:38:39 2012 -0500
wdmd: close device when test fails
Instead of just not petting the device after a test fails,
close the device. Because the close generates a ping, we
want to get it done early, otherwise if wdmd exited (e.g.
crash or sigkill) just before the device was ready to fire,
the close generated by the kernel extends the life of the
machine by an extra 60 sec. This means we need to re-open
the device if we want to resume petting it.
So, depending on whether the tests happen just prior
to the expiry or just after the expiry, the watchdog
will fire between 60 and 70 seconds after the expiry
time.
It would be 70 seconds if:
we do the check just before the expiration, the client
expires, and 10 seconds (TEST_INTERVAL) later we see the
expiration, close the device, which generates a ping,
which causes the firing to be 60 seconds after the close,
which is already 10 seconds after the expiration.
It would be 60 seconds if:
we do the check just after the expiration, we see
the expiration, close the device, which generates a
ping, which causes the firing to be 60 seconds after
the close, which is just after the expiration
time.
Previously, the assumption was that the host would
be reset between 50 and 60 seconds from the expiration
time, but this did not account for the fact that
the daemon could exit just before the host reset,
which would lead the kernel to generate a new ping.
If we can patch the kernel so that a device close
does not generate a ping, then we do not need to
close the device when a test fails, but we can
simply not pet the device, as we've been doing.
Signed-off-by: David Teigland <teigland@redhat.com>
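For background, the standard linux watchdog interface distinguishes a clean close (disarm) from an unclean one. A sketch, assuming a driver with magic-close support; the close-counts-as-a-ping behavior is the point this commit works around:

#include <stdio.h>
#include <unistd.h>

/* clean close: write the magic 'V' so the driver disarms the timer */
static void close_clean(int fd)
{
	if (write(fd, "V", 1) < 0)
		perror("watchdog disarm");
	close(fd);
}

/* unclean close: no 'V', so the timer stays armed; as the commit
 * describes, the close also counts as a final ping, so the device
 * fires 60 sec later unless it is reopened and petted again */
static void close_unclean(int fd)
{
	close(fd);
}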
diff --git a/wdmd/main.c b/wdmd/main.c
index 5ed2cd6..eafbf03 100644
--- a/wdmd/main.c
+++ b/wdmd/main.c
@@ -58,7 +58,7 @@ static int daemon_debug;
static int socket_gid;
static time_t last_keepalive;
static char lockfile_path[PATH_MAX];
-static int dev_fd;
+static int dev_fd = -1;
static int shm_fd;
struct script_status {
@@ -657,10 +657,46 @@ static int test_scripts(void) { return 0; }
#endif /* TEST_SCRIPTS */
+static int open_dev(void)
+{
+ int fd;
+
+ if (dev_fd != -1) {
+ log_error("/dev/watchdog already open fd %d", dev_fd);
+ return -1;
+ }
+
+ fd = open("/dev/watchdog", O_WRONLY | O_CLOEXEC);
+ if (fd < 0) {
+ log_error("no /dev/watchdog, load a watchdog driver");
+ return fd;
+ }
+
+ dev_fd = fd;
+ return 0;
+}
+
+static void close_watchdog_unclean(void)
+{
+ if (dev_fd == -1) {
+ log_debug("close_watchdog_unclean already closed");
+ return;
+ }
+
+ log_error("/dev/watchdog closed unclean");
+ close(dev_fd);
+ dev_fd = -1;
+}
+
static void close_watchdog(void)
{
int rv;
+ if (dev_fd == -1) {
+ log_error("close_watchdog already closed");
+ return;
+ }
+
rv = write(dev_fd, "V", 1);
if (rv < 0)
log_error("/dev/watchdog disarm write error %d", errno);
@@ -668,17 +704,16 @@ static void close_watchdog(void)
log_error("/dev/watchdog disarmed");
close(dev_fd);
+ dev_fd = -1;
}
static int setup_watchdog(void)
{
int rv, timeout;
- dev_fd = open("/dev/watchdog", O_WRONLY | O_CLOEXEC);
- if (dev_fd < 0) {
- log_error("no /dev/watchdog, load a watchdog driver");
- return dev_fd;
- }
+ rv = open_dev();
+ if (rv < 0)
+ return -1;
timeout = 0;
@@ -844,8 +879,20 @@ static int test_loop(void)
fail_count += test_scripts();
fail_count += test_clients();
- if (!fail_count)
- pet_watchdog();
+ if (!fail_count) {
+ if (dev_fd == -1) {
+ log_error("/dev/watchdog reopen");
+ open_dev();
+ } else {
+ pet_watchdog();
+ }
+ } else {
+ /* If we can patch the kernel so that close
+ does not generate a ping, then we can skip
+ this close, and just not pet the device in
+ this case. */
+ close_watchdog_unclean();
+ }
}
sleep_seconds = test_time + test_interval - monotime();
commit c6f3cd55cfe588ee89d38b44f805f712a30512c5
Author: David Teigland <teigland@redhat.com>
Date: Tue Jul 31 13:41:28 2012 -0500
daemon: extend grace time
Increase the default grace time for a killpath instance
from 30 to 40 seconds based on a corrected analysis of
the recovery sequence. The period during which the
watchdog may fire is determined by the wdmd check
interval (10 seconds), not the sanlock renewal interval.
Signed-off-by: David Teigland <teigland@redhat.com>
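The arithmetic, restated with the numbers from the timeouts.h example below (a worked check, not code from the tree):

/* recovery starts at T120; the earliest watchdog firing is T170
 * (last keepalive at T110 + 60 sec fire timeout); reserving 10 sec
 * for SIGKILL leaves T120..T160 for graceful shutdown */
static int default_grace_sec(void)
{
	int recovery_start = 120;
	int earliest_fire = 170;
	int sigkill_window = 10;

	return earliest_fire - sigkill_window - recovery_start;	/* 40 */
}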
diff --git a/src/sanlock_internal.h b/src/sanlock_internal.h
index c8301eb..9950ebd 100644
--- a/src/sanlock_internal.h
+++ b/src/sanlock_internal.h
@@ -238,7 +238,7 @@ EXTERN struct client *client;
#define WATCHDOG_FIRE_TIMEOUT 60
#define DEFAULT_USE_AIO 1
#define DEFAULT_IO_TIMEOUT 10
-#define DEFAULT_GRACE_SEC 30
+#define DEFAULT_GRACE_SEC 40
#define DEFAULT_USE_WATCHDOG 1
#define DEFAULT_HIGH_PRIORITY 1
#define DEFAULT_SOCKET_UID 0
diff --git a/src/timeouts.h b/src/timeouts.h
index 2e3ba0d..92034b3 100644
--- a/src/timeouts.h
+++ b/src/timeouts.h
@@ -13,88 +13,7 @@
*
*
* Using these values in the example
- * watchdog_fire_timeout = 60 (constant)
- * io_timeout_seconds = 2 (defined by us)
- * id_renewal_seconds = 10 (defined by us)
- * id_renewal_fail_seconds = 30 (defined by us)
- * host_dead_seconds = 90 (derived below)
- *
- * (FIXME: 2/10/30 is not a combination we'd actually create,
- * but the example still works)
- *
- * T time in seconds
- *
- * 0: sanlock renews host_id on disk
- * sanlock calls wdmd_test_live(0, 30)
- * wdmd test_client sees now 0 < expire 30 ok
- * wdmd /dev/watchdog keepalive
- *
- * 10: sanlock renews host_id on disk ok
- * sanlock calls wdmd_test_live(10, 40)
- * wdmd test_client sees now 10 < expire 30 or 40 ok
- * wdmd /dev/watchdog keepalive
- *
- * 20: sanlock fails to renew host_id on disk
- * sanlock does not call wdmd_test_live
- * wdmd test_client sees now 20 < expire 40 ok
- * wdmd /dev/watchdog keepalive
- *
- * 30: sanlock fails to renew host_id on disk
- * sanlock does not call wdmd_test_live
- * wdmd test_client sees now 30 < expire 40 ok
- * wdmd /dev/watchdog keepalive
- *
- * 40: sanlock fails to renew host_id on disk
- * sanlock does not call wdmd_test_live
- * wdmd test_client sees now 40 >= expire 40 fail
- * wdmd no keepalive
- *
- * . /dev/watchdog will fire at last keepalive + watchdog_fire_timeout =
- * T30 + 60 = T90
- * . host_id will expire at
- * last disk renewal ok + id_renewal_fail_seconds + watchdog_fire_timeout
- * T10 + 30 + 60 = T100
- * (aka last disk renewal ok + host_dead_seconds)
- * . the wdmd test at T30 could have been at T39, so wdmd would have
- * seen the client unexpired/ok just before the expiry time at T40,
- * which would lead to /dev/watchdog firing at 99 instead of 90
- *
- * 50: sanlock fails to renew host_id on disk -> does not call wdmd_test_live
- * wdmd test_client sees now 50 > expire 40 fail -> no keepalive
- * 60: sanlock fails to renew host_id on disk -> does not call wdmd_test_live
- * wdmd test_client sees now 60 > expire 40 fail -> no keepalive
- * 70: sanlock fails to renew host_id on disk -> does not call wdmd_test_live
- * wdmd test_client sees now 70 > expire 40 fail -> no keepalive
- * 80: sanlock fails to renew host_id on disk -> does not call wdmd_test_live
- * wdmd test_client sees now 80 > expire 40 fail -> no keepalive
- * 90: sanlock fails to renew host_id on disk -> does not call wdmd_test_live
- * wdmd test_client sees now 90 > expire 40 fail -> no keepalive
- * /dev/watchdog fires, machine reset
- * 100: another host takes over leases held by host_id
- *
- *
- * A more likely recovery scenario when a host_id cannot be renewed
- * (probably caused by loss of storage connection):
- *
- * The sanlock daemon fails to renew its host_id from T20 to T40.
- * At T40, after failing to renew within id_renewal_fail_seconds (30),
- * the sanlock daemon begins trying to kill all pids that were using
- * leases under this host_id. As soon as all those pids exit, the sanlock
- * daemon will call wdmd_test_live(0, 0) to disable the wdmd testing for
- * this client/host_id. If it's able to call wdmd_test_live(0, 0) before T90,
- * the wdmd test will no longer see this client's expiry time of 40,
- * so the wdmd tests will succeed, wdmd will immediately go back to
- * /dev/watchdog keepalive's, and the machine will not be reset.
- *
- */
-
-/*
- * Example of watchdog behavior when host_id renewals fail, assuming
- * that sanlock cannot successfully kill the pids it is supervising that
- * depend on the given host_id.
- *
- *
- * Using these values in the example
+ * wdmd test interval = 10 (defined in wdmd/main.c)
* watchdog_fire_timeout = 60 (constant)
* io_timeout_seconds = 10 (defined by us)
* id_renewal_seconds = 20 (= delta_renew_max = 2 * io_timeout_seconds)
@@ -105,65 +24,192 @@
*
* 0: sanlock renews host_id on disk
* sanlock calls wdmd_test_live(0, 80) [0 + 80]
- * wdmd test_client sees now 0 < expire 80 ok
- * wdmd /dev/watchdog keepalive
+ * wdmd test_client sees now 0 < expire 80 ok -> keepalive
+ *
+ * 10: wdmd test_client sees now 10 < expire 80 ok -> keepalive
*
* 20: sanlock renews host_id on disk ok
* sanlock calls wdmd_test_live(20, 100) [20 + 80]
- * wdmd test_client sees now 20 < expire 100 or 80 ok
- * wdmd /dev/watchdog keepalive
+ * wdmd test_client sees now 20 < expire 100 or 80 ok -> keepalive
+ *
+ * 30: wdmd test_client sees now 30 < expire 100 ok -> keepalive
*
* 40: sanlock renews host_id on disk ok
* sanlock calls wdmd_test_live(40, 120) [40 + 80]
- * wdmd test_client sees now 40 < expire 120 or 100 ok
- * wdmd /dev/watchdog keepalive
+ * wdmd test_client sees now 40 < expire 120 or 100 ok -> keepalive
+ *
+ * 50: wdmd test_client sees now 50 < expire 120 ok -> keepalive
*
* all normal until 59
* ---------------------------------------------------------
* problems begin at 60
*
- * 60: sanlock fails to renew host_id on disk
- * sanlock does not call wdmd_test_live
- * wdmd test_client sees now 60 < expire 120 ok
- * wdmd /dev/watchdog keepalive
+ * 60: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 60 < expire 120 ok -> keepalive
+ *
+ * 70: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 70 < expire 120 ok -> keepalive
+ *
+ * 80: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 80 < expire 120 ok -> keepalive
+ *
+ * 90: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 90 < expire 120 ok -> keepalive
*
- * 80: sanlock fails to renew host_id on disk
- * sanlock does not call wdmd_test_live
- * wdmd test_client sees now 80 < expire 120 ok
- * wdmd /dev/watchdog keepalive
+ * 100: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 100 < expire 120 ok -> keepalive
*
- * 100: sanlock fails to renew host_id on disk
- * sanlock does not call wdmd_test_live
- * wdmd test_client sees now 100 < expire 120 ok
- * wdmd /dev/watchdog keepalive
+ * 110: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 110 < expire 120 ok -> keepalive
*
- * 120: sanlock fails to renew host_id on disk
- * sanlock does not call wdmd_test_live
+ * 120: sanlock fails to renew host_id on disk -> no wdmd_test_live
* sanlock enters recovery mode and starts killing pids
- * wdmd test_client sees now 120 >= expire 120 fail
- * wdmd no keepalive
+ * wdmd test_client sees now 120 >= expire 120 fail -> no keepalive
* wdmd starts logging error messages every 10 sec
*
* . /dev/watchdog will fire at last keepalive + watchdog_fire_timeout =
- * T100 + 60 = T160
+ * T110 + 60 = T170
* . host_id will expire at
* last disk renewal ok + id_renewal_fail_seconds + watchdog_fire_timeout
* T40 + 80 + 60 = T180
* (aka last disk renewal ok + host_dead_seconds, T40 + 140 = T180)
- * . the wdmd test at T100 could have been at T119, so wdmd would have
+ * . the wdmd test at T110 could have been at T119, so wdmd would have
* seen the client unexpired/ok and done keepalive at 119 just before the
* expiry at 120, which would lead to /dev/watchdog firing at 119+60 = T179
- * . so, the watchdog could fire as early as T160 or as late as T179, but
+ * . so, the watchdog could fire as early as T170 or as late as T179, but
* the host_id will not expire until T180
*
- * 140: sanlock fails to renew host_id on disk -> does not call wdmd_test_live
+ * 130: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 130 > expire 120 fail -> no keepalive
+ *
+ * 140: sanlock fails to renew host_id on disk -> no wdmd_test_live
* wdmd test_client sees now 140 > expire 120 fail -> no keepalive
*
- * 160: sanlock fails to renew host_id on disk -> does not call wdmd_test_live
+ * 150: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 150 > expire 120 fail -> no keepalive
+ *
+ * 160: sanlock fails to renew host_id on disk -> no wdmd_test_live
* wdmd test_client sees now 160 > expire 120 fail -> no keepalive
- * /dev/watchdog fires because last keepalive was T100, 60 seconds ago
*
- * 180: another host can acquire leases held by host_id
+ * 170: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 170 > expire 120 fail -> no keepalive
+ * /dev/watchdog fires because last keepalive was T110, 60 seconds ago
+ * (earliest possible /dev/watchdog firing due to wdmd checking expiry just
+ * after sanlock calls wdmd_test_live at T110 and just after the expiry at T120)
+ *
+ * 179: (latest possible /dev/watchdog firing due to wdmd checking expiry just
+ * before the expiry at T119)
+ *
+ * 180: another host can acquire leases held by host_id.
+ * This is host_dead_seconds (140) after the last successful renewal (T40)
+ */
+
+/*
+ * Example of watchdog behavior when host_id renewals fail, assuming
+ * that sanlock cannot successfully kill the pids it is supervising that
+ * depend on the given host_id.
+ *
+ *
+ * Using these values in the example
+ * wdmd test interval = 10 (defined in wdmd/main.c)
+ * watchdog_fire_timeout = 60 (constant)
+ * io_timeout_seconds = 20 (defined by us)
+ * id_renewal_seconds = 40 (= delta_renew_max = 2 * io_timeout_seconds)
+ * id_renewal_fail_seconds = 160 (= 4 * delta_renew_max = 8 * io_timeout_seconds)
+ * host_dead_seconds = 220 (id_renewal_fail_seconds + watchdog_fire_timeout)
+ *
+ * T time in seconds
+ *
+ * 0: sanlock renews host_id on disk
+ * sanlock calls wdmd_test_live(0, 160) [0 + 160]
+ * wdmd test_client sees now 0 < expire 160 ok -> keepalive
+ *
+ * 10: wdmd test_client sees now < expire 160 ok -> keepalive
+ * 20: wdmd test_client sees now < expire 160 ok -> keepalive
+ * 30: wdmd test_client sees now < expire 160 ok -> keepalive
+ *
+ * 40: sanlock renews host_id on disk ok
+ * sanlock calls wdmd_test_live(40, 200) [40 + 160]
+ * wdmd test_client sees now 40 < expire 200 or 160 ok -> keepalive
+ *
+ * 50: wdmd test_client sees now < expire 200 ok -> keepalive
+ * 60: wdmd test_client sees now < expire 200 ok -> keepalive
+ * 70: wdmd test_client sees now < expire 200 ok -> keepalive
+ *
+ * 80: sanlock renews host_id on disk ok
+ * sanlock calls wdmd_test_live(80, 240) [80 + 160]
+ * wdmd test_client sees now 80 < expire 240 or 200 ok -> keepalive
+ *
+ * 90: wdmd test_client sees now < expire 240 ok -> keepalive
+ * 100: wdmd test_client sees now < expire 240 ok -> keepalive
+ * 110: wdmd test_client sees now < expire 240 ok -> keepalive
+ *
+ * all normal until 119
+ * ---------------------------------------------------------
+ * problems begin at 120
+ *
+ * 120: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now 120 < expire 240 ok -> keepalive
+ *
+ * 130: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 140: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 150: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 160: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 170: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 180: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 190: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 200: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 210: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 220: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ * 230: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now < expire 240 ok -> keepalive
+ *
+ * 240: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * sanlock enters recovery mode and starts killing pids
+ * wdmd test_client sees now 240 >= expire 240 fail -> no keepalive
+ * wdmd starts logging error messages every 10 sec
+ *
+ * . /dev/watchdog will fire at last keepalive + watchdog_fire_timeout =
+ * T230 + 60 = T290
+ * . host_id will expire at
+ * last disk renewal ok + id_renewal_fail_seconds + watchdog_fire_timeout
+ * T80 + 160 + 60 = T300
+ * (aka last disk renewal ok + host_dead_seconds, T80 + 220 = T300)
+ * . the wdmd test at T230 could have been at T239, so wdmd would have
+ * seen the client unexpired/ok and done keepalive at 239 just before the
+ * expiry at 240, which would lead to /dev/watchdog firing at 239+60 = T299
+ * . so, the watchdog could fire as early as T290 or as late as T299, but
+ * the host_id will not expire until T300
+ *
+ * 250: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now > expire 240 fail -> no keepalive
+ * 260: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now > expire 240 fail -> no keepalive
+ * 270: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now > expire 240 fail -> no keepalive
+ * 280: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now > expire 240 fail -> no keepalive
+ * 290: sanlock fails to renew host_id on disk -> no wdmd_test_live
+ * wdmd test_client sees now > expire 240 fail -> no keepalive
+ * /dev/watchdog fires because last keepalive was T230, 60 seconds ago
+ * (earliest possible /dev/watchdog firing due to wdmd checking expiry
+ * just after sanlock calls wdmd_test_live at T230 and just after expiry at T240)
+ *
+ * 299: (latest possible /dev/watchdog firing due to wdmd checking expiry just
+ * before the expiry at T239)
+ *
+ * 300: another host can acquire leases held by host_id
+ * This is host_dead_seconds (220) after last successful renewal (T80)
*/
@@ -171,19 +217,19 @@
* killing pids
*
* From the time sanlock enters recovery mode and starts killing pids at T120,
- * until /dev/watchdog fires between T160 and T179, we need to attempt to
+ * until /dev/watchdog fires between T170 and T179, we need to attempt to
* gracefully kill pids for some time, and then leave around 10 seconds to
* escalate to SIGKILL and clean up leases from the exited pids.
*
- * Working backward from the earlier watchdog firing at T160, leaving 10 seconds
- * for SIGKILL to succeed, we need to begin SIGKILL at T150. This means we
- * have from T120 to T150 to allow graceful kill to complete. So, kill_count_grace
- * should be set to 30 by default (T120 to T150).
+ * Working backward from the earlier watchdog firing at T170, leaving 10 seconds
+ * for SIGKILL to succeed, we need to begin SIGKILL at T160. This means we
+ * have from T120 to T160 to allow graceful kill to complete. So, kill_count_grace
+ * should be set to 40 by default (T120 to T160).
*
* T40: last successful disk renewal
- * T120 - T149: graceful pid shutdown (30 sec)
- * T150 - T159: SIGKILL once per second (10 sec)
- * T160 - T179: watchdog fires sometime (SIGKILL continues)
+ * T120 - T159: graceful pid shutdown (40 sec)
+ * T160 - T169: SIGKILL once per second (10 sec)
+ * T170 - T179: watchdog fires sometime (SIGKILL continues)
* T180: other hosts acquire our leases
*
* The interval between each kill count/attempt is approx 1 sec,
commit 00115728fbc5da88c6330ccac4ded2275bef69bc
Author: David Teigland <teigland@redhat.com>
Date: Tue Aug 7 15:57:22 2012 -0500
sanlock/wdmd: remove global connection
As long as the sanlock daemon was running, it kept a
constant connection to wdmd, even when no lockspaces
existed. This prevented sanlock and wdmd from being
restarted independently, even when they were unused.
Independent restarting is necessary for upgrades, so
remove the global connection from sanlock to wdmd and
leave only the per-lockspace connections. The lockspace
connections now need to hold a refcount on wdmd which
prevents wdmd restarts.
Also in wdmd, if a client connection is closed, the
refcount must not be cleared on it, otherwise wdmd
could possibly be cleanly shutdown from sigterm while
an expired connection was awaiting a watchdog reset.
Also log at the error level when we kill a pid for
recovery, when that pid exits, and when all pids
are clear for recovery. This makes it much simpler
to see exactly what led up to a watchdog reset
after the fact.
Signed-off-by: David Teigland <teigland@redhat.com>
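Condensed, the per-lockspace connection lifecycle that replaces the global one looks like this (a sketch following create_watchdog_file()/unlink_watchdog_file() in the diff below; error handling trimmed, name and timeout values illustrative):

#include <stdint.h>
#include <time.h>
#include <unistd.h>
#include "wdmd.h"

static void lockspace_wdmd_lifecycle(void)
{
	uint64_t now = time(NULL);
	int con;

	con = wdmd_connect();
	wdmd_register(con, "sanlock_LS:1");
	wdmd_refcount_set(con);		/* blocks clean wdmd shutdown */
	wdmd_test_live(con, now, now + 80);	/* 80 = id_renewal_fail_seconds */

	/* ...each renewal calls wdmd_test_live() with new timestamps... */

	/* lockspace removal: disable testing, drop the refcount, close */
	wdmd_test_live(con, 0, 0);
	wdmd_refcount_clear(con);
	close(con);
}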
diff --git a/src/cmd.c b/src/cmd.c
index bb9b08c..bc7d7da 100644
--- a/src/cmd.c
+++ b/src/cmd.c
@@ -1422,6 +1422,7 @@ static int print_state_lockspace(struct space *sp, char *str, const char *list_n
"list=%s "
"space_id=%u "
"host_generation=%llu "
+ "renew_fail=%d "
"space_dead=%d "
"killing_pids=%d "
"corrupt_result=%d "
@@ -1434,6 +1435,7 @@ static int print_state_lockspace(struct space *sp, char *str, const char *list_n
list_name,
sp->space_id,
(unsigned long long)sp->host_generation,
+ sp->renew_fail,
sp->space_dead,
sp->killing_pids,
sp->lease_status.corrupt_result,
diff --git a/src/lockspace.c b/src/lockspace.c
index 6618f16..3b4063c 100644
--- a/src/lockspace.c
+++ b/src/lockspace.c
@@ -450,7 +450,7 @@ static void *lockspace_thread(void *arg_in)
rv = create_watchdog_file(sp, last_success);
if (rv < 0) {
log_erros(sp, "create_watchdog failed %d", rv);
- acquire_result = SANLK_ERROR;
+ acquire_result = SANLK_WD_ERROR;
}
}
@@ -704,7 +704,7 @@ int add_lockspace_wait(struct space *sp)
/* the thread exits right away if acquire fails */
pthread_join(sp->thread, NULL);
rv = result;
- log_space(sp, "add_lockspace fail lease_status %d", result);
+ log_erros(sp, "add_lockspace fail result %d", result);
goto fail_del;
}
diff --git a/src/main.c b/src/main.c
index 247b3f4..e5f0885 100644
--- a/src/main.c
+++ b/src/main.c
@@ -109,7 +109,7 @@ static void close_helper(void)
* msgs before getting EAGAIN.
*/
-static void send_helper_kill(struct client *cl, int sig)
+static void send_helper_kill(struct space *sp, struct client *cl, int sig)
{
struct helper_msg hm;
int rv;
@@ -140,6 +140,8 @@ static void send_helper_kill(struct client *cl, int sig)
hm.pid = cl->pid;
}
+ log_erros(sp, "kill %d sig %d count %d", cl->pid, sig, cl->kill_count);
+
retry:
rv = write(helper_kill_fd, &hm, sizeof(hm));
if (rv == -1 && errno == EINTR)
@@ -148,21 +150,21 @@ static void send_helper_kill(struct client *cl, int sig)
/* pipe is full, we'll try again in a second */
if (rv == -1 && errno == EAGAIN) {
helper_full_count++;
- log_debug("send_helper_kill pid %d sig %d full_count %u",
+ log_space(sp, "send_helper_kill pid %d sig %d full_count %u",
cl->pid, sig, helper_full_count);
return;
}
/* helper exited or closed fd, quit using helper */
if (rv == -1 && errno == EPIPE) {
- log_error("send_helper_kill EPIPE");
+ log_erros(sp, "send_helper_kill EPIPE");
close_helper();
return;
}
if (rv != sizeof(hm)) {
/* this shouldn't happen */
- log_error("send_helper_kill pid %d error %d %d",
+ log_erros(sp, "send_helper_kill pid %d error %d %d",
cl->pid, rv, errno);
close_helper();
return;
@@ -445,6 +447,9 @@ void client_pid_dead(int ci)
log_debug("client_pid_dead %d,%d,%d cmd_active %d suspend %d",
ci, cl->fd, cl->pid, cl->cmd_active, cl->suspend);
+ if (cl->kill_count)
+ log_error("dead %d ci %d count %d", cl->pid, ci, cl->kill_count);
+
cmd_active = cl->cmd_active;
pid = cl->pid;
cl->pid = -1;
@@ -608,15 +613,7 @@ static void kill_pids(struct space *sp)
if (!do_kill)
continue;
- if (cl->kill_count == kill_count_max) {
- log_erros(sp, "kill %d,%d,%d sig %d count %d final attempt",
- ci, fd, pid, sig, cl->kill_count);
- } else {
- log_space(sp, "kill %d,%d,%d sig %d count %d",
- ci, fd, pid, sig, cl->kill_count);
- }
-
- send_helper_kill(cl, sig);
+ send_helper_kill(sp, cl, sig);
}
}
@@ -654,7 +651,11 @@ static int all_pids_dead(struct space *sp)
if (stuck || check)
return 0;
- log_space(sp, "used by no pids");
+ if (sp->renew_fail)
+ log_erros(sp, "all pids clear");
+ else
+ log_space(sp, "all pids clear");
+
return 1;
}
@@ -765,6 +766,8 @@ static int main_loop(void)
check_all = 0;
rv = check_our_lease(&main_task, sp, &check_all, check_buf);
+ if (rv)
+ sp->renew_fail = 1;
if (rv || sp->external_remove || (external_shutdown > 1)) {
log_space(sp, "set killing_pids check %d remove %d",
@@ -1578,10 +1581,6 @@ static int do_daemon(void)
if (rv < 0)
goto out_logging;
- rv = setup_watchdog();
- if (rv < 0)
- goto out_threads;
-
rv = setup_listener();
if (rv < 0)
goto out_threads;
@@ -1594,8 +1593,6 @@ static int do_daemon(void)
close_token_manager();
- close_watchdog();
-
out_threads:
thread_pool_free();
out_logging:
diff --git a/src/sanlock_internal.h b/src/sanlock_internal.h
index a69b367..c8301eb 100644
--- a/src/sanlock_internal.h
+++ b/src/sanlock_internal.h
@@ -144,6 +144,7 @@ struct space {
uint64_t host_generation;
struct sync_disk host_id_disk;
int align_size;
+ int renew_fail;
int space_dead;
int killing_pids;
int external_remove;
diff --git a/src/sanlock_rv.h b/src/sanlock_rv.h
index 95234c7..686603a 100644
--- a/src/sanlock_rv.h
+++ b/src/sanlock_rv.h
@@ -14,6 +14,7 @@
#define SANLK_NONE 0 /* unused */
#define SANLK_ERROR -201
#define SANLK_AIO_TIMEOUT -202
+#define SANLK_WD_ERROR -203
/* run_ballot */
diff --git a/src/watchdog.c b/src/watchdog.c
index 3387eb9..296d735 100644
--- a/src/watchdog.c
+++ b/src/watchdog.c
@@ -39,8 +39,6 @@
#include "../wdmd/wdmd.h"
-static int daemon_wdmd_con;
-
void update_watchdog_file(struct space *sp, uint64_t timestamp)
{
int rv;
@@ -50,12 +48,15 @@ void update_watchdog_file(struct space *sp, uint64_t timestamp)
rv = wdmd_test_live(sp->wd_fd, timestamp, timestamp + main_task.id_renewal_fail_seconds);
if (rv < 0)
- log_erros(sp, "wdmd_test_live failed %d", rv);
+ log_erros(sp, "wdmd_test_live %llu failed %d",
+ (unsigned long long)timestamp, rv);
}
int create_watchdog_file(struct space *sp, uint64_t timestamp)
{
char name[WDMD_NAME_SIZE];
+ int test_interval, fire_timeout;
+ uint64_t last_keepalive;
int con, rv;
if (!com.use_watchdog)
@@ -63,30 +64,52 @@ int create_watchdog_file(struct space *sp, uint64_t timestamp)
con = wdmd_connect();
if (con < 0) {
- log_erros(sp, "wdmd connect failed %d", con);
+ log_erros(sp, "wdmd_connect failed %d", con);
goto fail;
}
memset(name, 0, sizeof(name));
- snprintf(name, WDMD_NAME_SIZE - 1, "sanlock_%s_hostid%llu",
+ snprintf(name, WDMD_NAME_SIZE - 1, "sanlock_%s:%llu",
sp->space_name, (unsigned long long)sp->host_id);
rv = wdmd_register(con, name);
if (rv < 0) {
- log_erros(sp, "wdmd register failed %d", rv);
+ log_erros(sp, "wdmd_register failed %d", rv);
goto fail_close;
}
- rv = wdmd_test_live(con, timestamp, timestamp + main_task.id_renewal_fail_seconds);
+ /* the refcount tells wdmd that it should not cleanly exit */
+
+ rv = wdmd_refcount_set(con);
if (rv < 0) {
- log_erros(sp, "wdmd_test_live failed %d", rv);
+ log_erros(sp, "wdmd_refcount_set failed %d", rv);
goto fail_close;
}
+ rv = wdmd_status(con, &test_interval, &fire_timeout, &last_keepalive);
+ if (rv < 0) {
+ log_erros(sp, "wdmd_status failed %d", rv);
+ goto fail_clear;
+ }
+
+ if (fire_timeout != WATCHDOG_FIRE_TIMEOUT) {
+ log_erros(sp, "wdmd invalid fire_timeout %d vs %d",
+ fire_timeout, WATCHDOG_FIRE_TIMEOUT);
+ goto fail_clear;
+ }
+
+ rv = wdmd_test_live(con, timestamp, timestamp + main_task.id_renewal_fail_seconds);
+ if (rv < 0) {
+ log_erros(sp, "wdmd_test_live in create failed %d", rv);
+ goto fail_clear;
+ }
+
sp->wd_fd = con;
return 0;
+ fail_clear:
+ wdmd_refcount_clear(con);
fail_close:
close(con);
fail:
@@ -103,85 +126,27 @@ void unlink_watchdog_file(struct space *sp)
log_space(sp, "wdmd_test_live 0 0 to disable");
rv = wdmd_test_live(sp->wd_fd, 0, 0);
- if (rv < 0)
- log_erros(sp, "wdmd_test_live failed %d", rv);
-}
+ if (rv < 0) {
+ log_erros(sp, "wdmd_test_live in unlink failed %d", rv);
-void close_watchdog_file(struct space *sp)
-{
- if (!com.use_watchdog)
- return;
+ /* We really want this to succeed to avoid a reset, so retry
+ after a short delay in case the problem was transient... */
- close(sp->wd_fd);
-}
+ usleep(500000);
-void close_watchdog(void)
-{
- if (!com.use_watchdog)
- return;
+ rv = wdmd_test_live(sp->wd_fd, 0, 0);
+ if (rv < 0)
+ log_erros(sp, "wdmd_test_live in unlink 2 failed %d", rv);
+ }
- wdmd_refcount_clear(daemon_wdmd_con);
- close(daemon_wdmd_con);
+ wdmd_refcount_clear(sp->wd_fd);
}
-/* TODO: add wdmd connection as client so poll detects if it fails? */
-
-int setup_watchdog(void)
+void close_watchdog_file(struct space *sp)
{
- char name[WDMD_NAME_SIZE];
- int test_interval, fire_timeout;
- uint64_t last_keepalive;
- int con, rv;
-
if (!com.use_watchdog)
- return 0;
-
- memset(name, 0, sizeof(name));
-
- snprintf(name, WDMD_NAME_SIZE - 1, "%s", "sanlock_daemon");
-
- con = wdmd_connect();
- if (con < 0) {
- log_error("wdmd connect failed for watchdog handling");
- goto fail;
- }
-
- rv = wdmd_register(con, name);
- if (rv < 0) {
- log_error("wdmd register failed");
- goto fail_close;
- }
-
- rv = wdmd_refcount_set(con);
- if (rv < 0) {
- log_error("wdmd refcount failed");
- goto fail_close;
- }
-
- rv = wdmd_status(con, &test_interval, &fire_timeout, &last_keepalive);
- if (rv < 0) {
- log_error("wdmd status failed");
- goto fail_clear;
- }
-
- log_debug("wdmd test_interval %d fire_timeout %d last_keepalive %llu",
- test_interval, fire_timeout,
- (unsigned long long)last_keepalive);
-
- if (fire_timeout != WATCHDOG_FIRE_TIMEOUT) {
- log_error("invalid watchdog fire_timeout %d vs %d",
- fire_timeout, WATCHDOG_FIRE_TIMEOUT);
- goto fail_clear;
- }
-
- daemon_wdmd_con = con;
- return 0;
+ return;
- fail_clear:
- wdmd_refcount_clear(con);
- fail_close:
- close(con);
- fail:
- return -1;
+ close(sp->wd_fd);
}
diff --git a/src/watchdog.h b/src/watchdog.h
index 06d3c67..ec3c853 100644
--- a/src/watchdog.h
+++ b/src/watchdog.h
@@ -14,7 +14,4 @@ int create_watchdog_file(struct space *sp, uint64_t timestamp);
void unlink_watchdog_file(struct space *sp);
void close_watchdog_file(struct space *sp);
-int setup_watchdog(void);
-void close_watchdog(void);
-
#endif
diff --git a/wdmd/main.c b/wdmd/main.c
index 17687be..5ed2cd6 100644
--- a/wdmd/main.c
+++ b/wdmd/main.c
@@ -182,6 +182,9 @@ static void client_pid_dead(int ci)
close(client[ci].fd);
+ /* refcount automatically dropped if a client with
+ no expiration is closed */
+
client[ci].used = 0;
memset(&client[ci], 0, sizeof(struct client));
@@ -189,16 +192,32 @@ static void client_pid_dead(int ci)
pollfd[ci].fd = -1;
pollfd[ci].events = 0;
} else {
- /* test_clients() needs to continue watching this ci so
- it can expire */
-
- log_debug("client_pid_dead ci %d expire %llu", ci,
- (unsigned long long)client[ci].expire);
+ /*
+ * Leave used and expire set so that test_clients will continue
+ * monitoring this client and expire if necessary.
+ *
+ * Leave refcount set so that the daemon will not cleanly shut
+ * down if it gets a sigterm.
+ *
+ * This case of a client con with an expire time being closed
+ * is a fatal condition; there's no way to clear or extend the
+ * expire time and no way to cleanly shut down the daemon.
+ * This should never happen.
+ *
+ * (We don't enforce that a client with an expire time also has refcount
+ * set, but I can't think of a case where setting expire but not refcount
+ * would be useful.)
+ */
+
+ log_error("client dead ci %d fd %d pid %d renewal %llu expire %llu %s",
+ ci, client[ci].fd, client[ci].pid,
+ (unsigned long long)client[ci].renewal,
+ (unsigned long long)client[ci].expire,
+ client[ci].name);
close(client[ci].fd);
client[ci].pid_dead = 1;
- client[ci].refcount = 0;
client[ci].fd = -1;
pollfd[ci].fd = -1;
@@ -380,12 +399,13 @@ static int test_clients(void)
continue;
if (t >= client[i].expire) {
- log_error("test failed pid %d now %llu keepalive %llu renewal %llu expire %llu",
- client[i].pid,
+ log_error("test failed ci %d pid %d now %llu keepalive %llu renewal %llu expire %llu %s",
+ i, client[i].pid,
(unsigned long long)t,
(unsigned long long)last_keepalive,
(unsigned long long)client[i].renewal,
- (unsigned long long)client[i].expire);
+ (unsigned long long)client[i].expire,
+ client[i].name);
fail_count++;
}
}
Changes to 'test-timeout'
by David Teigland
New branch 'test-timeout' available with the following commits:
commit 633414afcc7fbdf748a543d5e3e700106f1d790a
Author: Federico Simoncelli <fsimonce@redhat.com>
Date: Fri Aug 3 12:55:17 2012 -0400
python: support sanlock_add_lockspace_timeout
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
commit cfdd3f9b41354b9952e9582c4dff9206e4918a00
Author: David Teigland <teigland@redhat.com>
Date: Mon Jul 23 08:58:57 2012 -0500
sanlock: adjustable io timeouts
New sanlock_add_lockspace_timeout() api to allow
the timeout to be specified per lockspace.
Also correctly handle nodes that may be using
different timeouts in the same lockspace.
Signed-off-by: David Teigland <teigland@redhat.com>
Changes to 'test'
by David Teigland
New branch 'test' available with the following commits:
commit 2eab3d08b7a5aee49b0fad2234c8ddbd14c47d0e
Author: David Teigland <teigland@redhat.com>
Date: Mon Aug 6 17:18:56 2012 -0500
wdmd: preemptive close before test fails
Instead of closing the device when a test fails, close
it TEST_INTERVAL (10 sec) before the test fails. This
is done so that the watchdog will fire at most 60 sec
after the expire time (between 50 and 60 seconds instead
of between 60 and 70 seconds, which would be the case
if we close at the expiration time; see previous commit).
The timeouts in sanlock have been based on the assumption
that the watchdog device fires at most 60 seconds after
the expiration time, so it's best to maintain that
expectation.
The pre-emptive close and re-open generate pings, so
they are used in place of ordinary pings.
If the expire time is at T45, and is renewed/extended
at T46, then the sequence of pings would be:
T10 - ping from ioctl
T20 - ping from ioctl
T30 - ping from ioctl
T40 - ping from close
T50 - ping from re-open
T60 - ping from ioctl
...
If the expire time was *not* renewed, then the watchdog
would fire at T100, which is 55 seconds after the
expiration time. 55 is less than the 60 second limit
we want.
Signed-off-by: David Teigland <teigland@redhat.com>
commit 15ca80d82e619de84a3b365bd6400380a51bc0a3
Author: David Teigland <teigland@redhat.com>
Date: Mon Aug 6 15:38:39 2012 -0500
wdmd: close device when test fails
Instead of just not petting the device after a test fails,
close the device. Because the close generates a ping, we
want to get it done early, otherwise if wdmd exited (e.g.
crash or sigkill) just before the device was ready to fire,
the close generated by the kernel extends the life of the
machine by an extra 60 sec. This means we need to re-open
the device if we want to resume petting it.
So, depending on whether the tests happen just prior
to the expiry or just after the expiry, the watchdog
will fire between 60 and 70 seconds after the expiry
time.
It would be 70 seconds if:
we do the check just before the expiration, the client
expires, and 10 seconds (TEST_INTERVAL) later we see the
expiration, close the device, which generates a ping,
which causes the firing to be 60 seconds after the close,
which is already 10 seconds after the expiration.
It would be 60 seconds if:
we do the check just after the expiration, we see
the expiration, close the device, which generates a
ping, which causes the firing to be 60 seconds after
the close, which is just after the expiration
time.
Previously, the assumption was that the host would
be reset between 50 and 60 seconds from the expiration
time, but this did not account for the fact that
the daemon could exit just before the host reset,
which would lead the kernel to generate a new ping.
If we can patch the kernel so that a device close
does not generate a ping, then we do not need to
close the device when a test fails, but we can
simply not pet the device, as we've been doing.
Signed-off-by: David Teigland <teigland@redhat.com>
commit c92595469f65cffd8807c32abcb2e1af1733e462
Author: David Teigland <teigland@redhat.com>
Date: Tue Jul 31 13:41:28 2012 -0500
daemon: extend grace time
Increase the default grace time for a killpath instance
from 30 to 40 seconds based on a corrected analysis of
the recovery sequence. The period during which the
watchdog may fire is determined by the wdmd check
interval (10 seconds), not the sanlock renewal interval.
Signed-off-by: David Teigland <teigland@redhat.com>
commit e7eb4d0fd34fc643213452cb76f5c64fb86063bb
Author: David Teigland <teigland@redhat.com>
Date: Tue Aug 7 15:57:22 2012 -0500
sanlock/wdmd: remove global connection
As long as the sanlock daemon was running, it kept a
constant connection to wdmd, even when no lockspaces
existed. This prevented sanlock and wdmd from being
restarted independently, even when they were unused.
Independent restarting is necessary for upgrades, so
remove the global connection from sanlock to wdmd and
leave only the per-lockspace connections. The lockspace
connections now need to hold a refcount on wdmd which
prevents wdmd restarts.
Also in wdmd, if a client connection is closed, the
refcount must not be cleared on it, otherwise wdmd
could possibly be cleanly shutdown from sigterm while
an expired connection was awaiting a watchdog reset.
Also log at the error level when we kill a pid for
recovery, when that pid exits, and when all pids
are clear for recovery. This makes it much simpler
to see exactly what led up to a watchdog reset
after the fact.
Signed-off-by: David Teigland <teigland@redhat.com>
[PATCH 1/2] init: use checkpid when stopping the services
by Federico Simoncelli
When the pid file wasn't removed (e.g. after a forced reboot), the
services were printing confusing warnings during restart/condrestart.
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
---
init.d/sanlock | 15 ++++++++-------
init.d/wdmd | 49 ++++++++++++++++++++++++++++++++++++-------------
2 files changed, 44 insertions(+), 20 deletions(-)
diff --git a/init.d/sanlock b/init.d/sanlock
index bd8dccb..83b35e8 100644
--- a/init.d/sanlock
+++ b/init.d/sanlock
@@ -48,7 +48,9 @@ start() {
}
stop() {
- echo -n $"Sending stop signal $prog: "
+ PID=$(pidofproc -p $runfile $prog)
+
+ echo -n $"Sending stop signal $prog ($PID): "
killproc -p $runfile $prog -TERM
retval=$?
echo
@@ -57,9 +59,10 @@ stop() {
return $retval
fi
- echo -n $"Waiting for $prog to stop:"
+ echo -n $"Waiting for $prog ($PID) to stop:"
+
timeout=10
- while [ -e $runfile ]; do
+ while checkpid $PID; do
sleep 1
timeout=$((timeout - 1))
if [ "$timeout" -le 0 ]; then
@@ -74,9 +77,8 @@ stop() {
}
restart() {
- stop && start
- retval=$?
- return $retval
+ rh_status_q && stop
+ start
}
reload() {
@@ -122,4 +124,3 @@ case "$1" in
exit 2
esac
exit $?
-
diff --git a/init.d/wdmd b/init.d/wdmd
index 19fc3ae..af45561 100644
--- a/init.d/wdmd
+++ b/init.d/wdmd
@@ -1,4 +1,4 @@
-#!/bin/sh
+#!/bin/bash
#
# wdmd - watchdog multiplexing daemon
#
@@ -31,20 +31,24 @@ WDMDOPTS="-G $WDMDGROUP"
[ -f /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
-start() {
- [ -x $exec ] || exit 5
-
- if [ ! -d /var/run/$prog ]; then
- mkdir -p /var/run/$prog
- [ -x /sbin/restorecon ] && restorecon /var/run/$prog
- fi
-
+watchdog_check() {
if [ ! -c /dev/watchdog ]; then
echo -n $"Loading the softdog kernel module: "
modprobe softdog && udevadm settle
[ -c /dev/watchdog ] && success || failure
echo
fi
+}
+
+start() {
+ watchdog_check
+
+ [ -x $exec ] || exit 5
+
+ if [ ! -d /var/run/$prog ]; then
+ install -d -g $WDMDGROUP -m 775 /var/run/$prog
+ [ -x /sbin/restorecon ] && restorecon /var/run/$prog
+ fi
echo -n $"Starting $prog: "
daemon $prog $WDMDOPTS
@@ -55,16 +59,36 @@ start() {
}
stop() {
- echo -n $"Stopping $prog: "
+ PID=$(pidofproc -p $runfile $prog)
+
+ echo -n $"Sending stop signal $prog ($PID): "
killproc -p $runfile $prog -TERM
retval=$?
echo
- [ $retval -eq 0 ] && rm -f $lockfile
+
+ if [ $retval -ne 0 ]; then
+ return $retval
+ fi
+
+ echo -n $"Waiting for $prog ($PID) to stop:"
+
+ timeout=10
+ while checkpid $PID; do
+ sleep 1
+ timeout=$((timeout - 1))
+ if [ "$timeout" -le 0 ]; then
+ failure; echo
+ return 1
+ fi
+ done
+
+ success; echo
+ rm -f $lockfile
return $retval
}
restart() {
- stop
+ rh_status_q && stop
start
}
@@ -111,4 +135,3 @@ case "$1" in
exit 2
esac
exit $?
-
--
1.7.1
[PATCHv2] python: support sanlock_add_lockspace_timeout
by Federico Simoncelli
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
---
python/sanlock.c | 16 ++++++++++------
1 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/python/sanlock.c b/python/sanlock.c
index bef236e..e32fab3 100644
--- a/python/sanlock.c
+++ b/python/sanlock.c
@@ -267,26 +267,30 @@ exit_fail:
/* add_lockspace */
PyDoc_STRVAR(pydoc_add_lockspace, "\
-add_lockspace(lockspace, host_id, path, offset=0, async=False)\n\
+add_lockspace(lockspace, host_id, path, offset=0, iotimeout=0, async=False)\n\
Add a lockspace, acquiring a host_id in it. If async is True the function\n\
-will return immediatly and the status can be checked using inq_lockspace.");
+will return immediately and the status can be checked using inq_lockspace.\n\
+The iotimeout option configures the io timeout for the specific lockspace,\n\
+overriding the default value (see the sanlock daemon parameter -o).");
static PyObject *
py_add_lockspace(PyObject *self __unused, PyObject *args, PyObject *keywds)
{
int rv, async = 0, flags = 0;
+ uint32_t iotimeout = 0;
const char *lockspace, *path;
struct sanlk_lockspace ls;
static char *kwlist[] = {"lockspace", "host_id", "path", "offset",
- "async", NULL};
+ "iotimeout", "async", NULL};
/* initialize lockspace structure */
memset(&ls, 0, sizeof(struct sanlk_lockspace));
/* parse python tuple */
- if (!PyArg_ParseTupleAndKeywords(args, keywds, "sks|ki", kwlist,
- &lockspace, &ls.host_id, &path, &ls.host_id_disk.offset, &async)) {
+ if (!PyArg_ParseTupleAndKeywords(args, keywds, "sks|kIi", kwlist,
+ &lockspace, &ls.host_id, &path, &ls.host_id_disk.offset, &iotimeout,
+ &async)) {
return NULL;
}
@@ -301,7 +305,7 @@ py_add_lockspace(PyObject *self __unused, PyObject *args, PyObject *keywds)
/* add sanlock lockspace (gil disabled) */
Py_BEGIN_ALLOW_THREADS
- rv = sanlock_add_lockspace(&ls, flags);
+ rv = sanlock_add_lockspace_timeout(&ls, flags, iotimeout);
Py_END_ALLOW_THREADS
if (rv != 0) {
--
1.7.1