Hi David/All,
I have a CentOS 6 KVM host that is using sanlock and wdmd with a shared NFS mount that contains both the VM disks and the lockspace. I experienced an NFS outage that, of course, resulted in sanlock entering recovery and terminating all of the VM PIDs. I would have expected sanlock to disarm the watchdog at that point; however, that did not happen, and the watchdog eventually reset the host.
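For context, this is the standard libvirt sanlock lock-manager setup (the __LIBVIRT__DISKS__ lockspace in the logs below is the one libvirt creates when auto disk leases are enabled). The relevant configuration is roughly as follows; the lease directory is taken from the log path below, and the host_id value is only a placeholder:

    # /etc/libvirt/qemu.conf
    lock_manager = "sanlock"

    # /etc/libvirt/qemu-sanlock.conf
    auto_disk_leases = 1
    disk_lease_dir = "/vmstore02/sanlock"
    host_id = 1    # placeholder; each host needs its own unique id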
I am having trouble making sense of the logs and determining whether the watchdog was not disarmed because the lockspace could not be renewed and sanlock would not terminate itself, or because a VM PID was still hanging around and would not exit. Below is a snippet of the log messages. Any clarity on what the ultimate cause of the watchdog firing was would be much appreciated!
Thank you!
<NFS server died>
Aug 5 02:41:14 vmhost sanlock[3111]: 1964804 __LIBVIR aio timeout 0x7f43300008c0:0x7f43300008d0:0x7f434296a000 sec 10 to_count 6
Aug 5 02:41:14 vmhost sanlock[3111]: 1964804 s1 delta_renew read rv -202 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:41:14 vmhost sanlock[3111]: 1964804 s1 renewal error -202 delta_length 10 last_success 1964773
Aug 5 02:41:25 vmhost sanlock[3111]: 1964815 __LIBVIR aio timeout 0x7f4330000910:0x7f4330000920:0x7f4342767000 sec 10 to_count 7
Aug 5 02:41:25 vmhost sanlock[3111]: 1964815 s1 delta_renew read rv -202 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:41:25 vmhost sanlock[3111]: 1964815 s1 renewal error -202 delta_length 11 last_success 1964773
Aug 5 02:41:36 vmhost sanlock[3111]: 1964826 __LIBVIR aio timeout 0x7f4330000960:0x7f4330000970:0x7f4342665000 sec 10 to_count 8
Aug 5 02:41:36 vmhost sanlock[3111]: 1964826 s1 delta_renew read rv -202 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:41:36 vmhost sanlock[3111]: 1964826 s1 renewal error -202 delta_length 11 last_success 1964773
Aug 5 02:41:43 vmhost sanlock[3111]: 1964833 s1 check_our_lease warning 60 last_success 1964773
Aug 5 02:41:44 vmhost sanlock[3111]: 1964834 s1 check_our_lease warning 61 last_success 1964773
Aug 5 02:41:45 vmhost sanlock[3111]: 1964835 s1 check_our_lease warning 62 last_success 1964773
Aug 5 02:41:46 vmhost sanlock[3111]: 1964836 s1 check_our_lease warning 63 last_success 1964773
Aug 5 02:41:47 vmhost sanlock[3111]: 1964837 __LIBVIR aio timeout 0x7f43300009b0:0x7f43300009c0:0x7f4342563000 sec 10 to_count 9
Aug 5 02:41:47 vmhost sanlock[3111]: 1964837 s1 delta_renew read rv -202 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:41:47 vmhost sanlock[3111]: 1964837 s1 renewal error -202 delta_length 11 last_success 1964773
Aug 5 02:41:47 vmhost sanlock[3111]: 1964837 s1 check_our_lease warning 64 last_success 1964773
Aug 5 02:41:48 vmhost sanlock[3111]: 1964838 s1 check_our_lease warning 65 last_success 1964773
Aug 5 02:41:49 vmhost sanlock[3111]: 1964839 s1 check_our_lease warning 66 last_success 1964773
Aug 5 02:41:50 vmhost sanlock[3111]: 1964840 s1 check_our_lease warning 67 last_success 1964773
Aug 5 02:41:51 vmhost sanlock[3111]: 1964841 s1 check_our_lease warning 68 last_success 1964773
Aug 5 02:41:52 vmhost sanlock[3111]: 1964842 s1 check_our_lease warning 69 last_success 1964773
Aug 5 02:41:53 vmhost sanlock[3111]: 1964843 s1 check_our_lease warning 70 last_success 1964773
Aug 5 02:41:54 vmhost sanlock[3111]: 1964844 s1 check_our_lease warning 71 last_success 1964773
Aug 5 02:41:55 vmhost sanlock[3111]: 1964845 s1 check_our_lease warning 72 last_success 1964773
Aug 5 02:41:56 vmhost sanlock[3111]: 1964846 s1 check_our_lease warning 73 last_success 1964773
Aug 5 02:41:57 vmhost sanlock[3111]: 1964847 s1 check_our_lease warning 74 last_success 1964773
Aug 5 02:41:58 vmhost sanlock[3111]: 1964848 s1 delta_renew read rv -2 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:41:58 vmhost sanlock[3111]: 1964848 s1 renewal error -2 delta_length 11 last_success 1964773
Aug 5 02:41:58 vmhost sanlock[3111]: 1964848 s1 check_our_lease warning 75 last_success 1964773
Aug 5 02:41:59 vmhost sanlock[3111]: 1964849 s1 check_our_lease warning 76 last_success 1964773
Aug 5 02:42:00 vmhost sanlock[3111]: 1964850 s1 check_our_lease warning 77 last_success 1964773
Aug 5 02:42:01 vmhost sanlock[3111]: 1964851 s1 check_our_lease warning 78 last_success 1964773
Aug 5 02:42:02 vmhost sanlock[3111]: 1964852 s1 check_our_lease warning 79 last_success 1964773
Aug 5 02:42:03 vmhost sanlock[3111]: 1964853 s1 check_our_lease failed 80
<VMs are being terminated here>
Aug 5 02:42:04 vmhost kernel: br0: port 5(vnet4) entering disabled state
Aug 5 02:42:04 vmhost kernel: device vnet4 left promiscuous mode
Aug 5 02:42:04 vmhost kernel: br0: port 5(vnet4) entering disabled state
Aug 5 02:42:04 vmhost kernel: br0: port 3(vnet1) entering disabled state
Aug 5 02:42:04 vmhost kernel: device vnet1 left promiscuous mode
Aug 5 02:42:04 vmhost kernel: br0: port 3(vnet1) entering disabled state
Aug 5 02:42:04 vmhost kernel: br0: port 4(vnet3) entering disabled state
Aug 5 02:42:04 vmhost kernel: device vnet3 left promiscuous mode
Aug 5 02:42:04 vmhost kernel: br0: port 4(vnet3) entering disabled state
Aug 5 02:42:04 vmhost kernel: br1: port 2(vnet2) entering disabled state
Aug 5 02:42:04 vmhost kernel: device vnet2 left promiscuous mode
Aug 5 02:42:04 vmhost kernel: br1: port 2(vnet2) entering disabled state
Aug 5 02:42:06 vmhost ntpd[2340]: Deleting interface #11 vnet1, fe80::fc52:ff:fe5d:4afd#123, interface stats: received=0, sent=0, dropped=0, active_time=1964539 secs
Aug 5 02:42:06 vmhost ntpd[2340]: Deleting interface #14 vnet4, fe80::fc54:ff:fe60:244e#123, interface stats: received=0, sent=0, dropped=0, active_time=1964539 secs
Aug 5 02:42:06 vmhost ntpd[2340]: Deleting interface #15 vnet3, fe80::fc52:ff:fe38:2713#123, interface stats: received=0, sent=0, dropped=0, active_time=1964539 secs
Aug 5 02:42:06 vmhost ntpd[2340]: Deleting interface #16 vnet2, fe80::fc54:ff:fee8:c0fb#123, interface stats: received=0, sent=0, dropped=0, active_time=1964539 secs
Aug 5 02:42:08 vmhost wdmd[3001]: test failed pid 3111 renewal 1964773 expire 1964853
Aug 5 02:42:08 vmhost sanlock[3111]: 1964858 s1 delta_renew read rv -2 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:42:08 vmhost sanlock[3111]: 1964858 s1 renewal error -2 delta_length 10 last_success 1964773
Aug 5 02:42:18 vmhost wdmd[3001]: test failed pid 3111 renewal 1964773 expire 1964853
Aug 5 02:42:19 vmhost sanlock[3111]: 1964869 s1 delta_renew read rv -2 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:42:19 vmhost sanlock[3111]: 1964869 s1 renewal error -2 delta_length 10 last_success 1964773
Aug 5 02:42:28 vmhost wdmd[3001]: test failed pid 3111 renewal 1964773 expire 1964853
Aug 5 02:42:29 vmhost sanlock[3111]: 1964879 s1 delta_renew read rv -2 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:42:29 vmhost sanlock[3111]: 1964879 s1 renewal error -2 delta_length 10 last_success 1964773
Aug 5 02:42:38 vmhost wdmd[3001]: test failed pid 3111 renewal 1964773 expire 1964853
Aug 5 02:42:40 vmhost sanlock[3111]: 1964890 s1 delta_renew read rv -2 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:42:40 vmhost sanlock[3111]: 1964890 s1 renewal error -2 delta_length 10 last_success 1964773
Aug 5 02:42:48 vmhost wdmd[3001]: test failed pid 3111 renewal 1964773 expire 1964853
Aug 5 02:42:50 vmhost sanlock[3111]: 1964900 s1 delta_renew read rv -2 offset 0 /vmstore02/sanlock/__LIBVIRT__DISKS__
Aug 5 02:42:50 vmhost sanlock[3111]: 1964900 s1 renewal error -2 delta_length 10 last_success 1964773
<host is reset>
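(For reference, the timing above works out as follows, assuming sanlock's defaults of a 10-second io timeout and an 80-second renewal-failure window: the last successful renewal was at internal timestamp 1964773; check_our_lease starts warning at +60 seconds and fails at +80, i.e. 1964853, which is the "expire" value wdmd reports; the VMs are torn down at 02:42:03-02:42:04; and wdmd starts logging "test failed" at 02:42:08. The last messages before the reset are at 02:42:50, which would be consistent with the watchdog device firing roughly 60 seconds after wdmd stopped petting it, if the usual 60-second watchdog timer is in use.)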
On Tue, Aug 05, 2014 at 10:59:35AM -0500, Russell Jones wrote:
I am having trouble making sense of the logs and determining whether the watchdog was not disarmed because the lockspace could not be renewed and sanlock would not terminate itself, or because a VM PID was still hanging around and would not exit. Below is a snippet of the log messages. Any clarity on what the ultimate cause of the watchdog firing was would be much appreciated!
The logs here don't say if there were any pids that hadn't exited, but that's the expected cause. /var/log/sanlock.log might have some more information to confirm that.
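If it happens again while the host is still up, the live daemon state can also be captured with the sanlock client tool before the reset, for example (generic commands, nothing specific to this incident):

    sanlock client status
    sanlock client log_dump

The status output lists the pids still registered against the lockspace, which is the piece that matters here.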
Thanks!
Are there any features/code within Sanlock that would cause it to stop petting the watchdog if it can't renew/reach a lockspace?
On 8/5/2014 11:18 AM, David Teigland wrote:
On Tue, Aug 05, 2014 at 10:59:35AM -0500, Russell Jones wrote:
I am having trouble making sense of the logs and determining whether the watchdog was not disarmed because the lockspace could not be renewed and sanlock would not terminate itself, or because a VM PID was still hanging around and would not exit. Below is a snippet of the log messages. Any clarity on what the ultimate cause of the watchdog firing was would be much appreciated!
The logs here don't say if there were any pids that hadn't exited, but that's the expected cause. /var/log/sanlock.log might have some more information to confirm that.
On Tue, Aug 05, 2014 at 11:20:52AM -0500, Russell Jones wrote:
Are there any features/code within Sanlock that would cause it to stop petting the watchdog if it can't renew/reach a lockspace?
sanlock tries its best to ensure that the watchdog will trigger if lockspace access is lost, so long as processes are running that are using it. Once all pids have exited (or been suspended if that is configured), sanlock tries its best to prevent the watchdog from firing.
There are about 50 seconds for all the pids to exit (or suspend themselves and release their leases). There are a couple of simple explanations for why one or more pids may not be able to do this within 50 seconds:
- If the pids are configured to suspend themselves or shut down cleanly, this can take more than the allowed time. Without this "graceful" shutdown period, sanlock would immediately use SIGTERM/SIGKILL on them, which is more likely to complete in time.
- If the pids were using the lost storage, they can get stuck doing i/o. This could either block a clean shutdown, or make them unkillable if the i/o path is stuck in an uninterruptible sleep (see the example just below).
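On that second point, a quick way to check for processes stuck in uninterruptible sleep on the dead mount is something like this (generic Linux, nothing sanlock-specific):

    # list processes whose state includes D (uninterruptible sleep),
    # along with the kernel function they are blocked in
    ps -eo pid,stat,wchan:30,comm | awk '$2 ~ /D/'

A qemu-kvm process sitting there in D state will not die even on SIGKILL until the i/o completes or the NFS mount comes back, so from sanlock's point of view the pid never exits and the watchdog stays armed.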