Hi all/David,
Apologies if this is not the correct list - didn't see a user-specific one.
I'm a bit confused about what storage options I have for the lockspace directory with my particular setup. I have a 2 node RHCS cluster with shared iscsi-backed GFS2 storage for the VM disk images - fairly standard stuff. I want to add Sanlock into the mix, but do not want to introduce a dependency on NFS for the lockspace directory. It is also my understanding from reading various mailing list posts that it is not considered good practice to put the lockspace directory on the GFS2 storage, due to blocked I/O on the GFS2 volume while fencing is taking place.
I am not sure what my other options are for where to host the lockspace directory. The man page seems to show creating and initializing storage pools from a SAN, but it's not clear exactly what is being configured there, and how that configuration, and its performance, would differ from using an NFS directory.
Any help and direction on what my options are for a 2 node setup would be greatly appreciated!
We do something similar in a 2 node DRBD/OCFS2 setup in which the vm images and the sanlock lockspace reside in the OCFS2 volume. We did thorough testing and didn't run into any major problems with fencing. All split-brain situations are of course problematic. The setup has now been running for over a year. AFAIK OCFS2 handles fencing in a different way than GFS2 does, so I don't know if this really helps you.
You could take a look at virtlockd, which was included in libvirt 1.0.1. It is perhaps sufficient for your needs and doesn't need a lockspace. See: http://www.redhat.com/archives/libvir-list/2011-July/msg00337.html
Cheers David
On 4/23/2013 2:39 AM, David Weber wrote:
We do something similar in a 2 node DRBD/OCFS2 setup in which the vm images and the sanlock lockspace reside in the OCFS2 volume. We did thorough testing and didn't run into any major problems with fencing. All split-brain situations are of course problematic. The setup has now been running for over a year. AFAIK OCFS2 handles fencing in a different way than GFS2 does, so I don't know if this really helps you.
You could take a look at virtlockd, which was included in libvirt 1.0.1. It is perhaps sufficient for your needs and doesn't need a lockspace. See: http://www.redhat.com/archives/libvir-list/2011-July/msg00337.html
Cheers David
Thanks! I have seen issues with sanlock running on top of GFS2 (like I mentioned earlier), and have seen emails from devs recommending against hosting the lockspace dir on the GFS2 volume. So, I'm just trying to Do The Right Thing ©
Virtlockd looks interesting! Looks like it does host a lockspace under /var/lib/virtlock, and it mentions you can just move this to your shared storage to gain protection across multiple machines. I wonder how well it handles a node being fenced and the locks left behind, and whether it handles blocked I/O better than Sanlock.
Either way, CentOS is only up to libvirt 0.10.2, so that option unfortunately isn't available for our usage.
On Tue, Apr 23, 2013 at 09:16:39AM -0500, Russell Jones wrote:
On 4/23/2013 2:39 AM, David Weber wrote:
We do something similar in a 2 node DRBD/OCFS2 setup in which the vm images and the sanlock lockspace reside in the OCFS2 volume. We did thorough testing and didn't run into any major problems with fencing. All split-brain situations are of course problematic. The setup has now been running for over a year. AFAIK OCFS2 handles fencing in a different way than GFS2 does, so I don't know if this really helps you.
You could take a look at virtlockd, which was included in libvirt 1.0.1. It is perhaps sufficient for your needs and doesn't need a lockspace. See: http://www.redhat.com/archives/libvir-list/2011-July/msg00337.html
Cheers David
Thanks! I have seen issues with sanlock running on top of GFS2 (like I mentioned earlier), and have seen emails from devs recommending against hosting the lockspace dir on the GFS2 volume. So, I'm just trying to Do The Right Thing ©
Using sanlock on top of gfs2 or ocfs2 files is a very "sub optimal" configuration, and I don't expect it would work very well. The fs is doing precise locking under the files that sanlock is using to do "rough" locking. What you really want is to use the precise (dlm) locks that are already there. There are a couple of ways to do that...
Virtlockd looks interesting! Looks like it does host a lockspace under /var/lib/virtlock, and it mentions you can just move this to your shared storage to gain protection across multiple machines. I wonder how well it handles a node being fenced and the locks left behind, and whether it handles blocked I/O better than Sanlock.
The key feature of virtlockd is that it will use file locks on the shared file system. This is a very direct way of using the precise dlm locks. Edit qemu.conf and specify lockd instead of sanlock, and you can set the shared fs location in qemu-lockd.conf.
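(For illustration only, a minimal sketch of what that might look like, assuming the lockd plugin is available in your libvirt build; the option names below are my understanding of the lockd driver and the directory path is just a placeholder on your shared fs, so double-check against your version's qemu.conf and qemu-lockd.conf:

# /etc/libvirt/qemu.conf
lock_manager = "lockd"

# /etc/libvirt/qemu-lockd.conf
auto_disk_leases = 1                                # derive leases automatically from the disk paths
file_lockspace_dir = "/path/on/shared/fs/lockd"     # placeholder: a directory on the shared file system
)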
A second way is to write a dlm plugin for virtlockd that uses libdlm directly.
Finally, if you really do need to use sanlock, I'd create some small lvs from the shared storage for sanlock to use directly, rather than trying to run it over the fs. I'm just not sure how to configure libvirt to do this (but I know it's possible because RHEV/ovirt uses sanlock and libvirt in this way.)
Dave
On 4/23/2013 10:13 AM, David Teigland wrote:
Finally, if you really do need to use sanlock, I'd create some small lvs from the shared storage for sanlock to use directly, rather than trying to run it over the fs. I'm just not sure how to configure libvirt to do this (but I know it's possible because RHEV/ovirt uses sanlock and libvirt in this way.)
Dave
Thanks! I figured this seemed like my only option, and we have the means of doing it. I just don't know how to configure it properly within Sanlock and cannot find documentation on doing this.
This man page (https://fedorahosted.org/sanlock/) appears to show utilizing LVs from shared storage, but it's not very clear why you need two LUNs (or if you even need two for a 2 node setup), why it has to use the "direct init" command, whether you now have to add leases manually instead of using the auto lease feature, etc. One of the things I liked about using Sanlock on NFS is the auto lease creation feature.
Anywhere you can think of to point me towards more information on going down this route with a 2 node setup?
On Tue, Apr 23, 2013 at 10:56:58AM -0500, Russell Jones wrote:
Thanks! I figured this seemed like my only option, and we have the means of doing it. I just don't know how to configure it properly within Sanlock and cannot find documentation on doing this.
This man page (https://fedorahosted.org/sanlock/) appears to show utilizing LVs from shared storage, but it's not very clear why you need two LUNs (or if you even need two for a 2 node setup),
I'm not sure why I used two different vgs/lockspaces in the example. If you have only one shared vg, you'd use only one lockspace. The number of hosts doesn't matter.
why it has to use the "direct init" command,
"direct init" and "client init" do the same thing. The client init uses the sanlock daemon to do the init, the former doesn't require the daemon to be running.
whether you now have to add leases manually instead of using the auto lease feature, etc. One of the things I liked about using Sanlock on NFS is the auto lease creation feature.
Yes, the libvirt auto leases and RHEV/ovirt do all of the sanlock setup for you, so you'll need to replace that automation with manual steps:
- create lease lvs and initialize them for sanlock
- configure the lease lvs in the libvirt config
- start the wdmd and sanlock services
- run the sanlock add_lockspace command
Anywhere you can think of to point me towards more information on going down this route with a 2 node setup?
The libvirt syntax is on this page under "Device leases": http://libvirt.org/formatdomain.html#elementsEvents
Here is an example similar to what the sanlock man page has. I'll show all leases at different offsets on a single 1GB lv (instead of using one lv per lease, which is also possible). (Sorry, I haven't actually tried this myself.)
shared storage for vms and leases: /dev/sdb1
shared vg for vms and leases: pool1
shared lv for all leases: /dev/pool1/leases
lockspace name: LS1
three vms: A, B, C
lease names: leaseA, leaseB, leaseC
vgcreate pool1 /dev/sdb1
lvcreate -n leases -L 1GB pool1
sanlock direct init -s LS1:0:/dev/pool1/leases:0
sanlock direct init -r LS1:leaseA:/dev/pool1/leases:1048576
sanlock direct init -r LS1:leaseB:/dev/pool1/leases:2097152
sanlock direct init -r LS1:leaseC:/dev/pool1/leases:3145728
The libvirt syntax for vm A:
<lease>
  <lockspace>LS1</lockspace>
  <key>leaseA</key>
  <target path='/dev/pool1/leases' offset='1048576'/>
</lease>
The libvirt syntax for vm B:
<lease>
  <lockspace>LS1</lockspace>
  <key>leaseB</key>
  <target path='/dev/pool1/leases' offset='2097152'/>
</lease>
Running this would be roughly:
all hosts: service wdmd start
all hosts: service sanlock start
all hosts: service libvirtd start
host 1: sanlock add_lockspace -s LS1:1:/dev/pool1/leases:0
host 2: sanlock add_lockspace -s LS1:2:/dev/pool1/leases:0
(Note that each host uses a different host_id there.) Then libvirt should acquire the leases when you run the vms.
Dave
On 4/23/2013 11:54 AM, David Teigland wrote:
The libvirt syntax is on this page under "Device leases": http://libvirt.org/formatdomain.html#elementsEvents
Here is an example similar to what the sanlock man page has. I'll show all leases at different offsets on a single 1GB lv (instead of using one lv per lease, which is also possible). (Sorry, I haven't actually tried this myself.)
shared storage for vms and leases: /dev/sdb1
shared vg for vms and leases: pool1
shared lv for all leases: /dev/pool1/leases
lockspace name: LS1
three vms: A, B, C
lease names: leaseA, leaseB, leaseC
vgcreate pool1 /dev/sdb1
lvcreate -n leases -L 1GB pool1
sanlock direct init -s LS1:0:/dev/pool1/leases:0
sanlock direct init -r LS1:leaseA:/dev/pool1/leases:1048576
sanlock direct init -r LS1:leaseB:/dev/pool1/leases:2097152
sanlock direct init -r LS1:leaseC:/dev/pool1/leases:3145728
The libvirt syntax for vm A:
<lease>
  <lockspace>LS1</lockspace>
  <key>leaseA</key>
  <target path='/dev/pool1/leases' offset='1048576'/>
</lease>
The libvirt syntax for vm B:
<lease>
  <lockspace>LS1</lockspace>
  <key>leaseB</key>
  <target path='/dev/pool1/leases' offset='2097152'/>
</lease>
Running this would be roughly:
all hosts: service wdmd start
all hosts: service sanlock start
all hosts: service libvirtd start
host 1: sanlock add_lockspace -s LS1:1:/dev/pool1/leases:0
host 2: sanlock add_lockspace -s LS1:2:/dev/pool1/leases:0
(Note that each host uses a different host_id there.) Then libvirt should acquire the leases when you run the vms.
Dave
Excellent, this was very helpful. Thank you!
So it looks like the only difference, as far as keeping this automated on startup of each host (after the initial configuration is completed of course), is that each host now needs to run add_lockspace manually on boot? Can this be safely automated through the daemon or an init script?
On Tue, Apr 23, 2013 at 12:47:34PM -0500, Russell Jones wrote:
So it looks like the only difference, as far as keeping this automated on startup of each host (after the initial configuration is completed of course), is that each host now needs to run add_lockspace manually on boot?
Right
Can this be safely automated through the daemon or an init script?
I think an init script should work fine; you could probably insert the add_lockspace at the end of the sanlock init script's start action. You'd also want to add the matching rem_lockspace at the beginning of stop.
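(A rough sketch of that idea, for illustration only - shown here as a small standalone wrapper rather than an edit to the packaged init script, with the lockspace name, lease lv path, and host_id taken from the example above as placeholders:

#!/bin/sh
# hypothetical helper script, e.g. /etc/init.d/sanlock-lockspace,
# to be run after the wdmd and sanlock services have started
HOST_ID=1                                # must be unique per host; host 2 would use 2
LS="LS1:$HOST_ID:/dev/pool1/leases:0"

case "$1" in
  start)
    sanlock add_lockspace -s "$LS"
    ;;
  stop)
    sanlock rem_lockspace -s "$LS"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
)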