On Tue, Jun 18, 2013 at 09:20:07AM +0000, Qixiaozhen wrote:
My idea about the cluster lock:
1) Initialize the lockspace and the resource.
2) Add the host IDs of the nodes to the lockspace after step 1.
3) Loop:
4) If the node wants to acquire the cluster lock, "sanlock.acquire()"
can be called. "sanlock.release()" will be called once the operation has finished.
How feasible is this idea?
I have debugged sanlock's demo code, 'python/example.py', step by step on
two nodes (Node1, Node2). The demo code looks like the following segment:
I'm not sure exactly how you are running this, but you're probably doing
it wrong.
Here is an example using the command line and a simple test program I've
attached (compile with -lsanlock).
1. set up shared storage (/dev/sdb)
host1: vgcreate test /dev/sdb
host1: lvcreate -n leases -L 1G test
host2: vgscan
host2: lvchange -ay /dev/test/leases
2. start the daemons
host1: modprobe softdog
host2: modprobe softdog
host1: wdmd
host2: wdmd
host1: sanlock daemon
host2: sanlock daemon
(it's best to use a real watchdog driver instead of softdog)
3. initialize the lockspace (named "LS") and the resource (named "RX")
host1: sanlock client init -s LS:0:/dev/test/leases:0
host1: sanlock client init -r LS:RX:/dev/test/leases:1048576
(done from only one host)
4. add the lockspace
host1: sanlock client add_lockspace -s LS:1:/dev/test/leases:0
host2: sanlock client add_lockspace -s LS:2:/dev/test/leases:0
(this will take 20+ seconds)
(each host uses a different host id)
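
For reference, the add_lockspace step can also be done from a program
through libsanlock.  This is only a minimal sketch, assuming the
sanlock_add_lockspace() call and the sanlk_lockspace struct from the
installed sanlock headers; the host_id, path and offset mirror the
host1 command above (compile with -lsanlock).

/* add_ls.c: join lockspace "LS" as host_id 1 (sketch only).
 * The sanlock_add_lockspace() signature and sanlk_lockspace layout
 * are assumed from the installed sanlock headers. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <sanlock.h>
#include <sanlock_admin.h>

int main(void)
{
        struct sanlk_lockspace ls;
        int rv;

        memset(&ls, 0, sizeof(ls));
        strcpy(ls.name, "LS");                            /* lockspace name */
        ls.host_id = 1;                                   /* use 2 on host2 */
        strcpy(ls.host_id_disk.path, "/dev/test/leases");
        ls.host_id_disk.offset = 0;

        rv = sanlock_add_lockspace(&ls, 0);               /* blocks 20+ seconds */
        if (rv < 0) {
                fprintf(stderr, "add_lockspace failed %d\n", rv);
                return 1;
        }
        printf("joined lockspace LS as host_id 1\n");
        return 0;
}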
5. verify that both hosts have joined the lockspace
host1: sanlock client host_status
lockspace LS
1 timestamp 1687
2 timestamp 1582
host2: sanlock client host_status
lockspace LS
1 timestamp 1687
2 timestamp 1561
6. acquire/release lock on RX
host1: sanlk_lockr LS RX /dev/test/leases 1048576 5
host2: sanlk_lockr LS RX /dev/test/leases 1048576 5
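
In case it helps to see the code, here is a minimal sketch of what a
test program like sanlk_lockr might look like.  It is an approximation,
not the attached program itself: the calls used (sanlock_register,
sanlock_acquire, sanlock_release) are the standard libsanlock client
API, and the argument order (lockspace, resource, path, offset, seconds
to hold) matches the invocations above.

/* sanlk_lockr sketch: acquire a resource lease, hold it, release it.
 * usage: sanlk_lockr <lockspace> <resource> <path> <offset> <seconds>
 * Compile with -lsanlock. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <sanlock.h>
#include <sanlock_resource.h>

int main(int argc, char *argv[])
{
        struct sanlk_resource *res;
        int sock, rv;

        if (argc != 6) {
                fprintf(stderr, "usage: %s <lockspace> <resource> <path> <offset> <seconds>\n",
                        argv[0]);
                return 1;
        }

        /* one sanlk_disk is appended to the end of sanlk_resource */
        res = calloc(1, sizeof(struct sanlk_resource) + sizeof(struct sanlk_disk));
        if (!res)
                return 1;
        strcpy(res->lockspace_name, argv[1]);              /* LS */
        strcpy(res->name, argv[2]);                        /* RX */
        res->num_disks = 1;
        strcpy(res->disks[0].path, argv[3]);               /* /dev/test/leases */
        res->disks[0].offset = strtoull(argv[4], NULL, 0); /* 1048576 */

        /* register this process with the sanlock daemon */
        sock = sanlock_register();
        if (sock < 0) {
                fprintf(stderr, "register failed %d\n", sock);
                return 1;
        }

        rv = sanlock_acquire(sock, -1, 0, 1, &res, NULL);
        if (rv < 0) {
                fprintf(stderr, "acquire failed %d\n", rv);
                return 1;
        }
        printf("acquired %s:%s\n", argv[1], argv[2]);

        sleep(atoi(argv[5]));                              /* hold the lease */

        rv = sanlock_release(sock, -1, 0, 1, &res);
        if (rv < 0) {
                fprintf(stderr, "release failed %d\n", rv);
                return 1;
        }
        printf("released %s:%s\n", argv[1], argv[2]);
        return 0;
}

Run it on both hosts at the same time and the second acquire will fail
(or wait, depending on how the program handles -EAGAIN) until the first
host releases the lease.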