On Tue, 16 Aug 2011 10:25:57 -0400
Jeff Darcy <jdarcy@redhat.com> wrote:
On the server side, all you need is the hfs_start_volume command.
That will assemble the proper server-side
volfile, assign a port, and start the server daemon(s). On the client
side, all you need is hfs_mount; this fetches the client-side volfile,
does port mapping, rewrites the volfile appropriately, and uses it to
start the client process. Actually there's a bit of tenant setup too,
but that's mostly orthogonal to the volume setup. Here's a full
example:
server# gluster volume create test1 server:/bricks/test1
server# hfs_add_tenant pete badpw 1000 1999 1000 1999
server# hfs_enable_tenant test1 pete
server# hfs_start_volume test1
client# hfs_mount server test1 pete badpw /mnt/test1
Well, it was a fun weekend of trying to follow these instructions.
No, I have not succeeded. There were a few snags.
First, curl http://elanor:8080/testvol/fetch threw a 500, due to
/var/lib/glusterfs/vols/testvol/testvol-fuse.vol being absent. This
happens because glusterfs-server-3.2.1-2.fc15 uses /var/lib/glusterd,
not /var/lib/glusterfs. I created that directory and copied the server
and fuse volfiles there (I also had to restore the original volfile that
the cloudfs script rewrote). With that, we get something like:
[root@lembas zaitcev]# curl http://elanor:8080/testvol/fetch
volume testvol-client-0
type protocol/client
option remote-host elanor
option remote-subvolume /q/brick1
option transport-type tcp
end-volume
(I'm quoting it because it seems to be part of a problem downstream;
keep reading :-))
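For the record, the directory fix amounted to roughly this (a sketch
from memory; the server volfile name follows glusterd's
<volume>.<host>.<brick-with-dashes>.vol convention, so adjust as
needed):

server# mkdir -p /var/lib/glusterfs/vols/testvol
server# cp /var/lib/glusterd/vols/testvol/testvol-fuse.vol \
        /var/lib/glusterd/vols/testvol/testvol.elanor.q-brick1.vol \
        /var/lib/glusterfs/vols/testvol/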
Second, curl http://elanor:8080/testvol/map was returning {}. That
turned out to be because one must create volumes with hfs_add_volume.
The gluster command above is not sufficient: naked gluster does not
create db_obj["vt_testvol"] and friends, and eventually that makes
hfs_start_volume die with a traceback.
By that time I was so tired that I did not want to dismantle everything
merely to run hfs_add_volume, so I just ran this:
$ python
>>> import hfs_utils
>>> db = hfs_utils.open_db()
>>> # stub out the keys that hfs_add_volume would have created
>>> db["vt_testvol"] = ""
>>> db["vs_testvol"] = ""
This permits hfs_start_volume to do its job, but unfortunately in the
end it created /var/lib/hekafs/testvol/testvol.elanor.q-brick1.vol,
which refers one level too deep into the brick:
volume zaitcev-posix
type storage/posix
option directory /q/brick1/junk/zaitcev
end-volume
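If I read "one level too deep" right, and hekafs is supposed to put
each tenant directory straight under the brick, then what it should
have generated is presumably this (my guess, not verified):

volume zaitcev-posix
type storage/posix
option directory /q/brick1/zaitcev
end-volume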
It all ends with a mismatch on the client: the port map keys the brick
as /q/brick1/junk, while the fetched client volfile asks for /q/brick1:
[root@lembas zaitcev]# curl http://elanor:8080/testvol/map
{"/q/brick1/junk": "24029"}[root@lembas zaitcev]#
[root@lembas zaitcev]# hfs_mount elanor testvol zaitcev volpass /mnt/testvol
Could not find port for elanor:/q/brick1
[root@lembas zaitcev]#
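The failure is just a dictionary miss. A toy reconstruction (this is
not hekafs code, only the shape of the lookup as I understand it):

$ python
>>> port_map = {"/q/brick1/junk": "24029"}  # what /testvol/map returns
>>> subvol = "/q/brick1"  # remote-subvolume in the fetched client volfile
>>> port_map.get(subvol) is None  # hence "Could not find port"
True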
So yeah. I think I may be getting obstinate about it, because I'm
plotting to edit the volfiles yet again instead of just creating
new bricks and volumes from scratch.
-- Pete