Sorry for the delay getting back to you.
On Fri, 1 Jun 2012 15:48:48 +0200
Yves Pagani <ypagani(a)aps.edu.pl> wrote:
> - man 8 hekafs:
> for creating keys, it is written "openssl genrsa 1024 -out
> server.key". In fact the number of bits must come at the end of the
> command or the output file will not be created, so the line has to be
> "openssl genrsa -out server.key 1024".
Correct.
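For reference, a working sequence looks something like the following.
The second command (producing a self-signed certificate from the key)
is just generic openssl usage, not copied from README.ssl, so adjust
the subject and validity to your setup:

    # generate the RSA private key; the bit count has to come last
    openssl genrsa -out server.key 1024
    # generic example of creating a self-signed certificate from that key
    openssl req -new -x509 -key server.key -out server.pem -days 365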
> - following the link given in the fedora wiki
> (https://fedoraproject.org/wiki/Features/CloudFS), I have access to
> the file named README.ssl. But when I cloned the git repository, this
> file does not exist. I tried to search in the git history but I can
> not find it (but my knowledge of git is (very) low so I could have
> missed something).
http://git.fedorahosted.org/git/?p=CloudFS.git;a=blob;f=scripts/README.ss...
works for me. Did it not work for you? I've attached the file, but be
aware that it's slightly out of date. Some of the updated information
is now in the main man page, and some is implicit in the following
commands, so see their man pages:
hfs_update_cert
hfs_start_volume
hfs_mount
> For my configuration, I got the information that the different
> server*.pem files must be concatenated into a file named "root.pem".
> But this information is not in the hekafs man page. So must the file
> containing the different (server) certificates be named "root.pem",
> or can we set another name for it?
The combination of certificates (specified with hfs_update_cert or
through the GUI) is done automatically in hfs_start_volume.
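If you ever want to do it by hand for debugging, the manual equivalent
is just a concatenation; the file names below are only illustrative,
nothing you have to create yourself:

    # manual equivalent of the automatic combination done by hfs_start_volume
    cat server*.pem > root.pem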
> - from the hfs_mount manpage, I was not able to use the data key
> parameter. Does this parameter work only with the aes branch of the
> git repository?
What error did you get? The data key must be in a specific format, and
our diagnostics for an incorrect format are probably not very good.
> During my testing, maybe the most annoying thing was that if
> something went wrong during an hfs_mount command, no indication that
> a problem occurred is given. For example, if you launch an hfs_mount
> command with a wrong password or a bad client certificate, you always
> get 0 with echo $? . The log file contains something like this:
>
> [2012-06-01 13:37:39.634914] E
> [graph.c:526:glusterfs_graph_activate] 0-graph: init failed
> [2012-06-01 13:37:39.635219] W [glusterfsd.c:727:cleanup_and_exit]
> (-->/usr/sbin/glusterfs(main+0x295) [0x405e85]
> (-->/usr/sbin/glusterfs(glusterfs_volumes_init+0x145) [0x404d45]
> (-->/usr/sbin/glusterfs(glusterfs_process_volfp+0x198) [0x404bf8])))
> 0-: received signum (0), shutting down
> [2012-06-01 13:37:39.635290] I [fuse-bridge.c:3727:fini] 0-fuse:
> Unmounting '/gluster/'.
>
> So fuse silently unmounts "/gluster" (which succeeds), and $? then
> contains 0 instead of an error code. Maybe in this case the function
> which unmounts the folder should "remember" that something went
> wrong?
I'll look into that.
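In the meantime, one workaround is not to trust the exit status and to
check the mount table directly after hfs_mount returns; a minimal
sketch, assuming the mount point is /gluster as in your log:

    # hfs_mount can currently return 0 even when the mount failed,
    # so verify that the filesystem is actually attached
    if mountpoint -q /gluster; then
        echo "mount OK"
    else
        echo "hfs_mount failed, check the client log" >&2
        exit 1
    fi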
> - when I want to do an hfs_add_node (cli), I get an SSH error "The
> authenticity of host '192.168.1.199 (192.168.1.199)' can't be
> established". So I created an ssh key and copied it to the host with
> an ssh-copy-id command, and after that hfs_add_node works without any
> problem.
This should only affect first-time setup. We need to be able to enable
the HekaFS daemon on the remote node before we can do that. The
default implementation of make_remote (in hfs_utils.py) does this using
ssh, but it should be easy to make it use any other method you want.
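For anyone else who hits the host-key prompt, the one-time setup you
describe is the expected workaround; roughly (the address is the one
from your example, and root@ is only an assumption about which account
is used for the remote ssh calls):

    # one-time key setup so the ssh calls made by make_remote are non-interactive
    ssh-keygen -t rsa
    ssh-copy-id root@192.168.1.199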
> - in order to test the self-healing facility, I shut down a server
> and copied some files from the client. When I turned the server back
> on, the "missing" files did not appear (I waited 10 minutes since I
> read that glusterfs runs a self-heal every 600s). On the client, I
> needed to unmount and remount the folder (then the files appear on
> the server but have 0 size) and then launch a "find /gluster/ -noleaf
> -print0 | xargs --null stat >/dev/null" to "really" get the files
> onto the server.
Yes, in GlusterFS 3.2.x (on which HekaFS is based), there's no
automatic self-heal so an explicit find/ls/whatever is necessary to
touch the files. GlusterFS 3.3 does have automatic self-heal, but is
currently incompatible with HekaFS. We're trying to avoid doing a 3.3
version of HekaFS, because all of that work would become obsolete when
the functionality is fully integrated into GlusterFS itself (probably
in 3.4 but I can't promise that).
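For the archives, the 3.2.x recipe is exactly the one you used: once
the server is back (and the client remounted), stat every file through
the mount to trigger the heal:

    # walk the client mount and stat everything to force self-heal on 3.2.x
    find /gluster/ -noleaf -print0 | xargs --null stat > /dev/null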
> - I tried to compile hekafs from source via the fedora-ize script but
> it fails. It seems that some folders have been renamed
> (pkg -> packaging?). A "make fedora" in the packaging folder does the
> trick.
Yes, Kaleb sort of went a different route with the packaging/* stuff,
so "fedora-ize" is basically obsolete.
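So the current recipe is the one you found:

    # build from a git checkout ("fedora-ize" is obsolete)
    cd packaging && make fedora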
> - each server is running django on port 8080, but I could not find a
> way to turn it off (except with an iptables rule). I think it could
> be a security problem because a local attacker can run an nmap scan
> on the local network and then gain access to the configuration of
> the servers.
Yes, this is an issue we've inherited from GlusterFS (which is
similarly insecure). A while ago I did some work to enable SSL for the
management interfaces, and Pete Zaitcev did something similar, but
neither has made it into the master branch yet.
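Until that work lands, restricting the management port at the firewall
(as you're already doing) is the practical answer; a minimal sketch,
assuming the web UI really is only on 8080 and that eth0 is the
untrusted interface:

    # block outside access to the management UI; adjust interface and port to your setup
    iptables -A INPUT -i eth0 -p tcp --dport 8080 -j DROP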
> - In glusterfs, you can share files via acls with a "few people". Is
> it possible to do that kind of thing across different tenants? (I
> suspect the answer is no since on the server side each tenant has
> its own folder.)
Tenants are *fully* isolated from one another, by design. Thus, there
is no sharing at all between them.
> I am writing some documentation of my setup. Do you think it could
> be interesting to other users?
I'm sure it would.
> Sorry for this quite long mail, and many thanks to all for this
> great piece of software.
You're quite welcome, and thank you for your feedback.