As I've discussed with many people, the current methods of setting up
and maintaining CloudFS on top of GlusterFS are cumbersome and
error-prone. Mostly this is the result of trying too hard to re-use too
much of the GlusterFS infrastructure for tasks such as distributing
volfiles and starting servers. Here are some basic requirements for
what a "next gen" management interface for CloudFS should be able to do:
* "Import" a GlusterFS volume, creating an equivalent set of server and
(generic) client volfiles which include the CloudFS translators.
* Handle addition and removal of tenants by updating our own database
and generating tenant-specific client volfiles from the generic ones
(there's a rough sketch of that rewriting step after this list). When
we implement stronger authentication, this might also involve some key
distribution.
* Ensure that all CloudFS volfiles remain in sync across servers and
clients, including regeneration when either the CloudFS volfiles or the
underlying GlusterFS volfiles are changed.
* Start/stop server daemons using the CloudFS volfiles.
* Mount/unmount on clients using the CloudFS volfiles.
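To make the tenant item above more concrete, here's a minimal sketch of
how a tenant-specific client volfile might be derived from a generic
one. The translator type and the tenant-id option name are placeholders
for whatever the real CloudFS translators end up using, and credentials
are deliberately left out here since those belong on the client side
(see the mount discussion below).

#!/usr/bin/env python
# Hypothetical sketch: derive a tenant-specific client volfile from the
# generic one by injecting a per-tenant option into the (hypothetical)
# CloudFS translator block.  Translator type and option name are
# placeholders, not the real CloudFS translator interface.

def make_tenant_volfile(generic_text, tenant_id):
    out = []
    in_cloudfs_block = False
    for line in generic_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("type ") and "cloudfs" in stripped:
            in_cloudfs_block = True
        if in_cloudfs_block and stripped == "end-volume":
            # Add the tenant identity just before the block closes.
            out.append("    option tenant-id " + tenant_id)
            in_cloudfs_block = False
        out.append(line)
    return "\n".join(out) + "\n"

if __name__ == "__main__":
    # Paths are made up for illustration.
    generic = open("/var/lib/cloudfs/myvol-client.vol").read()
    tenant = make_tenant_volfile(generic, "tenant42")
    open("/var/lib/cloudfs/myvol-client-tenant42.vol", "w").write(tenant)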
The way I propose to handle these requirements is to create "cloudfsd",
which largely corresponds to glusterd (the management daemon, not
glusterfsd, which is the server daemon). This would be responsible for
distributing volfiles among servers and starting/stopping server daemons
using those volfiles. We should rely on the existing glusterd to deal
with cluster ("pool") membership issues, instead of inventing our own
infrastructure for that. Similarly, we should use glusterd's
port-mapping infrastructure instead of our own. When we start servers,
we point them to glusterd for registration. When we mount on clients,
they'll get the information they need from there.
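Here's a rough skeleton of what such a cloudfsd might look like. The
HTTP transport, URL layout, file locations, and port number are all
placeholders (any RPC mechanism would do); the only real piece is that
we launch glusterfsd with -f against our own volfile, and registration
with glusterd's portmapper is glossed over entirely.

#!/usr/bin/env python
# Sketch of a "cloudfsd" skeleton: accept volfiles pushed from peers or
# the management CLI, and (re)start the local server daemon using them.

import os
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

VOLFILE_DIR = "/var/lib/cloudfs/volfiles"   # assumed location

class CloudfsdHandler(BaseHTTPRequestHandler):
    def do_PUT(self):
        # PUT /volfiles/<name> stores a volfile pushed to this node.
        name = os.path.basename(self.path)
        length = int(self.headers["Content-Length"])
        data = self.rfile.read(length)
        os.makedirs(VOLFILE_DIR, exist_ok=True)
        with open(os.path.join(VOLFILE_DIR, name), "wb") as f:
            f.write(data)
        self.send_response(200)
        self.end_headers()

    def do_POST(self):
        # POST /start/<name> starts a server daemon on that volfile.
        # Pointing it at glusterd for port registration is omitted here.
        name = os.path.basename(self.path)
        volfile = os.path.join(VOLFILE_DIR, name)
        subprocess.Popen(["glusterfsd", "-f", volfile])
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), CloudfsdHandler).serve_forever()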
Importing volumes, or adding/removing tenants, should still be done by
the cloudfs script/executable, which can poke the local cloudfsd as
appropriate to handle distribution etc. Note that we don't initially
need to deal with starting up new translators in live server processes
and the like. For now it's sufficient to deal with the
volfile-distribution issues, and possibly restart server daemons
entirely. In fact, we might want to stick with that approach for quite
a while. Actually inserting and removing translators in a running
server process is tricky and probably not well tested yet; starting a
new process to serve new tenants would work very nearly as well without
those problems. The options for the CloudFS CLI should be a proper
superset of those for the GlusterFS CLI. If we can parse the command
and recognize it as one of our own, then we can handle it internally.
Otherwise, we should pass the command verbatim to "glusterfs" and then
take any necessary actions (e.g. regenerating our volfiles) when that
returns.
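The dispatch logic might look something like this. The CloudFS-specific
verbs and the regeneration step are placeholders for whatever we end up
implementing; the pass-through itself just invokes the standard gluster
CLI.

#!/usr/bin/env python
# Sketch of the CLI dispatch described above: commands we recognize as
# CloudFS-specific are handled internally; anything else is passed
# verbatim to the GlusterFS CLI, after which we regenerate our volfiles
# in case the underlying configuration changed.

import subprocess
import sys

CLOUDFS_COMMANDS = {"import", "tenant"}     # hypothetical CloudFS verbs

def handle_cloudfs_command(args):
    # Placeholder: poke the local cloudfsd, update the tenant DB, etc.
    print("cloudfs: handling %r internally" % (args,))

def regenerate_volfiles():
    # Placeholder: re-read the GlusterFS volfiles and re-emit ours.
    print("cloudfs: regenerating volfiles")

def main(argv):
    if argv and argv[0] in CLOUDFS_COMMANDS:
        handle_cloudfs_command(argv)
        return 0
    # Not ours: hand it to the GlusterFS CLI unchanged.
    ret = subprocess.call(["gluster"] + argv)
    if ret == 0:
        regenerate_volfiles()
    return ret

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))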
The last part is client mounts. Currently, mount.glusterfs will contact
a server to fetch its volfile - the same for anyone - and then use that
to mount. We can do something similar, but we have to deal with the
fact that our volfiles are tenant-specific and carry authentication
information *which should never be on the server(s)*. One way to do
this would be
to have mount.cloudfs fetch a generic CloudFS client volfile (still not
the same as the original GlusterFS client volfile) from the server, then
post-process locally to add tenant identity and credentials before
passing the result to glusterfs for actual mounting. After that,
mapping ports and making connections can be handled by the existing
GlusterFS methods.
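Roughly, mount.cloudfs could look like the following. The URL used to
fetch the generic volfile, the option names, and the assumption that
the CloudFS client translator is the topmost block in the file are all
mine for the sake of illustration; the one real piece is that glusterfs
will mount from a local volfile given with -f.

#!/usr/bin/env python
# Sketch of mount.cloudfs: fetch the generic CloudFS client volfile, add
# tenant identity and credentials locally (so they never live on the
# servers), and hand the result to glusterfs to do the actual mount.

import subprocess
import sys
import tempfile
import urllib.request

def fetch_generic_volfile(server, volume):
    # Assumed: cloudfsd serves generic client volfiles over HTTP.
    url = "http://%s:8080/volfiles/%s-client.vol" % (server, volume)
    return urllib.request.urlopen(url).read().decode()

def add_credentials(text, tenant_id, tenant_key):
    # Assume the (hypothetical) CloudFS client translator is the topmost
    # block, i.e. the last one in the file, so the tenant options go
    # just before the final flush-left "end-volume" line.
    lines = text.splitlines()
    idx = len(lines) - 1 - lines[::-1].index("end-volume")
    lines[idx:idx] = ["    option tenant-id " + tenant_id,
                      "    option tenant-key " + tenant_key]
    return "\n".join(lines) + "\n"

def main():
    server, volume, tenant_id, tenant_key, mountpoint = sys.argv[1:6]
    text = add_credentials(fetch_generic_volfile(server, volume),
                           tenant_id, tenant_key)
    with tempfile.NamedTemporaryFile("w", suffix=".vol",
                                     delete=False) as f:
        f.write(text)
    # glusterfs can mount directly from a local volfile with -f.
    return subprocess.call(["glusterfs", "-f", f.name, mountpoint])

if __name__ == "__main__":
    sys.exit(main())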