Some of you might have heard me talk about the hypothetical-future
"Dynamo" translator. For those who haven't already seen my Summit
slides, the basic idea is to create a translator that (1) does
distribution in a more scalable way than the current DHT translator and
(2) combines distribution with replication and/or striping and/or
erasure codes. Some problems with the current DHT translator include:
* Global directory operations which limit scalability and performance.
* Placement of new files based on "stale" xattr data if the server set
has changed, until a global (and expensive!) rebalance operation is
triggered manually.
* Requirement that underlying stripe/replicate translators be statically
defined and not use overlapping sets of bare bricks, precluding many
otherwise-valid configurations (e.g. with "odd" numbers of bricks or
bricks with different capacities).
* Server-to-global inode number mapping which is based on the current
number of bricks and therefore does not guarantee stability when bricks
are added/removed (which might even result in clients attempting to
access the wrong file; see the sketch after this list).
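To illustrate that last point with made-up numbers, here's the general
shape of such a transform; the formula and names below are mine for
illustration, not the actual DHT code:

#include <stdint.h>

/* Illustrative only: derive a global inode number by interleaving
 * per-server inode numbers on the current brick count. */
static uint64_t
global_ino (uint64_t server_ino, int brick_idx, int n_bricks)
{
        return server_ino * n_bricks + brick_idx;
}

With 3 bricks, server inode 100 on brick 1 maps to global inode 301.
Add a fourth brick and the same file maps to 401 instead; worse, a
client still holding 301 would now decode it as server inode 75 on
brick 1, i.e. a completely different file.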
My goal is (eventually) to create a new translator which does
essentially the same things that DHT does, but does them in a way that
addresses the above issues. The basic design principles are mostly those
outlined in this document, which has become the basis for several
successful projects in the NoSQL space.
http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html
I actually have a working 2.x translator (which I have at different
times called "swipe" or "scatter") which addresses most of the above
problems. It currently combines distribution and striping internally,
but I've since concluded that it would be better to combine distribution
and replication instead since the replication code is more in need of a
rewrite for performance reasons. Even better would be to have a coherent
mechanism that allows arbitrary combinations of striping and
replication, and that's where I run into a problem. The distribution
translator has to control the dynamic placement of data onto sets of
bricks which are then combined into stripe/replica groups, which creates
a coupling between these functions and is also at odds with the more
static way GlusterFS constructs the translator tree. I don't want to
create one big uber-translator which subsumes all three kinds of
functionality and might be harder to extend e.g. if/when we do erasure
codes or information dispersal instead. I also don't want to create a
private "plugin" API that programmers (i.e. those reading this email)
would have to learn in addition to the existing translator API.
On the drive in today, I think I came up with a way to satisfy this need
to keep things separate and uniform. The key is to take translators
which are written using the current translator API, but use them in a
very different way. Instead of configuring stripe/replicate translators
directly from the volfile, Dynamo would *dynamically* create
stripe/replicate translators corresponding to the sets of bricks or
layers of other stripe/replicate translators that it needs according to
its parameters and the current server topology. For example, if it needs
to do three-way striping across two-way replica sets, it might
dynamically create the following translators:
X = replicate(A,B)
Y = replicate(C,D)
Z = replicate(E,F)
S = stripe(X,Y,Z)
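Purely as a sketch of how that construction might look in code (the
dyn_xlator_* helpers are invented for illustration and are not part of
the current translator API; only xlator_t and the translator type names
are real):

/* Hypothetical: assemble a stripe-over-replicate subgraph at runtime
 * from Dynamo's own children. */
static xlator_t *
build_subgraph (xlator_t *this, xlator_t *bricks[],
                int n_stripes, int n_replicas)
{
        xlator_t *sets[n_stripes];
        xlator_t *s;
        int       i, j;

        for (i = 0; i < n_stripes; i++) {
                /* X = replicate(A,B), Y = replicate(C,D), ... */
                sets[i] = dyn_xlator_new (this, "cluster/replicate");
                for (j = 0; j < n_replicas; j++)
                        dyn_xlator_add_child (sets[i],
                                              bricks[i * n_replicas + j]);
        }

        /* S = stripe(X,Y,Z) */
        s = dyn_xlator_new (this, "cluster/stripe");
        for (i = 0; i < n_stripes; i++)
                dyn_xlator_add_child (s, sets[i]);

        return s; /* Dynamo then sends file operations through S */
}

The point of doing it this way is that the existing replicate and
stripe code runs unchanged; Dynamo merely instantiates it on the fly.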
Other assignments are possible. The main point is that Dynamo would then
call S, which would call X/Y/Z, which would call lower-level translators
even though those are actually children of the original Dynamo
translator. This kind of "side call" is not something that GlusterFS
explicitly supports or does itself, but AFAICT there's nothing in the
way the translator "stack" or inode contexts etc. are managed that
prevents it from working. If necessary, we could work around some
differences by compiling regular translator code with redefined versions
of certain macros (especially STACK_WIND and STACK_UNWIND) to generate
"conformant" calls instead. That might even give us an opportunity to
perform certain optimizations that the standard macros forego, so it
might be worthwhile even if it's not strictly necessary. The important
thing is that we should be able, using these techniques, to keep
striping/replication (and similar) functionality separate even under a
more dynamic distribution scheme.
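For the curious, the macro trick might look roughly like this. It's a
deliberately stripped-down sketch: the real STACK_WIND allocates a new
frame and does ref-counting that this version skips, and I'm assuming
the frame's "ret" field can simply be reused for the side call.

#undef STACK_WIND
#define STACK_WIND(frame, rfn, obj, fn, params ...)             \
        do {                                                    \
                /* record where to unwind to, then call the     \
                 * child fop directly in the same frame */      \
                (frame)->ret = (ret_fn_t) (rfn);                \
                (fn) ((frame), (obj), params);                  \
        } while (0)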