This is mostly a re-statement of things I've said before, but as these
ideas continue to evolve it's worth capturing "snapshots" periodically.
I'll keep doing it until I get some feedback.
Problem #1: the GlusterFS replication ("afr") translator sucks. In
normal operation, it brackets each write with lock and setxattr calls,
resulting in five or more round trips. The effect on performance has
been clearly visible in every test I've run involving synchronous and/or
small-block I/O, to the point where it's simply not acceptable for many
important workloads (e.g. virtual-machine-image storage for RHEV). The
locking is also problematic from a scalability and fault-recovery
standpoint. On the fault-recovery side, it's also a problem that a
complete repair requires a full scan of the entire filesystem, which
takes time proportional to the number of files.
Problem #2: the GlusterFS distribution ("dht") isn't so great either.
It fundamentally can't scale that well because it depends on directories
existing on every node. It doesn't handle adding and removing nodes
very gracefully because of the way "layouts" are stored on every
directory when it's created; new nodes won't even be used for *new*
files without an explicit and expensive "rebalance" operation to
regenerate the layouts. The mapping between server-specific and global
inode numbers (using the current server count as part of the calculation) can
lead to all sorts of consistency and aliasing problems. The whole
system of lookups (including inefficient broadcast lookups) and
linkfiles and gfids and so on is enormously complicated and has lately
proven impossible to maintain.
Problem #3: the *relationship between* afr and dht precludes all sorts
of interesting and valuable features. For example, allowing different
files or groups of files to use different replica counts, or placing
replicas based on geographic concerns (e.g. different racks), fails
because the configurations are too static and the code that decides
placement (dht) is practically oblivious to the code that handles
multiple replicas once those decisions have been made (afr). Even
handling servers with non-uniform capacities or performance
characteristics is way more painful than it needs to be.
As a result of all this, I think we need a fundamentally different
distribution/replication setup for CloudFS. Here is a rough list of requirements:
* Better replication performance by doing writes with only one or at
most two round trips in the normal (no-failure) case.
* Faster replica recovery, proportional to I/O rate instead of data volume.
* No requirement that directories exist on every server.
* Handle adding and removing nodes more gracefully, with new servers
automatically used for new files as soon as they join the server pool.
* Stable mapping between server-specific and global inode numbers,
regardless of subsequent server-membership changes.
* Support dynamic per-file decisions about number and placement of
replicas, including placement on heterogeneous servers.
The basic structure that I've proposed to do all of this still has
separate distribution and replication translators, but with the
distribution translator more fully "in charge" and creating the
replication translators that it needs instead of having them statically
configured. So, for example, the distribution translator might decide
that a file X should be replicated with one copy on server A and one on
server B. It would therefore create a new replication translator across
A and B, and use that to handle requests for X. A moment later it might
decide that file Y should be replicated onto servers A/G/P, and
dynamically create another replication translator representing that
overlapping set. The subsystem to manage replica-set translators would
be a key component of the distribution translator, not the core
GlusterFS infrastructure. More details about each of these translators follow.
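As a rough sketch of what having the distribution translator "in charge" might look like, it could keep a small cache of replica-set translators keyed by the set of servers involved, creating each one lazily the first time a file's placement calls for it. The names and types below are hypothetical stand-ins for illustration, not the real GlusterFS xlator API:

```c
/* Hypothetical sketch: the distribution translator creates replication
 * translators on demand, one per distinct set of servers, instead of
 * having them statically configured.  Not the real xlator API. */
#include <stdio.h>
#include <string.h>

#define MAX_SETS    64
#define MAX_SERVERS 8

typedef struct {
    char servers[MAX_SERVERS][16];   /* e.g. "A", "B", "G", "P" */
    int  nservers;
} replica_set_t;

typedef struct {
    replica_set_t set;
    int in_use;      /* stands in for a real translator object */
} replica_xlator_t;

static replica_xlator_t cache[MAX_SETS];

static int same_set(const replica_set_t *a, const replica_set_t *b)
{
    if (a->nservers != b->nservers)
        return 0;
    for (int i = 0; i < a->nservers; i++)
        if (strcmp(a->servers[i], b->servers[i]) != 0)
            return 0;
    return 1;
}

/* Return the replication translator for this server set, creating it the
 * first time the distribution layer decides to place a file there. */
static replica_xlator_t *get_replica_xlator(const replica_set_t *set)
{
    for (int i = 0; i < MAX_SETS; i++)
        if (cache[i].in_use && same_set(&cache[i].set, set))
            return &cache[i];
    for (int i = 0; i < MAX_SETS; i++)
        if (!cache[i].in_use) {
            cache[i].set = *set;
            cache[i].in_use = 1;
            printf("created replica-set translator (%d servers)\n",
                   set->nservers);
            return &cache[i];
        }
    return NULL;    /* cache full; a real implementation would evict */
}

int main(void)
{
    replica_set_t ab  = { { "A", "B" },      2 };
    replica_set_t agp = { { "A", "G", "P" }, 3 };

    get_replica_xlator(&ab);    /* file X -> replicas on A and B      */
    get_replica_xlator(&agp);   /* file Y -> replicas on A, G, and P  */
    get_replica_xlator(&ab);    /* reuses the existing A/B translator */
    return 0;
}
```

The point is simply that replica sets become a dynamic, per-file decision made by the distribution layer rather than something frozen into the volume configuration.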
== Distribution Translator ("dynamo"). This could be based more closely
on Amazon's Dynamo key/value store - hence the name. Servers are each
assigned one or more virtual node IDs, which are used to determine the
ranges of a consistent-hashing ring for which each server is responsible.
More capable nodes (higher capacity or bandwidth) might have more
virtual node IDs, or node IDs covering larger sections of the ring.
Files are looked up by hashing to points on the consistent-hashing ring
and then searching "around the ring", probing servers at their virtual
node IDs until one returns an answer - no broadcast, no static "layouts"
stored on directories. For replicated files, the next replica will be
found as a "natural" consequence of searching around the ring. New
servers will likewise be found "naturally", as lookups for new files probe them first.
Rebalancing can be done efficiently by assigning new or different
virtual node IDs, with servers then "pushing away" files that hash to ranges
for which they're no longer responsible. Linkfiles will still be used, but
strictly as performance-enhancing hints. Inode numbers are permanently
assigned to files when they're created, in a way guaranteed to ensure
their continuing uniqueness. Linkfiles will always carry the version of
the object they point to, and stale linkfiles will always be removed as
soon as they're encountered; neither the presence nor absence of a
linkfile (with staleness being identical to absence) should affect the
final answer of where a file is currently located. There's a lot more,
some of it straight out of the Dynamo papers and some of it in previous
designs or code. For anyone who might want to think or claim that I'm spending
too much time at academic conferences and not enough time looking at
code, I actually implemented 90% of this well over a year ago and tested
it enough to determine that it actually supports real workloads such as
building and running standard benchmarks. Go look in
if you don't believe me, and let's
not hear any more of that nonsense.
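To make the ring mechanics a bit more concrete, here's a small, self-contained sketch of the virtual-node-ID and probe-around-the-ring idea described above. The hash function, the two-virtual-nodes-per-server setup, and the server names are assumptions for the example, not what the dynamo translator itself would necessarily use:

```c
/* Hypothetical consistent-hashing sketch: servers get virtual node IDs on a
 * 32-bit ring; a file hashes to a point and we probe "around the ring" from
 * there, taking the first N distinct servers as its replica set. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    uint32_t    point;    /* position of this virtual node on the ring */
    const char *server;   /* server that owns it */
} vnode_t;

/* FNV-1a, just to have a deterministic hash for the sketch. */
static uint32_t hash32(const char *s)
{
    uint32_t h = 2166136261u;
    for (; *s; s++) {
        h ^= (uint8_t)*s;
        h *= 16777619u;
    }
    return h;
}

static int cmp_vnode(const void *a, const void *b)
{
    uint32_t pa = ((const vnode_t *)a)->point;
    uint32_t pb = ((const vnode_t *)b)->point;
    return (pa > pb) - (pa < pb);
}

/* Walk clockwise from the file's hash point; the first virtual node at or
 * past it is the primary, and later distinct servers hold the replicas. */
static void place(const vnode_t *ring, int n, const char *file, int replicas)
{
    const char *chosen[8];
    int nchosen = 0, start = 0;
    uint32_t h = hash32(file);

    while (start < n && ring[start].point < h)
        start++;

    printf("%s (hash %08x):", file, (unsigned)h);
    for (int i = 0; i < n && nchosen < replicas; i++) {
        const vnode_t *v = &ring[(start + i) % n];
        int dup = 0;
        for (int j = 0; j < nchosen; j++)            /* skip servers we  */
            if (strcmp(chosen[j], v->server) == 0)   /* already selected */
                dup = 1;
        if (!dup) {
            chosen[nchosen++] = v->server;
            printf(" %s", v->server);
        }
    }
    printf("\n");
}

int main(void)
{
    /* Two virtual node IDs per server; a more capable server would get more. */
    const char *servers[] = { "A", "B", "G", "P" };
    vnode_t ring[8];
    char name[32];
    int n = 0;

    for (int s = 0; s < 4; s++)
        for (int v = 0; v < 2; v++, n++) {
            snprintf(name, sizeof(name), "%s-%d", servers[s], v);
            ring[n].point  = hash32(name);
            ring[n].server = servers[s];
        }
    qsort(ring, n, sizeof(ring[0]), cmp_vnode);

    place(ring, n, "fileX", 2);   /* primary plus one replica  */
    place(ring, n, "fileY", 3);   /* primary plus two replicas */
    return 0;
}
```

Giving a bigger server extra virtual node IDs in that setup loop is all it takes for it to absorb a proportionally larger share of files, and a new server starts receiving new files as soon as its virtual node IDs appear on the ring.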
== Replication Translator ("sfr" for Simple File Replication). In
contrast to the current "afr" translator, this could be based on an
operations log or journal. In normal operation, data is simply written
to all N replicas with no additional overhead. The act of writing marks
the affected data region as "dirty" in a log on each server. After the
writes are done, the write returns to the user, and calls are made
*asynchronously* to each server to clear the dirty markers. In case of a
failure, a separate call is made *to each surviving replica* to make the
dirty markers persistent along with time/version information to support
future recovery. Before the failed server can come all the way back up
(i.e. start serving requests) it must contact its peers as a special
client, to retrieve failed-operation information and bring itself up to
date. Yes, there are all sorts of split-brain conditions to worry
about, and edge conditions involving files which have been
removed/renamed/recreated since they were modified, but for the most
part, standard replica-repair mechanisms from Dynamo, Coda, or other
relevant systems can be applied.
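Here's a minimal sketch of that write path as I've described it - dirty markers set as a side effect of the write, asynchronous clearing on success, persistent markers on the survivors when a replica is down. The function and field names are hypothetical; a real translator would of course do this with callbacks instead of a loop:

```c
/* Hypothetical sketch of the sfr write path: no locks or extra setxattr
 * round trips in the normal case, only marker cleanup after the fact. */
#include <stdbool.h>
#include <stdio.h>

#define NREPLICAS 2

typedef struct {
    const char *name;
    bool up;
    bool dirty;        /* in-memory dirty marker for the written region */
    bool persistent;   /* marker flushed to disk for later repair       */
} replica_t;

static replica_t replicas[NREPLICAS] = {
    { "A", true, false, false },
    { "B", true, false, false },
};

/* The server-side write itself marks the region dirty; no extra round trip. */
static bool replica_write(replica_t *r, const char *data)
{
    if (!r->up)
        return false;
    r->dirty = true;
    printf("wrote \"%s\" to %s (region marked dirty)\n", data, r->name);
    return true;
}

static void sfr_write(const char *data)
{
    bool failed = false;

    for (int i = 0; i < NREPLICAS; i++)
        if (!replica_write(&replicas[i], data))
            failed = true;

    /* The write is acknowledged to the user here, before any cleanup. */
    printf("write returned to user\n");

    if (!failed) {
        /* Asynchronously clear the dirty markers on every replica. */
        for (int i = 0; i < NREPLICAS; i++)
            replicas[i].dirty = false;
        printf("dirty markers cleared asynchronously\n");
    } else {
        /* Make the markers persistent on each surviving replica, along with
         * time/version info, so the failed replica can catch up later. */
        for (int i = 0; i < NREPLICAS; i++)
            if (replicas[i].up && replicas[i].dirty)
                replicas[i].persistent = true;
        printf("dirty markers made persistent on survivors for repair\n");
    }
}

int main(void)
{
    sfr_write("hello");       /* normal case: one round trip per replica  */
    replicas[1].up = false;   /* simulate replica B failing               */
    sfr_write("world");       /* failure: survivors keep persistent marks */
    return 0;
}
```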
Once we have a full design that addresses these needs, we can start
thinking about the asynchronous multi-site replication that will form
the basis for the third CloudFS release. Even though that's different
in a lot of ways from sfr's synchronous/local replication, some of the
replica-repair code in both might be usable as common code, so it's worth
thinking about that kind of future re-use. Even further out beyond that
are things like Reed-Solomon/erasure codes, automatic tiering between
nodes equipped with SSDs and those equipped with Plain Old Disks,
striping huge directories across servers, etc. There are plenty of
directions to go with this, but for now we still need to focus on 1.0
(multi-tenancy, encryption, management) and 2.0 (improved distribution,
replication, and scale). Hopefully this will serve as a reference point
for how we're likely to reach those goals.