CloudFS vs. ecryptfs vs. encfs vs. Tahoe-LAFS

Jeff Darcy jdarcy at redhat.com
Fri Jun 10 17:11:27 UTC 2011


On 06/10/2011 12:52 PM, Zooko O'Whielacronx wrote:
> First of all, I'd like to emphasize that GlusterFS and Tahoe-LAFS are
> very different beasts. (Jeff probably already knows this, but...)

Yep.  I recognize that a comparison between something that's natively a
filesystem and something that needs an adaptation layer on top of a
fundamentally different design is a bit unfair.  I did it
anyway because I know people are going to make this comparison whether
they should or not, and I wanted to know where we stood.  Besides, it
gave me a chance to actually *run* Tahoe-LAFS instead of just talking
about it, and that's something I'd been meaning to do for a while.  ;)

> Could you please give more details about what the workload was? If you
> used the SFTP interface, then I assume it consisted of merely
> uploading and downloading files, right?

Correct.  It was just iozone with eight threads (matching the number of
cores on these machines), doing synchronous sequential 1MB writes.  For
reads, I flushed both client and server caches, then read back the same
files in the same 1MB chunks.
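
For the record, the invocation was roughly along these lines
(reconstructed, so treat the exact flags and sizes as illustrative
rather than verbatim):

    # 8 threads, 1MB records, O_SYNC sequential writes; -w keeps the
    # files around for the read pass
    iozone -i 0 -t 8 -r 1m -s 1g -o -w

    # flush page caches (as root, on both client and server)
    sync; echo 3 > /proc/sys/vm/drop_caches

    # read the same files back in 1MB chunks
    iozone -i 1 -t 8 -r 1m -s 1g -w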

> Also, could you run the same experiment with vanilla GlusterFS sans
> all encryption, or with CloudFS with encryption turned off? Those
> would be very interesting numbers to see.

I've done that a few times (overhead for encryption is something I'm
very concerned about).  On this set of machines I can basically achieve
wire speed for both reads and writes.  Even on the beefier test machines
we have - each 24 cores, 48GB, connected via 10GbE - I can get a pretty
respectable 850MB/s between a single client and single server.  The
numbers in the table are actually more "vanilla GlusterFS plus the
CloudFS at-rest-encryption module" than true CloudFS (which would
include as many as three other modules).
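
For anyone curious what that stacking looks like, a client-side volfile
would be shaped something like this.  The "cloudfs/crypt" translator
name and its key-file option are placeholders, not the actual module's
names:

    volume client0
        type protocol/client
        option transport-type tcp
        option remote-host server1
        option remote-subvolume brick0
    end-volume

    volume crypt0
        type cloudfs/crypt                        # placeholder name
        option key-file /etc/cloudfs/volume.key   # placeholder option
        subvolumes client0
    end-volume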

BTW, I should reiterate that these numbers were obtained using
unoptimized GlusterFS and CloudFS builds (-g instead of -O2), and
GlusterFS is generally pretty sensitive to that, so they should be
treated as lower bounds.  Also, because I was only using one "brick",
this test didn't take advantage of the multi-threading I've added to the
GlusterFS transport layer.  I should redo that test with three or so
bricks (even though they're using the same server volume) and see if it
makes a difference.
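
An optimized rebuild is just the usual autotools dance (any other
configure options are elided here):

    make clean
    CFLAGS="-O2" ./configure
    make && make install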

> I don't know if this fits your API/use cases at all, but a mature and
> widely cited benchmark is the Yahoo! Cloud Serving Benchmark. I would
> love to see CloudFS compared to some of the ten or so databases tested
> by that framework:

It's kind of an ill fit for CloudFS, in the same way this test was for
Tahoe-LAFS, but it's still something I'd like to do some day.
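
For reference, the stock YCSB client is driven like this against its
built-in no-op BasicDB binding (the jar path is a placeholder; it
varies by checkout).  Pointing it at CloudFS would first mean writing a
DB-interface binding that does its reads and writes through a mounted
filesystem:

    # load phase, then transaction phase
    java -cp build/ycsb.jar com.yahoo.ycsb.Client -load \
        -db com.yahoo.ycsb.BasicDB -P workloads/workloada
    java -cp build/ycsb.jar com.yahoo.ycsb.Client -t \
        -db com.yahoo.ycsb.BasicDB -P workloads/workloada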

