CloudFS vs. ecryptfs vs. encfs vs. Tahoe-LAFS

Jeff Darcy jdarcy at redhat.com
Fri Jun 10 19:32:30 UTC 2011


On 06/10/2011 02:41 PM, Zooko O'Whielacronx wrote:
> On Fri, Jun 10, 2011 at 11:11 AM, Jeff Darcy <jdarcy at redhat.com> wrote:
>>
>> It was just iozone with eight threads (matching the number of
>> cores on these machines), doing synchronous sequential 1MB writes.  For
>> reads, I flushed both client and server caches, then read back the same
>> files in the same 1MB chunks.
> 
> Could you tell me more -- I don't actually know what iozone does. Does
> it open eight files and then run one thread for each file, and each
> thread sits in a loop calling write() and passing as argument a 1MB
> buffer? When does it call close()?

What you describe is approximately correct.  Iozone has almost as many
actions as ls, but the per-thread sequence is:

	open (with O_SYNC if -o)
	write or read * N
	fsync (if -e)
	close

Timing begins before the open and ends after the close (if -c, which is
set for these tests).  There are options to do things like parallel
async I/O and such, but I just use threads instead because what I'm
usually interested in simulating (somewhat) is multiple unrelated I/O
streams.
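
In C terms, each thread does roughly the following for the write phase
(a simplified sketch, not iozone's actual code; the mount path and
record count are made-up placeholders):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define RECORD_SIZE (1024 * 1024)   /* 1MB records, as in these tests */

static void write_phase(const char *path, size_t nrecords,
                        int use_osync, int use_fsync)
{
    char *buf = malloc(RECORD_SIZE);
    memset(buf, 'x', RECORD_SIZE);

    /* with -c, timing starts before the open... */
    int fd = open(path, O_WRONLY | O_CREAT | (use_osync ? O_SYNC : 0),
                  0644);
    for (size_t i = 0; i < nrecords; ++i) {
        write(fd, buf, RECORD_SIZE);        /* sequential 1MB writes */
    }
    if (use_fsync) {
        fsync(fd);                          /* -e */
    }
    close(fd);
    /* ...and ends after the close */

    free(buf);
}

int main(void)
{
    /* hypothetical mount point; iozone would be told the real one */
    write_phase("/mnt/glusterfs/testfile.0", 1024, 1, 1);
    return 0;
}

The read phase is the same loop with read() instead of write(), after
the caches have been flushed.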

> Is there some good documentation of what iozone does that I can read,
> or do I have to read the source or run it myself to see?

Sure, http://www.iozone.org has plenty.  Iozone is sort of the "Swiss
army knife" of file system performance testing; it's the first thing
most folks are likely to reach for, and it has enough options that it's
possible to stick with it for a good long while before upgrading to a
steam shovel like ffsb.

> What? Vanilla GlusterFS goes at 850 MB/s and "plus the CloudFS
> at-rest-encryption module" goes at more like 20-40 MB/s?

Those are on two separate infrastructures.  The 850MB/s is on the 10GbE
"pcloud" machines that are owned by the performance group and which I
get to use somewhat rarely.  The other numbers in this thread are all
from the 1GbE "gfs" machines owned by the filesystem group (the name is
from their original use to test GFS2).  Between the fact that they're on
a slower network and the fact that they have fewer disks, the gfs
machines are expected to give ~1/10 the throughput of the pcloud machines.

> Judging from xlators/encryption/crypt/src/crypt.c [1] CloudFS uses
> AES. According to the Crypto++ benchmarks [2], you should be able to
> process AES-128-CTR at about 200 MB/s. (As far as I know Crypto++ and
> OpenSSL tend to be roughly comparable.)

That seems about right.  The numbers I generated for Red Hat Summit were
actually a little higher than that, but certainly in the same ballpark.
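
If anyone wants to sanity-check that sort of number on their own
hardware, pushing a 1MB buffer through OpenSSL's EVP interface in CTR
mode is close enough.  A rough single-core sketch (assumes a reasonably
recent OpenSSL; the key, IV, and sizes are arbitrary; build with
-lcrypto):

#include <openssl/evp.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t bufsize = 1024 * 1024;   /* 1MB, same as the iozone record size */
    const int iterations = 1024;          /* 1GB total */
    unsigned char key[16] = {0}, iv[16] = {0};
    unsigned char *in = calloc(1, bufsize);
    unsigned char *out = malloc(bufsize);
    int outlen;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv);

    clock_t start = clock();
    for (int i = 0; i < iterations; ++i) {
        /* CTR is a stream mode, so output length == input length */
        EVP_EncryptUpdate(ctx, out, &outlen, in, (int)bufsize);
    }
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("AES-128-CTR: %.1f MB/s (single core)\n", iterations / secs);

    EVP_CIPHER_CTX_free(ctx);
    free(in);
    free(out);
    return 0;
}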

FYI, we're still discussing some of the crypto stuff for what will
actually go into the first release (what's there is basically a
placeholder).  One option is to stick with a stream mode and deal with
generating/storing new IVs all the time.  The other is to use a block
mode and deal with the ragged-EOF problem.  Either way, we'll need
MACs and atomic read-modify-write cycles to handle concurrency, so
performance is likely to get significantly worse.
That's one of the reasons I wanted to do these tests, so that we could
have meaningful discussions about whether e.g. a 2x or 3x increase in
overhead would leave performance at an unacceptable level.  Based on the
fact that people do use these alternatives and don't seem to complain
too much about performance, the answer would seem to be no.
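
To make the read-modify-write part concrete: with any scheme that MACs
fixed-size blocks, a write that doesn't line up with those blocks has
to read the old ciphertext back, splice in the new bytes, and
re-encrypt/re-MAC the affected blocks under a lock.  A hypothetical
helper (not anything in the current tree; the 4KB granularity is just
an assumption) showing which blocks an application write touches:

#include <stddef.h>
#include <stdio.h>
#include <sys/types.h>

#define CRYPT_BLOCK 4096   /* assumed per-block cipher/MAC granularity */

/* Which protected blocks does a write of (offset, len) touch, and are
 * the first/last ones only partially covered?  Any partially covered
 * block is a read-modify-write. */
static void touched_blocks(off_t offset, size_t len,
                           off_t *first, off_t *last,
                           int *head_partial, int *tail_partial)
{
    *first = offset / CRYPT_BLOCK;
    *last = (off_t)(offset + len - 1) / CRYPT_BLOCK;
    *head_partial = (offset % CRYPT_BLOCK) != 0;
    *tail_partial = ((offset + (off_t)len) % CRYPT_BLOCK) != 0;
}

int main(void)
{
    off_t first, last;
    int head, tail;

    /* e.g. a 1MB write starting 100 bytes into the file */
    touched_blocks(100, 1024 * 1024, &first, &last, &head, &tail);
    printf("blocks %lld..%lld, head RMW=%d, tail RMW=%d\n",
           (long long)first, (long long)last, head, tail);
    return 0;
}

Appending at a non-block-aligned EOF is just the tail-partial case
above.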

