Handling EOBs in CloudFS
by Edward Shishkin
Hello everyone,
any comments or suggestions are welcome.
Handling EOBs (end-of-blocks) for transparent
encryption, integrity checking and data authentication
DRAFT
This was designed for CloudFS, which uses a 2-level protocol (high
and low) supported by translators (xlators) which reside on the
server and client sides respectively.
Definition of EOB. Storage class
If a file's size isn't a multiple of the cblock (cipher block) size,
then we also need to store special padding, which is required to
decrypt its last block with some cipher modes such as CBC. This
padding contains a part of the ciphertext and must be considered a
part of the file. We'll call this padding the end-of-file (EOF). If
the plaintext size is a multiple of the cblock size, then the
encrypted file won't have an EOF (or will have an empty one).
Signatures (HMACs, etc.) used for integrity checking, data
authentication, and so on have the same nature as the EOF. Every such
signature is created for some logical block in a file. It is not
padding, as in the case of the EOF, but such signatures are still
associated with the file's data, so we consider a class of objects
which includes EOFs, HMACs, etc., and call them EOBs (end-of-block).
We define the storage class of EOBs as "data", i.e. they can be
considered part of the file's data: we cannot read/write a data block
without reading/writing its EOB.
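For concreteness, here is a minimal sketch (not from the original
design, and assuming a 16-byte cipher block such as AES) of the size
arithmetic behind the EOF: the number of padding bytes needed to round
the last block up to the cblock size, which is zero exactly when the
plaintext size is already block-aligned.

#include <stdio.h>
#include <stdint.h>

#define CBLOCK_SIZE 16  /* assumed cipher block size (e.g. AES) */

/* Number of EOF (padding) bytes needed so the ciphertext length
 * becomes a multiple of the cipher block size; 0 when the plaintext
 * size is already block-aligned. */
static uint64_t eof_size(uint64_t plain_size)
{
        uint64_t tail = plain_size % CBLOCK_SIZE;

        return tail ? CBLOCK_SIZE - tail : 0;
}

int main(void)
{
        uint64_t sizes[] = { 0, 5, 16, 100, 4096 };

        for (unsigned i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("plain size %llu -> EOF of %llu bytes\n",
                       (unsigned long long)sizes[i],
                       (unsigned long long)eof_size(sizes[i]));
        return 0;
}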
Storing EOBs. Approaches and Issues
Approach 1: Storing EOBs as xattr values.
In this case we store a file in parts which are not adjacent from
the standpoint of CloudFS. Hence we need to split reads, and this
makes the operation non-atomic. This means that read(2) will return
data composed of parts of different "versions".
Example:
Suppose we have a file F stored in 2 different parts, F1 and F2.
Process A writes file F (to become version 1);
Process B reads file F (part F1);
Process C writes file F (to become version 2);
Process B reads file F (part F2).
As a result, process B returns data composed of
parts of two different versions, 1 and 2.
This non-atomicity is different from the non-atomicity that takes
place in the kernel (local file systems): the kernel guarantees that
all PAGE_SIZE reads at PAGE_SIZE-aligned offsets are atomic (because
reads and writes in the kernel acquire page locks), whereas in our
case F2 doesn't necessarily start at a PAGE_SIZE-aligned offset.
So it can happen that we'll get complaints from users who don't
expect such non-atomicity. Moreover, in the case when EOBs are HMACs
for integrity checking or authentication, we'll get false positives,
as nobody guarantees that the versions of an HMAC and its respective
data block will coincide.
Solution:
In this approach we need to serialize truncates, appending
writes, and RbRe sequences (read block, then read its EOB).
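The post gives no code for this; as a purely hypothetical illustration
(the structure, field, and function names below are invented, and
in-memory buffers stand in for the real block and EOB I/O), such
serialization could use a per-inode read-write lock: the RbRe sequence
holds the lock shared so both reads see the same version, while
truncates and appending writes hold it exclusively.

#include <pthread.h>
#include <stdio.h>
#include <string.h>

struct eob_inode {
        pthread_rwlock_t lock;      /* hypothetical per-inode lock */
        char             data[64];  /* stand-in for the data block */
        char             eob[16];   /* stand-in for its EOB (HMAC/EOF) */
};

/* Rb + Re performed as one unit: both reads see the same "version". */
static int read_block_and_eob(struct eob_inode *ino, char *data, char *eob)
{
        pthread_rwlock_rdlock(&ino->lock);
        memcpy(data, ino->data, sizeof(ino->data));
        memcpy(eob, ino->eob, sizeof(ino->eob));
        pthread_rwlock_unlock(&ino->lock);
        return 0;
}

/* Truncates and appending writes take the lock exclusively, so they
 * can never interleave with an RbRe sequence. */
static int append_write(struct eob_inode *ino, const char *data,
                        const char *eob)
{
        pthread_rwlock_wrlock(&ino->lock);
        memcpy(ino->data, data, sizeof(ino->data));
        memcpy(ino->eob, eob, sizeof(ino->eob));
        pthread_rwlock_unlock(&ino->lock);
        return 0;
}

int main(void)
{
        struct eob_inode ino;
        char d[64] = "block v1", e[16] = "hmac v1";
        char data[64], eob[16];

        pthread_rwlock_init(&ino.lock, NULL);
        append_write(&ino, d, e);
        read_block_and_eob(&ino, data, eob);
        printf("%s / %s\n", data, eob);
        pthread_rwlock_destroy(&ino.lock);
        return 0;
}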
Approach 2: Storing EOBs in the file's body.
In this case EOBs are stored in the file's body (by appending to
the file in the case of the EOF, or by interleaving the file with
HMACs, etc.). So a file together with its EOBs is a single whole
from the standpoint of CloudFS, and there are no atomicity problems
specific to Approach 1.
However, in this case all files maintained by the low-level
local fs will have increased sizes (the total size of all EOBs is
added), so the actual file size must be stored as an additional
attribute (e.g. as an xattr value).
The ->open() method of the high-level translator loads the actual
file size into the cloudfs-specific part of the inode by issuing
->getxattr(), so that it is persistent in memory on the server.
Every ->truncate() and appending ->write() in the high-level
xlator updates the in-core and on-disk actual sizes simultaneously
(issuing ->setxattr() for the latter). This actual size is what
should be returned to the user by ->fstat(), ->lookup(), etc. as
st_size.
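As an illustration only (the xattr key name, helper names, and demo
path below are hypothetical, and a real translator would go through
its ->getxattr()/->setxattr() fops rather than direct Linux syscalls),
the actual-size bookkeeping could look roughly like this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/xattr.h>

#define SIZE_XATTR "user.cloudfs.actual-size"   /* hypothetical key */

/* What a truncate/appending write would do: record the plaintext
 * ("actual") size next to the inflated on-disk file. */
static int store_actual_size(const char *path, off_t size)
{
        char buf[32];
        int  len = snprintf(buf, sizeof(buf), "%lld", (long long)size);

        return setxattr(path, SIZE_XATTR, buf, (size_t)len, 0);
}

/* What ->fstat()/->lookup() would report as st_size: the stored
 * actual size if present, otherwise the (inflated) local file size. */
static off_t load_actual_size(const char *path)
{
        char        buf[32] = { 0 };
        struct stat st;

        if (getxattr(path, SIZE_XATTR, buf, sizeof(buf) - 1) > 0)
                return (off_t)atoll(buf);
        return stat(path, &st) == 0 ? st.st_size : (off_t)-1;
}

int main(int argc, char **argv)
{
        const char *path = argc > 1 ? argv[1] : "/tmp/eob-demo";

        if (store_actual_size(path, 12345) != 0)
                perror("setxattr");
        printf("reported st_size: %lld\n",
               (long long)load_actual_size(path));
        return 0;
}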
Tree cleanup
by Jeff Darcy
Is there any reason I shouldn't delete the following branches?
uidmap
cloudfsd
ui_work
AFAIK everything in these has been subsumed by other branches and/or
merged into master, so they're not even of historical interest. This
leaves the following live branches.
master
aes (Edward's encryption-related changes)
cbc (my prototype oplock-protocol changes)
uidmap-management (Kaleb's packaging work)
CloudFS vs. ecryptfs vs. encfs vs. Tahoe-LAFS
by Zooko Wilcox-O'Hearn
re: https://fedorahosted.org/pipermail/cloudfs-devel/2011-June/000097.html
Hi, folks:
I'm a contributor to the Tahoe-LAFS project (http://tahoe-lafs.org ),
and Jeff Darcy (whom I have shared technical chat with for many a
year) pointed this thread out to me.
First of all, I'd like to emphasize that GlusterFS and Tahoe-LAFS are
very different beasts. (Jeff probably already knows this, but...)
Tahoe-LAFS is almost more like a "nosql" key-value store or like a
cloud storage system like S3 than like a filesystem. Maybe I shouldn't
have even put the "FS" in its name! But, back when Tahoe-LAFS started,
the words "nosql" and "cloud" had not yet been coined. :-) Also, we
*do* provide a files-and-directories graph structure, unlike most
nosql-ish/cloud-ish things, and we do attempt to layer on a (possibly
very inefficient) POSIX API through the SFTP interface, which is what
Jeff benchmarked.
So, all of this is to say that I'm not surprised if Tahoe-LAFS
performance for filesystem-flavored tasks is much worse than
GlusterFS's.
I am really pleased to see these numbers. Thanks for doing this! I've
been wishing for some measurements like this for a long time.
Could you please give more details about what the workload was? If you
used the SFTP interface, then I assume it consisted of merely
uploading and downloading files, right? Because SFTP doesn't really
support POSIXy things like editing the contents of files in place
AFAIK. "Just uploading and downloading files" is what Tahoe-LAFS has
always been primarily intended for, so if it is this much slower even
for that use case, then I can't place the blame for the slowness on
the POSIX API layer but instead it rests on the core Tahoe-LAFS
protocols themselves.
Also, could you run the same experiment with vanilla GlusterFS sans
all encryption, or with CloudFS with encryption turned off? Those
would be very interesting numbers to see.
I don't know if this fits your API/use cases at all, but a mature and
widely cited benchmark is the Yahoo! Cloud Serving Benchmark. I would
love to see CloudFS compared to some of the ten or so databases tested
by that framework:
https://github.com/brianfrankcooper/YCSB/wiki
Regards,
Zooko