memory leak (?) in crypt_launch
by Edward Shishkin
...
local->xattr = get_new_dict();
if (!local->xattr) {
        op_errno = ENOMEM;
        goto err;
}
if (dict_set_str(local->xattr, "trusted.glusterfs.lock", "fubar") != 0) {
        op_errno = EIO;
        dict_unref(local->xattr);
        goto err;
}
...
I found this incorrect: dict_unref() decrements the refcount, whereas
get_new_dict() doesn't initialize it. If we really need refcounts,
I think we should use the pair (dict_new, dict_unref). Otherwise,
(get_new_dict, dict_destroy) will be enough.
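A minimal sketch of the first variant, assuming dict_new() returns the
dictionary with its refcount already set to 1, so dict_unref() is the
matching release call:

local->xattr = dict_new();
if (!local->xattr) {
        op_errno = ENOMEM;
        goto err;
}
if (dict_set_str(local->xattr, "trusted.glusterfs.lock", "fubar") != 0) {
        op_errno = EIO;
        dict_unref(local->xattr);  /* drops the count to 0 and frees */
        goto err;
}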
Any ideas?
Thanks,
Edward.
New encryption code
by Jeff Darcy
I've posted some new encryption code to the "aes" branch of the git repo.
http://git.fedorahosted.org/git/?p=CloudFS.git;a=summary
This isn't intended to be a final authoritative anything, or to displace
any of the things Edward is doing. It's just a demonstration that the
CTR-mode approach can be made to work. Besides avoiding the
read-modify-write and contention issues of the previous approach, it's
much simpler and seems much faster (though I haven't done complete
performance testing yet). It supports AES-128, AES-192, or AES-256,
with the key given as a text string, hex string, or path to a
hex-encoded file. The basic mechanism sort of combines CTR mode and
ESSIV, and thus shares a couple of important weaknesses.
(1) Like all such methods, the result is vulnerable to a known-plaintext
attack. If someone knows both the plaintext and the ciphertext for a range
of a file, they can easily derive the keystream for that range and use it
to decrypt that range for the rest of the file's lifetime, even as the
contents change.
(2) More seriously, CTR mode is vulnerable to tampering. Without even
being able to read the file contents, the storage provider can flip any
bit in the plaintext merely by flipping that bit in the ciphertext.
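Both weaknesses fall directly out of CTR's basic identity, C = P XOR
keystream. A toy illustration (not CloudFS code; the keystream bytes here
are made up):

#include <stdio.h>

int main(void)
{
        unsigned char P[4]  = { 'A', 'B', 'C', 'D' };       /* known plaintext  */
        unsigned char KS[4] = { 0x13, 0x57, 0x9b, 0xdf };   /* secret keystream */
        unsigned char C[4], ks[4];
        int i;

        for (i = 0; i < 4; i++)
                C[i] = P[i] ^ KS[i];       /* encrypt */

        /* (1) plaintext + ciphertext => keystream for this range */
        for (i = 0; i < 4; i++)
                ks[i] = P[i] ^ C[i];       /* ks[i] == KS[i] */

        /* (2) flipping a ciphertext bit flips the same plaintext bit */
        C[0] ^= 0x01;
        printf("%c\n", C[0] ^ KS[0]);      /* prints '@', i.e. 'A' ^ 0x01 */
        return 0;
}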
Note that neither of these vulnerabilities should be exploitable by
anyone but the storage provider; they both require access to the
ciphertext, and the in-flight encryption (separate from this) should
prevent anyone except them and the user from having access to that. I
think that's sufficient for many environments, especially in a private
cloud where the storage provider shares organizational accountability
with the tenants. If it's not, then the only way to solve (1) is to
store per-file keys in a separate distributed store that's secure enough
for that purpose and make the result fast/reliable enough for general
use. For (2), I think we can reduce a "flip one specific bit" problem to
a "flip some random bit in the same block" problem pretty easily, but
for a real solution we need to detect the tampering and reject the
tampered result. That would require storing checksums (or similar
integrity data) under the same constraints as the per-file keys for (1).
This puts us into the
same problem space that Tahoe-LAFS tries to address, and despite many
more people working on that for many more years they have yet to come up
with a solution that's anywhere near adequate for primary storage. I
propose that we consider only small incremental improvements for now,
and defer the ultra-secure stuff until version two at least.
So, let me know what you think, either of the code or the commentary,
and we'll go from there.
Changes to 'aes'
by Jeff Darcy
New branch 'aes' available with the following commits:
commit 77cbe28984a7fad097575f6d7e95bb62cacf3115
Author: Jeff Darcy <jdarcy(a)redhat.com>
Date: Thu Mar 17 15:41:46 2011 -0400
Add support for hex keys (inline or file) up to 256 bits.
commit 2bd0dff0e743f7ed01c3c1bd797b59f894d234ae
Author: Jeff Darcy <jdarcy(a)redhat.com>
Date: Wed Mar 16 16:29:35 2011 -0400
New code using AES, non-constant IV, no read/modify/write.
Re: Fwd: Encryption
by Edward Shishkin
On 02/24/2011 06:05 PM, Jeff Darcy wrote:
>
> There are three basic issues that need to be addressed in the encryption
> module: type of cipher used, initialization-vector handling, and
> conflict management. Each is non-trivial, so I'll address them in turn.
>
> = Cipher
>
> The main factor affecting our choice of ciphers (or APIs to them) is
> that we need to be able to deal efficiently with updates both in the
> middle of the file and at the end. At EOF, the problem is that we need
> a whole cipher-block in order to decrypt, but the file might actually
> end at any byte boundary within that cipher-block. Therefore, we have
> to deal with the "residue" somehow.
>
> * Store the residue in an xattr.
>
> * Store a whole cipher-block at the end, record the amount of padding in
> an xattr.
>
> * Use a stream cipher (or block cipher converted to a stream cipher).
Nup. A "block cipher converted to a stream cipher" doesn't resolve the EOF
problem. Moreover, the phrase itself is misleading, so I suggest we adhere
to the following terminology.
There are block ciphers and stream ciphers.
. A stream cipher translates 1 byte of plaintext into 1 byte of
ciphertext. Example of a stream cipher: RC4.
. A block cipher translates 1 block of plaintext into 1 block (of the
cipher's block size) of ciphertext. Examples of block ciphers: DES, AES.
A block cipher can be used in a stream (chaining) mode, which requires an
initialization vector (IV). Regardless of mode, a block cipher's output
size is a multiple of the block size.
So stream ciphers don't have the EOF problem, while block ciphers do,
regardless of stream modes.
Block ciphers in non-chaining modes (e.g. ECB) are not resistant to
attacks, and we won't consider them.
So from here on, let's speak of "block ciphers" and "stream ciphers".
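To make the EOF problem concrete, here is a sketch using OpenSSL's EVP
interface (assuming default PKCS padding): encrypting a 5-byte tail with
AES-128 in CBC mode still emits a full 16-byte block, and that residue has
to live somewhere.

#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
        unsigned char key[16] = {0}, iv[16] = {0};      /* demo values only */
        unsigned char in[5] = "tail", out[32];
        int n1 = 0, n2 = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &n1, in, sizeof(in));
        EVP_EncryptFinal_ex(ctx, out + n1, &n2);        /* pads to 16 bytes */
        printf("5 plaintext bytes -> %d ciphertext bytes\n", n1 + n2);
        EVP_CIPHER_CTX_free(ctx);
        return 0;
}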
>
> This problem is further compounded by the striping case, where EOF for a
> stripe component (local file stored on one brick) might not be EOF for
> the entire file (union of all stripe components).
>
> Since the two xattr-based approaches both require extra calls, the
> stream-cipher approach has been used, with the cipher resetting at block
> (e.g. 4KB) boundaries to allow efficient middle-of-file updates. As it
> turns out, pure stream ciphers are relatively uncommon.
Stream ciphers are an EU standard, whereas block ciphers are a US standard.
In short, stream ciphers are no worse than block ciphers in terms of speed
and strength, but they require more care. I know OpenSSL supports a stream
cipher algorithm (RC4), but I can neither recommend nor reject it for now
(I haven't had a chance to survey the existing stream algorithms).
> More often,
> CFB/OFB/CTR methods are used to convert a block cipher into a stream
> cipher. The OpenSSL documentation is *amazingly* bad, but it looks like
> it should be pretty easy to use any of these techniques with AES as well
> as with DES.
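For comparison with the CBC sketch above, the same 5 bytes through AES-128
in CTR mode come out as exactly 5 bytes, i.e. CTR behaves as a stream
cipher (this assumes an OpenSSL build that provides EVP_aes_128_ctr()):

#include <openssl/evp.h>
#include <stdio.h>

int main(void)
{
        unsigned char key[16] = {0}, iv[16] = {0};      /* demo values only */
        unsigned char in[5] = "tail", out[16];
        int n1 = 0, n2 = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &n1, in, sizeof(in));
        EVP_EncryptFinal_ex(ctx, out + n1, &n2);        /* n2 == 0, no padding */
        printf("5 plaintext bytes -> %d ciphertext bytes\n", n1 + n2);
        EVP_CIPHER_CTX_free(ctx);
        return 0;
}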
>
> = Initialization vector
>
> Right now, the code uses a constant IV, which is totally unacceptable
> from a security standpoint and was always meant to be changed before
> release. The question is: what should we use for an IV? GlusterFS does
> attach a supposedly unique "gfid" as an xattr on each file, so that
> might be usable as a basis for the IV so long as we can verify that it's
> universal and stable enough to be sure that data won't become
> unrecoverable because a gfid is missing or changed.
There is the ESSIV technique for assigning IVs (used in the Linux dm-crypt
subsystem):
IV(sector) = E_s(sector), where s = hash(K);
see http://en.wikipedia.org/wiki/Disk_encryption_theory for details.
I think we can use it with the following modification: instead of
"sector" we should take (N + gfid), where N is the number of the logical
cluster within the file (*). This is needed to make sure that files with
identical plaintext will have different ciphertext.
(*) Logical clusters are chunks of a file that are ciphered independently
to allow efficient random access. I suggest considering only 4K clusters.
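A hypothetical sketch of that modified ESSIV computation, with s = SHA-256(K)
and the cluster number N folded into the gfid by XOR. The exact combining
rule for "(N + gfid)" is an assumption here, not settled design:

#include <openssl/aes.h>
#include <openssl/sha.h>
#include <stdint.h>
#include <string.h>

void essiv_iv(const unsigned char *K, size_t K_len,
              const unsigned char gfid[16], uint64_t N,
              unsigned char iv[16])
{
        unsigned char s[SHA256_DIGEST_LENGTH];  /* s = hash(K) */
        unsigned char blk[16];
        AES_KEY essiv_key;
        int i;

        SHA256(K, K_len, s);
        AES_set_encrypt_key(s, 256, &essiv_key);

        memcpy(blk, gfid, 16);
        for (i = 0; i < 8; i++)                 /* fold N into the gfid */
                blk[i] ^= (unsigned char)(N >> (8 * i));

        AES_encrypt(blk, iv, &essiv_key);       /* IV = E_s(N + gfid) */
}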
Edward.
>
> = Conflict management
>
> For partial-block writes, the encryption module needs to do the
> following atomically.
>
> * Read the current block contents.
>
> * Decrypt.
>
> * Overlay the new partial block on the old whole block.
>
> * Encrypt.
>
> * Write the entire block.
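Roughly, in code (a sketch only: toggle_crypt() is a stand-in stub for the
real cipher, and the real translator operates on iobufs rather than a plain
file descriptor):

#include <string.h>
#include <unistd.h>

/* Stub cipher: XOR with a fixed byte, standing in for the real cipher. */
static void toggle_crypt(unsigned char *buf, size_t len)
{
        size_t i;
        for (i = 0; i < len; i++)
                buf[i] ^= 0x5a;
}

/* The read-modify-write sequence for a partial-block write. */
int rmw_partial_write(int fd, off_t blk_off, size_t blk_size,
                      const unsigned char *frag, size_t frag_off,
                      size_t frag_len)
{
        unsigned char buf[4096];

        if (blk_size > sizeof(buf) || frag_off + frag_len > blk_size)
                return -1;
        if (pread(fd, buf, blk_size, blk_off) != (ssize_t)blk_size)
                return -1;                          /* 1. read current block   */
        toggle_crypt(buf, blk_size);                /* 2. decrypt              */
        memcpy(buf + frag_off, frag, frag_len);     /* 3. overlay new fragment */
        toggle_crypt(buf, blk_size);                /* 4. re-encrypt           */
        return pwrite(fd, buf, blk_size, blk_off)
               == (ssize_t)blk_size ? 0 : -1;       /* 5. write whole block    */
}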
>
> There's some additional complexity to do with EOF, but that's the basic
> idea. The current code eschews locks in favor of "optimistic"
> concurrency control in which a server-side "oplock" translator maintains
> a generation number for each inode. Clients can start a "transaction"
> before they read, associating the current inode generation with their
> connection. The next write on that connection will compare the stored
> generation number vs. the current one. If they're not the same, that
> means there was another write since the transaction started, and the
> write is rejected so the client can start over. Unfortunately, this
> does not account for "self conflicts" when one client sends multiple
> writes to the same file in parallel. The standard
> performance/write-behind translator does this constantly, which is why
> it has to be disabled when using cloudfs encryption, and there are many
> other ways for it to happen.
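A toy sketch of that generation check (hypothetical names; the real oplock
translator tracks this per inode on the server side):

#include <stdint.h>
#include <pthread.h>
#include <errno.h>

struct inode_gen {
        pthread_mutex_t lock;
        uint64_t        generation;   /* bumped on every successful write */
};

/* Transaction-begin: the client records the current generation. */
uint64_t txn_begin(struct inode_gen *ig)
{
        uint64_t g;
        pthread_mutex_lock(&ig->lock);
        g = ig->generation;
        pthread_mutex_unlock(&ig->lock);
        return g;
}

/* On write: succeed only if nobody wrote since txn_begin. */
int txn_write_check(struct inode_gen *ig, uint64_t recorded)
{
        int ret = 0;
        pthread_mutex_lock(&ig->lock);
        if (ig->generation != recorded)
                ret = -EBUSY;         /* conflict: client must start over */
        else
                ig->generation++;     /* commit: bump for the next writer */
        pthread_mutex_unlock(&ig->lock);
        return ret;
}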
>
> My first inclination would be to add client code which detects and
> avoids such self-conflict, but I have a sneaking suspicion that will be
> pretty complex and have to be tweaked a lot to avoid compromising
> performance. I kind of suspect that server-side queuing might be the
> right answer here. If a transaction is begun which conflicts with
> another already in progress, then the new one is simply queued behind
> the old one and the transaction-begin call (actually a special setxattr)
> will be resumed when the old ones complete. This also addresses
> fairness/forward-progress issues inherent in both the locking and retry
> models, though we'll need to put some thought into recovery from faults.