On 02/24/2011 07:23 PM, Edward Shishkin wrote:
Nup. "Block cipher converted to a stream cipher"
doesn't resolve the
EOF problem. Moreover, this sounds bad, so I suggest to adhere the
following terminology.
There are block ciphers and stream ciphers.
. A stream cipher translates 1 byte of plaintext to 1 byte of
ciphertext. Example of a stream cipher: RC4.
. A block cipher translates 1 block of plaintext to 1 block (of the
cipher's block size) of ciphertext. Examples of block ciphers: DES, AES.
A block cipher can have stream (chaining) modes, which require an
initial vector (IV). Regardless of mode, a block cipher produces
output whose size is a multiple of the block size.
So stream ciphers don't have the EOF problem, and block ciphers do
have this problem regardless of stream modes.
That is incorrect. See e.g. the OpenSSL man page for DES
(http://www.openssl.org/docs/crypto/des.html#), which is
uncharacteristically clear on the issue:
    DES_cfb_encrypt() encrypt/decrypts using cipher feedback mode. This
    method takes an array of characters as input and outputs an array of
    characters. It does not require any padding to 8 character groups.
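
To make that concrete, here is a minimal sketch (mine, not from the man
page) using DES_cfb64_encrypt(), the streaming CFB variant documented on
the same page: 13 bytes in, 13 bytes out, no padding. Compile with
-lcrypto.

    #include <stdio.h>
    #include <string.h>
    #include <openssl/des.h>

    int main(void)
    {
        DES_cblock key = {0x01,0x23,0x45,0x67,0x89,0xab,0xcd,0xef};
        DES_cblock iv;
        DES_key_schedule ks;
        int num = 0;
        unsigned char in[13] = "hello, world";  /* 13 bytes, not 8-aligned */
        unsigned char out[13], back[13];

        DES_set_key_unchecked(&key, &ks);

        memset(iv, 0, sizeof(iv));              /* demo IV only */
        DES_cfb64_encrypt(in, out, sizeof(in), &ks, &iv, &num, DES_ENCRYPT);

        memset(iv, 0, sizeof(iv));              /* same IV to decrypt */
        num = 0;
        DES_cfb64_encrypt(out, back, sizeof(back), &ks, &iv, &num, DES_DECRYPT);

        printf("13 bytes in, 13 bytes out, roundtrip ok: %d\n",
               memcmp(in, back, sizeof(in)) == 0);
        return 0;
    }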
Clearer (though perhaps less authoritative) explanations can be found in
the following places.
http://etutorials.org/Programming/secure+programming/Chapter+5.+Symmetric...
http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Cipher_feedb...
http://www.di-mgt.com.au/cryptopad.html#cfbofbmodes
What is true for CFB is also mostly true for OFB and CTR, and it holds
for AES just as for DES. For example, RFC 3686 specifies the use of
AES-CTR for IPsec. I also have empirical evidence from using *this
code* that encrypting and decrypting odd-size files works fine without
padding.
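
For AES-CTR the same property can be seen through OpenSSL's EVP
interface; the following minimal sketch (mine, for illustration only)
encrypts a 5-byte buffer and gets a 5-byte ciphertext back.

    #include <stdio.h>
    #include <openssl/evp.h>

    int main(void)
    {
        unsigned char key[16] = "0123456789abcde";  /* demo 128-bit key */
        unsigned char iv[16]  = {0};                /* demo counter block */
        unsigned char in[5]   = "abcd";             /* 5 bytes, unaligned */
        unsigned char out[5];
        int outl = 0, finl = 0;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out, &outl, in, sizeof(in));
        EVP_EncryptFinal_ex(ctx, out + outl, &finl); /* finl == 0 in CTR */
        printf("plaintext %zu bytes -> ciphertext %d bytes\n",
               sizeof(in), outl + finl);
        EVP_CIPHER_CTX_free(ctx);
        return 0;
    }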
There is an ESSIV technique for assigning the IV (used in the Linux
dm-crypt subsystem):

    IV(sector) = E_s(sector), where s = hash(K);

see http://en.wikipedia.org/wiki/Disk_encryption_theory for details.
I think we can use it with the following modification: instead of
"sector" we should take (N + gfid), where N is the number of the
logical cluster in the file (*).
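
Not from the thread, just to make the proposal concrete: a minimal
sketch of that IV derivation using OpenSSL's SHA-256 and AES
primitives, assuming gfid is the 16-byte GlusterFS file id and N a
64-bit cluster number, and interpreting "+" as 128-bit little-endian
addition (my assumption, not settled design).

    #include <stdint.h>
    #include <string.h>
    #include <openssl/aes.h>
    #include <openssl/sha.h>

    /* IV = E_s(N + gfid), s = hash(K) */
    void essiv_iv(const unsigned char *K, size_t Klen,
                  const unsigned char gfid[16], uint64_t N,
                  unsigned char iv[16])
    {
        unsigned char s[SHA256_DIGEST_LENGTH];  /* 32 bytes: AES-256 key */
        unsigned char blk[16];
        AES_KEY aeskey;
        int i;

        SHA256(K, Klen, s);                     /* s = hash(K) */
        AES_set_encrypt_key(s, 256, &aeskey);

        memcpy(blk, gfid, 16);                  /* blk = gfid */
        for (i = 0; i < 16 && N; i++) {         /* blk += N, with carry */
            uint64_t sum = (uint64_t)blk[i] + (N & 0xff);
            blk[i] = (unsigned char)sum;
            N = (N >> 8) + (sum >> 8);
        }
        AES_encrypt(blk, iv, &aeskey);          /* IV = E_s(blk) */
    }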
AFAICT all of the methods described on that page assume that the
data-block size is a multiple of the cipher-block size. This is a
reasonable assumption when dealing with block storage, but at the
filesystem level we don't have that luxury. We still need to deal
*somehow* with the issue of odd offsets (including but not limited to EOF).
I suggest considering only 4K clusters.
I think that will be fine to start with (it's how the code currently
works), but in the future I think we need to support other block sizes
as well. Since we have to do an atomic read-modify-write for partial
blocks, using 4KB chunks might unnecessarily sacrifice performance for
applications that typically write <1KB. This is true not so much
because of the bandwidth, but because of the increased potential for
contention. The larger the block size, the more likely false
contention (same block, but not the same byte range) becomes; a rough
sketch of that read-modify-write path follows.
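
For illustration, here is a sketch of the read-modify-write that any
sub-cluster write forces. The helpers (read_cluster(),
encrypt_cluster(), etc.) are hypothetical stand-ins, not the actual
translator calls; the per-cluster lock around the whole sequence is
where the false contention comes from.

    #include <string.h>

    #define CLUSTER_SIZE 4096

    /* hypothetical stand-ins for the real storage and crypto calls */
    extern void lock_cluster(unsigned long n);
    extern void unlock_cluster(unsigned long n);
    extern void read_cluster(unsigned long n, unsigned char *buf);
    extern void write_cluster(unsigned long n, const unsigned char *buf);
    extern void decrypt_cluster(unsigned char *buf, unsigned long n);
    extern void encrypt_cluster(unsigned char *buf, unsigned long n);

    void write_partial(unsigned long n, size_t off,
                       const unsigned char *data, size_t len)
    {
        unsigned char buf[CLUSTER_SIZE];

        lock_cluster(n);              /* serializes the whole cluster */
        read_cluster(n, buf);         /* must read bytes we won't change */
        decrypt_cluster(buf, n);
        memcpy(buf + off, data, len); /* the only bytes the caller wrote */
        encrypt_cluster(buf, n);
        write_cluster(n, buf);
        unlock_cluster(n);
    }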