Ordered writes in HekaFS encryption layer
DRAFT
A transparent encryption layer brings in problems specific to stackable file systems (i.e. intermediary layers between the user and the local fs). One such problem is that these layers are not aware of the local file system metadata that indicates "holes". If such holes exist, they are presented to the user as "garbage" (a decrypted set of zeros), which means POSIX non-compliance.
The only reasonable approach for us is to not allow holes on the local fs at all, i.e. to detect every moment a hole would be created and to convert it into a set of (encrypted) zeros.
A hole is created every time the local file system is asked to write at an offset larger than the file size. So the first idea is to compare the file size with the offset the user wants to write at: if the offset is larger than the file size, we convert the hole before writing. However, it is not enough to only follow the user's instructions.
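To illustrate the check, here is a minimal standalone sketch in plain C (not the actual crypt translator code); write_zeros() is a hypothetical stand-in for "encrypt and write a run of zero bytes":

/* Sketch only: convert a would-be hole into explicit zeros before writing.
 * write_zeros() is a hypothetical stand-in for "encrypt and write a run of
 * zero bytes"; it is not a real GlusterFS/HekaFS call. */
#include <stdint.h>
#include <stdio.h>

static int write_zeros(uint64_t off, uint64_t len)
{
        printf("fill [%llu, %llu) with encrypted zeros\n",
               (unsigned long long)off, (unsigned long long)(off + len));
        return 0;
}

/* If the write starts beyond the current end of file, fill the gap first. */
static int prepare_write(uint64_t write_off, uint64_t file_size)
{
        if (write_off > file_size)
                return write_zeros(file_size, write_off - file_size);
        return 0;
}

int main(void)
{
        /* 10K file, user writes at offset 12K: the 2K gap must be filled */
        return prepare_write(12 * 1024, 10 * 1024);
}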
The encryption layer writes data in chunks (usually of atom size). This means that in the common case we (the encryption layer) must split a user request into many chunks and write them separately (see the sketch after this list). This is because:

- Linux VFS doesn't accept too large chunks (a write of a chunk larger than MAX_INT will be incomplete, and we cannot allow such "truncation" in the encryption layer);
- splitting large writes improves the case of concurrent access, as writing to different parts of a file requires acquiring different "shared locks".
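Purely as an illustration of the granulation, a standalone sketch; ATOM_SIZE = 4096 is assumed here only for the example, the real atom size comes from the cipher configuration:

/* Sketch only: split one user write into atom-aligned chunks, the way the
 * encryption layer granulates requests.  ATOM_SIZE is an assumption. */
#include <stdint.h>
#include <stdio.h>

#define ATOM_SIZE 4096ULL

static void split_write(uint64_t off, uint64_t count)
{
        uint64_t pos = off;
        uint64_t end = off + count;

        while (pos < end) {
                /* end of the atom that 'pos' falls into */
                uint64_t atom_end = (pos / ATOM_SIZE + 1) * ATOM_SIZE;
                uint64_t len = (atom_end < end ? atom_end : end) - pos;

                printf("chunk: offset=%llu size=%llu\n",
                       (unsigned long long)pos, (unsigned long long)len);
                pos += len;
        }
}

int main(void)
{
        /* append 20K at offset 10K -> head chunk, four full atoms, tail chunk */
        split_write(10 * 1024, 20 * 1024);
        return 0;
}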
However, splitting writes without additional effort from the encryption layer is prone to creating short-lived holes on the local fs. For example, the user asks to append 20K to a 10K file. Suppose we write in 4K chunks and the first chunk that hits the local fs has offset 12K: a 2K hole will be created on the local fs.
We need to avoid such holes in spite of their short lifespan: after a system crash they become persistent holes (and everything will look consistent from the standpoint of the local fs).
We avoid such short-lived holes by using a so-called ordering technique: the encryption layer guarantees that any "appending" sequence of requests will be written in an ordered fashion.
Glossary
--------
A chunk of data is a sequence of (logical) bytes B = {b1, b2, ..., bm} in a file at some offset off. For every chunk B we denote offset(B) = off, size(B) = m.

A request is an order for the local fs to write some chunk of data (see above).

To submit a request means to ask an upper server-side manager (the oplock xlator in our case) to write the respective chunk of data.

A sequence of requests {R0, R1, ..., Rn} is any sequence of chunks such that offset(R_i) + size(R_i) == offset(R_(i+1)). Request R_i is the direct parent of R_(i+1); request R_s (s < i) is an indirect parent of R_(i+1).

A sequence of requests {R0, R1, ..., Rn} is appending iff offset(R_i) > file_size for some i, 0 <= i <= n. In particular, an appending sequence changes the file size.

A sequence of requests is overwriting iff it is not appending.

An appending sequence is minimal iff offset(R0) > file_size.
Lemma
-----
Every sequence can be split into an overwriting sub-sequence and a minimal appending sub-sequence.
So we split every sequence of requests into 2 sub-sequences (an overwriting one and an appending one). The overwriting sub-sequence is written in a parallel fashion; the appending sub-sequence is written in an ordered fashion (see below for definitions).
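Here is a standalone sketch of the split, following the appending definition above as written; struct req is a simplified stand-in for the real request structure:

/* Sketch only: split a contiguous sequence of requests into an overwriting
 * prefix and a minimal appending suffix, per the lemma above. */
#include <stdint.h>
#include <stdio.h>

struct req {
        uint64_t off;
        uint64_t size;
};

/* Return the index of the first request that starts beyond the end of file;
 * r[0..i) is the overwriting sub-sequence, r[i..n) the minimal appending one. */
static size_t split_point(const struct req *r, size_t n, uint64_t file_size)
{
        size_t i;
        for (i = 0; i < n; i++)
                if (r[i].off > file_size)
                        break;
        return i;
}

int main(void)
{
        /* contiguous 4K requests against an 18K file */
        struct req seq[] = {
                { 16 * 1024, 4096 },   /* starts below EOF: overwriting */
                { 20 * 1024, 4096 },   /* starts beyond EOF: appending  */
                { 24 * 1024, 4096 },
        };
        size_t k = split_point(seq, 3, 18 * 1024);
        printf("overwriting: %zu request(s), appending: %zu request(s)\n",
               k, 3 - k);
        return 0;
}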
Every sequence has:
. block of HEAD_ATOM type (<= 1),
. block of TAIL_ATOM type (<= 1),
. blocks of FULL_ATOM type (>= 0).
We define a linear order on the set of blocks of any sequence by the following rule:
(A < B) iff (offset(A) < offset(B)).
All requests {R1, R2, ...} of an appending sequence are written in an ordered ("parent first") fashion. This means that:

A1. On the client side

R_(i+1) is written by the callback function ->writev_cbk() of the ->writev() spawned to write its direct parent (R_i). Since we acquire exclusive access to write the whole appending sequence, all its requests are written immediately in an ordered fashion (we don't ask the server-side manager about each separate R_j). See do_ordered_submit().
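A standalone sketch of the "parent first" idea follows: the completion callback of one chunk kicks off the next one. Plain synchronous calls stand in for the asynchronous ->writev()/->writev_cbk() pair, so this only models do_ordered_submit(), it is not its code:

/* Sketch only: ordered submission where each request's "completion" submits
 * its direct child. */
#include <stdint.h>
#include <stdio.h>

struct req {
        uint64_t off;
        uint64_t size;
};

struct ordered_ctx {
        const struct req *seq;
        size_t            n;
        size_t            next;   /* which request to submit next */
};

static void submit(struct ordered_ctx *ctx);

/* completion callback of one chunk: submit its direct child */
static void writev_cbk(struct ordered_ctx *ctx)
{
        submit(ctx);
}

static void submit(struct ordered_ctx *ctx)
{
        if (ctx->next == ctx->n)
                return;                 /* whole appending sequence written */
        const struct req *r = &ctx->seq[ctx->next++];
        printf("write chunk: offset=%llu size=%llu\n",
               (unsigned long long)r->off, (unsigned long long)r->size);
        writev_cbk(ctx);                /* "completion" of this chunk */
}

int main(void)
{
        struct req seq[] = {
                { 20 * 1024, 4096 }, { 24 * 1024, 4096 }, { 28 * 1024, 4096 },
        };
        struct ordered_ctx ctx = { seq, 3, 0 };
        submit(&ctx);   /* under one exclusive grant for the whole sequence */
        return 0;
}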
B1. On the server side

A special server-side manager (the oplock xlator) queues requests and grants (or declines) exclusive access to write the whole appending sequence.
All requests {R1, R2, ...} of an overwriting sequence are written in a parallel fashion. This means that:

A2. On the client side

We submit all R_j in a loop (see do_parallel_submit()), i.e. for every request R_j we ask the server-side manager (the oplock xlator) for "shared access". If shared access is not granted, we try again.
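A standalone sketch of that loop; try_shared_access() is a hypothetical stand-in for asking the oplock xlator, rigged to refuse each request once just to show the retry:

/* Sketch only: parallel submission of an overwriting sequence with retries. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct req {
        uint64_t off;
        uint64_t size;
};

/* pretend server: deny the first attempt for each request, then grant */
static bool try_shared_access(size_t i)
{
        static int attempts[16];
        return attempts[i]++ > 0;
}

static void do_parallel(const struct req *seq, size_t n)
{
        for (size_t i = 0; i < n; i++) {
                while (!try_shared_access(i))
                        printf("request %zu: shared access denied, retry\n", i);
                printf("request %zu: write offset=%llu size=%llu\n", i,
                       (unsigned long long)seq[i].off,
                       (unsigned long long)seq[i].size);
        }
}

int main(void)
{
        struct req seq[] = { { 0, 4096 }, { 4096, 4096 } };
        do_parallel(seq, 2);
        return 0;
}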
B2. On the server side

A special server-side manager (the oplock xlator) queues requests and grants (or declines) shared access to write an individual request R_j of an overwriting sequence. (Definitions of exclusive and shared access, and the policy of granting them, will be given separately.)
This technique simplifies things (i.e. no additional sorting machinery is needed on the server side).
Implementation details.
The order HEAD_ATOM < FULL_BLOCK_ATOM < HEAD_ATOM is hardcoded (see the function do_ordered_submit()). The order on blocks of the same FULL_BLOCK_ATOM type is maintained by a special cursor in the local area (see crypt_local_t, avec_config).
Recap
-----
We ask for exclusive access for the whole appending sequence. Once it is granted, all requests of the sequence are written one by one in an ordered fashion.

All requests of an overwriting sequence are submitted in a parallel fashion: we ask for shared access for each individual request of the sequence.
All comments and suggestions are welcome.
Edward.
On Wed, 09 Nov 2011 19:48:35 +0100 Edward Shishkin edward@redhat.com wrote:
> [...]
> Sequence of requests {R0, R1, ..., Rn} is any sequence of chunks so that offset(R_i) + size(R_i) == offset(R_(i+1)). Request R_i is direct parent of R_(i+1). Request R_s (s < i) is indirect parent of R_(i+1).
Your discussion of how clients handle appending vs. overwriting sequences seems to indicate that sequences are recognized as such on the client side. How? If the client receives a write, it has no way of knowing more will follow, and so it can't issue a lock (really lease) request for more than the extent of that one write. Does the client apply a Nagle-like algorithm to detect contiguous sequences? That would allow some operations to be performed across the entire sequence instead of piece by piece, but would also induce latency.
> Sequence of requests {R0, R1, ..., Rn} is appending iff offset(R_i) > file_size for some i, 0 <= i <= n. In particular, appending sequence changes file size.
Then shouldn't this be (offset(R_i) + size(R_i)) > file_size?
If the client has grouped the writes it receives into a sequence, why not coalesce the entire sequence into a single writev?
If a sequence is *only* appending (i.e. completely beyond current EOF), does the client synthesize an encrypted-zero-byte write to fill the hole? Does it lock (actually lease) the entire region from current EOF to the end of the sequence all at once?
> A special server-side manager (the oplock xlator) queues requests and grants (or declines) shared access to write an individual request R_j of an overwriting sequence. (Definitions of exclusive and shared access, and the policy of granting them, will be given separately.)
That explanation will be helpful, since it's not clear what kind of "shared" lease/lock would be needed or even valid here.
> The order HEAD_ATOM < FULL_BLOCK_ATOM < HEAD_ATOM is hardcoded
I assume you mean HEAD_ATOM < FULL_BLOCK_ATOM < TAIL_ATOM here.
Let me play devil's advocate here. Why fill holes at all? An alternative would be for the server to store information about holes, e.g. in one or more xattrs, and keep an up-to-date version of that information in memory for any open file. Any read involving a hole would return a unique error. The crypt translator receiving such an error could then issue a query roughly equivalent to FIEMAP, to get the hole information for the to-be-read region, and actually read only the filled parts. This might even allow hole-aware programs to avoid transferring encrypted zero bytes for the regions that were never actually written. Both approaches involve some complexity, but delaying or over-serializing writes seems like something we should try hard to avoid.
On 11/09/2011 08:42 PM, Jeff Darcy wrote:
> Your discussion of how clients handle appending vs. overwriting sequences seems to indicate that sequences are recognized as such on the client side. How?
The oplock xlator accumulates events (file size changes) in specially maintained data structures and evaluates every arriving request as append-truncate or overwrite. This is part of the locking protocol. I'll post the design document a bit later.
> If the client has grouped the writes it receives into a sequence, why not coalesce the entire sequence into a single writev?
On the one hand, there is a funny restriction on iov.len: it must not be larger than (2G minus a bit). I tried to eliminate this restriction a year ago without success: http://bugzilla.redhat.com/show_bug.cgi?id=612839

On the other hand, Gluster restricts the number of iovecs to MAX_IOVEC.

All this means that we cannot avoid the granulation machinery (teaching ->cbk() to spawn ->writev() for the next chunk, etc.).
> If a sequence is *only* appending (i.e. completely beyond current EOF), does the client synthesize an encrypted-zero-byte write to fill the hole? Does it lock (actually lease) the entire region from current EOF to the end of the sequence all at once?
Appending writes are performed under an exclusive lock (see next mail). Perhaps we could grant a shared lock for a single appending write. However, more than one appending write executing in parallel can conflict:

Suppose the file is 20K.
Process A performs ->writev(size = 10K, off = 20K);
Process B performs ->writev(size = 10K, off = 30K);

Process A checks the file size (20K);
Process B checks the file size (20K);
Process A writes 10K bytes at offset 20K;
Process B converts the 10K "hole" it sees at offset 20K;
Process B writes from offset 30K.

As a result we get an unexpected 10K of zeros at offset 20K. Obviously "shared" locks don't work here, despite the writes being to disjoint intervals.

Such conflicts don't occur in local file systems, which update hole metadata "in place".
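Purely as an illustration, a standalone replay of that interleaving on an in-memory "file" (not translator code), showing where the unexpected zeros come from:

/* Sketch only: replay of the race above on an in-memory buffer. */
#include <stdio.h>
#include <string.h>

#define KB 1024

static char file[40 * KB];
static size_t file_size = 20 * KB;

static void fill_zeros(size_t off, size_t len)
{
        memset(file + off, 0, len);
        if (off + len > file_size)
                file_size = off + len;
}

static void write_data(size_t off, size_t len, char c)
{
        memset(file + off, c, len);
        if (off + len > file_size)
                file_size = off + len;
}

int main(void)
{
        memset(file, 'X', file_size);             /* the original 20K of data */

        size_t size_seen_by_A = file_size;        /* A checks the size: 20K */
        size_t size_seen_by_B = file_size;        /* B checks the size: 20K */

        write_data(size_seen_by_A, 10 * KB, 'A'); /* A appends at "its" EOF */
        fill_zeros(size_seen_by_B, 30 * KB - size_seen_by_B);
                                                  /* B converts the "hole" it
                                                   * thinks exists at 20K,
                                                   * wiping A's fresh data */
        write_data(30 * KB, 10 * KB, 'B');        /* B writes at offset 30K */

        printf("byte at offset 25K: %s\n",
               file[25 * KB] == 0 ? "unexpected zero" : "A's data");
        return 0;
}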
> Let me play devil's advocate here. Why fill holes at all?
Because this is a reasonable working solution.
> An alternative would be for the server to store information about holes, e.g. in one or more xattrs, and keep an up-to-date version of that information in memory for any open file.
Such hole maps will be large consumers of xattrs and memory. We'll need to flush parts of the map to disk once in a while. We'll also need to synchronize the set of on-disk holes with the hole map in xattrs in a special order, to make sure that the set of on-disk holes is "not larger" than the hole map.
(Stupid mail client replied only to Edward instead of the list before)
On Mon, 14 Nov 2011 19:31:13 +0100 Edward Shishkin edward@redhat.com wrote:
>> Your discussion of how clients handle appending vs. overwriting sequences seems to indicate that sequences are recognized as such on the client side. How?
> The oplock xlator accumulates events (file size changes) in specially maintained data structures and evaluates every arriving request as append-truncate or overwrite. This is part of the locking protocol. I'll post the design document a bit later.
That would be nice. I look forward to seeing how it can do such aggregation without adding even more latency.
>> If the client has grouped the writes it receives into a sequence, why not coalesce the entire sequence into a single writev?
> On the one hand, there is a funny restriction on iov.len: it must not be larger than (2G minus a bit). I tried to eliminate this restriction a year ago without success: http://bugzilla.redhat.com/show_bug.cgi?id=612839
> On the other hand, Gluster restricts the number of iovecs to MAX_IOVEC.
> All this means that we cannot avoid the granulation machinery (teaching ->cbk() to spawn ->writev() for the next chunk, etc.).
Shouldn't we at least *try* to do such aggregation in the vast majority of cases where it's still possible? Failure to do so will impact already-questionable performance.
>> If a sequence is *only* appending (i.e. completely beyond current EOF), does the client synthesize an encrypted-zero-byte write to fill the hole? Does it lock (actually lease) the entire region from current EOF to the end of the sequence all at once?
> Appending writes are performed under an exclusive lock (see next mail). Perhaps we could grant a shared lock for a single appending write. However, more than one appending write executing in parallel can conflict: [...]
> As a result we get an unexpected 10K of zeros at offset 20K. Obviously "shared" locks don't work here, despite the writes being to disjoint intervals.
So why not exclusive locks? Such conflicts should be rare, and we have to issue some kind of lock request anyway, so there should be no additional overhead for the common cases.
>> Let me play devil's advocate here. Why fill holes at all?
> Because this is a reasonable working solution.
It is clearly *not* working yet, not without a lot of additional work, and whether it's reasonable is what we're discussing.
>> An alternative would be for the server to store information about holes, e.g. in one or more xattrs, and keep an up-to-date version of that information in memory for any open file.
> Such hole maps will be large consumers of xattrs and memory. We'll need to flush parts of the map to disk once in a while. We'll also need to synchronize the set of on-disk holes with the hole map in xattrs in a special order, to make sure that the set of on-disk holes is "not larger" than the hole map.
A fair point. In many cases, the holes will be aligned to filesystem blocks, so the information we need will already be available from there. That means we only need to track holes bigger than a cipherblock but smaller than a filesystem block, but that list could still become quite large. Could we "cheat" by filling such holes with explicit zeroes but leaving bigger holes at the local-filesystem level?
I'm not trying to "champion" one approach or the other. We shouldn't be rejecting *any* alternatives because of easily-solved secondary issues, just because of who has proposed what. The goal here is to determine how this problem can be solved with minimum development time and/or impact on performance, and it's not at all clear which route leads to that goal. Let's not cut short lines of inquiry too soon because of any one person's opinion or bias (including mine).