= HekaFS Improved Replication =
== Background and Requirements ==
One of the most serious internal complaints about GlusterFS is performance for small synchronous requests when using its filesystem-level replication (AFR). This problem particularly afflicts virtual-machine-image and database workloads, reducing performance to about a third of what it "should" be (compared on a per-server basis to NFS on the same hardware). The fundamental problem is that the AFR approach to making writes crash-proof involves the following operations:
1. Lock on the primary (first) server
2. Record operation-pending state (using extended attributes) on all servers
3. Issue write to all servers
4. As writes complete, update operation-pending state on other servers
5. Unlock on the primary server
Even with some operations in parallel, this requires a minimum of five network round trips to/from the primary server - possibly more as step 4 might be repeated if there are more than two replicas. Even with pending changes to AFR, such as coalescing step 4 updates, AFR's per-request latency is likely to remain terrible.
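To make the cost concrete, here is a back-of-the-envelope latency model; the 0.2 ms round-trip time is an assumed figure, chosen only for illustration:

{{{
RTT_MS = 0.2                 # assumed client/server round-trip time
AFR_ROUND_TRIPS = 5          # lock, mark pending, write, clear pending, unlock
OPTIMISTIC_ROUND_TRIPS = 1   # just the write itself, replicas in parallel

def max_sync_iops(round_trips, rtt_ms=RTT_MS):
    # Upper bound on synchronous writes/second for one serialized writer.
    return 1000.0 / (round_trips * rtt_ms)

print(max_sync_iops(AFR_ROUND_TRIPS))         # 1000 writes/s
print(max_sync_iops(OPTIMISTIC_ROUND_TRIPS))  # 5000 writes/s
}}}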
Externally, users seem to focus on a different problem: the timeliness and observability of replica repair after a server has failed and been restored[1][2]. AFR was built on the assumption that on-demand repair of individual files or directories as they're accessed would be sufficient. The message from users ever since has been unequivocal: leaving an unknown number of unrepaired files vulnerable to a second failure for an indefinite period is unacceptable. These users require immediate repair with explicit notification of return to a fully protected state, but here they run into a second snag: the time required to do a full xattr scan of a multi-terabyte filesystem through a single node is also unacceptable.

Patches were submitted almost a year ago[3] to implement precise recovery by maintaining a list of files that are partially written and might therefore require repair, but those have never been adopted. The recently introduced "proactive self heal" functionality is only slightly better. It is triggered automatically and runs inside one of the server daemons - avoiding many machine-to-machine and user-to-kernel round trips - but it's still single-threaded and drags all data through one server that might be neither source nor destination. Worse, if a second failure occurs while the lengthy repair process for a previous failure is still ongoing, a new repair cycle will be scheduled but might not even start for days while the previous repair scans millions of perfectly healthy files.
The primary requirements, therefore, are:
* Improve performance for synchronous small requests
* Provide efficient "minimal" replica repair with a positive indication of replica status
In addition to these requirements, compatibility with planned enhancements to distribution and wide-area replication would also be highly desirable.
== Proposed Solution ==
The origin of AFR's performance problems is that it requires extra operations (beyond the necessary N writes) in the non-failure case to ensure correct operation in the failure case. The basis of the proposed solution is therefore to be optimistic instead of pessimistic, expending minimal resources in the normal case and taking extra steps only after a failure. The basic write algorithm becomes:
1. Forward the write to all N replicas
2. If all N replicas indicate success, we're done
3. If any replica fails, add information about the failed request (e.g. file, offset, length) to journals on the replicas where it succeeded
4. As part of the startup process, defer completion of startup until brought up to date by replaying peers' journals
Because the process relies on a journal, there's no need to maintain a separate list of files in need of repair; journal contents can be examined at any time, and if they're empty (the normal case) that serves as a positive indication that the volume is in a fully protected state.
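As a rough illustration of the happy path and the journaling fallback, here is a minimal Python sketch; `write` and `append_journal` are hypothetical RPC stubs, not actual GlusterFS/HekaFS calls:

{{{
def replicated_write(replicas, fh, offset, data):
    # Step 1: forward the write to all N replicas (in parallel in practice).
    results = {r: r.write(fh, offset, data) for r in replicas}
    if all(results.values()):
        return True      # Step 2: everyone succeeded, no extra work at all
    # Step 3: journal the failed request on every replica where it
    # succeeded, so lagging replicas can be repaired later by replay.
    entry = {"file": fh, "offset": offset, "length": len(data)}
    for r in replicas:
        if results[r]:
            r.append_journal(entry)
    return any(results.values())
}}}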
Doing repair as part of the startup process means that, if the failure is a network partition rather than a server failure[4], then neither side will go through the startup process. Each server must therefore initiate repair upon being notified of another server coming up as well as during startup. Journal entries are pushed rather than pulled, from the servers that have them to the newly booted or reconnected server. Each server must also be a client, both to receive peer-status notifications (which currently go only to clients) and to issue journal-related requests.
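A minimal sketch of that push-based repair hook, assuming hypothetical `entries_for`/`replay_write`/`retire_entries_for` primitives:

{{{
def on_peer_up(peer, journal, local_store):
    # Runs during our own startup and whenever we're notified that a
    # peer has come (back) up, since a healed partition reboots nobody.
    for entry in journal.entries_for(peer):
        data = local_store.read(entry["file"], entry["offset"],
                                entry["length"])
        peer.replay_write(entry["file"], entry["offset"], data)  # push
    journal.retire_entries_for(peer)   # empty journal == fully protected
}}}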
In the case of a network partition, a second problem also arises: split brain. Writes might continue to be received and entered into the journal on both sides of the partition. When journal entries are being propagated in both directions between two servers, establishing the correct combined order for writes that overlap would require additional information (e.g. version vectors) not currently present in the GlusterFS network protocol. This is a problem we will have to solve when we get to wide-area replication, but not right now. To keep things simpler in this release, we can instead enforce quorum as has already been suggested[5] and implemented for AFR.
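In sketch form, quorum enforcement is just a strict-majority check ahead of each write (names are illustrative):

{{{
def have_quorum(reachable_count, n_replicas):
    # Strict majority: a minority side of a partition must refuse writes.
    return reachable_count > n_replicas // 2

def guarded_write(reachable, n_replicas, do_write):
    if not have_quorum(len(reachable), n_replicas):
        raise IOError("minority partition: refusing write to avoid split brain")
    return do_write(reachable)
}}}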
Although the description so far has mostly concentrated on writes, other modifications - e.g. create, symlink, setxattr - mostly work the same way. In the case of namespace operations followed by data operations - e.g. rename followed by write - ordinary care must be taken to ensure that the second operation is applied to the correct object. In the worst case, we might need to store UUIDs in the journal and use a UUID-to-path mapping maintained on each server (which would be useful for other reasons).
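A minimal sketch of such UUID-keyed journaling, with the per-server mapping kept in plain dictionaries; all names here are hypothetical, since this part of the design is explicitly tentative:

{{{
import uuid

# Per-server tables, updated as files are created and renamed.
uuid_to_path = {}
path_to_uuid = {}

def track_create(path):
    fid = uuid.uuid4().hex
    uuid_to_path[fid] = path
    path_to_uuid[path] = fid
    return fid

def track_rename(old_path, new_path):
    fid = path_to_uuid.pop(old_path)
    path_to_uuid[new_path] = fid
    uuid_to_path[fid] = new_path

def journal_entry(path, offset, length):
    # Journal by UUID, not path, so replaying after a rename still
    # applies the write to the correct object.
    return {"uuid": path_to_uuid[path], "offset": offset, "length": length}

def resolve(entry):
    return uuid_to_path[entry["uuid"]]
}}}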
[1] "Experience with GlusterFS" http://www.devco.net/archives/2010/09/22/experience_with_glusterfs.php
[2] "Why GlusterFS is Glusterfsck'd Too" http://chip.typepad.com/weblog/2011/09/why-glusterfs-is-glusterfsckd-too.htm...
[3] http://bugs.gluster.com/show_bug.cgi?id=2088
[4] Yes, partitions do occur even in a network environment.
[5] http://bugs.gluster.com/show_bug.cgi?id=3533
Hi,
Here's a simple question.
- Forward the write to all N replicas
- If all N replicas indicate success, we're done
- If any replica fails, add information about the failed request (e.g. file, offset, length) to journals on the replicas where it succeeded
- As part of the startup process, defer completion of startup until brought up to date by replaying peers' journals
What would happen if the primary (forwarder) node fails in the middle of writing its own local file (i.e. before starting the replication)? The file on the primary node is now corrupted, and should be rolled back to be synced with the other replicas. How would that be achieved?
- Etsuji Nakai, Red Hat K.K.
I might be mistaken in the previous note. If the replication is initiated by the client, it's not a problem, as the client doesn't hold data files. But if a replication failure is detected and handled by the client, it looks as though a multiple failure involving both the client and replica servers cannot be recovered from without some journaling before starting the replication.
- Etsuji Nakai, Red Hat K.K.
On 09/13/2011 08:49 PM, Etsuji Nakai wrote:
Hi,
Here's a simple question.
1. Forward the write to all N replicas
2. If all N replicas indicate success, we're done
3. If any replica fails, add information about the failed request (e.g. file, offset, length) to journals on the replicas where it succeeded
4. As part of the startup process, defer completion of startup until brought up to date by replaying peers' journals
What would happen if the primary (forwarder) node fails in the middle of writing its own local file (i.e. before starting the replication)? The file on the primary node is now corrupted, and should be rolled back to be synced with the other replicas. How would that be achieved?
That's a very good question, Etsuji. In fact, Kaleb raised pretty much the same issue offline after I'd sent the previous email. I had already written some stuff into the document addressing that, but since it has come up twice already I'll address it here as well.
First, I would not say that the file on the server where the write succeeded (there's no real primary in this architecture) is corrupt. It's just more up to date than on a server where the same write failed. The key here is that it can update its journal *locally* as part of the write itself. This would involve extra disk I/O but not extra network traffic. (It occurs to me BTW that the same optimization could possibly be applied to AFR as it is now.)

If the journal is fully updated before the write itself is attempted, then that server has sufficient information to forward the write to other nodes even if the client that sent it dies - and in fact even if there's a total power outage taking out the servers as well. Journal entries can then be retired *asynchronously* when all servers have been updated. This involves network traffic, but doesn't affect latency. In some orderings this could lead to a journal entry being replayed on a node where it had already succeeded, but - assuming that other mechanisms are in place to deal with ordering (see next message) - that's OK.
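To make that ordering concrete, here is a minimal Python sketch with hypothetical journal/store objects (none of this is existing HekaFS code): the entry is created and synced locally before the write, and retired asynchronously once every replica confirms.

{{{
def local_write(journal, store, entry_id, fh, offset, data):
    # Create and sync the journal entry *before* touching the file:
    # extra local disk I/O (metadata only - the data itself lives in
    # the file), but no extra network round trip in the latency path.
    journal.put(entry_id, {"file": fh, "offset": offset,
                           "length": len(data), "state": "pending"})
    journal.sync()
    store.write(fh, offset, data)

def retire_async(journal, entry_id, peers):
    # Off the latency path: clear the entry once every replica has
    # confirmed. Replay is treated as idempotent, so re-applying an
    # entry on a node that already has the data is harmless.
    if all(peer.confirm(entry_id) for peer in peers):
        journal.delete(entry_id)
}}}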
On Wed, Sep 14, 2011 at 7:30 PM, Jeff Darcy <jdarcy@redhat.com> wrote:
1. Forward the write to all N replicas
2. If all N replicas indicate success, we're done
3. If any replica fails, add information about the failed request (e.g. file, offset, length) to journals on the replicas where it succeeded.
First, I would not say that the file on the server where the write succeeded (there's no real primary in this architecture) is corrupt. It's just more up to date than on a server where the same write failed. The key here is that it can update its journal *locally* as part of the write itself. This would involve extra disk I/O but not extra network traffic.
I feel the above two paragraphs are not in sync.
Paragraph 1 says if all N replicas indicate success, we're done (i.e. no more network calls, which means no journal entries were made yet). -- (1)
Paragraph 2 says journals are local to the server as part of the write itself. But in anticipation of all writes succeeding, no journal entries are made along with the write. -- (2)
Paragraph 2 says that, in the event of a write failing on some server, a journal entry is written on those servers where the write succeeded, via extra disk I/O but no network I/O. But, due to (1) and (2), and due to the fact that a write failure is detected by the client (and not the other servers), an extra network call seems inevitable. Which means there is scope for failure in that phase as well. Also, there is the window between the write and the journal update on the local server (assuming it is achieved somehow) where the data center can lose power.
I may have missed something obvious. Please correct me if so.
Avati
On 09/14/2011 11:27 AM, Anand Avati wrote:
On Wed, Sep 14, 2011 at 7:30 PM, Jeff Darcy <jdarcy@redhat.com> wrote:
>> 1. Forward the write to all N replicas
>> 2. If all N replicas indicate success, we're done
>> 3. If any replica fails, add information about the failed request (e.g. file, offset, length) to journals on the replicas where it succeeded.

First, I would not say that the file on the server where the write succeeded (there's no real primary in this architecture) is corrupt. It's just more up to date than on a server where the same write failed. The key here is that it can update its journal *locally* as part of the write itself. This would involve extra disk I/O but not extra network traffic.
I feel the above two paragraphs are not in sync.
Paragraph 1 says if all N replicas indicate success, we're done (i.e. no more network calls, which means no journal entries were made yet). -- (1)
Paragraph 2 says journals are local to the server as part of the write itself. But in anticipation of all writes succeeding, no journal entries are made along with the write. -- (2)
Yes, they are out of sync. In response to the concerns that Etsuji and others had raised, I moved the journal-entry creation back to the write; now only the change to the entry's status occurs later. I guess that's less optimistic than what I started with, but still more optimistic than current AFR. Note that the extra disk I/O is only for metadata; the write data itself will be available from the file at all relevant points, so it doesn't need to be written a second time (which would still be better than extra network round trips BTW).
My apologies for not being clearer that the description in the second paragraph represented a change. Once the discussion has settled a bit, I'll send out a second (and hopefully self-consistent) version.
On Tue, Sep 13, 2011 at 10:15 PM, Jeff Darcy <jdarcy@redhat.com> wrote:
1. Forward the write to all N replicas
2. If all N replicas indicate success, we're done
3. If any replica fails, add information about the failed request (e.g. file, offset, length) to journals on the replicas where it succeeded
4. As part of the startup process, defer completion of startup until brought up to date by replaying peers' journals
There is a situation where, in the middle of Step 2, half the servers have completed the write (the other half have not yet processed it) and there is a power outage of the entire data center, including the client. If the writes happened to be overwrites which do not extend the file size, they will go unnoticed and never get healed. Being optimistic (without writing a pre-changelog) works in situations where partial failures are trivially detected - e.g. namespace operations, where lookups can detect there was a failure just by the fact that an entry is present on one server and not on the other (an xattr journal is not necessary to "show" a mismatch). It could even work for writes which extend the file size, as lookup will notice mismatching file sizes instantly without the need for an xattr changelog.
That is the pattern of situations where such "optimistic" changelog handling can be done (where partial failures result in easily noticeable mismatches). By "optimistic changelog" I mean proceeding to perform the actual syscall modification without the initial changelog shielding against failures in the middle (and writing out a changelog - if you survive - about where the change succeeded).
Another important point to note here is the recovery process. Even in the failure situations described above, the question of the direction of recovery comes into the picture. If a changelog exists (i.e. the client survived long enough to write out the journal), then that will indicate the direction of "healing". The client should absolutely not return the syscall before the journal update is done (it just cannot be a background process). But if a changelog does not exist after noticing the mismatch, it means that the client did not survive long enough to make the change.
At this point it becomes an arbitrary decision about choosing the direction of healing. With most changes (both namespace and data) you can always make a "conservative" choice. If a file exists on one and not on the other, then recreate it. That means, if it was due to a partial creation, we "roll-ahead" the transaction to completion, whereas if it was due to a partial unlinking, then we "roll-back" the transaction to the initial state - we can make a conservative decision without really caring what the actual transaction was.
The situation with file data is slightly different: if there is a mismatch in file size, we can heal in the direction of making the file size larger on both servers. That way, we would have "rolled-ahead" a partial file-extending write and "rolled-back" a partial truncate.
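As a sketch (purely illustrative names), those conservative choices reduce to a pair of simple decision rules:

{{{
def heal_entry(name, exists_on_a, exists_on_b):
    # Recreate a missing entry: rolls a partial create ahead and a
    # partial unlink back, without knowing which transaction it was.
    if exists_on_a and not exists_on_b:
        return "recreate %s on B" % name
    if exists_on_b and not exists_on_a:
        return "recreate %s on A" % name
    return "no action"

def heal_size(size_a, size_b):
    # Heal toward the larger size: rolls a partial extending write
    # ahead and a partial truncate back.
    return "extend both copies to %d bytes" % max(size_a, size_b)
}}}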
But if there was a partial overwrite in the middle of the file, it is just not feasible to bring it under the "optimistic changelogging" kind of optimization.
Another step which I don't see in the above sequence of operations is locking/unlocking of the modification regions.
Avati
On 09/14/2011 02:34 AM, Anand Avati wrote:
There is a situation where, in the middle of Step 2, half the servers have completed the write (the other half have not yet processed it) and there is a power outage of the entire data center, including the client. If the writes happened to be overwrites which do not extend the file size, they will go unnoticed and never get healed.
Indeed, Joe raised the same issue. The funny thing is that the state we're in here is essentially the same as we get in the async long-distance replication case. I have a pretty complete design for handling these scenarios there, but it's necessarily complex and I was hoping to avoid some of that complexity in the local case. I guess at least some of it is still necessary; the trick is going to be figuring out how much.
Being optimistic (without writing a pre-changelog) works in situations where partial failures are trivially detected - e.g. namespace operations, where lookups can detect there was a failure just by the fact that an entry is present on one server and not on the other (an xattr journal is not necessary to "show" a mismatch). It could even work for writes which extend the file size, as lookup will notice mismatching file sizes instantly without the need for an xattr changelog.
I think the key here is that the journal/changelog/whatever can be maintained *locally* on the servers, without extra network round trips in the latency path. Clearly, as you/Kaleb/Joe/Etsuji have pointed out, there are some details that still need to be discussed, but I think avoiding those network round trips is essential to improving latency.
Another important point to note here is the recovery process. Even in the failure situations described above, the question of the direction of recovery comes into the picture. If a changelog exists (i.e. the client survived long enough to write out the journal), then that will indicate the direction of "healing". The client should absolutely not return the syscall before the journal update is done (it just cannot be a background process).
Completely agree. The only place we differ so far seems to be on where the changelog is and who updates it. I think having it on the client is unsafe because of the scenario you describe and others as well. Clients can't be trusted. Any number of clients can go away mid-operation and never come back, and the result should still be consistent. A single server can also go away mid-operation and never come back, but not N servers all at once (where N is the replication level and thus N-1 is the number of concurrent failures the system has been explicitly configured to tolerate).
But if there was a partial overwrite in the middle of the file, it is just not feasible to bring it under the "optimistic changelogging" kind of optimization.
I think it is feasible if sufficient ordering information is present (e.g. version vectors). Yes, I know that would require a significant protocol change. This is precisely the complexity I've gone through with the async stuff, which I was hoping to avoid for sync. Let's walk through the relevant failure scenario with N=2 to see how this works. A client writes to two servers, using a last known version number at each server as a predicate. If the write succeeds both places, that means there were no conflicting writes and we're done. If the write fails both places, for any reason not limited to predicate failure, that means we had no effect at all and can simply retry (presumably using new version numbers that we got back in the previous replies). So far, so good.
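As a toy model of those two clean cases (the mixed success/failure case comes next), here is a Python sketch of the predicated write; the `Server` class and version handling are illustrative assumptions, not the actual GlusterFS wire protocol:

{{{
class Server:
    """Toy replica that accepts a write only if the client's predicate
    (the last version it saw here) matches the current version."""
    def __init__(self):
        self.version = 0
        self.data = {}

    def write(self, offset, data, expected_version):
        if expected_version != self.version:
            return (False, self.version)   # predicate failed: conflict
        self.data[offset] = data
        self.version += 1
        return (True, self.version)

def client_write(servers, offset, data, known_versions):
    results = [s.write(offset, data, v)
               for s, v in zip(servers, known_versions)]
    if all(ok for ok, _ in results):
        return "done"                      # no conflicting writes anywhere
    if not any(ok for ok, _ in results):
        return "retry with the new version numbers just returned"
    return "mixed: trigger server-to-server reconciliation"
}}}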
The real fun starts when a write succeeds at one server X and fails at another server Y because of a version mismatch. This means someone updated Y without (yet) updating X, either because the writes were concurrent or because the other writer failed in the middle of an update (the situation Kaleb and Etsuji both pointed out). The key here is that we haven't yet acknowledged the write to the user, and both versions of the conflicting region exist - our version on X and some other writer's version on Y. All that remains is to pick an order, and ensure that the conflict region contains the later version (according to the chosen order), all before we do acknowledge the write to the user. To do that, we define the version vectors as follows:
{ server1_version, server2_version, client_ID, client_version }
For the most part, the standard older-than rules for version vectors apply. We can add a twist, though, which is that the client versions are only comparable when the client IDs are identical. When all of the server versions are identical, the client ID is used instead of the client version to break the tie. This establishes a consistent "pecking order" to determine the order of application for concurrent writes from mutually oblivious clients.
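Here is a Python sketch of that comparison rule under the { server1_version, server2_version, client_ID, client_version } layout above; the dominance test is the standard version-vector rule, the client-ID tie-break is the twist, and all of this is illustrative rather than existing code:

{{{
def later_write(vv_a, vv_b):
    """Return which write is later ('a' or 'b'), or 'concurrent'."""
    sa, sb = vv_a["servers"], vv_b["servers"]
    if sa == sb:
        # All server versions identical: mutually oblivious clients, so
        # use the fixed client-ID "pecking order" to break the tie -
        # client versions are only comparable for the same client ID.
        if vv_a["client_id"] == vv_b["client_id"]:
            return "a" if vv_a["client_version"] > vv_b["client_version"] else "b"
        return "a" if vv_a["client_id"] > vv_b["client_id"] else "b"
    if all(x <= y for x, y in zip(sa, sb)):
        return "b"                         # b dominates: b is later
    if all(x >= y for x, y in zip(sa, sb)):
        return "a"
    return "concurrent"                    # mixed: needs reconciliation
}}}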
OK, enough computer science. How does this work in practice? Simply put, when two clients write to two servers in opposite orders, each server will end up with a fully versioned write which it will try to push to the other. When the two servers try to push to one another, they'll agree on an order for the two writes and cross-propagate the parts that correspond to that order. This is the same code that would get exercised during startup to deal with the server-failure case, or possibly as the result of a dirty-status timeout to deal with the network-partition case, and here it can run in response to an explicit request from a client that has detected a conflict.
Another step which I don't see in the above sequence of operations is locking/unlocking of the modification regions.
The one drawback to this approach is that the conflicting region will differ transiently on the two servers, and without additional mechanisms it would be possible for two clients to read different data. I'm going to commit heresy and suggest that that's OK. If clients issue reads while a prior write is in progress, they have no guarantee whether it and/or any other unrelated writes will be present in what they read. If they want that kind of ordering they should take locks, not expect that somebody else will reduce their performance by automagically taking locks on their behalf.
We can apply much stricter consistency rules and accompanying mechanisms to every operation other than data reads and writes, and IMO should do so. In the specific case of data reads and writes, for the workloads that are driving this, incurring large performance penalties in return for imperceptible or irrelevant consistency gains would be the wrong tradeoff.