Hi David,
I'm now working on the sanlock fencing feature for ovirt.
I have some questions and suggestions regarding the patch.
A host can send a predefined msg_num to another host.
The host messages are sent from one host to another via a lockspace that both hosts are using. If no lockspace name is specified, the sanlock daemon will search for a common lockspace to use. (N.B. hosts do not necessarily use the same host_id in all lockspaces, so not specifying the lockspace could result in targeting the wrong host.)
I think that making the lockspace a required parameter makes more sense and will avoid fatal errors.
The lockspace used to transmit the message may or may not have any other relation to the message itself.
A host can send one message to one other host at a time.
Can we increase this number, simplifying the (unlikely) case where more than one host needs to be fenced?
The message is placed in the sending host's delta lease, and remains there for two renewals. When the receiving host renews its own delta lease, it checks the delta leases of all other hosts, and sees itself addressed in the sending host's lease. It then processes the message from the sending host.
Why did you choose to keep the message for two renewals?
We would like to have this value configurable, to make it easier to solve issues in the field.
I think we have to handle the (unlikely) case where a host loses its lease without seeing the WD_RESET message, then acquires the lease again (not sure if this is possible in vdsm currently). The fencing host may wrongly assume that the host was fenced in this case.
What if we leave the WD_RESET message until the fencing host sends a WD_UNRESET message?
This way I can send a WD_RESET message and wait for some renewals, ensuring that the host either lost its lease, or will *not* get a new lease, until I decide to allow the host to get one.
We may have a case where we cannot access a host; we fence it, ensuring that it cannot access the storage, but the host never sees the fence request, and keeping it in "fenced" mode is required until we can reboot the host using power management or manually.
If a message is currently active in a lockspace, the sending host_message call will return -EBUSY. After two renewals (around 40 seconds), another message may be sent.
An optional host generation can be included, in which case the receiving host_id will accept the message only if its current generation matches.
The single msg_num defined here is WD_RESET (1), which means that the host receiving the message should use its watchdog device to reset itself as soon as possible. The WD_RESET message has no effect on any lockspaces or resources that may exist. Existing lockspaces and resources continue to operate as usual until the reset. (A watchdog reset due to "standard" lockspace failure could in fact occur before the watchdog reset caused by the host message.)
Because host messages may not be received if the destination host fails, or loses storage access, there are no guaranteed times associated with the delivery, processing or effect of a host message. Guaranteed times for another host being dead should continue to be based on either acquiring a resource, or sanlock_get_hosts().
What would be the best way to detect that the host was fenced?
TODO: will be adding another msg_num to cause the destination to use /proc/sysrq-trigger to reboot itself. (After setting up the watchdog to reset the machine in case the sysrq mechanism fails.) The sysrq reboot is immediate, whereas the watchdog takes a minute to reset.
Maybe use WD_FENCE, and let sanlock use the best available method for fencing?
Can we customize the actions taken by sanlock when receiving the WD_RESET message? For example, running a script after the message was received?
What would be the best way to detect if a host supports the new fencing feature - check its sanlock version?
Regards, Nir
On Wed, Jan 29, 2014 at 07:49:29AM -0500, Nir Soffer wrote:
The host messages are sent from one host to another via a lockspace that both hosts are using. If no lockspace name is specified, the sanlock daemon will search for a common lockspace to use. (N.B. hosts do not necessarily use the same host_id in all lockspaces, so not specifying the lockspace could result in targeting the wrong host.)
I think that making the lockspace a required parameter makes more sense and will avoid fatal errors.
You should specify the lockspace if you know it, then this won't matter.
The lockspace used to transmit the message may or may not have any other relation to the message itself.
A host can send one message to one other host at a time.
Can we increase this number, simplifying the (unlikely) case where more than one host needs to be fenced?
This was the one big question I had about the design. If it's necessary to address more than one host simultaneously I can do that, but I'll need to go back and come up with a more complex design. The existing design is simple (and completely compatible with the existing format) because it uses three unused fields in the delta lease area. So, perhaps think a little more about how important this would be and let me know.
The message is placed in the sending host's delta lease, and remains there for two renewals. When the receiving host renews its own delta lease, it checks the delta leases of all other hosts, and sees itself addressed in the sending host's lease. It then processes the message from the sending host.
Why did you choose to keep the message for two renewals?
Because the targeted host would generally observe it in that time.
We would like to have this value configurable, to make it easier to solve issues in the field.
How configurable would you need:
1. a daemon config option (set it when the daemon starts)
2. a duration-based api option (set it when you call the function, in terms of seconds to remain active)
3. an on/off api option (one function call to set the message, and a second function call to turn it off)
I think we have to handle the (unlikely) case where a host loses its lease without seeing the WD_RESET message, then acquires the lease again (not sure if this is possible in vdsm currently).
This case would require you to send another message because a host_message is addressed to host_id:generation. The generation number of a host_id is increased when it joins the lockspace again. (Before we derive too much from this example, I think we'd want to outline exactly what the case is and what we want to happen.)
The fencing host may wrongly assume that the host was fenced in this case.
Whether the message is received or not, sanlock should never make an incorrect conclusion about the safe/unsafe state of a failed host.
It is not based on the host_message being received or not, but based only on the existing link between delta lease renewals and watchdog renewals on the targeted host.
(Remember that a failed host may actually be really dead and incapable of doing anything, e.g. powered off. sanlock can't distinguish this from other failure cases, but needs to work regardless.)
If a program outside of sanlock, like vdsm, wants to depend on sanlock's determination about the state of another host, you have two ways to do it:
1. wait until sanlock grants you a lease that was held by the other host
2. wait until sanlock_get_hosts() returns FREE or DEAD for the other host
The WD_RESET host_message is not a factor there. The WD_RESET is a way to force a partially failed host to be fully failed so that one of the two options above can be effective more quickly. i.e. it quickly reduces some varieties of partial failures into full failures from which our programs can go forward, safely forgetting about the failed host.
What if we leave the WD_RESET message until the fencing host sends a WD_UNRESET message?
- The other host may be dead and incapable of doing anything.
- Its only purpose would be to let the initiating host know that it could clear its message. This provides very minimal benefit over other forms of clearing the message.
- Bidirectional send message / acknowledge message would be more complicated to implement than we want to do here. (I've been thinking about acknowledgments for other types of messages that I'd like to use elsewhere, and I've given up and gone with other ideas.)
This way I can send a WD_RESET message and wait for some renewals, ensuring that the host either lost its lease, or will *not* get a new lease, until I decide to allow the host to get one.
I don't understand what you're trying to do there.
As I mentioned above, sending WD_RESET host_message should be completely orthogonal to the logic around leases. They are not related.
We may have a case where we cannot access a host; we fence it, ensuring that it cannot access the storage, but the host never sees the fence request, and keeping it in "fenced" mode is required until we can reboot the host using power management or manually.
I don't understand this. Maybe it's helpful to think about the differences among:
- power fencing
  . toggle the power of victim host on a switch
  . assume that worked
  . let programs use locks/resources that the victim had been using
- storage fencing
  . cut off storage access of victim host by turning off a switch port, (or removing its SCSI persistent reservation)
  . assume that worked
  . let programs use locks/resources that the victim had been using
- sanlock/wdmd/watchdog lease protection
  . wait for a fixed timeout from the victim host's last storage renewal
  . assume that the victim's watchdog has reset due to no lease renewal
  . let programs use locks/resources that the victim had been using
- sanlock/wdmd/watchdog lease protection + WD_RESET host_message
  . send victim the WD_RESET host_message, which would cause the victim to force its own watchdog to expire in a minute
  . assume nothing about the receipt or effect of WD_RESET
  . wait for a fixed timeout from the victim host's last storage renewal
  . assume that the victim's watchdog has reset due to no lease renewal
  . let programs use locks/resources that the victim had been using
Notice that:
- the end goal/result is the same in all cases
- there are assumptions made in all cases
- the first two are usually much quicker than the second two because the feedback from the third party (switch) is the basis of success
- the results of WD_RESET are not a factor in the end result
- if a host is partially operating and can receive sanlock messages from storage, then WD_RESET can speed up the progression
- if a host is partially operating and has lost all storage access, the WD_RESET does nothing, but the end result is the same
Because host messages may not be received if the destination host fails, or loses storage access, there are no guaranteed times associated with the delivery, processing or effect of a host message. Guaranteed times for another host being dead should continue to be based on either acquiring a resource, or sanlock_get_hosts().
What would be the best way to detect that the host was fenced?
See the two options I listed above (acquire lease or use get_hosts state)
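For the get_hosts option, a rough sketch of the caller-side wait might look like the following; it assumes the sanlock_get_hosts() API and the SANLK_HOST_* status values / SANLK_HOST_MASK as declared in sanlock.h, and the poll interval and error handling are placeholders:

#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include "sanlock.h"
#include "sanlock_admin.h"

/* Wait until the given host is reported FREE or DEAD in the lockspace.
 * Sketch only: assumes sanlock_get_hosts() and the SANLK_HOST_* status
 * values from sanlock.h; the 10s poll interval is an arbitrary placeholder. */
static int wait_host_gone(const char *ls_name, uint64_t host_id)
{
        struct sanlk_host *hss = NULL;
        int hss_count = 0;
        uint32_t state;
        int rv;

        for (;;) {
                rv = sanlock_get_hosts(ls_name, host_id, &hss, &hss_count, 0);
                if (rv < 0)
                        return rv;

                state = 0;
                if (hss_count == 1)
                        state = hss[0].flags & SANLK_HOST_MASK;
                free(hss);
                hss = NULL;

                if (state == SANLK_HOST_FREE || state == SANLK_HOST_DEAD)
                        return 0;   /* safe to proceed with the victim's leases */

                sleep(10);          /* poll until the host_id lease times out */
        }
}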
TODO: will be adding another msg_num to cause the destination to use /proc/sysrq-trigger to reboot itself. (After setting up the watchdog to reset the machine in case the sysrq mechanism fails.) The sysrq reboot is immediate, whereas the watchdog takes a minute to reset.
Maybe use WD_FENCE, and let sanlock use the best available method for fencing?
We want the target host to reset itself as dependably and as quickly as possible. Using the watchdog is dependable, and using sysrq-trigger is quick, so the idea is to combine them to get the best of both.
Can we customize the actions taken by sanlock when receiving the WD_RESET message? For example, running a script after the message was received?
That kind of thing is already supported using the sanlock_request() interface, which is based on a resource lease held by the target host. There is room to extend this with new options if needed.
What would be the best way to detect if a host supports the new fencing feature - check its sanlock version?
Yes, that will probably work.
Dave
Hi Dave,
Sorry for the late response, but we know now better what we would like to do, and will hopefully waste less of your time.
----- Original Message -----
From: "David Teigland" teigland@redhat.com To: "Nir Soffer" nsoffer@redhat.com Cc: sanlock-devel@lists.fedorahosted.org, "Ayal Baron" abaron@redhat.com, "Federico Simoncelli" fsimonce@redhat.com, "Allon Mureinik" amureini@redhat.com Sent: Wednesday, January 29, 2014 7:16:09 PM Subject: Re: [PATCH] sanlock: host_message
On Wed, Jan 29, 2014 at 07:49:29AM -0500, Nir Soffer wrote:
The host messages are sent from one host to another via a lockspace that both hosts are using. If no lockspace name is specified, the sanlock daemon will search for a common lockspace to use. (N.B. hosts do not necessarily use the same host_id in all lockspaces, so not specifying the lockspace could result in targeting the wrong host.)
I think that making the lockspace a required parameter makes more sense and will avoid fatal errors.
You should specify the lockspace if you know it, then this won't matter.
Can you describe a situation where guessing the lockspace is useful?
The lockspace used to transmit the message may or may not have any other relation to the message itself.
A host can send one message to one other host at a time.
Can we increase this number, simplifying the (unlikely) case where more than one host needs to be fenced?
This was the one big question I had about the design. If it's necessary to address more than one host simultaneously I can do that, but I'll need to go back and come up with a more complex design. The existing design is simple (and completely compatible with the existing format) because it uses three unused fields in the delta lease area. So, perhaps think a little more about how important this would be and let me know.
We would like to be able to fence more than one host at a time, but having a backward compatible format is more important.
This can help when you have some network issue that causes many hosts to become inaccessible, and you have highly available vms on those hosts that should be started as soon as possible on another host.
The message is placed in the sending host's delta lease, and remains there for two renewals. When the receiving host renews its own delta lease, it checks the delta leases of all other hosts, and sees itself addressed in the sending host's lease. It then processes the message from the sending host.
Why did you choose to keep the message for two renewals?
Because the targeted host would generally observe it in that time.
We would like to have this value configurable, to make it easier to solve issues in the field.
How configurable would you need:
- a daemon config option (set it when the daemon starts)
I think this will be good enough.
- a duration-based api option (set it when you call the function, in terms of seconds to remain active.)
- an on/off api option (one function call to set the message, and a second function call to turn it off.)
I think we have to handle the (unlikely) case where a host loses its lease without seeing the WD_RESET message, then acquires the lease again (not sure if this is possible in vdsm currently).
I don't understand this. Maybe it's helpful to think about the differences among:
- power fencing
  . toggle the power of victim host on a switch
  . assume that worked
  . let programs use locks/resources that the victim had been using
- storage fencing
  . cut off storage access of victim host by turning off a switch port, (or removing its SCSI persistent reservation)
  . assume that worked
  . let programs use locks/resources that the victim had been using
- sanlock/wdmd/watchdog lease protection
  . wait for a fixed timeout from the victim host's last storage renewal
  . assume that the victim's watchdog has reset due to no lease renewal
  . let programs use locks/resources that the victim had been using
- sanlock/wdmd/watchdog lease protection + WD_RESET host_message
  . send victim the WD_RESET host_message, which would cause the victim to force its own watchdog to expire in a minute
  . assume nothing about the receipt or effect of WD_RESET
  . wait for a fixed timeout from the victim host's last storage renewal
  . assume that the victim's watchdog has reset due to no lease renewal
  . let programs use locks/resources that the victim had been using
We do not want to assume that the victim's watchdog has reset the machine. What we plan to do is to wait until the host is up and query the state of the vms, before we start these vms on another host.
Notice that:
- the end goal/result is the same in all cases
- there are assumptions made in all cases
Some assumptions are more likely. When you talk to a power management device and it tells you that the machine is powered off, there is very little chance that this is not true.
Nir
On Thu, Feb 20, 2014 at 05:48:23PM -0500, Nir Soffer wrote:
Can you describe a situation where guessing the lockspace is useful?
No, I'll remove this until there's a reason for it.
This was the one big question I had about the design. If it's necessary to address more than one host simultaneously I can do that, but I'll need to go back and come up with a more complex design. The existing design is simple (and completely compatible with the existing format) because it uses three unused fields in the delta lease area. So, perhaps think a little more about how important this would be and let me know.
We would like to be able to fence more than one host at a time, but having a backward compatible format is more important.
This can help when you have some network issue that causes many hosts to become inaccessible, and you have highly available vms on those hosts that should be started as soon as possible on another host.
OK, I'll keep the current method.
How configurable would you need:
- a daemon config option (set it when the daemon starts)
I think this will be good enough.
OK
- sanlock/wdmd/watchdog lease protection + WD_RESET host_message
  . send victim the WD_RESET host_message, which would cause the victim to force its own watchdog to expire in a minute
  . assume nothing about the receipt or effect of WD_RESET
  . wait for a fixed timeout from the victim host's last storage renewal
  . assume that the victim's watchdog has reset due to no lease renewal
  . let programs use locks/resources that the victim had been using
We do not want to assume that the victim's watchdog has reset the machine. What we plan to do is to wait until the host is up and query the state of the vms, before we start these vms on another host.
OK, the obvious limitation being a host that really lost power or had a hardware failure.
Notice that:
- the end goal/result is the same in all cases
- there are assumptions made in all cases
Some assumptions are more likely. When you talk to a power management device and it tells you that the machine is powered off, there is very little chance that this is not true.
Human error plugging hosts into the wrong power outlet is probably the biggest problem.
I'll try to get the patch updated next week. Dave
I think the following example illustrates the problem with the current plan to use the vdsm lockspace for fencing:
hostA and hostB vdsm using a common lockspace
hostB sends hostA a "reset yourself" message via the vdsm lockspace
hostA storage fails for vdsm lockspace around the same time
hostA sanlock gracefully shuts down vdsm and removes the lockspace
hostA has no storage access and cannot see the message from B
The "problem" here is the graceful cleanup and removal of the vdsm lockspace. This graceful cleanup is done precisely to *avoid* having the watchdog reset the host. In effect what we want are two different lockspaces with two opposite behaviors:
1. the vdsm lockspace that wants to *avoid* a watchdog reset at all costs
2. a "fencing" lockspace that wants to *cause* a watchdog reset at all costs
Trying to tweak the behavior of sanlock to do these two opposite things is not going to work out well, I suspect. sanlock tries very hard to either "avoid" or "cause" the reset in each case.
Here's a solution that I think would work well at the sanlock level, but it requires a new domain format for vdsm:
Create a new lockspace dedicated to the fencing behavior. (This is what the fence_sanlock daemon/agent do.) This new lockspace would *not* be killed or gracefully cleaned up if storage is lost. This way, either the lockspace message would cause the host to be reset, or if lockspace storage is lost, (and no messages are possible), the failure to renew the lockspace lease would cause the host to be reset. The same result is guaranteed in either case.
Dave
----- Original Message -----
From: "David Teigland" teigland@redhat.com To: "Nir Soffer" nsoffer@redhat.com Cc: "Allon Mureinik" amureini@redhat.com, "Ayal Baron" abaron@redhat.com, sanlock-devel@lists.fedorahosted.org, fsimonce@redhat.com, smizrahi@redhat.com Sent: Wednesday, February 26, 2014 11:58:25 PM Subject: Re: [PATCH] sanlock: host_message
I think the following example illustrates the problem with the current plan to use the vdsm lockspace for fencing:
hostA and hostB vdsm using a common lockspace
hostB sends hostA a "reset yourself" message via the vdsm lockspace
hostA storage fails for vdsm lockspace around the same time
hostA sanlock gracefully shuts down vdsm and removes the lockspace
hostA has no storage access and cannot see the message from B
The "problem" here is the graceful cleanup and removal of the vdsm lockspace. This graceful cleanup is done precisely to *avoid* having the watchdog reset the host. In effect what we want are two different lockspaces with two opposite behaviors:
- the vdsm lockspace that wants to *avoid* a watchdog reset at all costs
- a "fencing" lockspace that wants to *cause* a watchdog reset at all costs
Trying to tweak the behavior of sanlock to do these two opposite things is not going to work out well, I suspect. sanlock tries very hard to either "avoid" or "cause" the reset in each case.
Here's a solution that I think would work well at the sanlock level, but it requires a new domain format for vdsm:
Create a new lockspace dedicated to the fencing behavior. (This is what the fence_sanlock daemon/agent do.) This new lockspace would *not* be killed or gracefully cleaned up if storage is lost. This way, either the lockspace message would cause the host to be reset, or if lockspace storage is lost, (and no messages are possible), the failure to renew the lockspace lease would cause the host to be reset. The same result is guaranteed in either case.
CCing Eli and Barak
I think the following example illustrates the problem with the current plan to use the vdsm lockspace for fencing:
I thought I was being somewhat unfair to the existing plan for vdsm-sanlock-fencing, so I worked out a description of it for myself that I think would be a reasonable solution.
I think the plan was to solve the limited problem of fencing a host with sanlock when network communication is lost, but the storage remains functional. (And intended to solve the loss of both later.) The problem I described in the last mail covered the loss of both.
If we assume that storage remains functional, then we can assume that a "reset yourself" message is received, and can verify this by watching the host status in the functional lockspace (it will eventually become "dead" after the necessary host_id lease timeout.)
However, we still need to be aware of the case when both network and storage are lost, and revert to a reasonable state, even if it's not to be solved entirely. I think the possible outcomes would be:
1. The vdsm lockspace is cleanly removed due to the loss of lockspace storage. The "reset yourself" message that was sent through the vdsm lockspace may or may not have been received (depending on how quickly the lockspace was cleared.)
1.a) if the message was received, the host will reset itself, even though the lockspace was removed.
1.b) if the message was not received, then the host will not reset itself, and will remain running with no lockspace.
2. The vdsm lockspace cannot be cleanly removed. sanlock will reset the host, either because of the "reset yourself" message, or because the lockspace lease expires when it cannot be cleared. The result is the same regardless.
1.b is the condition that we do not intend to solve immediately, and the question that interests me is whether this state could be reliably detected. I think there's a fair chance it could be. The condition should be implied by the fact that the host has cleanly released its host_id lease in all lockspaces. This would not be true for the other conditions. One doubt I have is how this would be distinguished from a fresh, initial state of the host. Perhaps rhev/vdsm could distinguish this, but I don't think sanlock could.
Dave
----- Original Message -----
From: "David Teigland" teigland@redhat.com To: "Nir Soffer" nsoffer@redhat.com Cc: emesika@redhat.com, bazulay@redhat.com, abaron@redhat.com, fsimonce@redhat.com, smizrahi@redhat.com, sanlock-devel@lists.fedorahosted.org Sent: Thursday, February 27, 2014 8:13:21 PM Subject: Re: [PATCH] sanlock: host_message
I think the following example illustrates the problem with the current plan to use the vdsm lockspace for fencing:
I thought I was being somewhat unfair to the existing plan for vdsm-sanlock-fencing, so I worked out a description of it for myself that I think would be a reasonable solution.
I think the plan was to solve the limited problem of fencing a host with sanlock when network communication is lost, but the storage remains functional. (And intended to solve the loss of both later.) The problem I described in the last mail covered the loss of both.
If we assume that storage remains functional, then we can assume that a "reset yourself" message is received, and can verify this by watching the host status in the functional lockspace (it will eventually become "dead" after the necessary host_id lease timeout.)
However, we still need to be aware of the case when both network and storage are lost, and revert to a reasonable state, even if it's not to be solved entirely. I think the possible outcomes would be:
The vdsm lockspace is cleanly removed due to the loss of lockspace storage. The "reset yourself" message that was sent through the vdsm lockspace may or may not have been received (depending on how quickly the lockspace was cleared.)
1.a) if the message was received, the host will reset itself, even though the lockspace was removed.
1.b) if the message was not received, then the host will not reset itself, and will remain running with no lockspace.
The vdsm lockspace cannot be cleanly removed. sanlock
What do you mean by cannot be cleanly removed?
On Sun, Mar 02, 2014 at 05:38:23AM -0500, Nir Soffer wrote:
The vdsm lockspace is cleanly removed due to the loss of lockspace storage. The "reset yourself" message that was sent through the vdsm lockspace may or may not have been received (depending on how quickly the lockspace was cleared.)
1.a) if the message was received, the host will reset itself, even though the lockspace was removed.
1.b) if the message was not received, then the host will not reset itself, and will remain running with no lockspace.
The vdsm lockspace cannot be cleanly removed. sanlock
What do you mean by cannot be cleanly removed?
Roughly, here's how vdsm uses sanlock:
1. vdsm joins a lockspace (= acquires a host_id lease)
2. vdsm acquires a resource lease
3. vdsm specifies a "killpath" program or script that sanlock can call to gracefully shut it down if its lease cannot be renewed.
(Things operate normally here for some time...) (Then storage is lost.)
4. sanlock fails to renew the host_id lease
5. sanlock runs a "killpath" program or script against vdsm
6. vdsm tries to shut down gracefully
7. if vdsm can shut down cleanly, it releases its resource lease
8. sanlock sees no more resource leases are held in the lockspace and can clear the lockspace, which disables the watchdog
9. if all of that happens within a given time, then the host will not be reset by the watchdog
If vdsm was not able to cleanly shut down, it will not release its resource lease, so sanlock will not be able to remove the lockspace, and eventually the local watchdog will fire.
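For illustration, a rough sketch of the setup side of this flow (steps 1 and 3), assuming the existing client calls sanlock_add_lockspace(), sanlock_register() and sanlock_killpath() roughly as declared in the sanlock headers; the lockspace name, device path, host_id and killpath command are placeholders, and the resource lease acquisition (step 2) is elided:

#include <string.h>
#include "sanlock.h"
#include "sanlock_admin.h"
#include "sanlock_resource.h"

/* Sketch: join a lockspace (host_id lease), register with the daemon,
 * and set a killpath for graceful shutdown. All names and paths below
 * are placeholders; error handling is trimmed. */
static int setup_protection(void)
{
        struct sanlk_lockspace ls;
        char killargs[128];
        int sock, rv;

        /* step 1: join the lockspace, i.e. acquire our host_id lease */
        memset(&ls, 0, sizeof(ls));
        strcpy(ls.name, "example-lockspace");               /* placeholder */
        strcpy(ls.host_id_disk.path, "/dev/example/ids");   /* placeholder */
        ls.host_id = 1;                                      /* placeholder */
        rv = sanlock_add_lockspace(&ls, 0);
        if (rv < 0)
                return rv;

        /* register this process with the sanlock daemon; resource leases
         * (step 2) would be acquired on this socket, elided here */
        sock = sanlock_register();
        if (sock < 0)
                return sock;

        /* step 3: tell sanlock how to shut us down gracefully if the
         * host_id lease cannot be renewed */
        memset(killargs, 0, sizeof(killargs));
        strcpy(killargs, "--shutdown");                      /* placeholder */
        rv = sanlock_killpath(sock, 0, "/usr/libexec/example-killpath", killargs);
        return rv;
}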
----- Original Message -----
From: "David Teigland" teigland@redhat.com To: "Nir Soffer" nsoffer@redhat.com Cc: "Allon Mureinik" amureini@redhat.com, "Ayal Baron" abaron@redhat.com, sanlock-devel@lists.fedorahosted.org, fsimonce@redhat.com, smizrahi@redhat.com Sent: Wednesday, February 26, 2014 11:58:25 PM Subject: Re: [PATCH] sanlock: host_message
I think the following example illustrates the problem with the current plan to use the vdsm lockspace for fencing:
hostA and hostB vdsm using a common lockspace
Do you mean the host id lockspace?
hostB sends hostA a "reset yourself" message via the vdsm lockspace hostA storage fails for vdsm lockspace around the same time
What do you mean by fails for vdsm lockspace?
hostA sanlock gracefully shuts down vdsm and removes the lockspace
hostA has no storage access and cannot see the message from B
The "problem" here is the graceful cleanup and removal of the vdsm lockspace. This graceful cleanup is done precisely to *avoid* having the watchdog reset the host. In effect what we want are two different lockspaces with two opposite behaviors:
- the vdsm lockspace that wants to *avoid* a watchdog reset at all costs
- a "fencing" lockspace that wants to *cause* a watchdog reset at all costs
Trying to tweak the behavior of sanlock to do these two opposite things is not going to work out well, I suspect. sanlock tries very hard to either "avoid" or "cause" the reset in each case.
Here's a solution that I think would work well at the sanlock level, but it requires a new domain format for vdsm:
Create a new lockspace dedicated to the fencing behavior. (This is what the fence_sanlock daemon/agent do.)
Do you mean host id like lockspace (1MB for 2000 hosts)?
This new lockspace would *not* be killed or gracefully cleaned up if storage is lost. This way, either the lockspace message would cause the host to be reset, or if lockspace storage is lost, (and no messages are possible), the failure to renew the lockspace lease would cause the host to be reset. The same result is guaranteed in either case.
I don't think we'd like to fence a host if it lost access to some storage, or even to all storage.
If we can access the host through the network, we would like to migrate the vms on it to another host instead of killing the vms.
Nir
On Sun, Mar 02, 2014 at 05:32:07AM -0500, Nir Soffer wrote:
----- Original Message -----
From: "David Teigland" teigland@redhat.com To: "Nir Soffer" nsoffer@redhat.com Cc: "Allon Mureinik" amureini@redhat.com, "Ayal Baron" abaron@redhat.com, sanlock-devel@lists.fedorahosted.org, fsimonce@redhat.com, smizrahi@redhat.com Sent: Wednesday, February 26, 2014 11:58:25 PM Subject: Re: [PATCH] sanlock: host_message
I think the following example illustrates the problem with the current plan to use the vdsm lockspace for fencing:
hostA and hostB vdsm using a common lockspace
Do you mean the host id lockspace?
Sorry for being unclear here. I'm not entirely sure what lockspace(s) vdsm is using, so I was being vague. I guess we'll need to sort out exactly what lockspaces we're talking about.
hostB sends hostA a "reset yourself" message via the vdsm lockspace
hostA storage fails for vdsm lockspace around the same time
What do you mean by fails for vdsm lockspace?
I meant that sanlock fails to renew its host_id lease in this vaguely defined lockspace. We speak about this condition with some interchangeable terms:
- storage is lost / storage access is lost
- the storage is the storage on which the sanlock leases exist
- the effect is that sanlock can no longer renew its host_id lease
- we may say that the lockspace fails at this point or that the lockspace enters recovery
That's all referencing the same situation.
Create a new lockspace dedicated to the fencing behavior. (This is what the fence_sanlock daemon/agent do.)
Do you mean host id like lockspace (1MB for 2000 hosts)?
Yes, a sanlock_write_lockspace() / sanlock_add_lockspace() that is dedicated to the fencing behavior and not used for anything else.
This new lockspace would *not* be killed or gracefully cleaned up if storage is lost. This way, either the lockspace message would cause the host to be reset, or if lockspace storage is lost, (and no messages are possible), the failure to renew the lockspace lease would cause the host to be reset. The same result is guaranteed in either case.
I don't think we'd like to fence a host if it lost access to some storage, or even to all storage.
If we can access the host through the network, we would like to migrate the vms on it to another host instead of killing the vms.
OK, I'll need to learn a little more about what behavior you want in each circumstance.
I visited with Nir and one of the big problems with my initial host_message design was the lack of acknowledgements, which I'd been strongly resisting.
I've come up with a new design that could be a workable way of doing host messages with acknowledgements. I don't like it very much, but will give it a try.
There are three 64 bit fields in the delta lease leader record that we can use as follows:
field 1:
        uint32_t send_to_host_id;          /* message destination */
        uint32_t send_to_host_generation;  /* message destination */
field 2:
        uint32_t send_msg;  /* the caller-specified message */
        uint32_t send_seq;  /* internal sequence number */
field 3:
        uint32_t recv_from_host_id;  /* acknowledgement: message source */
        uint32_t recv_seq;           /* acknowledgement: send_seq */
host_id and host_generation are 64 bit values everywhere else in sanlock, and this shortens them to 32 bits to fit them into the available space. Realistically, they should always fit in 32 bits, but it's ugly.
This also removes the 32 bit "data" field that could previously be sent along with the 32 bit msg number. 160 bits of overhead for a 32 bit message is a little sad.
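To make the space constraint concrete, here is a minimal sketch of packing the 32 bit halves into the three spare 64 bit leader fields; this is only one possible layout for illustration, not the patch's actual on-disk encoding:

#include <stdint.h>

/* One possible packing of the 32 bit halves into the three spare 64 bit
 * delta lease leader fields; illustration only, not the patch's encoding. */
struct host_message_fields {
        uint64_t field1;    /* send_to_host_id | send_to_host_generation */
        uint64_t field2;    /* send_msg | send_seq */
        uint64_t field3;    /* recv_from_host_id | recv_seq */
};

static inline uint64_t pack32(uint32_t hi, uint32_t lo)
{
        return ((uint64_t)hi << 32) | lo;
}

static inline void fill_send(struct host_message_fields *f,
                             uint64_t host_id, uint64_t generation,
                             uint32_t send_msg, uint32_t send_seq)
{
        /* the 64 bit host_id/generation are truncated to 32 bits, as described above */
        f->field1 = pack32(host_id & 0xFFFFFFFF, generation & 0xFFFFFFFF);
        f->field2 = pack32(send_msg, send_seq);
}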
Sending a message
-----------------
int sanlock_host_message(const char *ls_name, uint32_t flags, int hm_size,
                         struct sanlk_host_message *hm, uint32_t *send_seq);

struct sanlk_host_message {
        uint64_t host_id;
        uint64_t generation;
        uint32_t send_msg;
};
The sending host sets the following in its delta_lease:
field 1:
        send_to_host_id  = hm.host_id & 0xFFFFFFFF;
        send_to_host_gen = hm.generation & 0xFFFFFFFF;
field 2:
        send_msg = hm.send_msg;
        send_seq = local_msg_seq++;
send_seq is returned to the caller for matching an acknowledgement.
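A caller-side sketch using the proposed prototype above; the struct and prototype are copied from the proposal (they do not exist in released sanlock), WD_RESET is msg_num 1 as defined earlier in the thread, and passing sizeof the struct as hm_size is an assumption:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Copied from the proposal above; not part of any released sanlock header. */
struct sanlk_host_message {
        uint64_t host_id;
        uint64_t generation;
        uint32_t send_msg;
};

int sanlock_host_message(const char *ls_name, uint32_t flags, int hm_size,
                         struct sanlk_host_message *hm, uint32_t *send_seq);

#define WD_RESET 1

/* Sketch: send WD_RESET to host_id/generation via the given lockspace. */
static int send_reset(const char *ls_name, uint64_t host_id, uint64_t generation)
{
        struct sanlk_host_message hm;
        uint32_t send_seq = 0;
        int rv;

        memset(&hm, 0, sizeof(hm));
        hm.host_id = host_id;         /* destination host_id */
        hm.generation = generation;   /* optional generation check */
        hm.send_msg = WD_RESET;

        rv = sanlock_host_message(ls_name, 0, sizeof(hm), &hm, &send_seq);
        if (rv < 0)
                return rv;            /* e.g. -EBUSY if a message is already active */

        printf("sent WD_RESET, send_seq %u\n", send_seq);
        return 0;
}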
Receiving a message
-------------------
The receiving host sees its own host_id/generation in the sending host's lease, processes send_msg, and saves host_id/seq in a list of messages to be acknowledged.
At the next delta lease renewal, it takes the next host_id/seq from its list and sets:
field 3:
        recv_from_host_id = the host_id that sent the message;
        recv_seq = the seq number that accompanied the message;
Receiving acknowledgement
-------------------------
sanlock will not keep any state about the host messages it has sent or try to match acknowledgements. But, sanlock does keep track of other hosts' delta lease state, and that could include recv_from_host_id/recv_seq. We can add an api for the caller to query the recv_from_host_id/recv_seq for a given host_id.
In the caller, sanlock_host_message() returned the send_seq value that was used for the message. After this, the caller would query sanlock for the recv_seq until it matched send_seq (or until it wants to give up.)
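A sketch of how the caller might wait for the ack, assuming a hypothetical query call (named sanlock_get_host_ack() here purely for illustration; no such function exists in sanlock) that returns the recv_from_host_id/recv_seq pair currently visible in the given host's delta lease:

#include <stdint.h>
#include <unistd.h>

/* Hypothetical query API corresponding to the proposal above; the name
 * and signature are assumptions, not an existing sanlock call. */
int sanlock_get_host_ack(const char *ls_name, uint64_t host_id,
                         uint64_t *recv_from_host_id, uint32_t *recv_seq);

/* Sketch: poll for the ack, matching recv_seq against the send_seq
 * returned by sanlock_host_message(). */
static int wait_for_ack(const char *ls_name, uint64_t dest_host_id,
                        uint64_t my_host_id, uint32_t send_seq, int max_polls)
{
        uint64_t from = 0;
        uint32_t seq = 0;
        int i, rv;

        for (i = 0; i < max_polls; i++) {
                rv = sanlock_get_host_ack(ls_name, dest_host_id, &from, &seq);
                if (rv < 0)
                        return rv;
                if (from == my_host_id && seq == send_seq)
                        return 0;   /* message acknowledged */
                sleep(10);          /* roughly one renewal interval */
        }
        return -1;                  /* gave up; fall back to host status */
}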
Problems with the acknowledgement scheme:
1. It will not work with a fast reset option using /proc/sysrq-trigger because there will not be enough time for the acknowledgement to be written before the host is reset. (With another independent message area, we could write an acknowledgement immediately, but borrowing the lockspace lease means we do not have this option.)
2. If multiple hosts send messages to a single destination at once, the destination host will need to acknowledge them one at a time in consecutive renewals. It takes longer to get an ack; each ack would be visible for one renewal and could be missed.
----- Original Message -----
From: "David Teigland" teigland@redhat.com To: "Nir Soffer" nsoffer@redhat.com Cc: "Allon Mureinik" amureini@redhat.com, "Ayal Baron" abaron@redhat.com, sanlock-devel@lists.fedorahosted.org, fsimonce@redhat.com, smizrahi@redhat.com, "Barak Azulay" bazulay@redhat.com, "Eli Mesika" emesika@redhat.com Sent: Thursday, March 6, 2014 12:32:42 AM Subject: Re: [PATCH] sanlock: host_message
I visited with Nir and one of the big problems with my initial host_message design was the lack of acknowledgements, which I'd been strongly resisting.
I've come up with a new design that could be a workable way of doing host messages with acknowledgements. I don't like it very much, but will give it a try.
There are three 64 bit fields in the delta lease leader record that we can use as follows:
field 1: uint32_t send_to_host_id; /* message destination */
Do we need 32 bits for that? Isn't this a number from 1 to 2000 (11 bits)?
uint32_t send_to_host_generation; /* message destination */
Is this a 32 bit value?
field 2: uint32_t send_msg; /* the caller-specified message */
Do we really need 4G different messages?
uint32_t send_seq; /* internal sequence number */
Do we really need a 32 bit counter?
field 3:
        uint32_t recv_from_host_id;  /* acknowledgement: message source */
        uint32_t recv_seq;           /* acknowledgement: send_seq */
Why send this message in a different field?
host_id and host_generation are 64 bit values everywhere else in sanlock, and this shortens them to 32 bits to fit them into the available space. Realistically, they should always fit in 32 bits, but it's ugly.
Can be cleaned by converting these everywhere to 32 bit values :-)
This also removes the 32 bit "data" field that could previously be sent along with the 32 bit msg number. 160 bits of overhead for a 32 bit message is a little sad.
How about this:
Message format - 64bit value:
field        bits
-----------------
host_id        12
generation     32
send_msg        8
send_seq       12
Acknowledge is just another message, so we send in the same field.
RESET = 0x01
ACK   = 0xFF
The sender detects the ack by getting an ACK message in the lease of the receiver with the sender host_id and generation and the same send_seq as the sent message.
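For illustration, a minimal sketch of packing and unpacking this 12/32/8/12 layout into a single 64 bit value; the field order and shift positions are an assumption, not part of the proposal:

#include <stdint.h>

/* Proposed single 64 bit message word:
 * host_id (12 bits) | generation (32) | send_msg (8) | send_seq (12).
 * The ordering and shifts are an assumption for illustration only. */
#define MSG_RESET 0x01
#define MSG_ACK   0xFF

static inline uint64_t msg_pack(uint64_t host_id, uint64_t generation,
                                uint8_t send_msg, uint16_t send_seq)
{
        return ((host_id    & 0xFFFULL)      << 52) |
               ((generation & 0xFFFFFFFFULL) << 20) |
               ((uint64_t)send_msg           << 12) |
               (send_seq    & 0xFFFULL);
}

static inline void msg_unpack(uint64_t m, uint64_t *host_id, uint64_t *generation,
                              uint8_t *send_msg, uint16_t *send_seq)
{
        *host_id    = (m >> 52) & 0xFFFULL;
        *generation = (m >> 20) & 0xFFFFFFFFULL;
        *send_msg   = (uint8_t)((m >> 12) & 0xFFULL);
        *send_seq   = (uint16_t)(m & 0xFFFULL);
}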
fields 2 and 3 - use them for sending messages to multiple hosts, or keep them reserved for future extension.
Sending a message
int sanlock_host_message(const char *ls_name, uint32_t flags, int hm_size,
                         struct sanlk_host_message *hm, uint32_t *send_seq);

struct sanlk_host_message {
        uint64_t host_id;
        uint64_t generation;
        uint32_t send_msg;
};
The sending host sets the following in its delta_lease:
field 1:
        send_to_host_id  = hm.host_id & 0xFFFFFFFF;
        send_to_host_gen = hm.generation & 0xFFFFFFFF;
field 2:
        send_msg = hm.send_msg;
        send_seq = local_msg_seq++;
send_seq is returned to the caller for matching an acknowledgement.
Receiving a message
The receiving host sees its own host_id/generation in the sending host's lease, processes send_msg, and saves host_id/seq in a list of messages to be acknowledged.
At the next delta lease renewal, it takes the next host_id/seq from its list and sets:
field 3:
        recv_from_host_id = the host_id that sent the message;
        recv_seq = the seq number that accompanied the message;
Receiving acknowledgement
sanlock will not keep any state about the host messages it has sent or try to match acknowledgements. But, sanlock does keep track of other hosts' delta lease state, and that could include recv_from_host_id/recv_seq. We can add an api for the caller to query the recv_from_host_id/recv_seq for a given host_id.
This means that the client has to remember the sent message's sequence number, so implementing a simple fence agent script will be impossible. You will have to create another process running from the start of fencing, remembering the sent message sequence and polling the sanlock daemon for the result.
If sanlock does remember sent messages and check for acks, it will be easier to use it from other tools.
In the caller, sanlock_host_message() returned the send_seq value that was used for the message. After this, the caller would query sanlock for the recv_seq until it matched send_seq (or until it wants to give up.)
Problems with the acknowledgement scheme:
- It will not work with a fast reset option using /proc/sysrq-trigger because there will not be enough time for the acknowledgement to be written before the host is reset. (With another independent message area, we could write an acknowledgement immediately, but borrowing the lockspace lease means we do not have this option.)
You can do a fast reset after the write to the storage finished, assuming that the write is not asynchronous.
- If multiple hosts send messages to a single destination at once, the destination host will need to acknowledge them one at a time in consecutive renewals. It takes longer to get an ack; each ack would be visible for one renewal and could be missed.
I don't see a problem here for the fencing use case.
Nir
On Thu, Mar 06, 2014 at 03:56:25PM -0500, Nir Soffer wrote:
field        bits
-----------------
host_id        12
generation     32
send_msg        8
send_seq       12
Even if I didn't need to be consistent with the way this works elsewhere, creating ad hoc field sizes like this would be unmanageable.
Acknowledge is just another message, so we send in the same field.
RESET = 0x01
ACK   = 0xFF
The sender detects the ack by getting an ACK message in the lease of the receiver with the sender host_id and generation and the same send_seq as the sent message.
It doesn't work because we can easily have unregulated overlapping of sending/receiving/acking messages, all trying to use the same fields at once. The result is chaos. Perhaps in your very specific usage this wouldn't happen, but this is at least a minimally generalized capability.
Receiving acknowledgement
sanlock will not keep any state about the host messages it has sent or try to match acknowledgements. But, sanlock does keep track of other hosts' delta lease state, and that could include recv_from_host_id/recv_seq. We can add an api for the caller to query the recv_from_host_id/recv_seq for a given host_id.
This means that the client has to remember the sent message's sequence number, so implementing a simple fence agent script will be impossible. You will have to create another process running from the start of fencing, remembering the sent message sequence and polling the sanlock daemon for the result.
I thought the program that sent the message (and got send_seq) would itself want to watch for the ack (recv_seq). If it got the ack, it would then proceed to monitor the host status.
If they are different programs, there are ways of passing a number between them. If it's truly difficult, then perhaps we could query both send_seq and recv_seq from sanlock.
If sanlock does remember sent messages and check for acks, it will be easier to use it from other tools.
I think it's too unrelated to sanlock's main job. I'm really aiming for as minimal and primitive and unintrusive as possible. One reason I don't like the idea of acks is because a system that needs acks probably wants a level of sophistication which sanlock simply can't provide (and shouldn't because it's not the purpose of sanlock.) So I want to add the absolute minimum that we need to implement your function.
- It will not work with a fast reset option using /proc/sysrq-trigger because there will not be enough time for the acknowledgement to be written before the host is reset. (With another independent message area, we could write an acknowledgement immediately, but borrowing the lockspace lease means we do not have this option.)
You can do a fast reset after the write to the storage finished, assuming that the write is not asynchronous.
I'm not sure how I'd use sysrq-trigger yet anyway -- I don't like the idea of encoding such a specific feature directly into sanlock. So, I'd need to figure that out, and maybe whatever is doing sysrq-trigger could add some delay to give the next renewal (with the ack) a chance to complete.