RFC feature planning - robust instance launching

Matt Wagner matt.wagner at redhat.com
Wed Mar 28 14:18:41 UTC 2012


On Wed, Mar 28, 2012 at 01:59:18PM +0200, Jan Provaznik wrote:
> On 03/27/2012 11:18 PM, Matt Wagner wrote:
<lots of snipping throughout>
> >
> >What is the type of failure we're trying to guard against here? This
> >seems a bit extreme, but I suspect I'm just not quite understanding.
> 
> It depends on what the most common launch failures are. The idea of
> the proposed solution (rollback + start somewhere else) is that if
> you get an error (caused by something outside conductor), there is a
> better chance you will succeed by choosing another account/provider
> than by retrying to launch the failed instance. IOW it expects that
> failure reasons are mostly not short-term. This expectation might be
> completely wrong; the problem is that I don't know what the common
> failures/outages are for various providers.
> 
> >Suppose you launch ten instances, and one of them fails due to a
> >transient network error. Wouldn't it make more sense to try again,
> 
> Good example - you are right: if there is a chance that a retry would
> help, then it makes sense to retry before rolling back. The question
> is which failures deserve a retry and how many retries should be
> done. What about this:
> 
> If an error occurs, do 2 retries, each after 30 secs; roll back if
> the error remains.
> 
> >rather than migrating the whole thing over to another cloud? I don't
> >think that is incompatible with what you're saying, but I'm wondering
> >what the conditions are where Conductor *thinks* we can launch an
> >instance somewhere, but an error pops up preventing it from succeeding.
> >
> 
> some examples would be:
> - dc-core/provider becomes inaccessible
> - some provider-side user quota is exceeded
> - provider can't fulfil hw profile requirements (for example there is
> not enough RAM/disk space)

You know, this almost makes me think of SMTP, where you have 400-level
temporary errors that mean you should retry delivery (the remote
mailserver's disk is full, the mailbox is over quota, etc.), and
500-level permanent errors (e.g., the mailbox you're trying to send to
just doesn't exist).

It seems like the same thing sort of applies here, except we don't have
the convenience of getting a status code that unambiguously tells us if
we should try again or not. If an API call to the provider times out, or
an instance fails to launch for an ambiguous reason, it probably makes
sense to retry that instance. But if the instance fails because we're
over quota on the provider's end, it probably makes sense to just move
on to the next provider immediately, since it's unlikely to succeed if we
try again.

All of that said, I suspect that this would be a real nightmare to
implement in the short-term, and that it would require a real mess of
code to try to map various return codes from various providers to a
temporary_error? method or something... So I'm not necessarily proposing
that this is the way we should do things, as much as brainstorming out
loud.
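
To make that concrete for myself, here's a rough sketch of how it
could hang together -- purely hypothetical Python pseudocode, not
Conductor's actual API; the error sets, launch_with_retries(), and
is_temporary_error() (a stand-in for the temporary_error? idea) are
all made up:

    import time

    # Hypothetical sketch only -- none of these names are Conductor's.
    # Failures from Jan's list that are unlikely to clear up on their own:
    PERMANENT_ERRORS = {"quota_exceeded", "hw_profile_unsatisfiable"}
    # Failures that might well be transient:
    TEMPORARY_ERRORS = {"timeout", "connection_reset", "unknown"}

    MAX_RETRIES = 2        # Jan's proposal: two retries...
    RETRY_DELAY_SECS = 30  # ...30 seconds apart

    def is_temporary_error(error_code):
        """Is this failure worth retrying, or should we roll back and
        move on to another provider account?"""
        return error_code in TEMPORARY_ERRORS

    def launch_with_retries(launch, instance):
        """Try to launch one instance, retrying transient failures.
        Returns True on success, False when the caller should roll back
        and try the next provider account."""
        for attempt in range(1 + MAX_RETRIES):
            ok, error_code = launch(instance)
            if ok:
                return True
            if not is_temporary_error(error_code):
                return False  # permanent: retrying won't help
            if attempt < MAX_RETRIES:
                time.sleep(RETRY_DELAY_SECS)
        return False  # still failing after the retries

The retry count and delay just mirror Jan's "2 retries, 30 secs apart"
suggestion; the hard part, as I said, is filling in those error sets
for each provider.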


> >This is probably an implementation detail and an edge case, but I wonder
> >what the right thing to do is if we end up in this state. I think we
> >could probably continue onto the next cloud and warn the user about
> >the instance that failed to stop, but it sounds like we're really in
> >trouble if we get here -- multiple things have to go wrong in a row.
> >
> 
> Yes, something really bad has to happen, for example you exceed
> provider-side quota -> rollback is activated -> dc-core goes down.
> 
> In such a situation, you still have a running instance which is
> unusable but which the user may still be paying for. It's also still
> counted in conductor's quota. So I think it's better/safer to abort
> the deployment launch in this case.

Yes, I think I agree. And even if we can do something smart here, it
probably makes sense to start simple.
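
For what it's worth, the simple version I have in mind looks roughly
like this (again a hypothetical Python sketch, not real Conductor
code; all of these names are invented):

    def rollback_and_relaunch(deployment, other_providers, stop, launch_on):
        """Simplest behaviour we discussed: stop everything launched so
        far, then try the next provider account. If any stop fails, warn
        and abort the whole deployment launch rather than leave things
        half-migrated (and possibly still being billed)."""
        for instance in deployment.launched_instances:
            if not stop(instance):
                deployment.abort("could not stop %s during rollback" % instance)
                return False
        for provider in other_providers:
            if launch_on(provider, deployment):
                return True
        deployment.abort("no remaining provider could launch the deployment")
        return False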

> >I'd be interested in finding out more about Heat and the plans, but I'm
> >especially interested in understanding how this is all meant to fit
> >together long-term.
> >
> 
> CCing Tomas who is working on Heat now.

Thanks! (Though I understand he's quite busy today.)

-- Matt


