RFC feature planning - robust instance launching

Tomas Sedovic tsedovic at redhat.com
Wed Mar 28 14:55:10 UTC 2012


On 03/28/2012 01:59 PM, Jan Provaznik wrote:
> On 03/27/2012 11:18 PM, Matt Wagner wrote:
>> On Tue, Mar 27, 2012 at 02:32:58PM +0200, Jan Provaznik wrote:
>>> Hi, sending a proposal for the "robust instance launching" scenario. Any
>>> thoughts or improvement ideas are welcome.
>>>
>>> Cut&paste from
>>> https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Robust_instance_launching
>>>
>>>
>>>
>>> Summary
>>> This page describes the multi-instance deployment launch process
>>
>> I will apologize in advance for what is likely my ignorance below, but
>> maybe I'm not the only one.
>>
>
> Hi Matt, thanks for the reply. Some comments are inline.
>
>>> Owner
>>>
>>> Jan Provaznik (jprovazn at redhat.com)
>>>
>>> Current status
>>> Targeted release:
>>> Last update:
>>> When launching a deployment, a deployment object is created and saved.
>>> Then the ‘launch’ method is called on this deployment, which creates
>>> the required instances in the conductor DB and associates them with the
>>> deployment object. Then it tries to find a suitable ‘match’ (a
>>> combination of hwp, provider account, and realm) where all instances of
>>> this deployment can be launched. If a match is found, launch params are
>>> computed for all instances. Finally, we iterate through all instances
>>> and try to launch them. If any instance launch fails, we set the
>>> create_failed state on that instance and continue with the next.
>>>
>>> None of the above steps run in a transaction; IOW, if a match is not
>>> found, the launch params upload fails, or an instance launch fails, the
>>> deployment and instances stay created. There is no retry or fallback
>>> plan if an error occurs (for example, the provider of the chosen match
>>> is not accessible).
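>>>
>>> For illustration, a rough sketch of this current flow (model and
>>> helper names such as find_match are simplified placeholders, not the
>>> exact conductor code):
>>>
>>> deployment = Deployment.create!(params)
>>> instances  = deployment.create_instances!   # saved right away
>>> match = find_match(instances)   # hwp + provider account + realm
>>> # if no match, we return here - deployment and instances stay in
>>> # the DB
>>> instances.each do |instance|
>>>   begin
>>>     instance.launch!(match)     # dc-api create instance request
>>>   rescue => e
>>>     # no retry/fallback: mark it and move on to the next instance
>>>     instance.update_attribute(:state, 'create_failed')
>>>   end
>>> end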
>>
>>
>> So my understanding of status quo is that multi-instance deployments
>> work, but with poor error handling and inflexibility on launch ordering?
>>
>
> yes
>
>>
>>> Screencast Demo
>>> 1) Successful deployment launch
>>> All instances should be launched in the proper order
>>>
>>> 2) Launch on the first provider account fails, succeeds on the second
>>> provider account
>>> Two of the deployment's instances launch, the third instance fails to
>>> launch
>>> The launch on the first account should be rolled back - the two
>>> launched instances should be stopped
>>> The launch should then be done on the second account and should succeed
>>>
>>> 3) Launch fails on both providers
>>> Two of the deployment's instances launch, the third instance fails to
>>> launch
>>> The launch on the first account should be rolled back - the two
>>> launched instances should be stopped
>>> Same for the second account
>>> The deployment should be destroyed
>>> A record of this failed launch should be created in a log
>>>
>>>
>>> Implementation tasks
>>> Tasks already in Redmine cover the whole deployment launch process,
>>> though they may be broken into smaller tasks soon:
>>> #3060 - Refactor the launch process to include better error
>>> reporting, retries, switching to alternate providers, etc.
>>> #3061 - Ensure that the UI doesn't contain unlaunched instances
>>> #3062 - Ensure that multi-instance deployments always launch fully or
>>> not at all. Conductor should automatically clean up partial
>>> deployments
>>>
>>>
>>> Detailed description
>>> The whole deployment launch process can be split into 3 phases:
>>> 1) pre-launch: we prepare the deployment and instance objects (in the
>>> conductor DB), prepare launch params, and compute dependencies between
>>> instances. If anything goes wrong in this phase, we just call rollback;
>>> nothing is saved and the user stays on the launch page
>>>
>>> 2) launch of non-blocked instances: send a dc-api create instance
>>> request for each instance which is not blocked. This step is done in
>>> the foreground together with phase 1 when a user presses the "launch"
>>> button (note: it’s possible to do this call from dbomatic too, if we
>>> decide it’s better).
>>>
>>> 3) launch instances on state change: instances which were not launched
>>> in phase 2 because they depend on instanceX are launched when instanceX
>>> is running. This can be done from the instance’s after_update
>>> callback - when an instance’s state changes to ‘running’, get the list
>>> of instances which become unblocked and launch them (see the sketch
>>> below). Phase 3 will usually be executed in the background, because
>>> instance states are usually updated by dbomatic (though not always -
>>> in some cases an instance’s state is updated directly on the dc-api
>>> request call).
>>> If an instance launch fails for some reason, we try to deploy
>>> somewhere else: stop all instances which have already been launched,
>>> then find another match (skipping all matches which failed), and reset
>>> the state to NEW for all instances (or drop and recreate them)
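>>>
>>> A minimal sketch of the phase 3 hook, assuming a hypothetical
>>> blocked? helper that checks whether all of an instance's
>>> dependencies are running (names are illustrative):
>>>
>>> class Instance < ActiveRecord::Base
>>>   belongs_to :deployment
>>>   after_update :react_to_state_change
>>>
>>>   private
>>>
>>>   def react_to_state_change
>>>     return unless state_changed?
>>>     case state
>>>     when 'running'
>>>       # launch instances that were blocked on this one
>>>       unblocked = deployment.instances.select do |i|
>>>         i.state == 'new' && !i.blocked?
>>>       end
>>>       deployment.instances_launch(unblocked)
>>>     when 'failed'
>>>       deployment.deployment_rollback
>>>     end
>>>   end
>>> end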
>>
>>
>> What is the type of failure we're trying to guard against here? This
>> seems a bit extreme, but I suspect I'm just not quite understanding.
>
> It depends on what the most common launch failures are. The idea of the
> proposed solution (rollback + start somewhere else) is that if you get an
> error (caused by something outside conductor), there is a better chance
> you will succeed by choosing another account/provider than by retrying
> the failed instance. IOW, it expects that failure reasons are mostly not
> short-term. This expectation might be completely wrong; the problem is
> that I don't know what the common failures/outages are for various
> providers.
>
>> Suppose you launch ten instances, and one of them fails due to a
>> transient network error. Wouldn't it make more sense to try again,
>
> Good example - you are right, if there is a chance that a retry would
> help, then it makes sense to retry before rolling back. The question is
> which failures deserve a retry and how many retries should be done? What
> about this:
>
> If an error occurs, do 2 retries, each after 30 secs; roll back if the
> error remains
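>
> Roughly, that retry policy could look like this (a sketch only;
> TransientLaunchError and the helper name are hypothetical, and the
> delay/count are the numbers above):
>
> def launch_with_retries(instance, match, retries = 2, delay = 30)
>   begin
>     instance.launch!(match)
>   rescue TransientLaunchError
>     if (retries -= 1) >= 0
>       sleep delay
>       retry
>     end
>     # the error remains after all retries: fall back to another match
>     instance.deployment.deployment_rollback
>   end
> end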
>
>> rather than migrating the whole thing over to another cloud? I don't
>> think that is incompatible with what you're saying, but I'm wondering
>> what the conditions are where Conductor *thinks* we can launch an
>> instance somewhere, but an error pops up preventing it from succeeding.
>>
>
> some examples would be:
> - dc-core/provider becomes inaccessible
> - some provider-side user quota is exceeded
> - provider can't fulfil hw profile requirements (for example there is
> not enough RAM/disk space)
>
>>
>>> launch progress page (TBD)
>>> Angus suggested that there could be something like a “launch progress
>>> page” where details of what’s being done with the deployment would be
>>> shown. So if the user checks the “show me details” checkbox before
>>> clicking the “launch” button, they are redirected to this progress
>>> page, where info about which step is being done is displayed:
>>> "Selecting provider account... account_name"
>>> “Making launch request for instance... x”
>>>
>>> This could probably be just a display of all events associated with
>>> this deployment.
>>
>>
>> I like the idea of just showing an event log, rather than trying to
>> implement anything over-the-top for this.
>>
>>
>>> Showing this page would be optional; alternatively, it could be part
>>> of the deployment’s show page, where a user could be redirected after
>>> launch.
>>>
>>>
>>> High-level implementation details
>>> Add a 'state' attribute to the Deployment model; states can be:
>>> new - deployment is created in the Conductor DB, but no instance has
>>> been launched yet
>>> pending - at least one instance launch has been requested
>>> failed - final state, deployment launch/shutdown failed
>>> rollback_in_progress - an error occurred during launching an instance
>>> and there are already some launched instances which have to be
>>> stopped
>>> rollback_failed - stopping of already launched instances failed
>>
>>
>> This is probably an implementation detail and an edge case, but I wonder
>> what the right thing to do is if we end up in this state. I think we
>> could probably continue onto the next cloud and warn the user about
>> the instance that failed to stop, but it sounds like we're really in
>> trouble if we get here -- multiple things have to go wrong in a row.
>>
>
> Yes, something really bad has to happen, for example you exceed a
> provider-side quota -> rollback is activated -> dc-core goes down.
>
> In such a situation, you still have a running instance which is unusable
> but which a user may still be paying for. It's also still counted
> against conductor's quota. So I think it's better/safer to abort the
> deployment launch in this case.
>
>>
>>> rollback_complete - already launched instances have been stopped; now
>>> the deployment can be launched somewhere else
>>> running - all instances launched successfully and are in the running
>>> state
>>> shutting_down - shutdown was initiated
>>> stopped - all instances are stopped
>>>
>>> Allowed state transitions:
>>> new -> pending
>>> pending -> running|rollback_in_progress|failed
>>> rollback_in_progress -> rollback_complete|rollback_failed
>>> rollback_complete -> pending|failed
>>> running -> shutting_down
>>> shutting_down -> stopped
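>>>
>>> The transition table could be enforced with a simple guard on the
>>> model, e.g. (sketch):
>>>
>>> class Deployment < ActiveRecord::Base
>>>   TRANSITIONS = {
>>>     'new'                  => ['pending'],
>>>     'pending'              => ['running', 'rollback_in_progress',
>>>                                'failed'],
>>>     'rollback_in_progress' => ['rollback_complete', 'rollback_failed'],
>>>     'rollback_complete'    => ['pending', 'failed'],
>>>     'running'              => ['shutting_down'],
>>>     'shutting_down'        => ['stopped'],
>>>   }
>>>
>>>   def transition_to!(new_state)
>>>     unless TRANSITIONS.fetch(state, []).include?(new_state)
>>>       raise "illegal transition #{state} -> #{new_state}"
>>>     end
>>>     update_attribute(:state, new_state)
>>>   end
>>> end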
>>>
>>> The deployment state will be used to track the deployment's history
>>> and to decide what to do on a change - for example, if the
>>> deployment's last instance is stopped, a deployment relaunch is done
>>> only if the deployment was in the rollback_in_progress state;
>>> otherwise the deployment stays stopped.
>>>
>>> The state will also be used in the UI for displaying the deployment's
>>> state - currently we use only 3 states (pending, running, and failed)
>>> and these are computed "per request" by checking the state of all
>>> instances in the deployment.
>>>
>>> deployment_launch:
>>>   in transaction do
>>>     create deployment
>>>     create deployment’s instances
>>>     compute instances dependencies (covered by task 3054)
>>>     find match where all instances can be launched (covered by
>>>       task 3064)
>>>     invoke instances_launch
>>>   on error:
>>>     deployment and instances are not created in conductor’s db
>>>     user stays on deployment launch page
>>>     proper error with reason why launch was not successful is displayed
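>>>
>>> In Rails terms, this controller-side step could look roughly like
>>> the following (a sketch only; LaunchError, find_match and
>>> create_instances! are illustrative names, not existing conductor
>>> methods):
>>>
>>> class LaunchError < StandardError; end
>>>
>>> def deployment_launch(params)
>>>   Deployment.transaction do
>>>     deployment = Deployment.create!(params)
>>>     instances  = deployment.create_instances!
>>>     deployment.compute_dependencies(instances)   # task 3054
>>>     match = find_match(instances) or             # task 3064
>>>       raise LaunchError, "no suitable match found"
>>>     # raising rolls the transaction back, so nothing stays in the DB
>>>     deployment.instances_launch(instances.reject(&:blocked?), match)
>>>   end
>>> rescue LaunchError => e
>>>   flash[:error] = e.message   # user stays on the launch page
>>> end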
>>>
>>> instances_launch:
>>>   for each deployment’s instance which is not blocked do
>>>     check quota
>>>     send dc api launch request
>>>   on error:
>>>     initiate deployment rollback
>>>
>>> instance’s after_update callback:
>>>   if instance is in running state then invoke instances_launch
>>>   elsif instance is in failed state then invoke deployment_rollback
>>>
>>> deployment_rollback:
>>>   if all instances are stopped/failed, invoke deployment_relaunch
>>>   else send stop request to any instances in pending or running state
>>>
>>> deployment_relaunch:
>>>   find new match where all instances can be launched (skipping
>>>     matches which we tried before)
>>>   if match is found, invoke instances_launch
>>>   elsif match is not found, retry all matches -> use the first
>>>     match which failed before
>>>   if match is still not found, create a log entry about the failed
>>>     launch in a history log (covered by scenario 3037) and destroy
>>>     this deployment
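>>>
>>> Expressed as Deployment model methods, rollback/relaunch could look
>>> like this (sketch; tried_matches and log_failed_launch are assumed
>>> bookkeeping helpers, not existing code):
>>>
>>> def deployment_rollback
>>>   transition_to!('rollback_in_progress')
>>>   active = instances.select { |i| %w(pending running).include?(i.state) }
>>>   if active.empty?
>>>     deployment_relaunch
>>>   else
>>>     # relaunch is triggered later, once they all report stopped
>>>     active.each(&:stop!)
>>>   end
>>> end
>>>
>>> def deployment_relaunch
>>>   match = find_match(instances, :skip => tried_matches)
>>>   match ||= tried_matches.shift  # retry the first match that failed
>>>   if match
>>>     transition_to!('pending')
>>>     instances_launch(instances.reject(&:blocked?), match)
>>>   else
>>>     log_failed_launch   # history log, scenario 3037
>>>     destroy
>>>   end
>>> end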
>>>
>>> Instance launch timeout
>>> On deployment launch, when an instance stays in the pending state for
>>> X minutes, the launch is terminated and a deployment rollback is
>>> initiated.
>>> This timeout should be configurable; the default timeout could be 15
>>> minutes?
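>>>
>>> A periodic check run from dbomatic could enforce the timeout, e.g.
>>> (sketch; uses the standard updated_at column as an approximation of
>>> time spent in the pending state):
>>>
>>> LAUNCH_TIMEOUT = 15.minutes  # should come from configuration
>>>
>>> def check_launch_timeouts
>>>   Instance.find_all_by_state('pending').each do |instance|
>>>     next if instance.updated_at > Time.now - LAUNCH_TIMEOUT
>>>     instance.deployment.deployment_rollback
>>>   end
>>> end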
>>>
>>>
>>> Future plan
>>> The above is a short/mid-term solution for improving instance
>>> launching; it doesn't add any new dependency/tool. The long-term
>>> solution is to integrate Heat (https://github.com/heat-api), which is
>>> expected to do everything we need (take care of deps between
>>> instances, launch instances in the proper order, roll back failed
>>> launches, monitoring...).
>>
>>
>> I'd be interested in finding out more about Heat and the plans, but I'm
>> especially interested in understanding how this is all meant to fit
>> together long-term.
>>
>
> CCing Tomas, who is working on Heat now.
>
>> -- Matt
>
> Jan

Hey Matt,

Heat's goal is to take care of orchestration and high availability 
concerns when launching deployments in the cloud.

Essentially, you feed it a CloudFormation or TOSCA template, and it will 
figure out the proper order in which to start the instances and make 
sure the services inside them are configured and launched properly.

It will also take care of monitoring the deployments and keeping them up.

We hope to integrate Heat into OpenStack, but since it's going to be a 
daemon with a web-accessible API, there's no reason why Aeolus can't use 
it as well.

Note that the project is very young. We've *just* created a skeleton 
website and the repo is only a couple of weeks old.

http://heat-api.org/


