Using pacemaker cloud for start/stop Was: Re: Background Jobs in conductor

Steven Dake sdake at redhat.com
Thu Dec 8 20:18:42 UTC 2011


On 12/08/2011 12:46 PM, Scott Seago wrote:
> On 12/08/2011 02:21 PM, Steven Dake wrote:
>> On 12/08/2011 03:37 AM, Jan Provaznik wrote:
>>> On 12/01/2011 01:41 PM, Tomas Hrcka wrote:
>>>> Hi all,
>>>>
>>>> the time has come, and we need to implement background jobs somehow. I
>>>> am currently working on a task which involves long-running operations
>>>> on the provider side (i.e. stop and delete). As it stands, the UI can
>>>> become unresponsive for 2-4 minutes, and that is not what we want.
>>>>
>>> Hi,
>>> it's only the "destroy" operation which takes a long time. This
>>> operation can be triggered when the instance state changes to "stop".
>>> The "stop" state is set by dbomatic, which already runs in the
>>> background, so the "destroy" operation should always run in the
>>> background too -> no slow UI response in this case.
>>>
>>> As I said before, we (will) need a background job on the conductor side
>>> anyway because of mass operations on instances. For example: if a user
>>> selects multiple instances running on various providers and one of the
>>> providers doesn't respond because of a firewall/connection error, the
>>> conductor user will wait for a response forever or will get a connection
>>> timeout error without knowing on which instances the command was
>>> executed. Another use case is the "build+push" feature which we had to
>>> postpone - having background jobs solves this too.
>>>
>>>> The question is what we will use for performing background jobs.
>>>> ruby-toolbox.com [1] has an overview of Ruby projects for background
>>>> jobs. Or we can implement our own solution.
>>>>
>>>> [1] - https://www.ruby-toolbox.com/categories/Background_Jobs
>>>>
>>>>
>>> Yes, there are many nice tools we can use. As was already mentioned in
>>> this thread, we used DelayedJob in conductor before, and from what I
>>> remember it worked fine - but that's not to say we must use it again.
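>>>
>>> Just to make that concrete, a minimal sketch of what the slow destroy
>>> could look like with DelayedJob (InstanceDestroyJob, Instance and
>>> destroy_on_provider are illustrative names, not existing conductor
>>> code):
>>>
>>>   # hypothetical job object; DelayedJob only requires it to respond
>>>   # to #perform
>>>   class InstanceDestroyJob < Struct.new(:instance_id)
>>>     def perform
>>>       instance = Instance.find(instance_id)
>>>       instance.destroy_on_provider  # the slow call to the provider
>>>     end
>>>   end
>>>
>>>   # given an `instance` record, enqueue instead of blocking the
>>>   # request/dbomatic thread; a `rake jobs:work` worker picks it up
>>>   Delayed::Job.enqueue(InstanceDestroyJob.new(instance.id))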
>>>
>>> Another option is to extend dbomatic to do this job, but I don't think
>>> that's the best solution.
>>>
>>> Since this background job integration probably won't be part of the 1.0
>>> release, we have enough time to think about the best solution.
>>>
>>> Jan
>> Jan,
>>
>> Since we are speaking post-1.0, an alternative way to stop deployables
>> without blocking the conductor UI is to use the API that Tomas and
>> Padraig defined in [1].
>>
>> In essence, the general idea would be to drive start/stop from a
>> separate process (pacemaker cloud's cpe process) via the API described
>> by Tomas.  Pacemaker cloud would take care of calling the deltacloud
>> APIs to start and stop instances and would report what happened back to
>> conductor.  Conductor would tell CPE to start/stop instances and relay
>> the feedback given by the DPE to the administrator, removing policy
>> execution from conductor and more tightly focusing conductor as an OAM
>> UI (a rough sketch of such a call is included below).
>>
>> This offers several advantages:
>> * Multiple assembly deployable support with full HA
>>    + Rigorous active fault detection of resources (applications)
>>    + Rigorous active fault detection of assemblies
>>    + multiple assemblies can have start ordering per user policy
>>    + When a failure is detected, child assemblies can be properly
>>      recovered by restart or reconfiguration
>>    + A recovery escalation policy can be enforced on fault detection
>> * Clean separation of OAM policy generation (occurs in conductor) and
>>    OAM policy execution (occurs in CPE).  This separation removes any
>>    blocking that occurs in conductor with the current implementation.
>>
>> The disadvantage is the rework involved; however, most of the building
>> blocks are already in place in the code base, which limits that rework.
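>>
>> For illustration only, the conductor side of such a call might look
>> roughly like this (the host, port, resource names and payload here are
>> hypothetical; the real shape is whatever the API in [1] specifies):
>>
>>   require 'net/http'
>>   require 'json'
>>
>>   # Hypothetical request asking the CPE process to stop deployable 42;
>>   # CPE would call deltacloud and report the outcome back asynchronously.
>>   uri = URI('http://localhost:8888/pacemaker-cloud/deployables/42/stop')
>>   req = Net::HTTP::Post.new(uri.path)
>>   req['Content-Type'] = 'application/json'
>>   req.body = { 'requested_state' => 'stopped' }.to_json
>>   Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }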
>>
> This brings up a question that was unclear to me in the recent
> discussion around moving instance control into pacemaker. I can see this
> being handled in one of two ways:
> 1) pacemaker as a required component in all cases (essentially a full
> replacement for what was once handled by taskomatic and is now handled
> inline)
> 2) pacemaker only used for HA deployments, while the default/simpler
> config uses Conductor as it exists today.
> 
> 1) would be easier to implement (there wouldn't be two ways to deploy),
> but it may make the simpler use cases more complex than they need to be
> (and development, for that matter). One of the problems with the
> 'required condor' architecture was that it made a full end-to-end setup
> much more complex for developers, so more often than not people ended up
> not testing code with condor (i.e. not testing actual launches, etc.).
> In addition, we had the issue of platform availability of condor. Would
> pacemaker result in similar issues, or is it much more lightweight and
> more easily configured than condor was?
> 

The pacemaker cloud code base is 7500 lines of code including test cases.
With deltacloud launching, we may hit 8-9k.  Compare that to condor, which
was 515k lines of C code alone...

Configuration involves three things:
1. installing the rpms
2. starting the daemon via init scripts or systemctl
3. defining the authentication credentials between conductor and
   pacemaker cloud

At the moment we don't have a solution for #3 (i.e. there is no security
in Tomas's API definition), so initially only 1 & 2 would be required.

Regarding the second question (platform availability), do you mean that
condor would fail and not handle that case?  I need more detail here to
answer your question.

> 2) would result in more code complexity within Conductor, as we'd
> potentially have to have two ways of launching, stopping, etc. -- one
> with, and one without, pacemaker. We could possibly get around this by
> implementing a variant of the deltacloud API in front of pacemaker --
> essentially sending a deltacloud request to pacemaker instead of
> deltacloud-core; pacemaker could parse it as needed and then, for a
> launch, pass the request along to deltacloud (a rough sketch of such a
> facade follows below). But with this variant we now have pacemaker as
> the component that bears the brunt of the additional complexity.
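> 
> Purely to make that facade idea concrete, a toy sketch (Sinatra-style;
> only the /api/instances/:id/:action URL shape mimics deltacloud's
> instance actions, the rest -- port, RestClient usage, behaviour -- is
> made up):
> 
>   require 'sinatra'
>   require 'rest-client'
> 
>   DELTACLOUD = 'http://localhost:3001/api'  # assumed deltacloud-core URL
> 
>   # Conductor sends its usual instance-action request here instead of
>   # to deltacloud-core.
>   post '/api/instances/:id/:action' do
>     if params[:action] == 'start'
>       # plain launch: just proxy straight through to deltacloud
>       RestClient.post("#{DELTACLOUD}/instances/#{params[:id]}/start", {})
>     else
>       # stop/destroy etc. would be handed to pacemaker cloud's policy
>       # engine here instead of being executed synchronously (hand-waved)
>     end
>     status 202
>   end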
> 

This approach seems less optimal, since one of our main values is to
offer multiple instance deployables.  Providing a deltacloud wrapper
around pacemaker cloud wouldn't give us that advantage, and would make
the work not all that valuable (i.e. condor take 2).  While it would be
interesting to add the MID policy concepts into deltacloud, that work
seems overwhelming for us (specifically the API definition work in
deltacloud that would need to happen, given the slow pace of Apache
projects).

> Any thoughts here? From a 'drawing the boxes' point of view, 1) is the
> easiest to understand. My concern is that there will be a lot of
> resistance to pulling in "condor, the sequel", unless it's clear that
> things will run a lot more smoothly this time around.
> 

Agreed, condor the sequel would be seen as a failure by us.  Condor was
removed for a reason.  Just to be clear about what pacemaker cloud does:
its main purpose is to maintain the multiple instance deployables that
the user (conductor) asks it to maintain.  It takes a user policy and
takes action to make it happen.  It is custom-made for this specific
problem and no other.

To avoid any risk to the code base, we could develop the conductor
changes in a fork where the patches can be sorted out.  Then we can do a
compare/contrast and the dev community can choose which approach they
like better.

Regards
-steve

> Scott
> 
>> [1]
>> https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Pacemaker_Cloud_and_Conductor_Notification_API
>>
>>
> 



