RFC: Integrating Aeolus with Heat

Jan Provaznik jprovazn at redhat.com
Fri Sep 7 12:35:39 UTC 2012


On 09/04/2012 11:25 PM, Ian Main wrote:
> On Mon, Sep 03, 2012 at 12:44:05PM +0200, Jan Provazník wrote:
>> On 08/30/2012 10:23 PM, Ian Main wrote:
>>> On Thu, Aug 30, 2012 at 09:21:37AM +0200, Jan Provaznik wrote:
>>>> On 08/29/2012 09:55 PM, Ian Main wrote:
>>>>> On Mon, Aug 27, 2012 at 02:47:42PM +0200, Jan Provaznik wrote:
>>>>>> On 08/21/2012 06:15 PM, Tomas Sedovic wrote:
>>>>>>> Hey Folks,
>>>>>>>
>>>>>
>>>>> [snip]
>>>>>
>>>>>>> ### Querying Heat data from Conductor ###
>>>>>>>
>>>>>>> Heat doesn't support any callbacks. When Conductor wants to know details
>>>>>>> about the stack it launched, it will use the CloudFormation API to query
>>>>>>> the data.
>>>>>>>
>>>>>>> For the proof of concept stage, we will just issue the query to Heat
>>>>>>> upon every relevant UI action: e.g. `ListStacks` when showing
>>>>>>> deployables in the UI, `DescribeStackResource` when showing the details
>>>>>>> of a single deployable, `DescribeStackEvents` to get deployable events, etc.
>>>>>>>
>>>>>>
>>>>>> This is OK for a POC, but it would be really nice to have callback
>>>>>> support for real integration.
>>>>>>
>>>>>> nit: you probably meant 'deployment' instead of 'deployable' in the
>>>>>> paragraph above.
>>>>>
>>>>> I am curious why you think it is necessary to use callbacks and
>>>>> mirror the data held in Heat within Aeolus.
>>>>>
>>>>>      Ian
>>>>>
>>>>
>>>> Conductor needs to know if/when a deployment or a single instance
>>>> changes its state (is this what you mean by mirroring data?). Without
>>>> notification support on the Heat side, Conductor would have to poll
>>>> Heat, which is painful (it requires a dbomatic-like service on the
>>>> Conductor side) and not very efficient.
>>>
>>> I agree a dbomatic-type service is error-prone.  However, mirroring data
>>> from one service to another is a very difficult problem to solve well
>>> and have it be reliable.
>>>
>>> Is this required for some sort of reporting?  If it is just for the
>>
>> Yes, reporting and keeping history logs about instances are part of
>> Conductor. Conductor also uses this information when choosing a
>> provider at instance launch, and for quota checking.
>
> This could be done either way, but really you just need a tally
> of instances per user and per cloud.  I'm not saying it is ideal, but I
> wouldn't say it's impossible or even unwise to consider direct querying
> even here.
>
>>> user to view the states, then that can be done on an as-needed basis by
>>> contacting Heat.  Even reporting is part of the AWS CloudFormation API,
>>> and events for stacks are supposed to be kept around for something like
>>> 90 days (IIRC).
>>>
>>> Personally I very much question the need to mirror the data Heat retains
>>> into Aeolus, as these kinds of things are very error-prone and difficult.
>>> Unless there is some kind of special need for reporting etc., the data
>>> could just as easily be queried directly.
>>>
>>>      Ian
>>>
>>
>> Besides the needs listed above, I'm afraid there might be
>> performance issues with querying Heat directly on each user
>> request.
>
> I think we would have to try it and see.  Ultimately it all comes from a
> database over a network connection.  Querying Heat only adds one more
> HTTP request/socket layer.  Perhaps some time spent munging data, but all
> in all it's fairly lightweight stuff.
>

Hi, sorry for the late response; there was a deadline in the last few days...

Honestly, I don't think it would be lightweight stuff: sooner or later
Heat will have some authentication mechanism, and it will also issue
another query (or multiple queries) against its own DB backend for each
request.
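
For illustration, per-request polling through Heat's
CloudFormation-compatible API (the ListStacks/DescribeStackEvents calls
mentioned earlier in this thread) would look roughly like this with
boto; the endpoint, port, path, and credentials below are placeholders,
not real Heat defaults:

    import boto
    from boto.regioninfo import RegionInfo

    # Placeholder credentials/endpoint for a hypothetical Heat install.
    heat = boto.connect_cloudformation(
        aws_access_key_id='conductor',
        aws_secret_access_key='secret',
        region=RegionInfo(name='heat', endpoint='heat.example.com'),
        port=8000, is_secure=False, path='/v1')

    # At least one extra HTTP round trip per UI request:
    for summary in heat.list_stacks():
        stack = heat.describe_stacks(summary.stack_name)[0]
        events = heat.describe_stack_events(summary.stack_name)
        # ...map stack state and events onto Conductor's deployment view...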

> I think it's important to really consider what is involved in duplicating

As I mentioned in the first mail in this thread, Conductor primarily
needs information about instance state changes; among other resources,
it also checks realm availability (and maybe we will need to check
hardware profile changes in the future).

> that data.  It has been difficult in the past, e.g. with dbomatic,
> because it is a very hard problem.  Even with events this doesn't
> change.  What happens when a service goes down for a time?  You need to
> replay events, read logs, etc.  People spend a lot of time and a lot of

Well, callback-related issues were discussed in the thread about the DC
tracker, and handling them wouldn't be so hard (it would be a
combination of retry+replay).
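
To sketch the retry+replay idea (the function names and the Conductor
callback URL are made up for illustration, not an existing API): Heat
would retry delivery of each event, and after an outage Conductor would
replay whatever is newer than the last event it stored:

    import time
    import requests

    def deliver(event, url, attempts=5, delay=2):
        """Retry: POST one event to Conductor, backing off between tries."""
        for i in range(attempts):
            try:
                requests.post(url, json=event, timeout=5).raise_for_status()
                return True
            except requests.RequestException:
                time.sleep(delay * (i + 1))
        return False  # give up; Conductor catches up via replay

    def replay(heat, stack_name, last_seen):
        """Replay: fetch events newer than the last one Conductor stored.
        'heat' is a boto CloudFormation connection as sketched above;
        boto returns ISO8601 timestamp strings, which compare
        chronologically."""
        events = heat.describe_stack_events(stack_name)
        return [e for e in events if e.timestamp > last_seen]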

What about this:
instead of callbacks, we could use a message queue (an AMQP 
implementation, e.g. RabbitMQ). Integrating this into Heat and Conductor 
would be a matter of a few seconds (there are clients for most 
programming languages). This would be quite a robust solution.
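
A minimal sketch with pika (the broker host, queue name, and message
format below are just examples, nothing Heat or Conductor defines
today):

    import json
    import pika

    conn = pika.BlockingConnection(
        pika.ConnectionParameters('broker.example.com'))
    channel = conn.channel()
    channel.queue_declare(queue='heat.instance-state', durable=True)

    # Heat side: publish an instance state change as a persistent message.
    channel.basic_publish(
        exchange='',
        routing_key='heat.instance-state',
        body=json.dumps({'instance': 'i-0001', 'state': 'ACTIVE'}),
        properties=pika.BasicProperties(delivery_mode=2))

    # Conductor side: consume events and ack only after recording them,
    # so nothing is lost if Conductor is down for a while.
    def on_event(ch, method, properties, body):
        event = json.loads(body)
        # ...update Conductor's instance record / history log here...
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue='heat.instance-state',
                          on_message_callback=on_event)
    channel.start_consuming()

With durable queues and persistent messages, the broker keeps events
while a consumer is down, which covers the replay concern above.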

The same messaging might also be used for sending notifications between 
Conductor and Imagefactory.

> money trying to solve data replication.  IMO, if you can access the
> primary data source for your implementation, that will be a far more

Don't forget that Heat is *not* the primary data source (only the cloud 
provider is). And BTW, Heat will have to do the same "data mirroring" 
(if you mean instance state checking by this term) itself, and if it's 
going to use dc-api, it will probably end up polling providers.

> robust solution.  Yes, this may mean having to sort out performance
> issues, but I think that is ultimately an easier problem.
>
>      Ian
>

Jan


