RFC: Integrating Aeolus with Heat

Hugh Brock hbrock at redhat.com
Fri Sep 7 12:31:07 UTC 2012


On Thu, Sep 06, 2012 at 09:47:40PM -0700, Ian Main wrote:
> On Thu, Sep 06, 2012 at 03:56:44PM -0400, Scott Seago wrote:
> > On 09/04/2012 05:25 PM, Ian Main wrote:
> > >On Mon, Sep 03, 2012 at 12:44:05PM +0200, Jan Provazník wrote:
> > >>On 08/30/2012 10:23 PM, Ian Main wrote:
> > >>>On Thu, Aug 30, 2012 at 09:21:37AM +0200, Jan Provaznik wrote:
> > >>>>On 08/29/2012 09:55 PM, Ian Main wrote:
> > >>>>>On Mon, Aug 27, 2012 at 02:47:42PM +0200, Jan Provaznik wrote:
> > >>>>>>On 08/21/2012 06:15 PM, Tomas Sedovic wrote:
> > >>>>>>>Hey Folks,
> > >>>>>>>
> > >>>>>[snip]
> > >>>>>
> > >>>>>>>### Querying Heat data from Conductor ###
> > >>>>>>>
> > >>>>>>>Heat doesn't support any callbacks. When Conductor wants to know details
> > >>>>>>>about the stack it launched, it will use the CloudFormation API to query
> > >>>>>>>the data.
> > >>>>>>>
> > >>>>>>>For the proof of concept stage, we will just issue the query to Heat
> > >>>>>>>upon every relevant UI action: e.g. `ListStacks` when showing
> > >>>>>>>deployables in the UI, `DescribeStackResource` when showing the details
> > >>>>>>>of a single deployable, `DescribeStackEvents` to get deployable events, etc.
> > >>>>>>>
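
To make that concrete, here is roughly what those per-action queries could
look like from the Conductor side, assuming boto's CloudFormation bindings
pointed at Heat's CFN-compatible endpoint. This is only a sketch; the host,
port, path, and credentials below are placeholders, not anything agreed on.

    # Sketch only: polling Heat's CloudFormation-compatible API on demand.
    # The host, port (8000) and path (/v1) are assumptions about a default
    # heat-api-cfn setup; the EC2-style credentials are placeholders.
    from boto.cloudformation.connection import CloudFormationConnection
    from boto.regioninfo import RegionInfo

    heat = CloudFormationConnection(
        aws_access_key_id='EC2_ACCESS_KEY',       # placeholder
        aws_secret_access_key='EC2_SECRET_KEY',   # placeholder
        region=RegionInfo(name='heat', endpoint='heat.example.com'),
        is_secure=False, port=8000, path='/v1')

    # "List deployments" view: one ListStacks call per page load.
    for summary in heat.list_stacks():
        print("%s %s" % (summary.stack_name, summary.stack_status))

    # "Show one deployment" view: stack details plus its event log.
    stack = heat.describe_stacks('my-deployment')[0]
    print(stack.stack_status)
    for event in heat.describe_stack_events('my-deployment'):
        print("%s %s" % (event.logical_resource_id, event.resource_status))
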
> > >>>>>>This is OK for POC, but it would be really nice to have callback
> > >>>>>>support for real integration.
> > >>>>>>
> > >>>>>>nit: you probably meant 'deployment' instead of 'deployable' in the
> > >>>>>>paragraph above.
> > >>>>>I am curious as to why you think it is necessary to use callbacks and
> > >>>>>mirror the data held in heat within aeolus?
> > >>>>>
> > >>>>>     Ian
> > >>>>>
> > >>>>Conductor needs to know if/when a deployment or a single instance
> > >>>>changes its state (is this what you mean by mirroring data?). Without
> > >>>>notification support on the Heat side, Conductor would have to poll
> > >>>>Heat, which is painful (a dbomatic-like service on the Conductor side)
> > >>>>and not very efficient.
> > >>>I agree a dbomatic-type service is error prone.  However, mirroring data
> > >>>from one service to another is a very difficult problem to solve well
> > >>>and make reliable.
> > >>>
> > >>>Is this required for some sort of reporting?  If it is just for the
> > >>Yes, reporting and keeping history logs about instances is part of
> > >>Conductor. Conductor also uses this information when choosing a
> > >>provider at instance launch time, and for quota checking.
> > >This could be done either way, but really you just need a tally
> > >of instances per user and per cloud.  I'm not saying it is ideal but I
> > >wouldn't say it's impossible or even unwise to consider direct querying
> > >even here.
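
For what it's worth, a live tally along those lines need not be much code.
The sketch below (using a placeholder boto connection like the one above)
counts instance resources across the stacks one provider account can see;
tying stacks back to a Conductor user would still need some mapping on our
side, e.g. a stack naming convention, which is exactly the open question.

    # Sketch only: a per-provider-account tally computed by direct querying,
    # with no mirrored instance table.  'heat' is a CloudFormationConnection
    # built as in the earlier sketch.
    def count_instances(heat):
        """Return {stack_name: number of EC2-instance resources}."""
        counts = {}
        for summary in heat.list_stacks():
            if summary.stack_status == 'DELETE_COMPLETE':
                continue
            resources = heat.describe_stack_resources(summary.stack_name)
            counts[summary.stack_name] = sum(
                1 for r in resources
                if r.resource_type == 'AWS::EC2::Instance')
        return counts

    # A quota check for this account would then be roughly:
    #     sum(count_instances(heat).values()) <= account_quota
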
> > One difficulty is that if we're talking about making heat optional,
> > heat and any other launching infrastructure (including, perhaps, the
> > current/legacy one if that remains) will need to handle quota and
> > instance state tracking, queries, etc. in the same way. Currently
> > instance metadata, state, and quota checking are tracked in conductor
> > itself. As long as heat is optional, I'm not sure how we would change
> > that.
> >
> > If heat became a complete replacement, we _could_ query all of this
> > stuff live (rather than caching), but there are a lot of moving parts
> > in the existing infrastructure that would need to be rewritten. After
> > the pain of handling image metadata in a separate service (with data
> > only available via live external calls), we're in the process of
> > moving that back into conductor. We need to be careful about deciding
> > to do the opposite with instances and deployments.
> >
> > I'm not saying we can't/shouldn't do that, but we'd better make sure
> > we've got answers to the various pain points -- performance,
> > searching, object associations, permissions, etc. For permissions, in
> > particular, even if all instance/deployment metadata were in heat
> > (and only queried live from Conductor), we'd at least need
> > placeholder objects on our side, so that we don't lose the ability to
> > manage permissions on a per-object basis.
> 
> Yeah, I'm really just asking people to think about it a bit more and
> consider what is really involved.  Ultimately, as is the nature of any
> open source project, we will just have to see how things unfold :).

Something else to think about that has some bearing on this question:

David Lutterkort is working on a Deltacloud instance state tracker that
would sit in front of Deltacloud and be authoritative for instance
status, the success or failure of state transitions, etc. (We should
probably help out with this too.)

In the long term, I wonder if it makes sense to move Conductor towards
an architecture where it always depends on some external service --
Heat, Deltacloud, maybe even Foreman for a bare-metal cloud -- for state
information. 

You're right, Scott, that because cloud brokering (multiplexing
credentials), self-service placement, and so on are always going to
require Conductor to track permissions, Conductor will always have to
keep some representation of the running instance in the local db for
permission queries and the like. However, I'm not sure that causes the
same sync issues Ian is worried about (Ian, correct me if I'm wrong).
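
To make the placeholder idea concrete: it could be as thin as a local
record tying a Heat stack id to our permission grants, with everything
else fetched live. A rough sketch -- the class and field names here are
made up for illustration, not a proposal for the actual schema:

    # Sketch only: a minimal Conductor-side placeholder for a deployment
    # whose authoritative state lives in Heat.
    class DeploymentPlaceholder(object):
        def __init__(self, heat_stack_id, owner):
            self.heat_stack_id = heat_stack_id  # pointer to the stack in Heat
            self.owner = owner                  # Conductor user who launched it
            self.grants = {}                    # user -> role, e.g. {'alice': 'admin'}

        def permits(self, user, action):
            """Purely local permission check; no call out to Heat needed."""
            role = self.grants.get(user)
            return role == 'admin' or (role == 'user' and action == 'view')

    # Status, resources, and events would still be looked up live in Heat
    # via heat_stack_id, as in the polling sketch earlier in the thread.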

In the first round of Heat integration, however, I'm sure we are going
to have some duplication of information, and we're probably just going
to have to live with it...

--Hugh

-- 
== Hugh Brock, hbrock at redhat.com                                   ==
== Engineering Manager, Cloud BU                                   ==
== Aeolus Project: Manage virtual infrastructure across clouds.    ==
== http://aeolusproject.org                                        ==

"I know that you believe you understand what you think I said, but I’m
not sure you realize that what you heard is not what I meant."
--Robert McCloskey


