Dear Aeolus community,
I've sent the first pull request integrating Heat with Conductor. Please have a look at the changes here:
https://github.com/aeolusproject/conductor/pull/424
The pull request contains some observations, notes and rambling that should start a discussion on things I missed, did incorrectly, etc. For your convenience, I'm pasting it here as well:
This is the first step towards full Heat integration. It routes deployment launching and status querying to Heat. Launch-time parameters should just work; ditto for deleting the instances.
This is mostly to give you guys a direction where I think this should go and to get comments and feedback. I don't think it's wise to merge it just yet -- we need to test it more, clean up the code, find the subtle things this breaks, etc.
There are also a few issues that need fixing:
* Heat's API supports atomic launch (i.e. rollback) but as far as I know, it's not been implemented yet
* There's a bug in Heat that means the actual instance status isn't reported back correctly. If you launch a deployment and stop the instances out of band, Heat will show them as running.
* Heat generates its own wealth of events that we should show in Conductor. They're currently being ignored.
There's also an issue of performance. The way it's implemented now, every time you ask a deployment or an instance for its status, it queries the Heat API for an up-to-date value.
This happens a lot of times and it grinds the entire UI to a halt.
I tried caching the value for the lifetime of the object as a simple workaround for rendering, but apparently we (or Rails) instantiate the same Deployment and Instance model objects over and over again when rendering the view, so this didn't help much.
The same issue exists with the database backend as well, but Rails transparently caches the DB queries for each controller action.
As a band-aid, I hooked `Heat::heat_request` into the Rails cache, which deals with the issue.
Eventually though, we need to think of a better way to solve it -- either by reducing the number of instances we create or by introducing a proper HTTP caching layer.
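For reference, the band-aid amounts to roughly this (a minimal sketch: `perform_heat_request`, the cache key and the expiry here are illustrative stand-ins, not the actual code):

    # Minimal sketch of the band-aid: wrap the raw API call in the Rails
    # cache so repeated status lookups during one page render don't each
    # hit the Heat API. `perform_heat_request` stands in for the real,
    # uncached HTTP call.
    module Heat
      def self.heat_request(path, params = {})
        cache_key = ['heat', path, params.to_a.sort.join(',')].join('/')
        Rails.cache.fetch(cache_key, :expires_in => 10.seconds) do
          perform_heat_request(path, params)
        end
      end
    end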
Heat's API currently doesn't have any querying capabilities, so to list a few deployments, you have to send a separate request for each and then another request per instance to get instance properties as well. We could work with the Heat community to extend the API in a way that would mean issuing fewer requests.
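To make that concrete, rendering a list of deployments currently implies something like the following (a sketch; the helper names are illustrative, and the action names are the CloudFormation-style ones Heat exposes):

    # Sketch of the current N+1 request pattern:
    deployments.each do |deployment|
      # one request per deployment for the stack status...
      Heat.heat_request('DescribeStacks', 'StackName' => deployment.name)
      deployment.instances.each do |instance|
        # ...plus one request per instance for its properties
        Heat.heat_request('DescribeStackResource',
                          'StackName' => deployment.name,
                          'LogicalResourceId' => instance.name)
      end
    end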
Before we commit to any optimizations though, it's better to have the raw thing so that we can measure where the bottlenecks actually lie. The advantage of using the built-in Rails caching API here is that you can disable it in the app config without changing anything in the code.
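E.g. something like this in the environment config turns every cache call into a no-op (a sketch; `:null_store` ships with Rails 3.2 and later):

    # config/environments/development.rb -- disable the band-aid without
    # touching the caching code (:null_store is a no-op cache store)
    config.cache_store = :null_store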
Try it out
----------
You need Heat running with Deltacloud as a backend. I'll write a proper guide tomorrow (I need to package a couple of Python libraries to make things easier).
If you're adventurous, you can go ahead and follow the respective guides:
You can get Heat from here: https://github.com/openstack/heat
No need to set up OpenStack, we're not using it, though Heat does use a few OpenStack libraries as dependencies.
Next, install `deltacloud_heat` which is a Deltacloud backend for Heat. It in turn uses the Deltacloud Python client.
https://github.com/tomassedovic/deltacloud_heat
https://github.com/tomassedovic/deltacloud
We rely on some bug fixes in the Deltacloud API, so you need at least version 1.1.0. We also need the latest Python client, which I haven't gotten around to submitting upstream yet. In short, use the server and the client from my fork above.
To configure Heat to use Deltacloud, open the `/etc/heat/heat-engine.conf` file and add the following line at the end:
cloud_backend=deltacloud_heat.clients
Then uncomment these two lines in the same file:
[paste_deploy]
flavor = custombackend
Start the Heat engine and API services (if you're going to use the Audrey features, start cloudwatch and cfn-api as well).
That should be it.
Hi Tomas,
On Wed, Feb 27, 2013 at 06:47:16PM +0100, Tomas Sedovic wrote:
<snip>
> * Heat's API supports atomic launch (i.e. rollback) but as far as I know, it's not been implemented yet
I implemented this (for stack create and update) recently; it landed in the Grizzly-3 development milestone:
https://blueprints.launchpad.net/heat/+spec/stack-rollback
https://blueprints.launchpad.net/heat/+spec/update-rollback
Note this feature is disabled by default in current heat master (it wasn't for g3) because we don't yet have persistent stack events:
https://bugs.launchpad.net/heat/+bug/1131303
https://blueprints.launchpad.net/heat/+spec/event-persistence
You can enable it with the heat-cfn/heat-boto/heat tools via the --enable-rollback option on stack create.
Note that this rollback flag is defined at stack creation time, and currently cannot be changed during the lifetime of the stack (same as AWS, but maybe we should allow specifying it explicitly on StackUpdate).
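For example, something like this with the heat CLI (a sketch; the create syntax here is from memory, so check the tool's usage output):

    heat create mystack --template-file=my.template --enable-rollback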
> * There's a bug in Heat that means the actual instance status isn't reported back correctly. If you launch a deployment and stop the instances out of band, Heat will show them as running.
Is there a bug report for this?
Also I'm not sure what you mean by running - we track resource creation status, not the instantaneous state/health of any instance/resource - that must be done via CloudWatch alarms, e.g. like the HA/Autoscaling templates do.
So once your Instance resource goes CREATE_COMPLETE, we don't care whether it's running or not - if the user thinks instances may need restarting for some reason, they need to make use of WaitConditions, the HA features and CloudWatch alarms.
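For illustration, the HA templates hook an instance up to an alarm along these lines (a heavily trimmed sketch; the metric, namespace and the RestartPolicy resource are illustrative):

    "RestartAlarm" : {
      "Type" : "AWS::CloudWatch::Alarm",
      "Properties" : {
        "MetricName" : "ServiceFailure",
        "Namespace" : "system/linux",
        "Statistic" : "SampleCount",
        "Period" : "300",
        "EvaluationPeriods" : "1",
        "Threshold" : "2",
        "ComparisonOperator" : "GreaterThanThreshold",
        "AlarmActions" : [ { "Ref" : "RestartPolicy" } ]
      }
    }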
Steve
On 03/05/2013 11:55 AM, Steven Hardy wrote:
<snip>
> > * Heat's API supports atomic launch (i.e. rollback) but as far as I know, it's not been implemented yet
> I implemented this (for stack create and update) recently; it landed in the Grizzly-3 development milestone:
>
> https://blueprints.launchpad.net/heat/+spec/stack-rollback
> https://blueprints.launchpad.net/heat/+spec/update-rollback
>
> Note this feature is disabled by default in current heat master (it wasn't for g3) because we don't yet have persistent stack events:
>
> https://bugs.launchpad.net/heat/+bug/1131303
> https://blueprints.launchpad.net/heat/+spec/event-persistence
>
> You can enable it with the heat-cfn/heat-boto/heat tools via the --enable-rollback option on stack create.
Thanks, I didn't know that. I'll check it out.
> Note that this rollback flag is defined at stack creation time, and currently cannot be changed during the lifetime of the stack (same as AWS, but maybe we should allow specifying it explicitly on StackUpdate).
That's perfectly fine for our use case, actually.
> > * There's a bug in Heat that means the actual instance status isn't reported back correctly. If you launch a deployment and stop the instances out of band, Heat will show them as running.
>
> Is there a bug report for this?
>
> Also I'm not sure what you mean by running - we track resource creation status, not the instantaneous state/health of any instance/resource - that must be done via CloudWatch alarms, e.g. like the HA/Autoscaling templates do.
>
> So once your Instance resource goes CREATE_COMPLETE, we don't care whether it's running or not - if the user thinks instances may need restarting for some reason, they need to make use of WaitConditions, the HA features and CloudWatch alarms.
I see, I may have misunderstood what that field means, then.
I was talking about reporting the ACTIVE/STOPPED/ERROR OpenStack state to monitor if the instance goes down, etc.
I thought this would be passed via the `resource_status` field. But it may very well make more sense to get at this via the HA/CloudWatch capabilities as you suggest. I'll look into it more.
It would still be useful to report these state changes in the events I think. Is that being done already / would Heat be interested in having that?
On Tue, Mar 05, 2013 at 05:33:28PM +0100, Tomas Sedovic wrote:
<snip>
> I see, I may have misunderstood what that field means, then.
>
> I was talking about reporting the ACTIVE/STOPPED/ERROR OpenStack state to monitor if the instance goes down, etc.
>
> I thought this would be passed via the `resource_status` field. But it may very well make more sense to get at this via the HA/CloudWatch capabilities as you suggest. I'll look into it more.
Yep, we can't pass this information via the heat (orchestration) API, since we can only express a specific subset of resource states related to the resource lifecycle in the ResourceStatus field:
http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_StackRe...
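(That is, only lifecycle values like CREATE_IN_PROGRESS, CREATE_COMPLETE and CREATE_FAILED plus their UPDATE/DELETE equivalents - so an instance which dies after a successful create just keeps its CREATE_COMPLETE status.)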
> It would still be useful to report these state changes in the events I think. Is that being done already / would Heat be interested in having that?
It's not being done already; I think in OpenStack the expectation would be for these state changes to be monitored via Ceilometer, not Heat, although I agree exposing this information would be useful.
Looking at the way AWS does it, they provide a "Monitoring" property to the AWS::EC2::Instance resource type, which heat doesn't currently implement:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties...
It seems like the closest approximation to this would be to set up a nova notification for state changes (not sure of the current state of this...):
http://docs.openstack.org/developer/nova/api/nova.notifications.html
Then I guess we could have a callback to heat which created heat resource events based on state changes reported by nova - but if we did this, I think we'd need to be careful not to overlap in any way with the monitoring capabilities provided (or planned to be provided) by Ceilometer.
I think for the Aeolus use-case we need to give some consideration to what happens when Ceilometer becomes the primary source of alarms for Heat - we are not planning to maintain our internal Cloudwatch metric/alarm infrastructure indefinitely, so we may need to come up with a separate solution for Aeolus to use.
Steve