cloud state requirements

Martyn Taylor mtaylor at redhat.com
Thu May 24 12:26:27 UTC 2012


Had a chat with Jan this morning about some of your questions. I've 
replied to the questions below with what we talked about, plus a few 
thoughts of my own.


On 05/24/2012 01:36 AM, David Lutterkort wrote:
> We need to define in detail for each of these resources what needs to be
> tracked, and what status changes constitute an 'event', but as a general
> requirement this is fine.
>
> I suggest though that we start with something very basic, like tracking
> only instance state changes, and expand from there gradually.
I'd suggest that this is whatever changes from a -> b; probably let the 
driver decide what to return, since it should know what changes from, 
say, Pending -> Running.
> Do you just want a callback every time an event happens or more detail
> about what changed (like 'instance went from pending to started') ?
Same as above
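
To make that concrete, the kind of thing a driver could hand back for a 
state change might look like this (a rough Python sketch; none of these 
field names are agreed on, it's just to show the old/new pair):

    # Hypothetical shape of a state-change event a driver might report;
    # the field names here are illustrative only.
    event = {
        "resource": "instance",
        "instance_id": "i-12345abc",          # provider-side identifier
        "old_state": "PENDING",               # state a
        "new_state": "RUNNING",               # state b
        "timestamp": "2012-05-24T12:00:00Z",
    }
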
>> TODO: authentication when posting data to hook url? we use oauth between
>> other components now
> For the very first cut, I'd either not do any auth, or do something
> incredibly cheesy like a token.
We use HTTP basic authentication for the Conductor API, so we can add 
authentication to the callback URL.
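
Roughly like this on the sending side (Python sketch only; the URL and 
credentials are made up, and this assumes the hook accepts JSON):

    import json
    import requests

    # Hypothetical callback URL registered by Conductor, plus the
    # basic-auth credentials agreed when the hook was set up.
    HOOK_URL = "https://conductor.example.com/api/hooks/instance-events"
    HOOK_AUTH = ("hook-user", "hook-secret")

    def deliver(event):
        # POST the event to Conductor's callback URL over HTTP basic auth.
        resp = requests.post(
            HOOK_URL,
            data=json.dumps(event),
            headers={"Content-Type": "application/json"},
            auth=HOOK_AUTH,
            timeout=10,
        )
        resp.raise_for_status()
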
>> TODO: how to handle hook failures (conductor is not accessible and hook
>> can't be invoked)?
> For sure, we'd retry for a while ... after that, retry very infrequently
> (like once a day) + provide an API to retrieve undelivered events. That
> way, if the conductor failure is transient, everything should catch up
> fairly quickly. If the conductor failure was longer (some multi-hour
> maintenance event), conductor can catch up by asking for events
> explicitly.
I wonder if it's worth having some agreed-upon policy.

For example, retry for 2 hours and then revert the state change (I'm not 
sure if this is even possible).  Conductor would then know that if it 
receives nothing within 2 hours, the request failed.
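
Just to pin the idea down, the policy could look something like this 
(the numbers and names are only for the sake of argument):

    import time

    RETRY_WINDOW = 2 * 60 * 60    # stop retrying after 2 hours
    RETRY_INTERVAL = 60           # roughly one attempt per minute

    def deliver_with_retries(deliver, event, undelivered):
        """Try to deliver an event; park it if Conductor stays unreachable."""
        deadline = time.time() + RETRY_WINDOW
        while time.time() < deadline:
            try:
                deliver(event)            # e.g. the basic-auth POST above
                return True
            except Exception:
                time.sleep(RETRY_INTERVAL)
        # Window expired: keep the event so Conductor can fetch it later
        # via an "undelivered events" API, and/or revert the state change.
        undelivered.append(event)
        return False
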
>> TODO: how to handle credentials? will the stateful app keep credentails
>> permanently for each instance being checked?
> As much as this worries me from a security standpoint, I don't see
> another way around this - cloud API's generally don't allow any
> delegation of auth.
>
> There's a couple more TODO's connected to credentials:
>
> TODO: how are credentials changes handled (user revokes API Key and
> generates a new one) ? [not for the first cut]
>
> TODO: when are stored credentials purged ? We want to make sure we get
> rid of them as quickly as possible.
Why not make this a feature?  Credentials could be stored via the API 
and managed via a single login (maybe OAuth).

Then to answer the previous question, we could add another callback on 
the credentials resource that is invoked when authentication fails.
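
Purely as a sketch of the shape (nothing here is an agreed API), the 
stored credentials resource could carry its own failure callback:

    # Invented representation of a credentials resource managed via the
    # API; on_auth_failure would be invoked when a provider rejects them.
    credential = {
        "id": "cred-1",
        "provider": "ec2-us-east-1",
        "access_key": "AKIA...",      # placeholder value
        "secret_key": "****",         # placeholder value
        "on_auth_failure": "https://conductor.example.com/api/hooks/credentials/1",
    }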

> I think combining these should be fairly straightforward; for frequency
> I imagine we'll build something in based on the anticipated state
> change: for example, while an instance is pending, we might poll its
> state pretty frequently, once it's running, we'd back off and poll much
> less often.
>
> What poll frequencies does conductor use today ?
At the moment Conductor polls every 60 seconds.

It makes more sense to me, though, to have this set at the driver level 
rather than across the board.  Each provider has a different underlying 
process and potentially a different state machine, so it makes sense to 
have a different poll frequency for each.  For example, in EC2 you can 
get the state of a bunch of instances in one request, so the poll 
frequency might be higher than for a provider that only allows status 
queries on a per-instance basis.

A nice-to-have feature would then be a high-level system setting, e.g. 
POLL_FREQUENCY=FASTEST, SLOWEST or DEFAULT, which the driver interprets.  
This could be set at boot time, or on a per-request basis.
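
For illustration, a driver could interpret such a setting along these 
lines (the driver names and numbers are invented):

    import os

    # Per-driver poll intervals in seconds; a driver that can fetch many
    # instance states in one request (EC2-style) can afford a higher poll
    # frequency than one that only supports per-instance queries.
    POLL_INTERVALS = {
        "ec2":  {"FASTEST": 15, "DEFAULT": 60, "SLOWEST": 300},
        "mock": {"FASTEST": 30, "DEFAULT": 90, "SLOWEST": 600},
    }

    def poll_interval(driver, requested=None):
        # POLL_FREQUENCY could be set once at boot time (environment) or
        # overridden per request via `requested`.
        setting = requested or os.environ.get("POLL_FREQUENCY", "DEFAULT")
        return POLL_INTERVALS[driver].get(setting,
                                          POLL_INTERVALS[driver]["DEFAULT"])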

Thanks

Martyn


