cloud state requirements

Martyn Taylor mtaylor at redhat.com
Fri May 25 07:42:47 UTC 2012


On 05/24/2012 11:41 PM, David Lutterkort wrote:
> On Thu, 2012-05-24 at 13:26 +0100, Martyn Taylor wrote:
>> Had a chat with Jan this morning about some of your questions. I've
>> replied to qs with what we talked about plus a few thoughts of my own.
>>
>>
>> On 05/24/2012 01:36 AM, David Lutterkort wrote:
>>> We need to define in detail for each of these resources what needs to be
>>> tracked, and what status changes constitute an 'event', but as a general
>>> requirement this is fine.
>>>
>>> I suggest though that we start with something very basic, like tracking
>>> only instance state changes, and expand from there gradually.
>> I'd suggest that this is whatever changes from a->b; probably let the
>> driver decide what to return, since it should know what changes from,
>> say, Pending -> Running.
> The state changes should all be on the level of DC instance states; we
> do not want to expose driver-specific state ... I assume that's what you
> are saying.

No.

So, I assumed each provider would have a different set of changes when
moving from one state to another. For example, when an EC2 instance is
started, it's assigned public/private IP addresses and its state is set
to "Running". I imagine that another provider (Provider B) might have a
different set of changes when moving to the running state; maybe only
the private IP address is set.

I agree the changes should be on the DC level, i.e. DC resource fields,
nothing specific to the driver/provider. But choosing which of these
resource fields to look out for would probably be best suited to live
in the driver, since the driver knows that Provider B only changes
private IP addresses when moving to the running state.

Alternatively, and probably more appropriately, we could just inspect
the resource before and after the change and return the diff of the two.
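
To make that concrete, here's a rough sketch of what I mean (just
illustrative Python; the field names are made up, nothing here is from
the DC code):

    # Sketch: compute which DC-level resource fields changed between two
    # polls.  'before' and 'after' are plain dicts of a resource's fields.
    def diff_resource(before, after):
        changed = {}
        for field in set(before) | set(after):
            old, new = before.get(field), after.get(field)
            if old != new:
                changed[field] = {'old': old, 'new': new}
        return changed

    # diff_resource({'state': 'PENDING', 'public_addresses': []},
    #               {'state': 'RUNNING', 'public_addresses': ['1.2.3.4']})
    # => reports both 'state' and 'public_addresses' as changed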
>
>>> Do you just want a callback every time an event happens or more detail
>>> about what changed (like 'instance went from pending to started') ?
>> Same as above
> I don't understand what you are saying here - basically, I am asking if
> the callback should just be a 'go GET the details of the resource, they
> have changed' or if there should be some indication of what has changed
> and how (which might save you a roundtrip)
Sorry, I think this was a copy-and-paste error; I had some notes and
pasted them in.

So, I'd hope that you would return some indication of what has changed.
A PUT with the changed resource in the body to the callback URL? (This
is how we've decided to handle this in the imagefactory -> conductor
interaction.)
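
Roughly something like this (a sketch only; I'm assuming a
requests-style HTTP client, and the payload shape is invented, not an
agreed format):

    # Sketch: PUT the changed fields to the callback URL registered by
    # the consumer (e.g. Conductor).  Payload shape is invented.
    import json
    import requests

    def notify_hook(hook_url, instance_id, changes):
        body = {'instance_id': instance_id, 'changes': changes}
        resp = requests.put(hook_url, data=json.dumps(body),
                            headers={'Content-Type': 'application/json'})
        resp.raise_for_status()  # let the caller trigger retry handling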
>
>>>> TODO: authentication when posting data to hook url? we use oauth between
>>>> other components now
>>> For the very first cut, I'd either not do any auth, or do something
>>> incredibly cheesy like a token.
>> We use HTTP basic authentication for the Conductor API, so we can add
>> authentication in the callback URL.
> Works for me.
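For illustration, with basic auth the callback PUT would just carry
credentials, something like this (placeholder credentials, same
invented payload as above):

    # Sketch: the same callback PUT, now with HTTP basic auth.  The
    # credentials are placeholders for whatever Conductor is configured
    # with; the payload shape is as in the previous sketch.
    import json
    import requests

    def notify_hook_with_auth(hook_url, body, user, password):
        resp = requests.put(hook_url, data=json.dumps(body),
                            headers={'Content-Type': 'application/json'},
                            auth=(user, password))  # HTTP basic auth
        resp.raise_for_status()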
>
>>>> TODO: how to handle hook failures (conductor is not accessible and hook
>>>> can't be invoked)?
>>> For sure, we'd retry for a while ... after that, retry very infrequently
>>> (like once a day) + provide an API to retrieve undelivered events. That
>>> way, if the conductor failure is transient, everything should catch up
>>> fairly quickly. If the conductor failure was longer (some multi-hour
>>> maintenance event), conductor can catch up by asking for events
>>> explicitly.
>> I wonder if it's worth having some agreed-upon policy.
>>
>> For example, retry for 2 hours then revert the state change (I'm not
>> sure if this is even possible).  But then Conductor would know that if
>> it receives nothing within 2 hours, the request failed.
> This would essentially be some lightweight monitoring functionality;
> maybe we should just model that as another event with a callback ?
I'm not quite sure what you mean here. Really I was outlining one way
to handle the case where the callback URLs are unreachable. To be
honest, though, there's probably a ton of tried and tested stuff we
could use in this scenario.
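
The sort of tried and tested thing I have in mind is plain
retry-with-backoff plus an undelivered queue, roughly (all names
invented, not an agreed design):

    # Sketch: retry a hook delivery with exponential backoff, then park
    # the event on an 'undelivered' list that an API could expose so the
    # consumer can catch up explicitly.  All names are illustrative.
    import time

    undelivered = []  # events the consumer can ask for later

    def deliver_with_retry(send, event, max_attempts=8, base_delay=5):
        delay = base_delay
        for attempt in range(max_attempts):
            try:
                send(event)  # e.g. the notify_hook sketch above
                return True
            except Exception:
                if attempt < max_attempts - 1:
                    time.sleep(delay)  # back off: 5s, 10s, 20s, ...
                    delay *= 2
        undelivered.append(event)  # give up for now; retry rarely later
        return False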
>
>>>> TODO: how to handle credentials? will the stateful app keep credentials
>>>> permanently for each instance being checked?
>>> As much as this worries me from a security standpoint, I don't see
>>> another way around this - cloud APIs generally don't allow any
>>> delegation of auth.
>>>
>>> There's a couple more TODO's connected to credentials:
>>>
>>> TODO: how are credentials changes handled (user revokes API Key and
>>> generates a new one) ? [not for the first cut]
>>>
>>> TODO: when are stored credentials purged ? We want to make sure we get
>>> rid of them as quickly as possible.
>> Why not make this a feature? Credentials could be stored via the API
>> and managed via a single login (maybe OAuth).
>>
>> Then to answer the previous question, we could add another callback on
>> the credentials resource that is invoked when authentication fails.
> I don't want to introduce an explicit credentials resource; because we
> then need to safeguard that with credentials of its own. Rather, I'd
> prefer that the state tracker just snoop the backend credentials out of
> the ordinary DC requests.
Fair point.

Mainly my thinking was from the Aeolus perspective here. Credentials
get passed around a lot in Aeolus, from "conductor -> dc" and
"conductor -> image factory", and they only really make sense in the
context of DC. I always assumed that the login credentials to DC
replicated the backend provider login simply due to the lack of state.
If you stored credentials in DC Stateful, you could have a single
consistent way to authenticate.

Another thing of interest, if we did go down this route, is that you
could do some cool things like aggregating clouds: hiding all of the
backend provider stuff from the user, who just starts an instance
"somewhere" and sees a list of all his instances in "The Cloud".
>
>>> I think combining these should be fairly straightforward; for frequency
>>> I imagine we'll build something in based on the anticipated state
>>> change: for example, while an instance is pending, we might poll its
>>> state pretty frequently; once it's running, we'd back off and poll much
>>> less often.
>>>
>>> What poll frequencies does conductor use today ?
>> At the moment conductor uses 60s.
>>
>> It makes more sense to me, though, to have this set at a driver level
>> rather than across the board.  Each provider has a different underlying
>> process, and potentially a different state machine, so it makes sense
>> to have a different poll frequency for each.  For example, in EC2 you
>> can get the state of a bunch of instances in one request, so the poll
>> frequency might be higher than, say, a provider that only allows status
>> queries on a per-instance basis.
>>
>> A nice-to-have feature would then be to offer a high-level system
>> setting, e.g. POLL_FREQUENCY=FASTEST, SLOWEST, or DEFAULT, which the
>> driver interprets.  This could be set at boot time, or on a per-request
>> basis.
> Yes, I think that makes the most sense (and to get started, we'll only
> have one frequency, POLL_FREQUENCY=optimal ;)
>
> David
>
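As an aside, here's roughly what I'd imagine a driver doing with a
setting like that (names and numbers invented for illustration):

    # Sketch: a driver maps a coarse system-wide POLL_FREQUENCY setting
    # onto its own optimal interval.  Values are invented; EC2 can batch-
    # describe many instances per request, so even FASTEST stays cheap.
    import os

    EC2_INTERVALS = {'FASTEST': 15, 'DEFAULT': 60, 'SLOWEST': 300}  # secs

    def poll_interval():
        setting = os.environ.get('POLL_FREQUENCY', 'DEFAULT')
        return EC2_INTERVALS.get(setting, EC2_INTERVALS['DEFAULT'])
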
Thanks

Martyn


