On Fri, May 27, 2011 at 05:20:51PM +0100, Mark McLoughlin wrote:
> On Thu, 2011-05-26 at 23:49 -0400, Scott Seago wrote:
> > On 05/26/2011 10:57 AM, Mark McLoughlin wrote:
> > > Hey,
> > >
[snip]
> > >
> > We should clarify at what levels we store quota. We've already
> > implemented instance quota in Conductor -- i.e. how many instances can
> > be run at once, by various measures:
> > 1) user quota: how many instances an end user can run at once,
> > regardless of pool or provider account
> > 2) pool quota: how many instances can run in a pool, regardless of
> > running user or provider account
> > 3) environment/pool family quota: how many instances can run in a
> > particular environment, regardless of pool, running user, or provider
> > account
> > 4) provider account quota: how many instances can run in a provider
> > account, regardless of user, pool, or environment.
> Well, I don't know what the rationale for those quotas was, so I can't
> really comment.
>
> To me, quota is only really important as a replacement for billing.
> Anything an admin might want to set up billing for, they will need
> quotas for in the absence of billing.
I think that's correct, with the exception of the pool quota and the
pool family/environment quota. The only reason the environment quota
exists is to keep one environment from DOS-ing the provider accounts
that back it, in the event another environment is also allowed to
access those accounts. The same goes for pool quota -- if multiple
teams share the "dev" environment, we want a way to limit how many
instances each team can use, so that one team doesn't monopolize the
whole environment.
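
To make those four independent scopes concrete, here's a rough sketch
(Python, with invented names and numbers -- this is not Conductor's
actual code) of how a launch would be checked against all four at once:

  from dataclasses import dataclass

  @dataclass
  class LaunchRequest:
      user: str
      pool: str
      environment: str          # i.e. the pool family
      provider_account: str

  # (scope, key) -> max concurrent instances for that scope
  quotas = {
      ("user", "bob"): 10,
      ("pool", "dev-pool"): 50,
      ("environment", "dev"): 100,
      ("provider_account", "hbrock@ec2-us-east-1"): 200,
  }

  def may_launch(req, running):
      """Allow a launch only if no scope is already at its limit.

      `running` maps (scope, key) -> instances currently running there.
      """
      scopes = [
          ("user", req.user),
          ("pool", req.pool),
          ("environment", req.environment),
          ("provider_account", req.provider_account),
      ]
      return all(running.get(s, 0) < quotas.get(s, float("inf"))
                 for s in scopes)

  # e.g. bob launching into dev-pool on the EC2 account:
  req = LaunchRequest("bob", "dev-pool", "dev", "hbrock@ec2-us-east-1")
  assert may_launch(req, {("user", "bob"): 9})  # 9 < 10, other scopes clear

Note there is no combined key like ("user", "pool") in there -- which is
exactly the gap discussed below.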
> > Note the absence of quotas that restrict more than one of these at
> > once -- i.e. we don't have a "per user, per pool" quota limiting how
> > many instances a given user can run in a given pool, and we don't have
> > a "per user, per provider account" quota limiting how many instances a
> > particular user can run on a particular back-end account.
> Okay, that seems a little weird to me.
>
> If I've got a RHEV-M cloud that has spare capacity and an EC2 account,
> don't I want to limit each user's use of (expensive) EC2 resources
> without limiting their RHEV-M usage, and without setting things up so
> that one user's usage could impact other users?
>
> I.e. if the only tool I have to constrain the usage of an EC2 account
> (without constraining the RHEV-M cloud usage) is the provider account
> quota, then a single user can DOS everyone else by using up all of that
> quota.
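
To spell out that DOS scenario, a toy example (the numbers and names
are made up):

  ACCOUNT_QUOTA = 20   # max instances on the shared EC2 account
  PER_USER_CAP = 5     # the combined "per user, per account" quota

  def may_launch(user, usage, per_user_cap=None):
      """`usage` maps user -> instances already running on this account."""
      if sum(usage.values()) >= ACCOUNT_QUOTA:
          return False
      if per_user_cap is not None and usage.get(user, 0) >= per_user_cap:
          return False
      return True

  # With only the account-wide quota, alice can take all 20 slots,
  # and bob is locked out:
  assert not may_launch("bob", {"alice": 20})

  # With a per-user cap on the account, alice is stopped at 5,
  # leaving room for everyone else:
  assert not may_launch("alice", {"alice": 5}, PER_USER_CAP)
  assert may_launch("bob", {"alice": 5}, PER_USER_CAP)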
> > Doing this would make things even more confusing, as the number of
> > possible "quota-able" permutations hits double digits.
> Right, I'm not suggesting we have endless permutations :)
>
> I guess I had an assumption about the important use cases for quotas,
> and the quotas we have don't seem to match it.
> > Back to image quota -- the proposed image quota above is "per user,
> > per account" -- do we want this, or do we want to take a model similar
> > to what we're doing with instances:
> >
> > 1) Bob has a quota of 10 images (or 10 builds)
> > 2) provider account hbrock@ec2-us-east-1 has a quota of 20 provider
> > images
> >
> > So if Bob has 10 images already, he can't add another one, even if
> > hbrock@ec2-us-east-1 could support another. If hbrock@ec2-us-east-1
> > already has 20 provider images, then the next time Bob builds, he
> > can't push to the already-full EC2 account.
> >
> > Now it could be that quota needs are sufficiently different on the
> > image side that we _do_ want the combined per-user, per-provider-account
> > model. Let's put together some use cases, though, as it's not really
> > clear to me which model meets the needs.
> I don't think images are all that different. If the use cases that drove
> the current quota design make sense, then the (1) and (2) quotas above
> should be sufficient.
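
For images, that two-quota model would amount to something like this
(a sketch with invented names -- the real checks would live wherever
builds and pushes are initiated):

  USER_IMAGE_QUOTA = 10              # (1) Bob's cap on images/builds
  ACCOUNT_PROVIDER_IMAGE_QUOTA = 20  # (2) cap on provider images pushed
                                     #     to hbrock@ec2-us-east-1

  def may_build(user_image_count):
      # Bob can't add an 11th image, even if the provider account
      # still has room.
      return user_image_count < USER_IMAGE_QUOTA

  def may_push(account_provider_image_count):
      # ...and a new build can't be pushed to an already-full
      # provider account.
      return account_provider_image_count < ACCOUNT_PROVIDER_IMAGE_QUOTA

  assert not may_build(10)   # Bob at his limit
  assert not may_push(20)    # hbrock@ec2-us-east-1 full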
> > > 3) Environment/pool family policies
> > >
> > > Based on the environment in which a user is launching an instance,
> > > a different set of images should be available to the user.
> > >
> > > (This sounds to me like a policy managed by the image tools and
> > > enforced by Conductor)
> > >
> > Conductor doesn't know about images, though -- so if Conductor
> > enforces this, it may have to maintain references to all of the
> > images, and that gets a bit too close to the DB caching problem that
> > led to the separation in the first place.
> >
> > One alternative would be to have an 'environments' tag on images in
> > IWHD, and at launch time Conductor could make sure that the images
> > selected by the deployable/assemblies are tagged with the appropriate
> > environment. The other question is how to manage these tags. If we add
> > a fourth Conductor API call that the build CLI uses to return a list
> > of environments that Conductor knows about, we can then add an
> > --environment option to the build call, so the CLI can add this tag
> > when inserting the image object into IWHD.
> Right, that's one way that makes sense.
>
> Or, if deployables contain versionless image references, then we could
> have a different "latest_build" tag for each environment -- i.e. the
> versionless reference resolution is influenced by the environment.
I like this last idea a lot. It's a nice way of leveraging the
existence of environments.
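
Roughly, I'd picture the resolution working like this (a sketch only --
the tag layout is invented, and IWHD's actual API may differ):

  # (image, environment) -> the build a versionless reference resolves to
  latest_build = {
      ("web-server", "dev"):  "build-42",
      ("web-server", "prod"): "build-37",  # prod deliberately lags behind
  }

  def resolve(image, environment):
      """Resolve a versionless image reference for a given environment."""
      try:
          return latest_build[(image, environment)]
      except KeyError:
          raise LookupError(
              "no build of %s published to %s" % (image, environment))

  assert resolve("web-server", "dev") == "build-42"
  assert resolve("web-server", "prod") == "build-37"

That way, promoting a build to an environment is just a tag update, and
the launch-time policy falls out of the reference resolution itself.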
--H
--
== Hugh Brock, hbrock(a)redhat.com ==
== Engineering Manager, Cloud BU ==
== Aeolus Project: Manage virtual infrastructure across clouds. ==
== http://aeolusproject.org ==
"I know that you believe you understand what you think I said, but I’m
not sure you realize that what you heard is not what I meant."
--Robert McCloskey