On Thu, 2011-05-26 at 23:49 -0400, Scott Seago wrote:
> On 05/26/2011 10:57 AM, Mark McLoughlin wrote:
> > Hey,
> >
> > We had a chat earlier about medium term plans for image permissions.
> > John is going to write up some more detailed design thoughts, but I
> > thought I'd write down my understanding of the basic requirements
> > before I forget:
> >
> > 1) Access control
> >
> > We need users to be able to restrict access to images they create
> > or own - e.g. if you've got sensitive data in an image, or you
> > just want to prevent others from being able to delete your images
> >
> > (This sounds to me like posix filesystem style permissions on
> > IWHD objects)
> >
> Yes, now that image metadata lives exclusively in IWHD we probably need
> to control access here rather than in conductor. Can we limit this to
> cases where we have SSO w/ conductor? i.e. when IWHD and Conductor are
> using the same user DB we can do this. If we're using the internal
> Conductor db and IWHD doesn't have access to this, I'm not sure how we'd
> handle access control on images, but I'm not sure we can say "IWHD
> content is all public unless SSO with Conductor is enabled."
I think it's reasonable to say that if IWHD doesn't know anything about
the user creating an image, then it's owned by root and world
readable/writable?
i.e. sticking with the posix filesystem analogy, if images were just
stored on disk ... that's what we'd do, right? Only if the user exists
at a system level via LDAP/NSS/whatever could we make the user the
owner of the image on disk.
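Something like this is what I have in mind - a sketch only, where
lookup_user() and KNOWN_USERS are placeholders for however IWHD would
actually resolve users (shared user DB with Conductor, LDAP/NSS, etc.):

    # Sketch only; KNOWN_USERS/lookup_user() stand in for whatever
    # user resolution IWHD ends up having.
    KNOWN_USERS = {"scott", "mark"}

    def lookup_user(name):
        return name if name in KNOWN_USERS else None

    def image_ownership(creating_user):
        # POSIX-style fallback: if we can't resolve the creator,
        # the image is root-owned and world readable/writable
        if lookup_user(creating_user) is None:
            return "root", 0o666
        # otherwise the creator owns it; others get read-only
        return creating_user, 0o644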
> > 2) Quotas
> >
> > When an administrator adds a provider account in conductor, she
> > needs to be able to set a per-user quota for that account - e.g.
> > Mary can only use 20GB of S3 storage on this EC2 account
> >
> > (This sounds to me like a policy stored in Conductor, enforced
> > either by conductor or image factory. If the latter, the quota
> > could be passed to image factory via the credentials XML)
> >
> We should clarify at what levels we store quota. We've already
> implemented instance quota in conductor -- i.e. how many instances can
> be run at once by various measures:
> 1) user quota: how many instances an end user can run at once,
> regardless of pool or provider account
> 2) pool quota: how many instances can run in a pool, regardless of
> running user or provider account
> 3) environment/pool family quota: how many instances can run in a
> particular environment, regardless of pool, running user, or provider
> account
> 4) provider account quota: how many instances can run in a provider
> account, regardless of user, pool, or environment.
Well, I don't know what the rationale for those quotas was, so I can't
really comment.
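Just to check I'm reading it right, though: that's four independent
limits, any one of which can block a launch on its own - roughly (a
sketch, with illustrative names):

    # Sketch of my reading: four independent limits, each checked
    # separately; "quotas" and "running" map (scope, key) to counts.
    def may_launch(user, pool, environment, account, quotas, running):
        checks = [("user", user),
                  ("pool", pool),
                  ("environment", environment),
                  ("account", account)]
        return all(running[(scope, key)] < quotas[(scope, key)]
                   for scope, key in checks)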
To me, quota is only really important as a replacement for billing.
Anything an admin might want to set up billing for, they will need
quotas for in the absence of billing.
> Note the absence of quotas that restrict more than one of these at once
> -- i.e. we don't have a "per user, per pool" quota limiting how many
> instances a given user can run in a given pool, and we don't have a "per
> user, per provider account" quota limiting how many instances a
> particular user can run on a particular back end account.
Okay, that seems a little weird to me.
If I've got a RHEV-M cloud that has spare capacity and an EC2 account,
don't I want to limit each user's use of (expensive) EC2 resources
without limiting their RHEV-M usage, and without setting things up so
that one user's usage could impact other users?
i.e. if the only tool I have to constrain the usage of an EC2 account
(without constraining the RHEV-M cloud usage) is the provider account
quota, then a single user can DoS everyone else by using up all that
quota.
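i.e. what I'd expect is a limit keyed on the (user, account) pair
alongside the account-wide one - a sketch, with made-up structures:

    # Sketch: a per-(user, account) limit alongside the overall account
    # limit, so no single user can exhaust the whole account's quota.
    def may_use_account(user, account, used, limits):
        within_account = used[account] < limits[account]
        within_user_share = used[(user, account)] < limits[(user, account)]
        return within_account and within_user_share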
> Doing this would make things even more confusing as the number of
> possible "quota-able" permutations hits double digits.
Right, I'm not suggesting we have endless permutations :)
I guess I had assumptions about the important use cases for quotas, and
the quotas we have don't seem to match them.
> Back to image quota -- the proposed image quota above is "per user, per
> account" -- do we want this, or do we want to take a similar model to
> what we're doing with instances:
> 1) Bob has a quota of 10 images (or 10 builds)
> 2) provider account hbrock@ec2-us-east-1 has a quota of 20 provider images
> So if Bob has 10 images already, he can't add another one, even if
> hbrock@ec2-us-east-1 could support another. If hbrock@ec2-us-east-1
> already has 20 provider images, then the next time Bob builds, he can't
> push to the already-full EC2 account.
> Now it could be that quota needs are sufficiently different on the image
> side that we _do_ want the per-user, per-provider-account combined
> model. Let's put together some use cases, though, as it's not really
> clear to me what model meets the needs.
I don't think images are all that different. If the use cases that drove
the current quota design make sense, then the (1) and (2) quotas above
should be sufficient.
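i.e. two independent checks at build/push time, along the lines of (a
sketch, names made up):

    # Sketch of the independent model from the example above; either
    # limit on its own blocks Bob's build/push.
    def may_push(user, account, image_counts, limits):
        if image_counts[user] >= limits[user]:        # Bob already at 10
            return False
        if image_counts[account] >= limits[account]:  # account already at 20
            return False
        return True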
> > 3) Environment/pool family policies
> >
> > Based on the environment a user is launching an instance in, a
> > different set of images should be available to the user.
> >
> > (This sounds to me like a policy managed by the image tools and
> > enforced by Conductor)
> >
> Conductor doesn't know about images though -- so if conductor enforces
> it, this means conductor may have to maintain references to all of the
> images -- this gets a bit too close to the db caching problem that led
> to the separation in the first place.
> One alternative would be to have an 'environments' tag on images in
> IWHD, and at launch time, Conductor could make sure that the images
> selected by the deployable/assemblies are tagged w/ the appropriate
> environment. The other question is how to manage these tags. If we add a
> fourth Conductor API call that the build CLI uses to return a list of
> environments that Conductor knows about, we can then add an
> --environment option to the build call, so the CLI can add this tag when
> inserting the image object into IWHD.
Right, that's one way that makes sense.
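At launch time the check could be as simple as something like this (a
sketch - get_tags() stands in for whatever IWHD query we'd use):

    # Sketch: every image the deployable's assemblies reference must
    # carry the target environment in its "environments" tag.
    def deployable_launchable(image_ids, environment, get_tags):
        return all(environment in get_tags(image_id, "environments")
                   for image_id in image_ids)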
Or if deployables contain versionless image references, then we could
have a different "latest_build" tag for each environment - i.e. the
versionless reference resolution would be influenced by the environment.
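e.g. something like this (a sketch; the tag layout is made up):

    # Sketch: versionless reference resolution keyed by environment,
    # so "latest build" can point at different builds per environment.
    LATEST_BUILD = {("webserver", "dev"):  "build-7",
                    ("webserver", "prod"): "build-5"}

    def resolve(image_name, environment):
        return LATEST_BUILD[(image_name, environment)]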
Cheers,
Mark.