On Mon, Jul 25, 2011 at 05:53:17AM -0400, Chris Lalancette wrote:
On 07/24/11 - 02:44:33PM, Hugh Brock wrote:
> Conductor features:
>
> * Authorization. We have a fair amount of authorization checking in
> place, but no way to actually set who can do what. Given that a
> central Conductor feature is the ability to control access to cloud
> resources, this seems like an important feature. Things we'll need to
> put this in place:
>
> * UX around setting permissions
>
> * UX around displaying appropriate "You can't do that" messages
> where required, or showing/hiding controls as appropriate
>
> * Good tests
>
> * Not much model code -- I think it's all mostly in place. Correct
> me if I'm wrong.
[snip]
> * Admin UX work
>
> * We need to give the pool, pool family, and provider management
> screens the same loving treatment we have given the instance
> management screens.
>
> * We need to make sure self-service really is sane. A big part of
> self service is image visibility -- i.e. who can launch what where
> (VMware's "Catalog" concept answers this requirement for them). A
> good self-service solution is going to take thinking through some
> use cases and some serious UX work as well.
Right, I'm actually not sure about this one. What is the use case for
self-service, and what kinds of users would actually use it? I think we need
to think through why we really want/need this before we commit to it for 0.4.0.
A very good question. The use case I have in mind is as follows:
* Administrator provisions three pool families with different cloud
provider back-ends. The pool families differ in QoS and cost: one is
cheap and crappy (slow to provision, sometimes fails altogether),
one is less cheap and less crappy, one is expensive and
super-reliable.
* Administrator creates one pool in each pool family.
* Administrator grants permissions to "devel" user group on the
cheap-and-crappy pool, no other pools
* Administrator provisions the cheap-and-crappy pool with some images
that can be run
* User in the "devel" group browses to the app, sees cheap-and-crappy
pool, no other pools.
* User can launch instances from the available images there, or
* User can use Image Factory to build an image and add it to
cheap-and-crappy, then launch an instance from it
So, the "User" in this case is a self-service user because they don't
have to do any setup or admin in order to use the resources Conductor
is managing. The key tricky bit, in my mind, is managing access to the
images that should or should not be run in the various partitioned-off
pool families.
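To make that partitioning concrete, here's a rough sketch of the kind
of check I have in mind. Fair warning: all the names here (Permission,
visible_pools, and friends) are made up for illustration, not actual
Conductor code:

  # Hypothetical sketch of pool-scoped permissions -- the classes and
  # field names are illustrative, not Conductor's actual models.

  class Permission:
      def __init__(self, group, pool, roles):
          self.group = group       # e.g. "devel"
          self.pool = pool         # e.g. "cheap-and-crappy"
          self.roles = set(roles)  # e.g. {"view", "launch", "build"}

  def visible_pools(user_groups, permissions):
      """A user only ever sees pools some group of theirs can view."""
      return {p.pool for p in permissions
              if p.group in user_groups and "view" in p.roles}

  def can_launch(user_groups, pool, image, permissions, pool_images):
      """Launching needs both a 'launch' grant on the pool and the
      image actually having been provisioned into that pool."""
      granted = any(p.group in user_groups and p.pool == pool
                    and "launch" in p.roles for p in permissions)
      return granted and image in pool_images.get(pool, set())

  # The administrator's setup from the use case above:
  perms = [Permission("devel", "cheap-and-crappy",
                      ["view", "launch", "build"])]
  pool_images = {"cheap-and-crappy": {"fedora-15-base"}}

  assert visible_pools({"devel"}, perms) == {"cheap-and-crappy"}
  assert can_launch({"devel"}, "cheap-and-crappy", "fedora-15-base",
                    perms, pool_images)

The point being: image visibility falls out of the same pool-scoped
grants, so a "devel" user never even sees the images provisioned into
the expensive pools.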
>
> * I'd really like to see a front door to the Conductor app. I'm
> afraid to call it a "dashboard" because then it will never get
> built :). I'd love suggestions for what should appear on such a
> thing.
Also not sure about this one. I kind of like the current design, where you
login and get dumped to a screen where you can get stuff done. On the other
hand, I'm not a UX designer, so maybe that is "bad".
Fair enough, that's a UX question.
>
> * Other UX work
>
> * I think we should be able to launch single images from Conductor
> without requiring a deployable XML.
Yes.
>
> * To facilitate launching single images, I think we need a UI that
> shows the images that are available in the warehouse for the user
> to launch. I also think we need a really, really easy way to
> import existing images. I had proposed associating this UI with
> pools in some way, since a pool is where you go to launch stuff,
> but we need to work through the UX process to decide all that for
> sure.
Definitely.
>
> * We need to determine whether and how soon Katello will provide a
> UI for defining images to build with Image Factory. If that UI is
> a long way off, we should consider building a simple, serviceable
> web UI that stands alone with Image Factory but could conceivably
> be plugged into a common look-and-feel with Conductor. However I
> don't advocate rushing into that unless there is a really strong
> upstream need for such a thing.
I'm still in the "we need to have a building UI" camp, but let's get the
importing UI done first, then re-evaluate where we stand here.
>
> * Status reporting
>
> * We should reliably display the status of a running instance and
> its uptime
>
> * We should start thinking about how we will handle the richer data
> about instance health that we will get once Matahari is in place,
> and about the API we'll need to provide the Cloud Policy Engine so
> it can tell Conductor about events it generates or handles.
>
> * Users should be able to view an audit trail of events for an
> instance or a set of instances
>
> * Users should be able to export those events
Yes, all of these are essential.
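For the audit trail in particular, I'm picturing something as simple as
an append-only event log per instance with a dumb CSV export. A minimal
sketch of the shape (field names are invented, not a real schema):

  # Rough sketch of an instance audit trail plus CSV export.
  import csv
  import io
  from datetime import datetime, timezone

  events = []  # append-only; in reality this would be a DB table

  def record_event(instance_id, event_type, detail=""):
      events.append({
          "time": datetime.now(timezone.utc).isoformat(),
          "instance": instance_id,
          "event": event_type,  # "created", "running", "failed", ...
          "detail": detail,
      })

  def export_events(instance_ids):
      """Dump the audit trail for a set of instances as CSV."""
      out = io.StringIO()
      writer = csv.DictWriter(
          out, fieldnames=["time", "instance", "event", "detail"])
      writer.writeheader()
      for e in events:
          if e["instance"] in instance_ids:
              writer.writerow(e)
      return out.getvalue()

  record_event("inst-42", "created")
  record_event("inst-42", "running", "provider=ec2-us-east-1")
  print(export_events({"inst-42"}))

The same event stream would be the natural place to land the richer
Matahari health data later on.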
>
> * API
>
> * We've been saying for a very long time that we need a real API for
> managing Conductor and for doing instance stuff in Conductor. If
> we admit that we have to manage instances that are not part of
> deployments, then we can also just say that the Deltacloud API we
> expose only works for instances. I think this is good enough for
> the next release.
I'm still not convinced that the deltacloud API is the right thing to be
slapping on the front of the conductor. But in any case we should definitely
put some sort of API on it.
Mark's proposed solution to this is to move the multi-instance
deployable handling out of Conductor altogether and into the
Orchestrator app. We're still debating whether that's a good idea or
not.
>
> * We need to make sure the Image Factory API is callable by
> Katello. This may mean resurrecting Connector, as I'm certain
> Katello is going to prefer a REST API to the QMF one.
I wouldn't immediately jump to that conclusion. We should talk to the Katello
people and see if they would go with QMF. If they have no fundamental
objection, then this would turn into a task of documentation/examples, which
we need anyway.
You didn't see the look on Bryan Kearney's face when I said "QMF" to
him the last time :).
>
> * Documentation
>
> * I'd like to see the community get into the habit of creating or
> updating wiki how-to pages as we go along. I'm not sure exactly
> how to promote this (free beer once a month to the best new
> page?), but I am fairly sure we do need to make the infrastructure
> a little easier to deal with for it. Like it would be good if the
> wiki didn't become unreachable a lot, the way the Redmine wiki
> seems to do.
Yes, and merging the two wikis into one would help too :).
>
> Infrastructure-around-Conductor features:
>
> * Identity and encryption. In addition to the bits that go in
> Conductor proper, there's going to be a lot of work in the installer
> and in other projects nearby.
>
> * Better self-monitoring. I'd like to see a quick shell command that
> will give a meaningful report of the status of all the app
> components.
>
> * Way better logging and error reporting.
>
> * All components should be using syslog if at all possible
I don't know that I agree here. Why would I want all of my logs dumped into
the same bucket, where it is difficult to tease them apart? I kind of like the
per-daemon logging, as it is easy to debug the piece you care about. Maybe
what we should have is an aggregation tool, or an sos plugin, that will
gather all of the relevant logs from a machine when we need to debug
multiple components.
Fair point. I guess I was thinking more in terms of "Use a robust log
management system so that we don't have issues with rollover, writing
logs to the wrong place, etc. etc."
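For what it's worth, syslog doesn't have to mean one undifferentiated
bucket: if each daemon logs under its own ident, rsyslog can still route
them back out to per-component files. A minimal sketch of what I mean
(the component name is just an example):

  # Each component logs to syslog under its own name, so the logs can
  # still be teased apart (or aggregated) downstream.
  import logging
  import logging.handlers

  def component_logger(name):
      logger = logging.getLogger(name)
      handler = logging.handlers.SysLogHandler(
          address="/dev/log",
          facility=logging.handlers.SysLogHandler.LOG_LOCAL3)
      # The name prefix is what lets you split components apart later.
      handler.setFormatter(
          logging.Formatter(name + ": %(levelname)s %(message)s"))
      logger.addHandler(handler)
      logger.setLevel(logging.INFO)
      return logger

  log = component_logger("aeolus-conductor")
  log.info("instance inst-42 moved to state running")

That also gets us timestamps and rollover for free from the syslog
daemon, rather than each component reinventing them.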
>
> * Logs should be timestamped
>
> * We should not be logging credentials or things that are
> potentially embarrassing
>
> * Components can be distributed across multiple machines
I would punt this off, as I'm not sure it is super interesting to the people
who are going to be evaluating it. This seems more like iteration 5 material.
The SAs were very fired up about being able to make things HA, but I
agree we can probably mostly wait on this.
>
> * RHEV-M 3.0 really works as a cloud provider.
>
> "Orchestrator" features (even though these aren't yet separate
> components, I've bracketed off stuff that concerns post-boot and
> multi-instance operations as conceptually different topics to work on)
>
> * Assemblies
>
> * Users can define assemblies that cause the post-boot config
> apparatus to install software and set config parameters on
> instances when they check in after booting
>
> * Investigate how much assembly config can be stuffed into user-data
> without requiring config server, since we now have a working
> userdata mechanism on the three cloud back ends we really care
> about.
We know this to a large degree:
EC2 - 16K
Rackspace - 10K (though you can probably get beyond this using user_files
instead of user_data)
RHEV-M - floppy disk size
VMware - ISO image size
While I usually like to make simple things simple, in this case I'm not sure
that we can ever get away with *not* using a config server. The EC2
limitation, in particular, means that you can never really inject anything
except for some meta-data. And one of the use cases I would eventually like
to be able to support is the ability to inject post-config "packages".
Well, you could fit a lot of script/manifest stuff into 16k, even with
no actual software, right? Including enough information to go and get
the packages you want from the right place?
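Concretely, I'm imagining a self-contained bootstrap along these lines,
which fits in a tiny fraction of 16k. (The repo URL and package names
are invented, just to show the shape of it:)

  #!/usr/bin/env python
  # Hypothetical user-data bootstrap: the manifest is embedded right
  # in the script, and the packages come from ordinary yum repos, so
  # no config server is involved. Names and URLs are invented.
  import subprocess

  MANIFEST = {
      "repos": {"myapp": "http://repo.example.com/myapp/"},
      "packages": ["httpd", "myapp-frontend"],
  }

  # Point yum at the repos the assembly needs...
  for name, url in MANIFEST["repos"].items():
      with open("/etc/yum.repos.d/%s.repo" % name, "w") as f:
          f.write("[%s]\nname=%s\nbaseurl=%s\ngpgcheck=0\n"
                  % (name, name, url))

  # ...then pull the actual software from there.
  subprocess.check_call(["yum", "-y", "install"] + MANIFEST["packages"])

Obviously that breaks down once the config payload itself gets big,
which I take to be your point -- but for a lot of assemblies it might
be plenty.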
>
> * Deployables and deployments
>
> * Users can define deployables that contain multiple assemblies.
>
> * Users can specify parameters that should be collected from a user
> when the user launches the deployable.
>
> * Users can direct that parameters collected from a user be
> interpolated in arbitrary spots in the deployable descriptor.
>
> * There is a UI for collecting parameters from the launching user
>
> * There is a mechanism for passing all the assembly and deployable
> config information through to the post-boot agent. (I think this
> could use user-data, *or* a config server.)
Yes.
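On the interpolation piece, I don't think we need anything fancier than
named placeholders in the deployable descriptor that get filled in from
the launch-time form. A sketch (the XML shape and the ${...} placeholder
syntax are made up here):

  # Sketch of launch-time parameter interpolation into a deployable
  # descriptor. The XML shape and ${...} syntax are invented.
  from string import Template

  deployable_xml = """\
  <deployable name="webapp">
    <assembly name="frontend">
      <param name="admin_password" value="${admin_password}"/>
      <param name="db_host" value="${db_host}"/>
    </assembly>
  </deployable>
  """

  # Values collected from the launching user by the UI:
  launch_params = {"admin_password": "s3kr1t",
                   "db_host": "db1.example.com"}

  print(Template(deployable_xml).substitute(launch_params))

The rendered result is then exactly the blob we'd hand to the post-boot
agent, whether via user-data or a config server.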
>
> * Authorization
>
> * Should there be some way of restricting the assemblies/deployables
> that a user can launch on particular hardware?
I'm actually not sure what this one means :).
See the self-service use case above...
Thanks for the comments, more to come...
--H
--
== Hugh Brock, hbrock(a)redhat.com ==
== Engineering Manager, Cloud BU ==
== Aeolus Project: Manage virtual infrastructure across clouds. ==
== http://aeolusproject.org ==
"I know that you believe you understand what you think I said, but I’m
not sure you realize that what you heard is not what I meant."
--Robert McCloskey