The other day, Matty pinged me to discuss what needed doing and/or
coordination to make a common iwhd interface that both conductor and
aeolus-image (and potentially other clients) can use. This is part of
the current attempt to get the web app and cli tools more integrated,
and using common pieces where possible (task 1970, Define interface to
iwhd, subtask of 1969, 'As a user, I'd like Conductor to read a pool's
catalog, check which images are built, for which providers, and offer to
build & push'). What ensued was a long and arduous cross-team
discussion/debate on a wide swath of related topics. This email is an
attempt to capture the issues for (hopefully) more structured debate on
the various concerns. Assuming we can arrive at something resembling a
consensus (which is never a safe assumption), I (or perhaps Matty) will
summarize at the end of the thread a plan of some kind to make this
happen. Please add topics that need discussion if I have missed
For the purpose of this discussion, there are 2 major pieces:
== Code merge for iwhd ==
I think, in its simplest form, the middle chunk of the feature 'check
which images are built, for which providers' is the main thing to get
into conductor. However, without a model of some sort (ActiveResource
seems to be what the Conductor team is leaning toward), this is a no-go,
so Scott proposed a variation of something we discussed a couple of
months back. Namely, there would be a common ActiveResource model that
can be used
from aeolus-image and conductor, with conductor layering on relationship
metadata as needed (though I am a bit unclear how this would work
exactly in practice). This code probably belongs in aeolus-image, so
conductor can require it w/o forcing the user to install conductor just
to use the cli tools. I think this part is relatively uncontroversial.
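To make the shared-model idea a bit more concrete, here is a minimal
sketch of what such a class might look like (the endpoint URL, element
name, and module layout are my assumptions, not an agreed design):

  require 'active_resource'

  module Aeolus
    module Image
      # Shared model shipped in aeolus-image; both the cli tools and
      # Conductor would require this rather than each defining their own.
      class WarehouseImage < ActiveResource::Base
        # Assumption: iwhd's REST endpoint; the real host/port may differ.
        self.site = 'http://localhost:9090'
        self.element_name = 'image'
      end
    end
  end

  # Conductor could then subclass this and layer on its own relationship
  # metadata, e.g. (hypothetical):
  #
  #   class Image < Aeolus::Image::WarehouseImage
  #     # associate with Conductor-side records as needed
  #   end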
Where we run into issues is the next bit.
== Integration with factory (and/or 'How much metadata should iwhd store?') ==
=== Part 1: Metadata ===
Conductor wants/will want to show the status of builds, if I understand
correctly. This can be broken into 3 increasingly detailed types of
information. Note that all of these pieces of information can already
be obtained, to varying degrees, from the qmf events thrown by factory:
* Actual Status (things like building, pushing, failed, queued, etc)
* Completion %
* What was the error, if there was a failure?
Again, all these things are surfaced as QMF events currently, _but they
are captured nowhere_. So the question is, where does it make sense to
store these things? I only see 2 main options here, and which we choose
directly impacts many other aspects of this integration:
1. Store all this metadata in iwhd. Iwhd is the canonical home for
information about images, so depending on how you view this, all the
above metrics are information about an object iwhd owns. If you follow
this line of reasoning, it may make sense for iwhd to store status,
percent complete, and error metadata (a rough sketch of this option
follows below, after option 2).
2. Store these things in Conductor. In this way of thinking, Conductor
is trying to provide additional useful information to the user on
concepts iwhd does not know or care about, therefore it makes no sense
to have this data in iwhd.
Related to this, what happens if and when more metadata is needed about
any of these objects? Do we really want this information spread across
multiple components?
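To make option 1 a bit more concrete, here is a rough sketch of what
writing this metadata into iwhd might look like (the bucket name,
attribute names, and port are assumptions about iwhd's attribute
interface, not a confirmed design); option 2 would instead be ordinary
columns on a Conductor-side ActiveRecord model:

  require 'net/http'

  image_uuid = 'SOME-IMAGE-UUID'  # hypothetical
  Net::HTTP.start('localhost', 9090) do |http|
    # Option 1: hang build state off the image object itself as attributes.
    http.put("/images/#{image_uuid}/status", 'BUILDING')
    http.put("/images/#{image_uuid}/percent_complete", '40')
    http.put("/images/#{image_uuid}/last_error", '')
  end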
=== Part 2: Conductor <-> Factory Communication ===
In order to build/push from Conductor, we have to reintroduce the
dependency of Conductor communicating with factory in some way. In the
past, we have tried several approaches to this:
* Embed a console directly in the webapp. This did not work well due to
ruby threading issues and process blocking. Also, if the console
encounters some kind of error, you need to restart the entire webapp to
recover from it.
* Separate process (this used the delayed_job gem). This basically had
a 'job' inserted into a tasks table, and the worker process would poll
that table for new entries. Whenever a new one was found, it would do
whatever was defined in the job object's execute/run-type method (see
the sketch after this list). This is a similar approach to what we used
to do with taskomatic/condor, but in a more generic way. As I write
this, I am trying to remember whether it just fired off a request to
the connector (next bullet) and let that handle events, or did
something else - I cannot recall for sure which it was.
* Separate app. This was the aeolus-connector, or HTTP/QMF Bridge, as
it has more recently been called. It interacted with conductor via the
conductor's REST api (albeit simple bits) and had a console inside to
talk to factory. While this would require a restart if the console
died, it
could be done independently of the web app, minimizing impact.
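For reference, the 'separate process' approach in the second bullet
looked roughly like the following with delayed_job (class and method
names here are illustrative, not the actual Conductor code):

  # delayed_job stores enqueued jobs in a delayed_jobs table; a separate
  # worker process polls it and calls #perform on each job it picks up.
  class PushImageJob < Struct.new(:image_id, :provider)
    def perform
      # hypothetical: kick off the actual work, e.g. ask the
      # connector/factory to push the image to the given provider
      Connector.push(image_id, provider)
    end
  end

  Delayed::Job.enqueue(PushImageJob.new(42, 'ec2'))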
All of these approaches have their drawbacks, which led us to the
following question:
* Is qmf actually buying us anything when talking to factory? For
pacemaker-cloud, we decided it would be simpler for all involved to just
have an HTTP interface on each side. Perhaps this would be worth
considering in this case as well? This may at least partly be
determined by weighing the amount of work for either approach, as well
as what actually makes the most sense from a design standpoint. One
thought here in favor of REST is that I know factory was designed using
the delegate approach partly for this very reason: to allow different
kinds of interaction should a client want to do something other than
qmf (a rough sketch of such a call follows below).
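If we went the plain-HTTP route, the Conductor side of a build request
might be little more than something like this (the host, port, path and
parameters are all made up for illustration; factory exposes no such
endpoint today):

  require 'net/http'

  uri = URI('http://imagefactory.example.com:8075/builds')
  res = Net::HTTP.post_form(uri,
                            'template' => File.read('my_template.xml'),
                            'target'   => 'ec2')
  # Conductor would then poll for status, or factory could POST a
  # callback to Conductor's REST API when the build finishes or fails.
  puts res.code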
Apologies for such a long email, but I fear even with all that I have
tried to cover here, there is much more that this may bring up. I am
somewhat afraid of what this thread will evolve into :)
Quick recap of the main questions (I think) I am trying to ask:
* Where should we store information on builds?
* How should Factory and Conductor communicate this information?
Thank you for all feedback...and... GO!
On Thu, 2011-08-18 at 11:33 -0500, Steve Loranz wrote:
> On Aug 18, 2011, at 11:23 AM, Jason Guiditta wrote:
> > So, the one slightly major change I see that would be needed here for
> > REST, is that $something would need to be inserted into iwhd when the
> > job started vs when it completed. If we can accept that for the moment...
> I have a problem accepting that. There's *SOOOO* much more of a possibility we'll end up with garbage in the warehouse this way.
Sure there is a possibility, but it would also allow the end user to
more easily manage what is there. If I build 5 images and 2 of them
fail, I can click 'remove' or similar in the UI (or run the equivalent
CLI command) to clean it up. I think the user should own that rather
than us. I think
having an easy way to surface this information to the user is much
better than our current direction of lots of log reviewing - not that
that isn't useful, but it should be needed a lot less than it is
right now.
Having basic support for Catalogs was a prerequisite to implementing the rest of #1969, so I've got it implemented here.
Basically, we renamed Suggested Deployables to Catalog Entries, and then made them belong to a Catalog, which simply has a name and a pool_id, tying a Catalog to a specific pool.
Right now, Catalogs don't *do* anything beyond what already existed with Suggested Deployables; this is just the code to pave the way towards more functionality down the road.
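For reference, the models boil down to roughly the following (the exact
columns and validations may differ from what is in the patches):

  class Catalog < ActiveRecord::Base
    belongs_to :pool               # ties a Catalog to a specific pool
    has_many   :catalog_entries    # the renamed Suggested Deployables
    validates_presence_of :name, :pool_id
  end

  class CatalogEntry < ActiveRecord::Base
    belongs_to :catalog
  end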
There are two things we need for sharing user identity in Katello and
Conductor:
1) Single sign on for Katello and Conductor:
The simplest solution is using 2-legged OAuth, as proposed in an earlier
mail (Katello already uses this for accessing pulp and candlepin). In
short: auth is done at the application level by sharing a secret token;
the provider app trusts that the consumer app has already authenticated
the user, whose identity the consumer passes along to the provider.
This solution should be pretty easy to implement.
If this is not acceptable for some reason, we could consider using some
central auth service (CAS).
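For illustration, the consumer side of the 2-legged flow with the oauth
gem would look roughly like this (the consumer key, secret, host, path,
and header name are all made up, not Katello's actual values):

  require 'oauth'

  consumer = OAuth::Consumer.new('katello', 'shared-secret',
                                 :site => 'https://conductor.example.com')
  # 2-legged: no per-user token, the request is simply signed with the
  # shared consumer credentials.
  token = OAuth::AccessToken.new(consumer)

  # The provider verifies the signature and trusts the username the
  # consumer passes along, e.g. in a header.
  response = token.get('/conductor/api/whoami', 'X-Forwarded-User' => 'jsmith')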
2) Authenticate against same external service in Katello and Conductor:
Katello and Conductor should support authentication against an external
auth service (AD, LDAP, IPA, maybe more). It makes sense to use the same
auth framework in both apps so we will be able to support the same
authentication methods. Katello is far ahead of Conductor in
authentication: it uses warden and supports various auth strategies
for it (LDAP, SSO over HTTP headers, certificates). I heard there was
some talk about switching to Omniauth, but I didn't find it on the
mailing list.
So there are two options here:
1) Conductor switches to warden - this shouldn't be so difficult, as we
can copy from Katello :). Also, Omniauth is not packaged in Fedora.
(A rough warden sketch follows below.)
2) Both Katello and Conductor switch to Omniauth. I'm not sure if this
is a required or an optional step. Ken: you suggested switching to
Omniauth, could you please reply with your opinion about warden/omniauth
(or point me to an older discussion)?
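To give a feel for option 1, here is a rough sketch of a warden strategy
(the strategy name and the LdapUser helper are hypothetical; the real
code would be copied/adapted from Katello):

  Warden::Strategies.add(:ldap) do
    def valid?
      params['username'] && params['password']
    end

    def authenticate!
      # hypothetical helper; Katello wires this up to its LDAP backend
      user = LdapUser.authenticate(params['username'], params['password'])
      user ? success!(user) : fail!('Invalid username or password')
    end
  end

  Warden::Manager.serialize_into_session { |user| user.login }
  Warden::Manager.serialize_from_session { |login| User.find_by_login(login) }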
Please make your comments.
My comment is that it *may* be dangerous to make it easy to create
keys/certs via conductor. The danger for ec2 would be invalidating the
current key/cert and denying access to running instances. So I would
not vote for a feature *like* this.
From: Tomas Sedovic <tsedovic@redhat.com>
When an instance was being launched, there was a problem when its hardware
profile contained a `nil` value for the `enum` type.
`nil` in that context means "match anything", so that's what it does now.
src/app/models/hardware_profile.rb | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/app/models/hardware_profile.rb b/src/app/models/hardware_profile.rb
index 57d5fd8..b6570a9 100644
@@ -178,7 +178,7 @@ class HardwareProfile < ActiveRecord::Base
create_array_from_property(back_end_property).sort!.each do |value|
- if BigDecimal.new(value) >= BigDecimal.new(front_end_property.value)
+ if front_end_property.value.nil? or BigDecimal.new(value) >= BigDecimal.new(front_end_property.value)