The other day, Matty pinged me to discuss what work and/or coordination
is needed to build a common iwhd interface that both conductor and
aeolus-image (and potentially other clients) can use. This is part of
the current attempt to get the web app and cli tools more integrated,
and using common pieces where possible (task 1970, Define interface to
iwhd, subtask of 1969, 'As a user, I'd like Conductor to read a pool's
catalog, check which images are built, for which providers, and offer to
build & push'). What ensued was a long and arduous cross-team
discussion/debate on a wide swath of related topics. This email is an
attempt to capture the issues for (hopefully) more structured debate on
the various concerns. Assuming we can arrive at something resembling a
consensus (which is never a safe assumption), I (or perhaps Matty) will
summarize at the end of the thread a plan of some kind to make this
happen. Please add topics that need discussion if I have missed
anything.
For the purpose of this discussion, there are 2 major pieces:
* iwhd
* factory
== Code merge for iwhd ==
I think, in its simplest form, the middle chunk of the feature 'check
which images are built, for which providers' is the main thing to get
into conductor. However, without a model of some sort (ActiveResource
seems to be what the Conductor team is leaning toward), this is a no-go,
so Scott proposed a variation of something we discussed a couple of
months back. Namely, there would be a common ActiveResource model that
can be used from both aeolus-image and conductor, with conductor
layering on relationship metadata as needed (though I am a bit unclear
how exactly this would work in practice). This code probably belongs in
aeolus-image, so conductor can require it without forcing the user to
install conductor just to use the cli tools. I think this part is
relatively uncontroversial.
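To make that slightly more concrete, here is a minimal sketch of what
the shared piece might look like, assuming we do go the ActiveResource
route. The module/class names and the iwhd endpoint below are
assumptions for illustration, not an agreed design:

    # Minimal sketch only; names and the iwhd URL below are assumptions.
    require 'active_resource'

    module Aeolus
      module Image
        # Shared, generic model shipped in the aeolus-image gem, usable from
        # both the cli tools and conductor.
        class WarehouseImage < ActiveResource::Base
          self.site = "http://localhost:9090"   # assumed iwhd REST endpoint
        end
      end
    end

Conductor could then subclass (or otherwise wrap) this to layer on its
own relationship metadata, keeping the generic bits in the gem.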
Where we run into issues is the next bit.
== Integration with factory (and/or 'How much metadata should iwhd
store?') ==
=== Part 1: Metadata ===
Conductor wants (or will want) to show the status of builds, if I
understand correctly. This can be broken into 3 increasingly detailed
types of information. Note that all of these can already be obtained, to
varying degrees, from the qmf events factory emits today:
* Actual Status (things like building, pushing, failed, queued, etc)
* Completion %
* What was the error, if there was a failure?
Again, all these things are surfaced as QMF events currently, _but they
are captured nowhere_. So the question is, where does it make sense to
store these things? I see only 2 main options here, and which we choose
directly impacts many other aspects of this integration:
1. Store all this metadata in iwhd. Iwhd is the canonical home for
information about images, so depending on how you view this, all the
above metrics are information about an object iwhd owns. If you follow
this line of reasoning, it may make sense for iwhd to store status,
percent complete, and error metadata.
2. Store these things in Conductor. In this way of thinking, Conductor
is trying to provide additional useful information to the user on
concepts iwhd does not know or care about; therefore it makes no sense
to have this data in iwhd (a rough sketch of this option appears at the
end of this part).
Related to this, what happens if and when more metadata is needed about
any of these objects? Do we really want this information spread across
2 systems?
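To make option 2 a bit more concrete, the Conductor-side home for this
data could be as small as the sketch below; the table and column names
are purely illustrative, not a proposed schema:

    # Illustrative only: a Conductor-side ActiveRecord migration for build
    # metadata; table and column names are assumptions, not an agreed schema.
    class CreateImageBuildStatuses < ActiveRecord::Migration
      def self.up
        create_table :image_build_statuses do |t|
          t.string  :image_uuid        # the iwhd object this status refers to
          t.string  :status            # building, pushing, failed, queued, ...
          t.integer :percent_complete
          t.text    :error_message     # populated only on failure
          t.timestamps
        end
      end

      def self.down
        drop_table :image_build_statuses
      end
    end

Option 1 would store roughly the same fields in iwhd alongside the image
objects instead.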
=== Part 2: Conductor <-> Factory Communication ===
In order to build/push from Conductor, we have to reintroduce a
dependency on Conductor communicating with factory in some way. In the
past, we have tried several approaches to this:
* Embed a console directly in the webapp. This did not work well due to
ruby threading issues and process blocking. Also, if the console
encountered some kind of error, the entire webapp had to be restarted to
recover from it.
* Separate process (this used the delayed_job gem). This basically had
a 'job' inserted into a tasks table, and the worker process would poll
that table for new entries. Whenever a new one was found, it would do
whatever was defined in the job object's execute/run type method (a
minimal sketch of this pattern follows the list). This is a similar
approach to what we used to do with taskomatic/condor, but in a more
generic way. As I write this, I am trying to remember whether this just
fired off a request to the connector (see the next bullet) and let that
handle events, or did something else - I cannot recall for sure which
it was.
* Separate app. This was the aeolus-connector, or HTTP/QMF bridge, as
it has more recently been called. It interacted with conductor via
conductor's REST api (albeit only simple bits of it) and had a console
inside to talk to factory. While this would still require a restart if
the console died, that restart could be done independently of the web
app, minimizing impact.
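For reference, the delayed_job approach boiled down to something like
the sketch below (written from memory, so the job class and its contents
are stand-ins; Delayed::Job.enqueue and the #perform hook are the real
delayed_job entry points):

    # Rough sketch from memory; BuildJob and its body are illustrative only.
    class BuildJob < Struct.new(:template_id, :target)
      # delayed_job workers poll the jobs table and call #perform on whatever
      # object was enqueued
      def perform
        # kick off the build in factory (via the connector, a qmf console, ...)
      end
    end

    # Somewhere in a Conductor controller or model:
    Delayed::Job.enqueue(BuildJob.new(template.id, 'ec2'))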
All of these approaches have their drawbacks, and led us to the
question:
* Is qmf actually buying us anything when talking to factory? For
pacemaker-cloud, we decided it would be simpler for all involved to just
have an HTTP interface on each side. Perhaps that would be worth
considering in this case as well (a rough sketch of what it might look
like follows this bullet). This may at least partly be determined by
weighing the amount of work for either approach, as well as what
actually makes the most sense from a design standpoint. One thought in
favor of REST is that I know factory was designed using the delegate
approach for the very reason (well, partly at least) of allowing
different kinds of interaction should a client want to do something
other than qmf.
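Purely for illustration, an HTTP interface on the factory side could be
as small as the following (sketched in Ruby/Sinatra for brevity; factory
itself would obviously implement its own). None of these routes or
helpers exist today - they are guesses at what a minimal REST surface
might need to cover:

    # Hypothetical sketch of a REST interface on the factory side; the routes
    # and the start_build/status_for helpers do not exist today.
    require 'sinatra'
    require 'json'

    post '/builds' do
      # kick off a build for the given template/target, return an id to poll
      build_id = start_build(params[:template_id], params[:target])
      content_type :json
      { :build_id => build_id }.to_json
    end

    get '/builds/:id' do
      # status, percent complete, and any error for the given build
      content_type :json
      status_for(params[:id]).to_json
    end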
Apologies for such a long email, but I fear that even with all I have
tried to cover here, there is much more this may bring up. I am
somewhat afraid of what this thread will evolve into :)
Quick recap of the main questions (I think) I am trying to ask:
* Where should we store information on builds?
* How should Factory and Conductor communicate this information?
Thank you for all feedback...and... GO!
-j