Notifications between Aeolus components

Martin Povolny mpovolny at redhat.com
Thu Jan 24 09:10:47 UTC 2013


On Wed, Jan 23, 2013 at 04:51:38PM -0600, Steve Loranz wrote:
> On Jan 23, 2013, at 4:21 PM, Matt Wagner <matt.wagner at redhat.com> wrote:
> 
> > On Wed, Jan 23, 2013 at 01:42:31PM -0500, Mo Morsi wrote:
> >> On 01/23/2013 01:16 PM, Bryan Kearney wrote:
> >>> Well, just thinking out loud.. can any component go into the cloud
> >>> which may not be in the same VPN space?
> >>> 
> >>> 
> >>> -- bk
> >>> 
> >> 
> >> I'd imagine this would come down to the security policy of the
> >> organization deploying to the cloud, namely how lax the firewall can be
> >> to permit connections from the cloud as well as the ip address
> >> assignment on the cloud instances launched.
> > 
> > For a while now, I've been watching this discussion thinking, "It feels
> > like we decided a while back that AMQP would be cool, and are now trying
> > to work backwards to come up with reasons to justify it." Perhaps I'm
> > just not forward-looking enough, or perhaps I'm overlooking an important
> > detail, but that's sort of how it feels to me.
> > 
> > I think networking is an edge case, and a minor detail. We should
> > absolutely try to support running components on different boxes (though
> > I don't believe we have ever properly tested or supported it). But if
> > you break Aeolus up and run it on disparate network segments, you're
> > going to have to handle somehow patching things together. I don't think
> > we should choose how we handle inter-component messaging based on what
> > happens if you break components up on different networks and refuse to
> > set up a VPN or appropriate port-forwarding rules.
> > 
> > I don't mean to single out networking in general, though; it's just the
> > latest in the discussion. I just worry that the discussion is largely
> > theoretical and academic, focused on how different means of
> > inter-component communications differ. That might be a good conversation
> > to have if we didn't already have all our components using one. What I'm
> > missing from this discussion is an exploration of what issues we're
> > actually experiencing today with our HTTP callbacks system, and whether
> > the overhead of switching to AMQP is a worthwhile trade-off. Is changing
> > Factory and Conductor to use AMQP worthwhile to prevent the issue where
> > if you shut down one of the two components in the middle of an exchange
> > of messages, some messages might be lost? Could that better be solved by
> > implementing some queuing or polling? Or, is it fair to say that if you
> > send a job to Factory from Conductor and then shut down Conductor, it's
> > just expected that the updated status might be missed?
> > 
> > It's not my intention to vehemently oppose AMQP, and I certainly don't
> > mean to suggest that it shouldn't be discussed. I just don't find the
> > current conversation terribly productive at making the case for why we
> > should switch.
> > 
> > -- Matt
> 
> imagefactory started off using QMF. It's what we were told was decided
> on when Aeolus was designed. But conductor was having a difficult time
> using it because it meant having a separate thread or process to
> bridge conductor to the broker. So, in the summer of 2011, there was a
> number of discussions where developers on both the imagefactory and
> conductor teams came out in favor of imagefactory offering a REST
> interface. We did, and near the end of January of 2012, we officially
> removed the QMF interface from the source tree when it started to seem
> like QPID/QMF was failing to gain traction in the wider community.

I was not here when these decisions were made, and I did not hear any
reasonable argument for why the MQ was removed from the project.

So, just my IMHO:

Conductor has two components that are sort of "daemons":

* dbomatic
* delayed job

Then there is the web UI part.

The communication layer for the three is the database.

IMHO it would make much more sense to have a "backend conductor" that
would do the stuff that really matters, including servicing some sort of
job queue (now dbomatic) and
  >> communicating with the other components <<
including polling where necessary (now delayed job).

Then the web part would be a thin layer that just "displays the results" --
the web UI part.
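
To make the idea concrete, here is a minimal sketch of what such a backend
worker might look like. This is purely illustrative and based on my own
assumptions: an AMQP broker (RabbitMQ) reached through the pika Python
client, plus a made-up queue name and job fields -- it does not describe
anything Conductor ships today.

    # backend "conductor" worker: one process owns the job queue and the
    # communication with the other components (illustrative sketch only)
    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="conductor.jobs", durable=True)  # hypothetical queue name

    def on_message(channel, method, properties, body):
        job = json.loads(body)
        # dispatch to whatever now lives in dbomatic / delayed_job:
        # poll Deltacloud, record an ImageFactory build result, ...
        print("handling job:", job.get("type"))
        channel.basic_ack(delivery_tag=method.delivery_tag)  # ack only after the work is done

    channel.basic_qos(prefetch_count=1)  # hand out one job at a time
    channel.basic_consume(queue="conductor.jobs", on_message_callback=on_message)
    channel.start_consuming()  # blocks; this runs outside the web process

The point is only that the job queue and the communication with the other
components live in one backend process, while the web UI just reads whatever
state that worker has recorded.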

This is as opposed to the current situation, where we have the Conductor
web part that:

1) serves the web users
2) serves the API
3) accepts the "RESTful" callbacks from ImageFactory
4) also does some communication with Deltacloud and maybe other parts while
answering a client's request

IMHO this is a mess from an architectural point of view.

From my point of view it is much more important to solve this mess and
have the communication
    >> happen in the right place <<
than to decide whether the communication is HTTP based (REST, messages,
RPC, whatever) or whether we use an MQ.

The question of an MQ then becomes secondary.

I don't know why you guys failed with the MQ in the past, but I see MQs
today at a similar level as SQL databases: an industrially verified way of
doing communication in situations such as ours.

We cannot directly link because:

a) we use a REST proxy for the provider APIs (Deltacloud); although it's
written in the same language as Conductor and it's stateless, it's not a
library, and as I understood it, it is to stay that way.

b) we use components written in other languages (ImageFactory)

c) we might want to support a scenario where the individual components
run on different machines.

We have more than a pair of communicating parties.

We have more than one party communicating with other parties.

We have or want to have optional components.

There are reasons for using an MQ over ad-hoc communication between
components, and I would think it's not necessary to write them down, but:

* as "Steve Loranz" pointed out: there's just one point to communicate with
in case of MQ

* reliability -- not easy to get with an ad-hoc solution

* then of course you have the bindings for all the necessary languages

* bindings that someone else is using and testing for you, and known and
  documented ways of "doing it" for various types of message exchange
  scenarios

* and then you have the MQ implemented, debugged, tested and working

* message format, error handling etc.

  see the problems we have when trying to handle the various error
  conditions -- how many times do you get a reasonable error message in
  Conductor when ImageFactory or Deltacloud fails? This is also easier to
  do with an MQ (a sketch of the publishing side follows below)

* we can go on -- Google knows
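
As an illustration of the message-format and error-handling point, here is a
sketch of the publishing side under the same assumptions as above (RabbitMQ
broker, pika client, made-up queue and event fields, none of it taken from
the actual code base): the sender emits one small, well-defined event and
gets an explicit error to handle if the broker cannot accept it.

    # a component (e.g. ImageFactory) reporting a build result -- sketch only
    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="conductor.events", durable=True)  # hypothetical queue name
    channel.confirm_delivery()  # broker confirms every publish

    event = {
        "source": "imagefactory",   # made-up message format, for illustration
        "type": "build_status",
        "image_id": "1234",
        "status": "FAILED",
        "detail": "snapshot upload timed out",
    }

    try:
        channel.basic_publish(
            exchange="",
            routing_key="conductor.events",
            body=json.dumps(event),
            properties=pika.BasicProperties(delivery_mode=2),  # persist the message
        )
    except pika.exceptions.AMQPError:
        # the broker did not accept the message -- log it, retry, raise an alert
        pass

With ad-hoc HTTP callbacks every sender has to reinvent retries, time-outs
and error reporting; with a broker, durable queues and publisher confirms
give you most of that, plus an agreed message format to carry the error
details.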

To get the job done, it's good to write less code. It might not be the
scientific approach, but it works (IMHO).

Somewhere in the thread someone said that it seems we decided to use an MQ
because it is "cool" and are now looking for reasons to justify it. I don't
see it that way.

What I see much more is a decision NOT to use the MQ, ignoring the good
reasons to use one.

Then I see (well, no longer since the new year) the use of a "cool" NoSQL
thing, for no reason, in a project that already had enough complexity.

Then I see a big effort to use a "cool and in" RESTful API in a situation
that really is about events.

But after only a short time on the team I am already tired of this topic,
and as I said, the architecture of Conductor seems to me a much bigger
problem than just the messaging between the components.

So that's all of my IMHO. Shoot me if you please.

-- 
Martin Povolny <mpovolny at redhat.com>
tel. +420777714458

