Hi folks,

I'd like to discuss a topic that was briefly touched on at the Aeolus developer conference in Brno. Many components of the Aeolus project need to send notifications to other components:

Heat -> Conductor (deployment/instance state changes)
Conductor -> Winged Monkey (deployment/instance state changes)
Imagefactory -> Conductor (notifications about image build&upload state)
DC Tracker* -> Heat (notifications about instance state changes)
DC Tracker -> Conductor (notifications about other provider resource changes, realm availability, hw profile changes)
* DC Tracker - this component is not actually agreed on yet; it was just a proposal some time ago, but I believe it will be needed.
As far as I know there is no unified plan for how to deal with notifications between components. At this point notifications are implemented only in Imagefactory. That implementation is quite simple for now - a notification callback is sent back as an HTTP PUT request. It doesn't cover any failure situations (network error, auth error, Conductor not running...), so if a request is not successful, no retry is done.
I think we need a more robust notification system between all of the components above. This system should support at least:
1) retry on failure
2) keeping the correct order of notifications
3) authentication
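For illustration, here is a rough sketch of what the sending side of such a callback could look like in Ruby, with a naive retry loop bolted on (the URL, payload and credentials are made up for the example; correct ordering would still need separate handling):

    require 'net/http'
    require 'json'
    require 'uri'

    # Hypothetical callback target and payload, for illustration only.
    callback_url = URI('https://conductor.example.com/api/images/42/status')
    payload      = { status: 'COMPLETE', percent_complete: 100 }.to_json

    attempts = 0
    begin
      attempts += 1
      Net::HTTP.start(callback_url.host, callback_url.port, use_ssl: true) do |http|
        request = Net::HTTP::Put.new(callback_url)
        request['Content-Type'] = 'application/json'
        request.basic_auth('factory', 'secret')          # 3) authentication
        request.body = payload
        response = http.request(request)
        raise "callback failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
      end
    rescue StandardError => e
      if attempts < 5                                     # 1) retry on failure
        sleep(2 ** attempts)
        retry
      end
      warn "giving up on callback: #{e.message}"
    end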
And here it comes...
Why not use a message bus (an AMQP implementation) for communication between all of the involved components?
- it supports all of the required features out of the box
- clients exist for all languages involved in the Aeolus project
- notifications would be handled the same way between all components
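To make that concrete, here is a rough sketch of what the producer side could look like from Ruby with an AMQP client such as the bunny gem (the broker URL, exchange name and routing key are made-up names):

    require 'bunny'
    require 'json'

    connection = Bunny.new('amqp://aeolus:secret@broker.example.com')  # the broker handles auth
    connection.start

    channel  = connection.create_channel
    exchange = channel.topic('aeolus.notifications', durable: true)

    # Persistent messages survive a broker restart; redelivery and per-queue
    # ordering are the broker's job, not ours.
    exchange.publish({ instance: 'inst-1', state: 'running' }.to_json,
                     routing_key: 'heat.instance.state',
                     persistent:  true)

    connection.close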
Or is there some other way to handle notifications as painlessly as possible while keeping the required robustness? What is your preferred solution to this problem?
Jan
On Tue, Jan 08, 2013 at 02:41:51PM +0100, Jan Provaznik wrote:
<snip>
Why not use a message bus (an AMQP implementation) for communication between all involved components?
- it supports all required features out of the box
- clients exists for all languages involved in Aeolus project
- notifications will be solved in the same way between all components
Or is there some other solution how to solve notifications as painless as possible while keeping required robustness? What is your preferred solution of this problem?
I have hated on message bus solutions publicly in the past, so it seems appropriate for me to weigh in now :).
What you're suggesting appears to make good sense. However, based on past (bitter, painful, very expensive) experience, I would like us to approach the message bus question very carefully, according to a few principles:
* Any message bus, regardless of how robust or stable, introduces complexity and dependencies into our code. Any proposal to add message bus use to one of our components should include some consideration of the costs and benefits. In other words, I want to see a solid justification of why REST callbacks are inadequate for a particular API connection before we dive into AMQP.
* The message bus we choose should be one that other upstream cloud projects commonly use. (What is OpenStack using, for example? Is oVirt using anything?). It should also be available across our target developer and end user platforms.
* The message bus we choose must support all the encryption and authentication mechanisms that the app supports. This means LDAP, OAuth, and (eventually) Kerberos.
* Unless it is incredibly expensive to build it this way, I'd like the message bus to be optional wherever possible -- meaning, fall back to a simple listener/callback over REST architecture whenever possible.
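Just to show how low the bar is for that fallback path, the listener side can stay as small as a Sinatra-style sketch like this (the route, payload fields and auth check are all made up):

    require 'sinatra'
    require 'json'

    # Minimal REST callback listener; in Conductor this would just be a
    # normal controller action.
    put '/callbacks/images/:id' do
      halt 401 unless env['HTTP_AUTHORIZATION']   # placeholder for a real auth check
      event = JSON.parse(request.body.read)
      # ...update local state from event['status'] here...
      status 204
    end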
Darts welcome :)...
--Hugh
On 08/01/2013, at 5:45 PM, Hugh Brock wrote:
On Tue, Jan 08, 2013 at 02:41:51PM +0100, Jan Provaznik wrote:
<snip>
Why not use a message bus (an AMQP implementation) for communication between all involved components?
- it supports all required features out of the box
- clients exists for all languages involved in Aeolus project
- notifications will be solved in the same way between all components
<snip>
What you're suggesting appears to make good sense. However, based on past (bitter, painful, very expensive) experience, I would like us to approach the message bus question very carefully ...
Would a "more simplistic" approach (used consistently through Aeolus) like this be a decent alternative?
http://faye.jcoglan.com https://github.com/faye/faye
Seems widely used, including by Cloud companies. Unsure if it provides everything we need though. ;)
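For reference, a rough idea of what pub/sub with Faye looks like from Ruby (the channel name is made up, and the Ruby client needs an EventMachine loop; the two sides would normally live in different processes):

    require 'faye'
    require 'eventmachine'

    EM.run do
      client = Faye::Client.new('http://localhost:9292/faye')

      # Conductor side: listen for deployment state changes.
      client.subscribe('/deployments/state') do |message|
        puts "deployment #{message['id']} is now #{message['state']}"
      end

      # Publisher side (e.g. a Heat bridge): push a state change.
      client.publish('/deployments/state', 'id' => 7, 'state' => 'running')
    end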
+ Justin
-- Aeolus Cloud Evangelist http://www.aeolusproject.org
On 01/08/2013 11:45 AM, Hugh Brock wrote:
<snip>
I have hated on message bus solutions publicly in the past, so it seems appropriate for me to weigh in now :).
What you're suggesting appears to make good sense. However, based on past (bitter, painful, very expensive) experience, I would like us to approach the message bus question very carefully, according to a few principles:
- Any message bus, regardless how robust or stable, introduces complexity and dependencies to our code. Any proposal to add message bus use to one of our components should include some consideration of the costs and benefits. In other words, I want to see a solid justification of why REST callbacks are inadequate for a particular API connection before we dive into AMQP.
+1. We tried AMQP / QMF before and it ended up being a pain. Introduced another service / point of failure.
In general I'd prefer to see more decoupling between components so they can be used independently, rather than trying to come up w/ efficient ways to tie things together.
We need to ask ourselves, "what will grow the community / userbase" as opposed to "what is needed for the technical aspects of the project".
The message bus we choose should be one that other upstream cloud projects commonly use. (What is OpenStack using, for example? Is oVirt using anything?). It should also be available across our target developer and end user platforms.
The message bus we choose must support all the encryption and authentication mechanisms that the app supports. This means LDAP, oAuth, and (eventually) kerberos.
As a side note, I've been becoming a big fan of JSON-RPC [1] and wrote the Ruby implementation as a side project here [2].
It's simple, supports any transport you could want (HTTP, AMQP, TCP, SSL, WebSockets, and more), and can use any authentication scheme you want on the backend.
I even gave a presentation / demo of it at the Brno-Ruby group when I was in Brno last spring [3].
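For anyone who hasn't looked at it, the wire format is just JSON; a request and a notification look roughly like this (the method and parameter names are made up):

    require 'json'

    # A JSON-RPC 2.0 request -- the caller expects a response carrying the same id.
    request = { jsonrpc: '2.0', method: 'instance_state',
                params: ['inst-1'], id: 1 }.to_json

    # A notification omits the id, so no response is expected -- which is
    # basically what our component-to-component callbacks are.
    notification = { jsonrpc: '2.0', method: 'instance_state_changed',
                     params: { 'instance' => 'inst-1', 'state' => 'running' } }.to_json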
-Mo
[1] http://en.wikipedia.org/wiki/JSON-RPC [2] https://github.com/movitto/rjr [3] http://mo.morsi.org/blog/node/365
On 01/09/2013 07:57 AM, Mo Morsi wrote:
<snip>
+1. We tried AMQP / QMF before and it ended up being a pain. Introduced another service / point of failure.
Just thinking out loud... QMF is a pain, but I think it is dead now. If you do callbacks you have to buy off on a lot of networking. Can you put Imagefactory in EC2 and run Aeolus in house?
-- bk
On Wed, 2013-01-09 at 08:01 -0500, Bryan Kearney wrote:
+1. We tried AMQP / QMF before and it ended up being a pain. Introduced another service / point of failure.
Just thinking out loud... QMF is a pain but is dead now I think. if you do callbacks you have to buy off on alot of networking. Can you put image factory in EC2 and run Aeolous in house?
This is a very important point: the big difference between a webhook approach and using messaging is that if A wants to notify B, in a webhook world A needs to be able to establish a network connection to B (so A in EC2 and B behind the firewall won't work), whereas in an AMQP world A and B only need to be able to reach brokers -- possibly different ones -- that can talk to each other (e.g. A and B both talking to a broker in EC2).
When you want to work around that with messaging-over-HTTP, you'll have to resort to things that really strain HTTP, like long polling.
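In broker terms that just means both sides open outbound connections; the consumer end is roughly something like this (queue and exchange names made up, sketched against the bunny gem):

    require 'bunny'

    # B, sitting behind the firewall, connects *out* to the broker,
    # so no inbound firewall hole is needed.
    connection = Bunny.new('amqp://aeolus:secret@broker.example.com')
    connection.start

    channel = connection.create_channel
    queue   = channel.queue('conductor.events', durable: true)
    queue.bind(channel.topic('aeolus.notifications', durable: true),
               routing_key: 'heat.#')

    queue.subscribe(block: true) do |_delivery_info, _properties, body|
      # process the event here
      puts "got event: #{body}"
    end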
David
On Tue, Jan 22, 2013 at 10:09:49AM -0800, David Lutterkort wrote:
On Wed, 2013-01-09 at 08:01 -0500, Bryan Kearney wrote:
+1. We tried AMQP / QMF before and it ended up being a pain. Introduced another service / point of failure.
Just thinking out loud... QMF is a pain but is dead now I think. if you do callbacks you have to buy off on alot of networking. Can you put image factory in EC2 and run Aeolous in house?
This is a very important point: the big difference between a webhook approach and using messaging is that if A wants to notify B, in a webhook world A needs to be able to establish a network connection to B (so A in EC2 and B behind the firewall won't work) whereas in a amqp world A and B only need to be able to get to possibly different brokers that can talk to each other (e.g. A and B both to a broker in EC2).
When you want to work around that with messaging-over-HTTP, you'll have to resort to things that really strain HTTP, like long polling.
Thank you for the comment.
It might seem obvious but seeing it written so explicitly helps a lot.
I have been having bad dreams since the start of this thread, and (although it was not explicitly stated) about the idea of trying to implement a library that would do "reliable REST callbacks" as part of our project.
On 01/22/2013 01:09 PM, David Lutterkort wrote:
On Wed, 2013-01-09 at 08:01 -0500, Bryan Kearney wrote:
+1. We tried AMQP / QMF before and it ended up being a pain. Introduced another service / point of failure.
Just thinking out loud... QMF is a pain but is dead now I think. if you do callbacks you have to buy off on alot of networking. Can you put image factory in EC2 and run Aeolous in house?
Sorry Bryan, missed your question. Unfortunately not at the current time: imagefactory depends on Oz, which depends on libvirt, which cannot be run in the cloud since we don't have virt-in-virt yet. There had been discussions at various points about separating the imagefactory frontend / backends so as to facilitate things like this, but that never got implemented (and you'd potentially run into firewall problems in this scenario as well).
I had discussed this w/ clalance in the context of changing the tdl-launch utility [1] to use Oz directly but that looks like it won't be feasible at the current time.
This is a very important point: the big difference between a webhook approach and using messaging is that if A wants to notify B, in a webhook world A needs to be able to establish a network connection to B (so A in EC2 and B behind the firewall won't work) whereas in a amqp world A and B only need to be able to get to possibly different brokers that can talk to each other (e.g. A and B both to a broker in EC2).
When you want to work around that with messaging-over-HTTP, you'll have to resort to things that really strain HTTP, like long polling.
David
Mentioned this before, but I've been becoming a fan of JSON-RPC [2], which separates the API from the transport. Obviously, if a method requires a persistent connection, only transports facilitating this would be able to invoke it, but a lot of methods do not require this, and method handlers can check the type of connection a request comes in on to determine whether the client issuing the request can proceed.
-Mo
[1] https://github.com/aeolus-incubator/tdl-tools/ [2] http://en.wikipedia.org/wiki/JSON-RPC
On 01/23/2013 01:14 PM, Mo Morsi wrote:
On 01/22/2013 01:09 PM, David Lutterkort wrote:
On Wed, 2013-01-09 at 08:01 -0500, Bryan Kearney wrote:
+1. We tried AMQP / QMF before and it ended up being a pain. Introduced another service / point of failure.
Just thinking out loud... QMF is a pain but is dead now I think. if you do callbacks you have to buy off on alot of networking. Can you put image factory in EC2 and run Aeolous in house?
Sorry Bryan, missed your question. Unfortunately not at the current time. imagefactory depends on oz which depends on libvirt which cannot be run in the cloud since we don't have virt-in-virt yet. There had been discussions at various points to support separating the imagefactory frontend / backends so as to facilitate things like this but that never got implemented (and you'd potentially run into firewall problems in this scenario as well).
I had discussed this w/ clalance in the context of changing the tdl-launch utility [1] to use Oz directly but that looks like it won't be feasible at the current time.
Well, just thinking out loud... can any component go into the cloud, where it may not be in the same VPN space?
-- bk
On 01/23/2013 01:16 PM, Bryan Kearney wrote:
<snip>
Well, just thinking out loud.. can any component go into the cloud which may not be in the same VPN space?
-- bk
I'd imagine this would come down to the security policy of the organization deploying to the cloud, namely how lax the firewall can be about permitting connections from the cloud, as well as the IP address assignment on the cloud instances launched.
It'd be tricky to permit generic incoming connections from the cloud to a local imagefactory backend, since this might be used for a DoS attack or such, though a tight security policy might help mitigate this (not fully sure about all the imagefactory internals).
-Mo
On Wed, Jan 23, 2013 at 01:42:31PM -0500, Mo Morsi wrote:
On 01/23/2013 01:16 PM, Bryan Kearney wrote:
Well, just thinking out loud.. can any component go into the cloud which may not be in the same VPN space?
-- bk
I'd imagine this would come down to the security policy of the organization deploying to the cloud, namely how lax the firewall can be to permit connections from the cloud as well as the ip address assignment on the cloud instances launched.
For a while now, I've been watching this discussion thinking, "It feels like we decided a while back that AMQP would be cool, and are now trying to work backwards to come up with reasons to justify it." Perhaps I'm just not forward-looking enough, or perhaps I'm overlooking an important detail, but that's sort of how it feels to me.
I think networking is an edge case, and a minor detail. We should absolutely try to support running components on different boxes (though I don't believe we have ever properly tested or supported it). But if you break Aeolus up and run it on disparate network segments, you're going to have to handle somehow patching things together. I don't think we should choose how we handle inter-component messaging based on what happens if you break components up on different networks and refuse to set up a VPN or appropriate port-forwarding rules.
I don't mean to single out networking in general, though; it's just the latest in the discussion. I just worry that the discussion is largely theoretical and academic, focused on how different means of inter-component communications differ. That might be a good conversation to have if we didn't already have all our components using one. What I'm missing from this discussion is an exploration of what issues we're actually experiencing today with our HTTP callbacks system, and whether the overhead of switching to AMQP is a worthwhile trade-off. Is changing Factory and Conductor to use AMQP worthwhile to prevent the issue where if you shut down one of the two components in the middle of an exchange of messages, some messages might be lost? Could that better be solved by implementing some queuing or polling? Or, is it fair to say that if you send a job to Factory from Conductor and then shut down Conductor, it's just expected that the updated status might be missed?
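(For instance -- and this is purely a sketch, not a proposal; the class name and URL are made up -- wrapping the callback in a delayed_job job would buy us retries without a broker:)

    require 'net/http'
    require 'uri'

    # Assumes delayed_job is already loaded via Rails. delayed_job retries a
    # failed job automatically, with a back-off, up to the job's max_attempts.
    class NotificationJob < Struct.new(:url, :payload)
      def perform
        uri = URI(url)
        response = Net::HTTP.start(uri.host, uri.port) do |http|
          http.send_request('PUT', uri.path, payload,
                            'Content-Type' => 'application/json')
        end
        raise "callback failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)
      end

      def max_attempts
        5
      end
    end

    Delayed::Job.enqueue NotificationJob.new('http://localhost:3000/callbacks/images/42',
                                             '{"status":"COMPLETE"}')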
It's not my intention to vehemently oppose AMQP, and I certainly don't mean to suggest that it shouldn't be discussed. I just don't find the current conversation terribly productive at making the case for why we should switch.
-- Matt
On Jan 23, 2013, at 4:21 PM, Matt Wagner matt.wagner@redhat.com wrote:
<snip>
It's not my intention to vehemently oppose AMQP, and I certainly don't mean to suggest that it shouldn't be discussed. I just don't find the current conversation terribly productive at making the case for why we should switch.
-- Matt
imagefactory started off using QMF. It's what we were told had been decided on when Aeolus was designed. But conductor was having a difficult time using it, because it meant having a separate thread or process to bridge conductor to the broker. So, in the summer of 2011, there were a number of discussions where developers on both the imagefactory and conductor teams came out in favor of imagefactory offering a REST interface. We did, and near the end of January 2012 we officially removed the QMF interface from the source tree, when it started to seem like QPID/QMF was failing to gain traction in the wider community.
I'm not saying the conversation shouldn't happen either. What I am saying is that it should pick up where it left off 18 months ago, with the question of whether the challenges of conductor actually connecting to a broker are easier to deal with now than they were in 2011, when they were great enough to make us decide to switch course.
-steve
On Wed, Jan 23, 2013 at 04:51:38PM -0600, Steve Loranz wrote:
<snip>
imagefactory started off using QMF. It's what we were told was decided on when Aeolus was designed. But conductor was having a difficult time using it because it meant having a separate thread or process to bridge conductor to the broker. So, in the summer of 2011, there was a number of discussions where developers on both the imagefactory and conductor teams came out in favor of imagefactory offering a REST interface. We did, and near the end of January of 2012, we officially removed the QMF interface from the source tree when it started to seem like QPID/QMF was failing to gain traction in the wider community.
I was not here when these decisions were made, and I have not heard any reasonable argument for why the MQ was removed from the project.
So, just my IMHO:
Conductor has two components that are sort of "daemons":
* dbomatic
* delayed_job
Then there is the web UI part.
The communication layer for the three is the database.
IMHO it would make much more sense to have a "backend conductor" that would do the stuff that really matters, including servicing some sort of job queue (now dbomatic) and >> communicating with the other components <<, including polling where necessary (now delayed_job).
Then the web part would be a thin layer to "display the results" -- the web UI part.
As opposed to the current situation, where we have Conductor, the web part, that:
1) serves the web users
2) serves the API
3) accepts the "RESTful" callbacks from ImageFactory
4) also does some communication with DC and maybe other parts while answering a client's request
IMHO this is a mess from the architecture point of view.
From my point of view it is much more important to solve this mess and have the communication >> happen in the right place << than to decide whether the communication is HTTP-based (REST, messages, RPC, whatever) or whether we use an MQ.
The question of an MQ then comes second.
I don't know why you guys failed with the MQ in the past, but I see MQs today at a similar level as SQL databases. It's an industry-proven way of doing communication in situations such as ours.
We cannot link the components directly because:
a) we use a REST proxy for the providers' REST APIs (Deltacloud); although it's written in the same language as Conductor and it's stateless, it's not a library, and as I understood it, it is to stay that way
b) we use components written in other languages (ImageFactory)
c) we might want to support a scenario where the individual components run on different machines
We have more than a pair of communicating parties.
We have more than one party communicating with other parties.
We have, or want to have, optional components.
There are reasons for using an MQ over ad-hoc communication between components, and I would think it's not necessary to spell them out, but:
* as "Steve Loranz" pointed out: there's just one point to communicate with in case of MQ
* reliability -- not easy to get this with ad-hoc solution
* then of course you have the bindings for all the necessary language
* that someone else is using and testing for you, you have known and documented ways of "doing it" for various types of message exchange scenarios
* and then you have the MQ implemented, debugged, tested and working
* message format, error handling etc.
see the problems we have when trying to handle the various error conditions -- how many times do you get a reasonable error message in conductor when IMF or DC fails? this is also easier to do with a MQ
* we can go on -- google knows
To get the job done, it's good to write less code. It might not be the scientific approach, but it works (IMHO).
Somewhere in the thread someone said that it seems we decided to use an MQ because it is "cool" and are now seeking reasons to justify it. I don't see it this way.
What I see much more is a decision NOT to use an MQ and to ignore the good reasons for using one.
Then I see (well, no longer since the new year) the use of a "cool" NoSQL thing for no reason in a project that already had enough complexity.
Then I see a big effort to use a "cool and in" RESTful API in a situation that really is about events.
But after only a short time on the team I am already tired of this topic, and as I said, the architecture of Conductor seems to me a much bigger problem than just the messaging between the components.
So that's all of my IMHO. Shoot me if you please.
On Thu, Jan 24, 2013 at 10:10:47AM +0100, Martin Povolny wrote:
*SNIP*
There are reasons for using MQ over ad-hoc communication between components and I would thing that it's not necessary to write that, but:
- as "Steve Loranz" pointed out: there's just one point to communicate with
in case of MQ
Sorry for messing that up -- it was not Steve Loranz, it was David Lutterkort.
*SNIP*
On Thu, Jan 24, 2013 at 10:10:47AM +0100, Martin Povolny wrote:
I was not here when this decisions where made and did not hear any reasonable argument why the MQ was removed from the project.
I'm trying to find the mailing list thread, but I think it predates the aeolus-devel.
But suffice it to say that a lot of smart people thought about it and the community decided it was the right path forward at the time. That's not to say that it means we shouldn't consider re-adding it, just that your not seeing "any reasonable argument" doesn't mean that the decision at the time was wrong.
So just my IMHO
Conductor has 2 components that are sort of "daemons"
- dbomatic
- delayed job
Then there is the web ui part.
The communication layer for the 3 is the dababase.
I wouldn't characterize it as "communication" through the database, though it might be technically correct.
dbomatic is a script that polls Deltacloud and updates the database. It predates our use of delayed_job. I think dbomatic is universally accepted as something that needs major overhaul at this point, and does a lot of things in weird ways. But the whole concept is doing what you seem to propose later in this message -- moving the long-running stuff away from the core UI/API code.
Similarly, delayed_job runs jobs enqueued in the database. It's widely used in the Rails community, so it's not as if it's some unholy thing writing to the database like dbomatic started out as. It's true that it reads things written to the database by Conductor, so in that sense it's 'communicating' through the database, but I see nothing wrong with this approach, and presumably all of the other people using and contributing to delayed_job would agree.
IMHO it would make much more sense to have a "backend conductor" that would do the stuff that really matters, including servicing some sort of job queue (now dbomatic) and >> communicating with the other components <<, including polling where necessary (now delayed_job).
Then the web part would be a thin layer to "display the results" -- the web ui part.
As opposed to current situation where we have conductor the web part that:
- serves the web users
- serves the API
This was a separate thread, but this is a common pattern in Rails. Controllers can service web and API requests from the same methods, reducing code duplication or inadvertent divergence between API and web functionality.
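(Roughly the usual respond_to pattern -- the model name here is just illustrative:)

    # One controller action serving both the web UI and the API.
    def show
      @deployment = Deployment.find(params[:id])
      respond_to do |format|
        format.html                               # rendered for browsers
        format.xml { render xml: @deployment }    # same data for API clients
      end
    end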
- accepts the "RESTful" callbacks from ImageFactory
Isn't accepting callbacks just an extension of the API?
- also does some communication with DC and maybe other parts while
answering a client's request
I don't see how we could separate this out -- it's often integral to servicing the user's requests (web or API). Where it can happen in the background, we should certainly be throwing those tasks into delayed_job, but I think we're already doing that today.
IMHO this is a mess from the architecture point of view.
I disagree. There's room for improvement, for sure, but I don't think breaking Conductor apart in ways that aren't commonly practiced in the Rails community is going to do anything but make the thing more of a confusing mess.
From my point of view it is much more important to solve this mess and have the communication >> happen in the right place << than deciding whether the communication is HTTP based (REST, Message, RPC, whatever) or if we use MQ.
I think I agree with this.
The question of an MQ then comes second.
I don't know why you guys failed with the MQ in the past but I see the MQs today at a similar level as SQL databases. It's an industrial verified way of doing communication in situations such as ours.
I agree that it's a stable and successful way of doing things. I'm just not convinced it's the right choice for us.
We cannot directly link because:
a) we use a REST proxy for REST provider APIs (the deltacloud) although it's written in the same language as Conductor and it's stateless, it's not a library and it's to stay that way I understood.
I think the library thing might be a point of minor controversy. I know some have expressed interest in using it this way, but it sounds like it's not something Deltacloud is interested in implementing.
In any case, though, they present a REST API which we use with great results. I believe we could mount it as a Rack app if we wanted, but I'm not sure that would really help us any.
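(Mounting would look something like this in routes.rb -- assuming Deltacloud exposed a mountable Rack app class; the constant names here are made up:)

    # config/routes.rb -- hypothetical; Deltacloud::API is a placeholder constant,
    # and the application class name would differ.
    Conductor::Application.routes.draw do
      mount Deltacloud::API => '/deltacloud'
      # ...the rest of Conductor's routes...
    end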
b) we use components written in other languages (the ImageFactory)
Sure. This can easily be solved using an HTTP API (as we do today) or AMQP for exchanging data. We previously tried AMQP and fairly recently switched to HTTP.
c) we might want to support a scenario where the individual components run on different machines.
Yes, and I worry my previous reply may have come across as dismissing this. This is definitely a possibility, and something we should support.
We have more than a pair of communicating parties.
We have more than one party communicating with other parties.
And indeed, AMQP makes this problem a little bit easier, though I'll note that we've already got something in place here, so it'd be reimplementing something that we've already solved for the sake of swapping in something slightly cleaner for this use case.
We have or want to have optional components.
Interestingly, this is one of my arguments _against_ using AMQP. HTTP is sort of a 'lowest common denominator' that's pretty easy to implement or interface with. For purpose-built applications that have clearly-defined ways of connecting, AMQP is probably a big win. But for products that can work together or be used independently, I'm not sure everyone would want to write an AMQP interface to their application.
There are reasons for using MQ over ad-hoc communication between components and I would thing that it's not necessary to write that, but:
- as "Steve Loranz" pointed out: there's just one point to communicate with
in case of MQ
This is a bit easier, admittedly, but I haven't seen the current system as being too complicated. I think the most complicated is Conductor, which has to talk to Imagefactory and Deltacloud. (And I'm not sure that Deltacloud will ever use AMQP.)
- reliability -- not easy to get this with ad-hoc solution
I'll grant that AMQP would be more reliable. My question, really, is: are we having such a reliability problem that it's worth rewriting the entire thing?
- then of course you have the bindings for all the necessary language
I think both languages are equal here. Is there any language out there that has AMQP bindings but no support for HTTP? Either way, it's a moot point for us: Factory (Python) and Conductor (Ruby) have both used AMQP previously and now use REST calls, so both will clearly work either way. I imagine the same could be said if we ever implement C or Java or whatever apps -- either way will work.
- that someone else is using and testing for you, you have known and documented ways of "doing it" for various types of message exchange scenarios
Well, the use and testing occur at the protocol level, not at the level of our implementation. I could make the same claim of HTTP.
AMQP likely does have best practices around this that are more advanced than with webhooks, though there are plenty of projects (GitHub, WordPress, etc.) using them successfully.
But I don't think we're having a problem right now with figuring out how to implement webhooks -- they're already there -- so I'm not sure this one matters to us.
- and then you have the MQ implemented, debugged, tested and working
We already have this more or less for our REST-based system, do we not? Switching would _require_ implementation, debugging, and testing. It's surely doable, but I don't think this is a reason to switch.
message format, error handling etc.
see the problems we have when trying to handle the various error conditions -- how many times do you get a reasonable error message in conductor when IMF or DC fails? this is also easier to do with a MQ
We do a bad job with error-handling, it's true. It's especially bad between components. But the problem is that we just don't do a good job of spelling out what the possible errors are or how we should present them. This isn't going to change with a message bus -- it's going to change when we actually fix our handling of errors.
- we can go on -- google knows
But this is exactly my point here -- I don't think the overall advantages of AMQP matter here. What matters is how it will make _our project_ better.
To get job done, its good to write less code. It might not be the scientific approach but it works (IMHO).
I agree with that, but I don't think switching our communication protocols will help this.
Somewhere in the thread someone said that it seems we decided to use MQ because it is "cool" and seek the reasons why to do so. I don't see it this way.
What I see much more the decision NOT to use the MQ and ignoring the good reasons to use one.
There's admittedly some lingering animosity towards AMQP here from the last time we used it -- we put a bunch of effort into making it work, had all sorts of problems with it, and then decided to rip it out and reimplement REST callbacks. Those that were involved are probably going to be pretty reluctant to now rip out the REST bits and implement AMQP.
What I'm interested in aren't the "good reasons" that AMQP is a superior messaging protocol, but the reasons it will make _our project_ better. The only one I've heard so far is that message delivery will be more robust. That would be an improvement but I'm not sure it's worth the overhead of implementing this. Maybe I'm wrong, though.
Then I see (well no longer since the new year) use of "cool" noSQL thing with no reason in a project that already had enough complexity.
I'm not sure what this refers to, to be honest. Image Warehouse used Mongo, if that's what you're referring to. That has been painful and we're ripping the whole thing out.
Then see big effort on using "cool and in" RESTful API in a situation that really is about events.
Deltacloud and Conductor have been using REST (or at least something like it) for the past several years. Before I even joined the company a couple of years ago I was reading the Deltacloud documentation about its HATEOAS API.
But after the short time being on the team I am already tired of this topic and as I said the architecture of Conductor seems to be a much bigger problem for me then just the messaging between the components.
So that's all of my IMHO. Shoot me if you please.
Heh, in fairness, +1 to this -- I, too, am tired of this discussion.
-- Matt
On Thu, Jan 24, 2013 at 01:55:09PM -0500, Matt Wagner wrote:
On Thu, Jan 24, 2013 at 10:10:47AM +0100, Martin Povolny wrote:
I was not here when this decisions where made and did not hear any reasonable argument why the MQ was removed from the project.
I'm trying to find the mailing list thread, but I think it predates the aeolus-devel.
But suffice it to say that a lot of smart people thought about it and the community decided it was the right path forward at the time. That's not to say that it means we shouldn't consider re-adding it, just that your not seeing "any reasonable argument" doesn't mean that the decision at the time was wrong.
Can we put the brakes on this whole thread please?
The "middle" of cloud engine -- that is, the bit between the webapp itself, and the Deltacloud API, currently represented by db-omatic and task-omatic -- has been a poor stepchild in the app for quite a long time, and deliberately so.
Condor used to fill this gap and we ripped it out having determined it was too unwieldy for our needs. At the time, we vowed we would not replace Condor with anything more ambitious than db-omatic until we had validated that we really, really needed it. Along with Condor went the AMQP infrastructure, which justly suffered the same fate.
We are now in a situation where it may be appropriate to consider building something in the "middle" that is more robust and architecturally sane than db-omatic. Such a thing would both track the state of things Deltacloud knows about, and probably also do proper session maintenance (so that we don't have to build up and tear down a new session every time we connect to EC2 or VMWare or RHEV). Seems to me this thing has no need to be tied directly to Conductor in any way at all -- rather, it should be a convenient add-on to Deltacloud that could be used by any Deltacloud consumer. Lutter has already started thinking about the design of the thing -- we'd probably call it Deltacloud Tracker or something like that.
I would suggest that it should be possible to register listeners with Tracker and get state updates, to eliminate polling. Maybe it's a good idea to use messaging for this, maybe it's not -- I have no idea. But I think we need to figure out exactly what it does and design the API before we start thinking about what mechanism we're going to use to get information in and out of it -- that is, in my view, entirely a judgement call based on the tradeoff between robustness and flexibility.
I will note that the presence of Heat is also somewhat relevant here, since it does some level of state tracking. We'll need to figure out how it should tie in with the big picture as well.
So, please -- let's stop fretting about message bus vs. no message bus, and focus on the big picture of what needs to happen with the app. Once we understand what we want to achieve, the right way to proceed will become obvious.
--Hugh
On Thu, Jan 24, 2013 at 02:40:33PM -0500, Hugh Brock wrote:
On Thu, Jan 24, 2013 at 01:55:09PM -0500, Matt Wagner wrote:
On Thu, Jan 24, 2013 at 10:10:47AM +0100, Martin Povolny wrote:
I was not here when these decisions were made and did not hear any reasonable argument why the MQ was removed from the project.
I'm trying to find the mailing list thread, but I think it predates the aeolus-devel.
But suffice it to say that a lot of smart people thought about it and the community decided it was the right path forward at the time. That's not to say that it means we shouldn't consider re-adding it, just that your not seeing "any reasonable argument" doesn't mean that the decision at the time was wrong.
Can we put the brakes on this whole thread please?
The "middle" of cloud engine -- that is, the bit between the webapp itself, and the Deltacloud API, currently represented by db-omatic and task-omatic -- has been a poor stepchild in the app for quite a long time, and deliberately so.
Condor used to fill this gap and we ripped it out having determined it was too unwieldy for our needs. At the time, we vowed we would not replace Condor with anything more ambitious than db-omatic until we had validated that we really, really needed it. Along with Condor went the AMQP infrastructure, which justly suffered the same fate.
We are now in a situation where it may be appropriate to consider building something in the "middle" that is more robust and architecturally sane than db-omatic. Such a thing would both track the state of things Deltacloud knows about, and probably also do proper session maintenance (so that we don't have to build up and tear down a new session every time we connect to EC2 or VMWare or RHEV). Seems to me this thing has no need to be tied directly to Conductor in any way at all -- rather, it should be a convenient add-on to Deltacloud that could be used by any Deltacloud consumer. Lutter has already started thinking about the design of the thing -- we'd probably call it Deltacloud Tracker or something like that.
As I wrote in the previous mail, the overall architecture and the missing central part are a much more important question than the choice of the communication mechanism.
Whether it is some "backend conductor", as I suggested on several occasions, or a "deltacloud tracker", as Jan and probably others suggested, is a question of how many responsibilities we put into it and how we name it. And it would be a much more satisfying debate to have.
I am glad that there seems to be a common understanding that we are missing such a component.
I would suggest that it should be possible to register listeners with Tracker and get state updates, to eliminate polling. Maybe it's a good idea to use messaging for this, maybe it's not -- I have no idea. But I think we need to figure out exactly what it does and design the API before we start thinking about what mechanism we're going to use to get information in and out of it -- that is, in my view, entirely a judgement call based on the tradeoff between robustness and flexibility.
Thinking about the API in terms of services that the consumer might need from it before discussing details about transport and presentation is what I would love to see in the discussion on the API in Conductor.
Instead we have a discussion about whether we need to expose state machines (that we don't even have!) because the data representation in the RESTful API would look better if we had one!
Then, please, excuse me for freaking out when reading that while we don't even have an API function to launch a thing.
I'm going to take a cold shower and a beer, and let's start talking in the following days about the state machines that are missing in Conductor for launching images and deployments and for building images, and that are so badly needed to get right even simple things like correctly calculating the uptime of an instance.
I will note that the presence of Heat is also somewhat relevant here, since it does some level of state tracking. We'll need to figure out how it should tie in with the big picture as well.
So, please -- let's stop fretting about message bus vs. no message bus, and focus on the big picture of what needs to happen with the app. Once we understand what we want to achieve, the right way to proceed will become obvious.
--Hugh
--
== Hugh Brock, hbrock@redhat.com ==
== Engineering Manager, Cloud BU ==
== Aeolus Project: Manage virtual infrastructure across clouds. ==
== http://aeolusproject.org ==
"I know that you believe you understand what you think I said, but I’m not sure you realize that what you heard is not what I meant." --Robert McCloskey
Matt, please, do not take anything I write personally or as an offense. I started by saying that all of these are IMHOs. Most of my ideas are based on my previous experience from outside this project, and although I think that 4 months is enough time to get a general idea of Conductor, I surely don't have much idea about the history of the project, don't know all the components in much detail, and cannot yet understand the reasons that led to previous decisions that might seem strange to me.
I will try to address the points you made one by one, no offense.
On Thu, Jan 24, 2013 at 01:55:09PM -0500, Matt Wagner wrote:
On Thu, Jan 24, 2013 at 10:10:47AM +0100, Martin Povolny wrote:
I was not here when these decisions were made and did not hear any reasonable argument why the MQ was removed from the project.
I'm trying to find the mailing list thread, but I think it predates the aeolus-devel.
But suffice it to say that a lot of smart people thought about it and the community decided it was the right path forward at the time. That's not to say that it means we shouldn't consider re-adding it, just that your not seeing "any reasonable argument" doesn't mean that the decision at the time was wrong.
Sorry, that is not an argument. It also does not mean that the decision was right.
So just my IMHO
Conductor has 2 components that are sort of "daemons"
- dbomatic
- delayed job
Then there is the web ui part.
The communication layer for the 3 is the database.
I wouldn't characterize it as "communication" through the database, though it might be technically correct.
I would. That is exactly what the components are doing. One writes something in the DB, the other reads it and acts upon it, then writes something back for the first one. That is communication.
dbomatic is a script that polls Deltacloud and updates the database. It predates our use of delayed_job. I think dbomatic is universally accepted as something that needs major overhaul at this point, and does a lot of things in weird ways. But the whole concept is doing what you seem to propose later in this message -- moving the long-running stuff away from the core UI/API code.
Similarly, delayed_job runs jobs enqueued in the database. It's widely used in the Rails community, so it's not as if it's unholy thing writing to the database like dbomatic started out as. It's true that it reads things written to the database by Conductor, so in that sense it's 'communicating' through the database, but I see nothing wrong with this approach, and presumably all of the other people using and contributing to delayed_job would agree.
Sorry, but a few IMHOs here. Dbomatic cannot even handle a database restart. When I first saw it, I thought "Seriously? Is it possible? No way! WAT!"
And the argument that delayed_job is widely used in the Rails community is not valid for me.
It just gives me hope that it might handle database disconnects better than dbomatic does. Does it?
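For readers following the dbomatic/delayed_job comparison, this is roughly the delayed_job model being discussed; the enqueue/perform API is standard delayed_job, but the job class and method names are invented for illustration:

    # Any method call can be pushed into the background via the .delay proxy;
    # the call is serialized into the delayed_jobs table and picked up by a worker.
    instance.delay.refresh_state   # hypothetical Conductor method

    # Or enqueue an explicit job object -- anything responding to #perform works:
    class PollProviderJob < Struct.new(:provider_account_id)
      def perform
        account = ProviderAccount.find(provider_account_id)
        account.poll_deltacloud    # hypothetical method, for illustration only
      end
    end

    Delayed::Job.enqueue(PollProviderJob.new(42))

Failed jobs are rescheduled by the worker with an increasing delay, up to a configurable number of attempts, which is also relevant to the retry discussion elsewhere in this thread.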
IMHO it would make much more sense to have a "backend conductor" that would do the stuff that really matters including servicing dome sort of job-queue (now dbomatic) and
communicating with the other components <<
including polling where necessary (now delayed job).
Then the web part would be a thin layer to "display the results" -- the web ui part.
As opposed to current situation where we have conductor the web part that:
- serves the web users
- serves the API
This was a separate thread, but this is a common pattern in Rails. Controllers can service web and API requests from the same methods, reducing code duplication or inadvertent divergence between API and web functionality.
Sorry, I don't play this game. I was taught to think critically, and if someone does something, I don't need to do the same.
The present state of the controllers is a mess, and splitting responsibilities is a way out of it. Surely you need more than just a model and a controller if you have complicated logic.
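For anyone following along, the Rails pattern under debate is roughly the following; this is an illustrative sketch (the controller and model names are only plausible examples), not actual Conductor code:

    class DeploymentsController < ApplicationController
      def show
        @deployment = Deployment.find(params[:id])
        respond_to do |format|
          format.html                                  # same action renders the web UI
          format.xml  { render :xml  => @deployment }  # ...and serves the REST API
          format.json { render :json => @deployment }
        end
      end
    end

Whether one action serving both audiences reduces duplication or piles responsibilities into one place is exactly the disagreement above.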
- accepts the "RESTful" callbacks from ImageFactory
Isn't accepting callbacks just an extension of the API?
Sure it is. But again: The complexity. Then the error handling. Then the Single responsibility principle.
- also does some communication with DC and maybe other parts while
answering a client's request
I don't see how we could separate this out -- it's often integral to servicing the user's requests (web or API). Where it can happen in the background, we should certainly be throwing those tasks into delayed_job, but I think we're already doing that today.
Yes, more tasks are being moved to delayed_job; it helps, but it is not enough.
IMHO this is a mess from the architecture point of view.
I disagree. There's room for improvement, for sure, but I don't think breaking Conductor apart in ways that aren't commonly practiced in the Rails community is going to do anything but make the thing more of a confusing mess.
Yes we do not share the same view here at all.
From my point of view it is much more important to solve this mess and have the communication >> happen in the right place << than deciding whether the communication is HTTP based (REST, Message, RPC, whatever) or if we use MQ.
I think I agree with this.
The question of the MQ then comes second.
I don't know why you guys failed with the MQ in the past, but I see MQs today at a similar level as SQL databases. It's an industry-proven way of doing communication in situations such as ours.
I agree that it's a stable and successful way of doing things. I'm just not convinced it's the right choice for us.
We cannot directly link because:
a) we use a REST proxy for the REST provider APIs (Deltacloud); although it's written in the same language as Conductor and it's stateless, it's not a library, and it's to stay that way as I understood.
I think the library thing might be a point of minor controversy. I know some have expressed interest in using it this way, but it sounds like it's not something Deltacloud is interested in implementing.
Yes. I understood that the respected Deltacloud people do not want to have a library. And they have a successful project; their word has its weight.
But it does not change my opinion that it would be better to have a library to translate one REST call into another rather than having a REST service do the REST call for me.
In any case, though, they present a REST API which we use with great results. I believe we could mount it as a Rack app if we wanted, but I'm not sure that would really help us any.
Yes, that idea came to me too. Why don't we mount it? But I am not enough of a Rails expert to know whether one can mount a Rack app in a Rails app, and whether it would be beneficial, i.e. whether we could then call the API directly.
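For what it's worth, Rails 3 routing does allow mounting an arbitrary Rack application; whether Deltacloud exposes a constant suitable for this (Deltacloud::API is assumed below) is a separate question, but mechanically it would look roughly like this:

    # config/routes.rb
    Conductor::Application.routes.draw do
      # Deltacloud::API is assumed here to be the Sinatra/Rack application class;
      # requests under /deltacloud would then be served in-process.
      mount Deltacloud::API => '/deltacloud'
    end

That only removes the separate daemon, though; it would not by itself make the calls any more library-like.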
b) we use components written in other languages (the ImageFactory)
Sure. This can easily be solved using an HTTP API (as we do today) or AMQP for exchanging data. We previously tried AMQP and fairly recently switched to HTTP.
Well, I am not sure about the "easy". I am afraid the devil is in the details, and looking at the hooks and, e.g., at the credentials being passed, I think the problem is more complex than it might seem at first sight.
c) we might want to support a scenario where the individual components run on different machines.
Yes, and I worry my previous reply may have come across as dismissing this. This is definitely a possibility, and something we should support.
We have more than a pair of communicating parties.
We have more than one party communicating with other parties.
And indeed, AMQP makes this problem a little bit easier, though I'll note that we've already got something in place here, so it'd be reimplementing something that we've already solved for the sake of swapping in something slightly cleaner for this use case.
We have or want to have optional components.
Interestingly, this is one of my arguments _against_ using AMQP. HTTP is sort of a 'lowest common denominator' that's pretty easy to implement or interface with. For purpose-built applications that have clearly-defined ways of connecting, AMQP is probably a big win. But for products that can work together or be used independently, I'm not sure everyone would want to write an AMQP interface to their application.
I see this argument in various contexts, let me present a twisted one:
We have a Rails engine that allows a Rails app to use a Python-written ImageFactory. What is the common denominator here?
Rails, I guess. We hope other Rails projects will be able to use TIM. That's a small niche, isn't it?
Here you argue that AMQP might limit the usage of the components we create if we use it for internal communication.
How big is the niche of Rails apps wanting to use ImageFactory compared to the niche of tools that can and want to connect to an AMQP service?
Do we create the components to be easily used by other projects?
Or do we create components that best fit our internal needs?
Where does TIM stand, and where does the potential usage of AMQP stand?
There are reasons for using an MQ over ad-hoc communication between components, and I would think it's not necessary to write them down, but:
- as "Steve Loranz" pointed out: there's just one point to communicate with
in case of MQ
This is a bit easier, admittedly, but I haven't seen the current system as being too complicated. I think the most complicated is Conductor, which has to talk to Imagefactory and Deltacloud. (And I'm not sure that Deltacloud will ever use AMQP.)
You have to know this better than me.
I don't yet have a good idea of how hard it can be to persuade another project to change things in a way that would benefit a different project funded by the same company. But I have a feeling that it might be difficult ;-)
- reliability -- not easy to get this with ad-hoc solution
I'll grant that AMQP would be more reliable. My question, really, is: are we having such a reliability problem that it's worth rewriting the entire thing?
I don't think it would be necessary to rewrite the whole thing if we used AMQP.
- then of course you have the bindings for all the necessary language
I think both languages are equal here. Is there any language out there that has AMQP bindings but no support for HTTP? Either way, it's a moot point for us: Factory (Python) and Conductor (Ruby) have both used AMQP previously and now use REST calls, so both will clearly work either way. I imagine the same could be said if we ever implement C or Java or whatever apps -- either way will work.
The difference is the level of service provided. I hope we do not need to argue about the fact that HTTP is a much lower-level service and that much more has to be done around an HTTP request than around an AMQP request.
This brings up the error handling as an example again.
- that someone else is using and testing for you, you have known and documented ways of "doing it" for various types of message exchange scenarios
Well, the use and testing occur at the protocol level, not in our implementation. I could make the same claim of HTTP.
The same argument as above applies here.
AMQP likely does have best practices around this that are more advanced than with webhooks, though there are plenty of projects (GitHub, WordPress, etc.) using them successfully.
But I don't think we're having a problem right now with figuring out how to implement webhooks -- they're already there -- so I'm not sure this one matters to us.
They are there, but how much work has to be done around them from now on before we can say, "It is done"?
Wasn't it just yesterday that you patched the ugly errors that were in the Conductor log on IMF callbacks?
- and then you have the MQ implemented, debugged, tested and working
We already have this more or less for our REST-based system, do we not? Switching would _require_ implementation, debugging, and testing. It's surely doable, but I don't think this is a reason to switch.
It's not a reason to switch. It's just one more argument for using AMQP. You said there were no reasons, or at least that's how I understood it.
message format, error handling etc.
see the problems we have when trying to handle the various error conditions -- how many times do you get a reasonable error message in conductor when IMF or DC fails? this is also easier to do with a MQ
We do a bad job with error-handling, it's true. It's especially bad between components. But the problem is that we just don't do a good job of spelling out what the possible errors are or how we should present them. This isn't going to change with a message bus -- it's going to change when we actually fix our handling of errors.
I don't agree here. I find error handling generally easier with a higher-level tool. And that certainly applies when comparing HTTP and AMQP.
- we can go on -- google knows
But this is exactly my point here -- I don't think the overall advantages of AMQP matter here. What matters is how it will make _our project_ better.
I think that that was shown.
To get the job done, it's good to write less code. It might not be the scientific approach, but it works (IMHO).
I agree with that, but I don't think switching our communication protocols will help this.
We have different opinions here too.
Somewhere in the thread someone said that it seems we decided to use an MQ because it is "cool" and are now looking for reasons to justify it. I don't see it this way.
What I see much more is the decision NOT to use an MQ, and the good reasons to use one being ignored.
There's admittedly some lingering animosity towards AMQP here from the last time we used it -- we put a bunch of effort into making it work, had all sorts of problems with it, and then decided to rip it out and reimplement REST callbacks. Those that were involved are probably going to be pretty reluctant to now rip out the REST bits and implement AMQP.
What I'm interested in aren't the "good reasons" that AMQP is a superior messaging protocol, but the reasons it will make _our project_ better. The only one I've heard so far is that message delivery will be more robust. That would be an improvement but I'm not sure it's worth the overhead of implementing this. Maybe I'm wrong, though.
Then I see (well, no longer since the new year) the use of a "cool" NoSQL thing, for no reason, in a project that already had enough complexity.
I'm not sure what this refers to, to be honest. Image Warehouse used Mongo, if that's what you're referring to. That has been painful and we're ripping the whole thing out.
Sure, Mongo.
Then I see a big effort to use a "cool and in" RESTful API in a situation that is really about events.
Deltacloud and Conductor have been using REST (or at least something like it) for the past several years. Before I even joined the company a couple of years ago I was reading the Deltacloud documentation about its HATEOAS API.
The idea of a REST client needing no prior knowledge about how to interact with any particular application or server beyond a generic understanding of hypermedia may work for simple things.
Just an IMHO again. But look how painful it is for Conductor to get around places where DC does not provide enough abstraction (e.g. error states). No prior knowledge except a generic understanding of hypermedia? It's a dream that falls apart when a more complex problem has to be solved. It's not practical.
But after only a short time on the team I am already tired of this topic, and as I said, the architecture of Conductor seems to be a much bigger problem to me than just the messaging between the components.
So that's all of my IMHO. Shoot me if you please.
Heh, in fairness, +1 to this -- I, too, am tired of this discussion.
-- Matt
Happy hacking!
On Thu, Jan 24, 2013 at 6:18 PM, Martin Povolny mpovolny@redhat.com wrote:
Sorry, that is not an argument. It also does not mean that the decision was right.
I can provide a bit of historical perspective here. I will first point out, though, that I totally agree with Hugh; you need to define what user problems you are trying to solve, and then pick a messaging mechanism. Doing it the other way around is what got us into trouble to start with.
When the project was first spinning up, there was a lot of pressure to use AMQP, and more specifically, QMF. However, it turned out that there were a few problems, some with QMF, and some with AMQP. The main problem with QMF was that it was not well maintained at the time, and had numerous bugs and inefficiencies[1]. It wasn't clear who was going to step up and improve it. The main problem with AMQP had to do with the general complexity of setting it up. Aeolus Conductor is very complicated to set up; the fact that aeolus-configure exists is testament to that. In addition to the complexity of the Conductor, we were adding in a whole messaging layer that a sysadmin would have to understand. Further, since the whole point of AMQP is to be multi-machine, it is not trivial to write scripting that will set it up[2].
So basically, the QMF support in imagefactory was adding a whole lot of complexity, for unclear gain. Since it was the only component using QMF at the time, it was simple enough to switch it over to a REST interface and dump the complexity.
That all being said, I will re-iterate my starting point. The most important thing to do is to clearly define what it is the users want to do. Not what they might want to do in some nebulous future, but what features they are asking for right now. If it turns out that AMQP is the best solution for the task, then that is fine; but you can't start with that and work backwards.
Regards, Chris
[1] I will hasten to add that I have no idea if this is still the case. It very well may have been fixed by now.
[2] This is of course possible, using something like puppet. However, you have now added 3 new things that the sysadmin would need to understand.
On 01/23/2013 11:51 PM, Steve Loranz wrote:
On Jan 23, 2013, at 4:21 PM, Matt Wagner matt.wagner@redhat.com wrote:
On Wed, Jan 23, 2013 at 01:42:31PM -0500, Mo Morsi wrote:
On 01/23/2013 01:16 PM, Bryan Kearney wrote:
Well, just thinking out loud.. can any component go into the cloud which may not be in the same VPN space?
-- bk
I'd imagine this would come down to the security policy of the organization deploying to the cloud, namely how lax the firewall can be to permit connections from the cloud as well as the ip address assignment on the cloud instances launched.
For a while now, I've been watching this discussion thinking, "It feels like we decided a while back that AMQP would be cool, and are now trying to work backwards to come up with reasons to justify it." Perhaps I'm just not forward-looking enough, or perhaps I'm overlooking an important detail, but that's sort of how it feels to me.
I think networking is an edge case, and a minor detail. We should absolutely try to support running components on different boxes (though I don't believe we have ever properly tested or supported it).
Having Aeolus components on different boxes is not an edge case for me. I think that having Imagefactory on a separate box will quite often be a production setup requirement: if a production box is not bare metal and doesn't support nested virtualization, then a separate box is the only option. Also, Imagefactory's HW requirements (more storage, less CPU) are different from those of the other Aeolus services (more CPU, less storage). Personally I use remote Imagefactory or Deltacloud services quite often.
But if you break Aeolus up and run it on disparate network segments, you're going to have to somehow handle patching things together. I don't think we should choose how we handle inter-component messaging based on what happens if you break components up on different networks and refuse to set up a VPN or appropriate port-forwarding rules.
Agree. I wouldn't care about cross VPN support.
I don't mean to single out networking in general, though; it's just the latest in the discussion. I just worry that the discussion is largely theoretical and academic, focused on how different means of inter-component communications differ. That might be a good conversation to have if we didn't already have all our components using one. What I'm missing from this discussion is an exploration of what issues we're actually experiencing today with our HTTP callbacks system, and whether the overhead of switching to AMQP is a worthwhile trade-off. Is changing Factory and Conductor to use AMQP worthwhile to prevent the issue where if you shut down one of the two components in the middle of an exchange of messages, some messages might be lost? Could that better be solved by implementing some queuing or polling? Or, is it fair to say that if you send a job to Factory from Conductor and then shut down Conductor, it's just expected that the updated status might be missed?
There are various non-theoretical failures which can cause a callback not to be delivered; off-hand examples:
- network/firewall error
- VPN is down
- callback receiver is down
- rails proxy (in our case Apache) is down
My impression is that the overall opinion is that such failures occur so sporadically that there is no reason to take care of them. Based on my experience I believe this is wrong. I hit all of the above errors myself when testing Imagefactory (except the last Apache proxy error, but only because I was accessing Rails directly).
Another opinion mentioned here and on IRC was that failures could be covered by additional polling/status checking. This is a little suboptimal:
1) it's not sufficient to check object states after a service comes back from a failure (a callback failure can occur even if both ends are running). This polling would have to be done repeatedly; as a receiver you don't know if a callback delivery failed or if it just wasn't sent yet
2) in that case you could just use polling and get rid of callbacks altogether
3) you have two ways through which the object state is changed (through callbacks and through polling)
4) you have to take care of potential race conditions between polling and callbacks
5) polling is not efficient
6) it means a bunch of additional code/logic on the receiver side
And I want to emphasize that this is not a problem only of Imagefactory<->Conductor communication but of all the components I mentioned in the first mail.
Can we agree on the assumption that the current callback solution is not sufficient and that more robustness is required?
IMO we have 2 options then:
1) make the callback system more robust in *each* component (roughly sketched below). As I said before: if the general opinion is that this is the preferred solution, I'm fine with it. Though to me this sounds like reinventing the wheel to some extent.
2) use a message bus instead of callbacks - this brings all the required features out of the box; I would also expect it to mean less coding, delegating the problem to a third-party system designed exactly for this purpose, but the feedback so far has been quite negative about this option.
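To make option 1 concrete, here is a rough sketch of what each sending component would have to grow, using plain Ruby net/http; the helper name and payload format are invented, and ordering plus persistence across restarts would still need separate handling:

    require 'net/http'
    require 'uri'

    # Hypothetical helper: PUT a status callback, retrying with a crude backoff.
    def deliver_callback(url, body, max_attempts = 5)
      uri = URI.parse(url)
      attempts = 0
      begin
        attempts += 1
        http = Net::HTTP.new(uri.host, uri.port)
        request = Net::HTTP::Put.new(uri.request_uri, 'Content-Type' => 'application/xml')
        request.body = body
        response = http.request(request)
        raise "HTTP #{response.code}" unless response.is_a?(Net::HTTPSuccess)
      rescue StandardError
        if attempts < max_attempts
          sleep(2 ** attempts)   # exponential backoff between retries
          retry
        else
          raise                  # give up; a real version would log and persist for replay
        end
      end
    end

Every sending component (Imagefactory, Heat, a future DC tracker) would need an equivalent of this in its own language, which is exactly the "reinventing the wheel" concern above.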
It's not my intention to vehemently oppose AMQP, and I certainly don't mean to suggest that it shouldn't be discussed. I just don't find the current conversation terribly productive at making the case for why we should switch.
-- Matt
imagefactory started off using QMF. It's what we were told was decided on when Aeolus was designed. But conductor was having a difficult time using it because it meant having a separate thread or process to bridge conductor to the broker. So, in the summer of 2011, there was a number of discussions where developers on both the imagefactory and conductor teams came out in favor of imagefactory offering a REST interface. We did, and near the end of January of 2012, we officially removed the QMF interface from the source tree when it started to seem like QPID/QMF was failing to gain traction in the wider community.
I'm not saying the conversation shouldn't happen either. What I am saying is that it should pick up where it was left 18 months ago, with the question of whether the challenges of Conductor actually connecting to a broker are easier to deal with now than they were in 2011, when they were great enough to decide to switch course.
Could someone please send me a link to the discussion (if the discussion was on the mailing list) - I can't find it, so I can't pick up the conversation from that point.
-steve
Jan
On Jan 23, 2013, at 12:14 PM, Mo Morsi mmorsi@redhat.com wrote:
There had been discussions at various points to support separating the imagefactory frontend / backends so as to facilitate things like this but that never got implemented
Yes it did. https://github.com/aeolusproject/imagefactory/tree/master/imgfac/secondary
It's currently experimental and could use some real testing, but it's there.
-steve
On 01/23/2013 01:30 PM, Steve Loranz wrote:
On Jan 23, 2013, at 12:14 PM, Mo Morsi mmorsi@redhat.com wrote:
There had been discussions at various points to support separating the imagefactory frontend / backends so as to facilitate things like this but that never got implemented
Yes it did. https://github.com/aeolusproject/imagefactory/tree/master/imgfac/secondary
It's currently experimental and could use some real testing, but it's there.
-steve
Ah interesting, thanks for the update.
-Mo
On 01/08, Hugh Brock wrote:
Maybe a crazy idea, but Torquebox already includes messaging, background jobs, services and all other things mentioned in the original email.
I have a PoC of running Conductor inside Torquebox (using JRuby). I understand it might be a big step, but it is definitely worth at least investigating as an option :-)
Just my .20cents.
-- Michal
I have hated on message bus solutions publicly in the past, so it seems appropriate for me to weigh in now :).
What you're suggesting appears to make good sense. However, based on past (bitter, painful, very expensive) experience, I would like us to approach the message bus question very carefully, according to a few principles:
Any message bus, regardless of how robust or stable, introduces complexity and dependencies to our code. Any proposal to add message bus use to one of our components should include some consideration of the costs and benefits. In other words, I want to see a solid justification of why REST callbacks are inadequate for a particular API connection before we dive into AMQP.
The message bus we choose should be one that other upstream cloud projects commonly use. (What is OpenStack using, for example? Is oVirt using anything?). It should also be available across our target developer and end user platforms.
The message bus we choose must support all the encryption and authentication mechanisms that the app supports. This means LDAP, oAuth, and (eventually) kerberos.
Unless it is incredibly expensive to build it this way, I'd like the message bus to be optional wherever possible -- meaning, fall back to a simple listener/callback over REST architecture whenever possible.
Darts welcome :)...
--Hugh
--
== Hugh Brock, hbrock@redhat.com ==
== Engineering Manager, Cloud BU ==
== Aeolus Project: Manage virtual infrastructure across clouds. ==
== http://aeolusproject.org ==
"I know that you believe you understand what you think I said, but I’m not sure you realize that what you heard is not what I meant." --Robert McCloskey
On 01/22/2013 08:01 AM, Michal Fojtik wrote:
On 01/08, Hugh Brock wrote:
Maybe a crazy idea, but Torquebox already includes messaging, background jobs, services and all other things mentioned in the original email.
I have a PoC of running Conductor inside Torquebox (using JRuby). I understand it might be a big step, but it is definitely worth at least investigating as an option :-)
Just my .20cents.
-- Michal
We've talked about using torquebox on-and-off over the last couple years. The growing consensus there seems to be that we do want to be able to run on it, but not at the expense of getting locked in to using _only_ torquebox. In other words, the main issue with going to Torquebox would be to make sure that we _don't_ start requiring the use of services that only exist in torquebox -- so to use the above services we'd have to make each one of them either pluggable (so we could also use pure ruby alternatives) or optional.
Scott
On 01/22, Scott Seago wrote:
We've talked about using torquebox on-and-off over the last couple years. The growing consensus there seems to be that we do want to be able to run on it, but not at the expense of getting locked in to using _only_ torquebox. In other words, the main issue with going to Torquebox would be to make sure that we _don't_ start requiring the use of services that only exist in torquebox -- so to use the above services we'd have to make each one of them either pluggable (so we could also use pure ruby alternatives) or optional.
Understood. I played with TB a bit, and I think the messaging and other classes/models could easily be substituted with something else.
I mean, this whole messaging thing could be very flexible (like using something like an 'Aeolus::Messaging' interface with :publish and :subscribe methods). Then the messaging backend could be Torquebox or whatever else can handle publishing and subscribing ;-) (AMQP, etc.)
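A bare-bones version of that idea might look like the following; purely illustrative -- none of these classes exist today, and the REST backend shown is just the simplest thing that satisfies the interface:

    require 'net/http'
    require 'uri'

    module Aeolus
      module Messaging
        class << self
          attr_accessor :backend

          def publish(topic, payload)
            backend.publish(topic, payload)
          end

          def subscribe(topic, &handler)
            backend.subscribe(topic, &handler)
          end
        end
      end
    end

    # Simplest backend: push the payload to a fixed HTTP endpoint per topic.
    # A TorqueBox or AMQP backend would just implement the same two methods.
    class RestCallbackBackend
      def initialize(routes)
        @routes = routes   # e.g. { 'image.status' => 'http://conductor.example.com/callbacks/images' }
      end

      def publish(topic, payload)
        uri = URI.parse(@routes.fetch(topic))
        Net::HTTP.start(uri.host, uri.port) do |http|
          http.send_request('PUT', uri.request_uri, payload,
                            'Content-Type' => 'application/xml')
        end
      end

      def subscribe(topic, &handler)
        # no-op: with plain callbacks the "subscription" is the receiving HTTP endpoint itself
      end
    end

    Aeolus::Messaging.backend =
      RestCallbackBackend.new('image.status' => 'http://conductor.example.com/callbacks/images')
    Aeolus::Messaging.publish('image.status', '<image id="42" status="COMPLETE"/>')

The point is only that the rest of the code talks to the seam, and the backend becomes a deploy-time choice.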
-- Michal
Scott
On Tue, 2013-01-22 at 09:57 -0500, Scott Seago wrote:
We've talked about using torquebox on-and-off over the last couple years. The growing consensus there seems to be that we do want to be able to run on it, but not at the expense of getting locked in to using _only_ torquebox.
One of the things that makes Ruby so thorny is the endless number of runtime/deployment options: starting with the interpreter, through to the app server, there are lots and lots of variations, most of which suck in one way or another.
We are using MRI basically because it is the default, and thought to be widely adopted. MRI 1.8 especially had few redeeming features beyond that - we'll see how 1.9 will blow up differently.
It might be a good exercise to agree on a set of requirements for our Ruby env, and then try to figure out which of the options best fits our needs. It would be great if we could close the door on 'you can develop/deploy in these 10 ways' and just settle on one environment.
The fact that lots of 'stuff' is available in Torquebox right off the bat makes it very attractive - though there may be other reasons why it's a no go.
So here's my list of runtime requirements:
* Ease of setting up a dev environment on important platforms (Fedora, RHEL, OSX, maybe Windows) and of using it
* Ease of production deployment
* Stability for production (anything that's not stable in dev isn't worth considering)
* Availability of gems/libraries/addon functionality
David
On 01/22/2013 12:42 PM, David Lutterkort wrote:
It might be a good exercise to agree on a set of requirements for our Ruby env, and then try to figure out which of the options best fits our needs. It would be great if we could close the door on 'you can develop/deploy in these 10 ways' and just settle on one environment.
The fact that lots of 'stuff' is available in Torquebox right off the bat makes it very attractive - though there may be other reasons why it's a no go.
So here's my list of runtime requirements:
* Ease of setting up a dev environment on important platforms (Fedora, RHEL, OSX, maybe Windows) and of using it
* Ease of production deployment
* Stability for production (anything that's not stable in dev isn't worth considering)
* Availability of gems/libraries/addon functionality
David
Performance can vary greatly between ruby runtimes.
Also relating to production deployment: how easy it is to 'sell' the platform, e.g. JRuby might be an easier sell in shops that have already deployed Java, though MRI is seemingly the most popular Ruby interpreter overall (though this is hard to gauge/quantify).
-Mo
On 01/08/2013 01:41 PM, Jan Provaznik wrote:
Or is there some other solution how to solve notifications as painless as possible while keeping required robustness? What is your preferred solution of this problem?
A message bus can be used currently. An ESB is a prime example of how you can take what we have now and deploy Aeolus with a messaging architecture. In my opinion using a message bus should be a deployment decision. Using ReST and callbacks in the way we do, keeps things flexible, simple, allows us to easily 'componentize' Aeolus as well as allowing users to configure a messaging architecture in a production environment using something like JBossESB or SwitchYard.
We could argue that we should provide a configure parameter that sets up an ESB and drops in queues/topics, but I think this is beyond the scope of the project (Maybe if some companies want to offer support for setting up production environments using messaging, they could offer this as part of a service ;) )
Another reason for sticking with ReST I think is the fact that we are so far down the line now, it would be quite an undertaking to add messaging in all the projects.
Alternatively, how about we write a library that handles ReST callbacks, this should keep it consistent, and we can reuse it across projects. Maybe an extension to ActiveResource?
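To make that suggestion a bit more concrete -- this is only a sketch, nothing like it exists today, and the gem/class names are invented -- the shared piece could be as small as an ActiveResource-backed class that every Ruby component reuses to push status updates:

    # In a shared gem, e.g. aeolus-callbacks (hypothetical name):
    require 'active_resource'

    module AeolusCallbacks
      class Status < ActiveResource::Base
        # Each component injects the receiver's base URL and credentials.
        self.site     = ENV['AEOLUS_CALLBACK_SITE']   # e.g. https://conductor.example.com/api
        self.user     = ENV['AEOLUS_CALLBACK_USER']
        self.password = ENV['AEOLUS_CALLBACK_PASSWORD']
      end
    end

    # A sender would then do something like:
    status = AeolusCallbacks::Status.new(:resource_id => 42, :state => 'COMPLETE')
    status.save   # issues the HTTP request and raises on connection or auth errors

Retry/queuing policy could live in this one place instead of being re-implemented per component, though non-Ruby components (Imagefactory) would still need their own equivalent.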
Cheers
Martyn
Jan
On Wed, Jan 09, 2013 at 10:53:58AM +0000, Martyn Taylor wrote:
A message bus can be used currently. An ESB is a prime example of how you can take what we have now and deploy Aeolus with a messaging architecture. In my opinion using a message bus should be a deployment decision. Using ReST and callbacks in the way we do, keeps things flexible, simple, allows us to easily 'componentize' Aeolus as well as allowing users to configure a messaging architecture in a production environment using something like JBossESB or SwitchYard.
We could argue that we should provide a configure parameter that sets up an ESB and drops in queues/topics, but I think this is beyond the scope of the project (Maybe if some companies want to offer support for setting up production environments using messaging, they could offer this as part of a service ;) )
Another reason for sticking with ReST I think is the fact that we are so far down the line now, it would be quite an undertaking to add messaging in all the projects.
Alternatively, how about we write a library that handles ReST callbacks, this should keep it consistent, and we can reuse it across projects. Maybe an extension to ActiveResource?
Cheers
Martyn
This is a pretty interesting idea and is something Jay G was looking at quite a while back IIRC. We'd need to constrain it such that we don't wind up writing yet-another-messaging-system, but I find the basic idea pretty appealing.
--H
On Wed, Jan 09, 2013 at 10:53:58AM +0000, Martyn Taylor wrote: <enormous snipping>
A message bus can be used currently. An ESB is a prime example of how you can take what we have now and deploy Aeolus with a messaging architecture.
I'm not sure I follow what you mean here. Do you mean that it currently works as a drop-in replacement?
In my opinion using a message bus should be a deployment decision.
+1
Using ReST and callbacks in the way we do, keeps things flexible, simple, allows us to easily 'componentize' Aeolus as well as allowing users to configure a messaging architecture in a production environment using something like JBossESB or SwitchYard.
Yes! This is what I was going to reply with, but you stated it more eloquently.
If I write $small_component that I want to interface with Aeolus, but might have utility outside of Aeolus too (and this is exactly what we're trying to do with our components!), I might have no use for an AMQP client. Even if it's something easy to add in, I'd be annoyed adding the dependency just for Aeolus.
In fairness, there's a possibility that $small_component wouldn't have reason to speak HTTP outside of Aeolus, either. But that strikes me as a bit less likely, because it's what all (?) the cloud providers are using for their APIs, and what all the Aeolus components are using to interact already.
Another reason for sticking with ReST, I think, is that we are so far down the line now that it would be quite an undertaking to add messaging to all the projects.
+1, it would be a lot of work and I'm not sure how much it would gain us.
Alternatively, how about we write a library that handles ReST callbacks? That would keep things consistent, and we could reuse it across projects. Maybe an extension to ActiveResource?
+10k. (If one doesn't already exist.)
The other thing that would be nice is if it supported a queuing system, to handle retries and whatnot. Some components might not be using one, but where we do have a queuing system in place it would be nice if there were a way to use it (both when you need to retry, and when you just don't want to be doing this stuff in the foreground from a web app...).
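Roughly what I have in mind, as a sketch only (CallbackQueue and the Sender it wraps are just the hypothetical names from the sketch earlier in the thread, not existing Aeolus code): the web app hands the event to an in-process queue and returns immediately, and a worker thread does the delivery and retries. A real deployment would more likely lean on an existing job queue (Delayed::Job, Resque, ...) than a bare thread.

require 'thread'

class CallbackQueue
  def initialize(sender)
    @sender = sender
    @queue  = Queue.new
    @worker = Thread.new { loop { deliver(@queue.pop) } }
  end

  # Called from the web app; returns immediately instead of blocking the
  # request on HTTP delivery and retries.
  def enqueue(event)
    @queue << event
  end

  private

  # Runs on the single worker thread, so events go out in the order they were
  # enqueued; Sender#notify already retries internally, so anything that still
  # fails here is logged and dropped.
  def deliver(event)
    @sender.notify(event)
  rescue StandardError => e
    warn "dropping callback after repeated failures: #{e.message}"
  end
end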
-- Matt
On 01/09/2013 09:52 PM, Matt Wagner wrote:
On Wed, Jan 09, 2013 at 10:53:58AM +0000, Martyn Taylor wrote:
<enormous snipping>
A message bus can be used currently. An ESB is a prime example of how you can take what we have now and deploy Aeolus with a messaging architecture.
I'm not sure I follow what you mean here. Do you mean that it currently works as a drop-in replacement?
Yes, you can create interceptors that catch the messages and stick them on a queue.
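For illustration only - a rough sketch (nothing like this exists in Aeolus today, and the '/callbacks' path is made up) of a Rack middleware 'interceptor' that lets the normal ReST callback through unchanged but also drops a copy of the notification onto a local queue for a bus consumer to pick up.

# config.ru-style usage (hypothetical):  use NotificationTap, bus_queue
class NotificationTap
  def initialize(app, queue)
    @app   = app
    @queue = queue
  end

  def call(env)
    if env['REQUEST_METHOD'] == 'PUT' && env['PATH_INFO'].start_with?('/callbacks')
      body = env['rack.input'].read
      env['rack.input'].rewind               # let the app still read the body
      @queue << { path: env['PATH_INFO'], payload: body }
    end
    @app.call(env)
  end
end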
<snip rest of quoted mail>
On 01/09/2013 10:52 PM, Matt Wagner wrote:
<snip>
Hi, thanks, everyone, for the feedback (I'm going to reply in this one place). There are valid points/arguments in all your replies.
Martyn's proposal to leave message bus usage as a deployment decision is interesting. It sounds good to me, but it also means that simple, non-robust callback support would be sufficient for the default deployment (without an ESB). I don't think that is true - I can't imagine there are many situations where the simple callback system is sufficient.
Do you know of any Aeolus use case where a simple (non-robust) callback system would be enough? I don't, but I'd love to hear some examples.
I'm fine with going the REST callbacks way if all three requirements I mentioned in the original mail (retry, ordering, authentication) are supported by each Aeolus component that provides callback support.
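To be clear about what I mean by ordering, a sketch of the receiving side (hypothetical, and assuming notifications carry a per-resource sequence number as in the sender sketch earlier in the thread) - anything out of order or duplicated is simply dropped:

# Not thread-safe as written; a sketch only.
class NotificationTracker
  def initialize
    @last_seen = Hash.new(0)    # resource id => highest sequence applied
  end

  # Returns true if the notification is newer than anything seen so far;
  # duplicates and out-of-order arrivals are ignored.
  def accept?(resource_id, sequence)
    return false if sequence <= @last_seen[resource_id]
    @last_seen[resource_id] = sequence
    true
  end
end

# e.g. in a Rails callback controller (sketch):
#   head :conflict unless TRACKER.accept?(params[:id], params[:sequence].to_i)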
Honestly, I thought that adding message bus support to each component would be simpler than implementing a robust callback system (which, to some extent, sounds like reinventing the message bus wheel to me, though there are probably some existing solutions out there already). But based on your feedback, where bus and pain are mentioned together rather often, I admit it might not be so easy.
Jan
On Thu, Jan 10, 2013 at 01:58:10PM +0100, Jan Provaznik wrote:
<snip>
Gentlemen, before jumping into this discussion I'd like to ask you not to hurry too much with a decision.
I have recently read a book that is very relevant to our discussion:
http://martinfowler.com/books/sdp.html
It was very interesting reading for me, especially since I am not a big fan of the "do everything REST" idea that seems to be so popular these days.
I gently suggest that you take a look at this book; it describes situations and explains considerations that closely resemble our own concerns and problems.
I wish you all a productive week!
On Thu, 2013-01-10 at 13:58 +0100, Jan Provaznik wrote:
Martyn's proposal to leave message bus usage as a deployment decision is interesting. It sounds good to me, but it also means that simple, non-robust callback support would be sufficient for the default deployment (without an ESB). I don't think that is true - I can't imagine there are many situations where the simple callback system is sufficient.
I would be very careful about introducing deployment options here - the communication between components is a core piece of the architecture and not all that easy to debug and troubleshoot; by allowing users to swap these out, you'll introduce a lot of QA/support overhead, and it's not unlikely that one of these variants will end up buggy enough that people won't use it.
David
On Wed, 2013-01-09 at 10:53 +0000, Martyn Taylor wrote:
Alternatively, how about we write a library that handles ReST callbacks? That would keep things consistent, and we could reuse it across projects. Maybe an extension to ActiveResource?
This is a very appealing idea; it's worth looking around at what's already out there (e.g. pubsubhubbub) so that we don't end up with yet another messaging-over-HTTP scheme if AMQP/Qpid isn't an option (it is for OpenStack).
David
On 01/09/2013 11:53 AM, Martyn Taylor wrote:
Alternatively, how about we write a library that handles ReST callbacks? That would keep things consistent, and we could reuse it across projects. Maybe an extension to ActiveResource?
Just a nit: some components are written in Python (Heat, Imagefactory), so two different implementations (Ruby, Python) would be needed. An extension to ActiveResource would also restrict usage to Rails-based apps (unless you are OK with including ARes in, e.g., a Sinatra app), so the list of Aeolus components where such a library could be reused shrinks to Conductor only.
Jan