Hey everyone,
One of the proposed features for this iteration is Status Reporting. There are four main components to it:
* Handling changes that happen on the backend provider
* Storing & displaying user actions
* Generating and exporting reports
* API to notify and get notified by external services (Matahari)
Below, I'll go over each of these and comment on what we have and what needs to happen. At the end I wrote some features and tasks that I think should go into Redmine.
Please, do speak your mind and write your suggestions and feedback in this thread.
Out-of-band Changes
===================
Most of the functionality is already in place: when an instance crashes or is stopped outside of Conductor, Condor picks it up and updates the status accordingly.
Additionally, we calculate each instance's uptime and, as far as I could tell, it's as accurate as we can reasonably get without running an agent directly on the instance (hello, Matahari).
What's missing here is mostly the UI: an instance's uptime is not displayed at all. We do show uptime for Deployments, but it isn't calculated correctly, so that needs to be fixed.
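For illustration, the deployment fix could be as simple as deriving uptime from timestamps we already track. A rough sketch -- the field names time_first_running and time_last_stopped are invented for illustration, not the actual schema:

    # Sketch only: invented column names, not the real Conductor schema.
    class Deployment < ActiveRecord::Base
      has_many :instances

      # Seconds since the deployment first went running; 0 if it never ran.
      def uptime
        return 0 unless time_first_running
        (time_last_stopped || Time.now) - time_first_running
      end
    end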
It would also improve the user experience if we updated these values automatically -- without having to reload the page.
So when you launched a deployment, you'd get the deployment-details page that shows its status as 'pending'. Once the deployment is running, the status would change to 'running' automatically and the uptime would start to count.
Doing this consistently across the app would be great but isn't trivial. If we just hacked a JavaScript snippet for every place we needed this, it would soon become a mess. Libraries like Backbone.js[1] should help, but it takes some time and effort getting them in. Backbone in particular isn't a drop-in solution. Others may be. Suggestions?
User Actions
============
Again, some of it is in place already: we have a model called Event that is hooked into Instance's `after_save` callback and writes a timestamped entry every time an instance changes.
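Roughly, the wiring looks like this -- an illustrative sketch, not the actual Conductor code (the state column and the summary format are assumptions):

    class Instance < ActiveRecord::Base
      has_many :events

      after_save :log_state_change

      private

      # Write a timestamped Event row whenever the instance's state changes.
      def log_state_change
        if state_changed?
          events.create!(:summary    => "state changed to #{state}",
                         :event_time => Time.now)
        end
      end
    end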
What needs to be done here:
* Better UI - Right now we show an instance's history in the ugly & forgotten instance#show page. It would make sense to make it look prettier, and possibly display the events on the Deployment page for all its associated instances.
* Make the event log persistent
When an instance gets deleted now, all events get deleted with it. What we want to do instead is to hide the instance from the UI but keep it and the events around so that we can still generate correct reports.
Matt Wagner suggested a couple of gems[2][3] that should handle this pretty transparently. They're worth looking into as doing this by ourselves could get tricky.
* More events as needed
Generating And Exporting Status Reports
=======================================
Right now, this will mean creating a simple CSV file that can be downloaded.
Possibly something like:
GET /conductor/reports
plus a way of accessing that from the UI.
The first implementation of this will be for admins only, so no fine-grained permission checking is needed. The same goes for showing the history of a single object: if you can see the instance, you should be able to see its history as well.
However, we'll probably end up needing more granular ways of displaying the data: for instance, a user will want to see what happened with their resources. Similarly, an admin may want to check the history of a particular user or pool, or see just the last month's activity.
For this we'll need to add permission checks and some querying capability + UI. That's out of this iteration's scope, though.
Matahari And Other External Services
====================================
Again, this is outside of this iteration's scope.
As we'll want to get notifications from Matahari (and maybe other services), we'll need a way to get notified about events that should be logged but that Conductor itself cannot detect.
From talking to Matahari folks, it seems they need Conductor to let them know when an instance was started and stopped.
If I understand it correctly, the whole notion of Config Server is to have something that communicates with the in-instance agents (matahari). Thus, we may want to hijack that and send the messages through Config Server rather than directly.
This will save us the trouble of creating yet another communication interface and discovering the matahari agents. Rather, we'll piggyback on something that's already planned anyway.
As for other services notifying Conductor, this can be as simple as:
POST /conductor/instances/3141/events
So, like, fyi, that thingy over here has changed.
Possibly with a timestamp in the request body so that we can log when the event happened vs. when it was received.
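A hedged sketch of what the receiving side could look like -- the controller, route, and parameter names are all assumptions, not a settled API:

    class EventsController < ApplicationController
      # POST /conductor/instances/:instance_id/events
      def create
        instance = Instance.find(params[:instance_id])
        instance.events.create!(
          :summary    => params[:summary],
          # prefer the sender-supplied timestamp, fall back to time of receipt
          :event_time => params[:event_time] || Time.now)
        head :created
      end
    end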
However, this must be tied into whatever way we implement cross-module authentication and authorization, otherwise everyone will be able to create phoney events.
Features & Tasks
================
As a user, I want to see the uptime of my instances and deployments
* Display uptime of instances in the UI
* Fix the uptime calculation for deployments
As a user, I want Conductor to always show the current status without having to press <F5> every time
* Implement Backbone.js or some other system that will make this easy for us
* Dynamically update the instance/deployment status across Conductor
* Dynamically update the scoreboards in the Monitor section
* Dynamically show/hide errors as they happen on the backend
* Periodically update uptime values so that they're always semi up-to-date
As an admin, I want to see a report of everything that happened to instances
* Generate a report of all the Events in a machine-processable format
* Add a download link for the report into the UI
* Make this available to administrators only
As an admin, I want to get the full reports even when the instances have been deleted
* Make the event logs persistent even on instance deletion
* Preserve the data associated with the deleted object for events/reports
As an admin, I want to be able to query the status report to get to the info I need
- note: this is probably out of scope for Iteration 4.
* Filter by a specific timeframe
* Filter by a specific user
* Filter by a specific pool
As a user, I want to be notified on changes that happen inside of an instance
- note: this is out of scope for Iteration 4.
* Implement a way of communicating with Matahari
* Provide an API for getting event notifications from external services
As a user, I want to be able to see and query events of objects that I have access to
- note: this is out of scope for Iteration 4.
* Display the status reports filtered by user's permissions
[1]: http://documentcloud.github.com/backbone/
[2]: https://github.com/bdurand/acts_as_trashable
[3]: https://github.com/technoweenie/acts_as_paranoid
Hi Tomas,
This is an excellent write-up. As a result, most of my notes below are tangential musings. :)
On Wed, Jul 27, 2011 at 08:50:29PM +0200, Tomas Sedovic wrote:
Most of the functionality is already in place: when an instance crashes or is stopped outside of Conductor, Condor picks it up and updates the status accordingly.
It occurs to me that right now, much of this happens as dbomatic tails Condor's EventLog and then directly updates the database. We never really liked this, but there was no better way to do it.
If we're going to be providing an API on top of this, perhaps we can finally make dbomatic use an API instead of directly modifying the database. I'm not sure this is in scope for this iteration, though.
Doing this consistently across the app would be great but isn't trivial. If we just hacked a JavaScript snippet for every place we needed this, it would soon become a mess. Libraries like Backbone.js[1] should help, but it takes some time and effort getting them in. Backbone in particular isn't a drop-in solution. Others may be. Suggestions?
We had talked a while about starting to use Backbone. Did that not happen? (I'm completely impartial here, just curious.)
- Better UI
- Right now we show instance's history in the ugly & forgotten
instance#show page. It would make sense to make it look prettier, and possibly display the events on the Deployment page for all its associated instances.
It's a little bit of a tangent, but it'd be really swell if we could either brush up the Instances page and make it a first-class citizen again, or move the requisite functionality into something else in the new UI. Right now we find the instances page handy but there's no way of getting there.
- Make the event log persistent
When an instance gets deleted now, all events get deleted with it. What we want to do instead is to hide the instance from the UI but keep it and the events around so that we can still generate correct reports.
Matt Wagner suggested a couple of gems[2][3] that should handle this pretty transparently. They're worth looking into as doing this by ourselves could get tricky.
I've used acts_as_paranoid and acts_as_trashable on a few projects in the past. acts_as_paranoid adds a deleted_at and overrides AR's .find, and seems to work flawlessly. (You can do find_with_deleted or find_only_deleted when you actually want to bring back deleted records.)
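In code, the pattern looks roughly like this (from memory, so treat the details as approximate):

    class Instance < ActiveRecord::Base
      acts_as_paranoid                 # adds deleted_at, overrides default finders
    end

    instance = Instance.first
    instance.destroy                   # sets deleted_at instead of removing the row
    Instance.find(:all)                # soft-deleted records are excluded
    Instance.find_with_deleted(:all)   # includes them
    Instance.find_only_deleted(:all)   # only the soft-deleted ones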
acts_as_trashable actually deletes the row from the database, but dumps the object as YAML or something and stuffs it into a "trash" table for archival.
Generating And Exporting Status Reports
Right now, this will mean creating a simple CSV file that can be downloaded.
Possibly something like:
GET /conductor/reports
It would be nifty to implement this in a respond_to block:
    respond_to do |format|
      format.html { ... }
      format.csv  { ... }
    end
Then we can have /conductor/reports/foo give a nice HTML table, and /conductor/reports/foo.csv serve up the CSV file.
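Fleshed out a bit, it might look like the sketch below -- Event.to_csv and require_admin are hypothetical helpers we'd have to write:

    class ReportsController < ApplicationController
      before_filter :require_admin   # the first implementation is admins-only

      def show
        @events = Event.all

        respond_to do |format|
          format.html   # render a nice HTML table
          format.csv do
            # older Rails may need: Mime::Type.register "text/csv", :csv
            send_data Event.to_csv(@events),
                      :type => 'text/csv', :filename => 'conductor-report.csv'
          end
        end
      end
    end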
Features & Tasks
... snip ...
- Periodically update uptime values so that they're always semi up-to-date
We could actually just periodically sync a timer on the page, and use that on-page control to keep the timers running continually. I'm not sure if having the times increment every second would be more useful than distracting, though.
As an admin, I want to get the full reports even when the instances have been deleted
- Make the event logs persistent even on instance deletion
- Preserve the data associated with the deleted object for events/reports
How far through associations do we want to go? If I launch an instance, stop it, and then delete the deployment it was in when I'm done, should we save the instance record? The deployment? The deployable XML? (I suspect the answer is that we want to save everything, but thought I'd ask to make sure.)
Overall, this looks great!
-- Matt
On 07/27/2011 10:21 PM, Matt Wagner wrote:
Hi Tomas,
This is an excellent write-up. As a result, most of my notes below are tangential musings. :)
On Wed, Jul 27, 2011 at 08:50:29PM +0200, Tomas Sedovic wrote:
Most of the functionality is already in place: when an instance crashes or is stopped outside of Conductor, Condor picks it up and updates the status accordingly.
It occurs to me that right now, much of this happens as dbomatic tails Condor's EventLog and then directly updates the database. We never really liked this, but there was no better way to do it.
If we're going to be providing an API on top of this, perhaps we can finally make dbomatic use an API instead of directly modifying the database. I'm not sure this is in scope for this iteration, though.
This occurred to me as well, but I think it deserves its own thread.
My thinking here is this:
1) we keep Condor/dbomatic as is for now. It ain't ideal, but we got it to work. And we'll add the dead-simple API for other services to notify us about additional changes. That will be for notifications only -- it should trigger no side effects.
We plan to have an API for controlling Conductor, so if someone wants to make a persistent change, they can use that.
This will be very easy to implement with little risk of breakage.
2) Sooner or later, we will need a good flexible way of doing asynchronous operations.
Something where you can say "go to that URL that takes five minutes to load and lemme know when you're done" without blocking the app.
Right now, everything except the calls for operating/monitoring backend instances uses blocking code. Condor takes care of the Deltacloud calls but does nothing else.
Given how many different services this thing will consist of, spread possibly across networks with various latencies, we will need some way to do async operations.
I think we can use Condor for that, but we'll have to change the way we send jobs to it and the way we get the results back.
So that needs to happen and once we get to it we should probably make sure it's done in a robust way that we're comfortable with. And then we can use it for everything -- including what dbomatic does now.
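Just to make the async idea concrete, here's what it could look like with a 2011-era Rails job queue such as delayed_job -- purely an example, not something we've settled on (Condor may well end up playing this role instead):

    require 'net/http'
    require 'uri'

    # delayed_job runs #perform in a worker process, so the web request
    # never blocks on the slow URL.
    class FetchSlowUrlJob < Struct.new(:url, :instance_id)
      def perform
        response = Net::HTTP.get_response(URI.parse(url))  # may take minutes
        Instance.find(instance_id).events.create!(
          :summary => "async fetch finished with #{response.code}")
      end
    end

    Delayed::Job.enqueue FetchSlowUrlJob.new("http://example.com/slow-thing", 3141)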
Doing this consistently across the app would be great but isn't trivial. If we just hacked a JavaScript snippet for every place we needed this, it would soon become a mess. Libraries like Backbone.js[1] should help, but it takes some time and effort getting them in. Backbone in particular isn't a drop-in solution. Others may be. Suggestions?
We had talked a while about starting to use Backbone. Did that not happen? (I'm completely impartial here, just curious.)
It did not happen yet. I've spent some time with it, but it wasn't really a drop-in solution. We were close to the release then and the risk of breaking the UI in all sorts of subtle ways was too high.
We do want to use something like that eventually because it will increase the responsiveness of the app.
If it makes sense, we can add it in this iteration. However, I'm beginning to have doubts that Backbone is ideal for Conductor, so there may be another discussion about that once we start thinking about implementing the ajaxified UI.
- Better UI
- Right now we show instance's history in the ugly & forgotten
instance#show page. It would make sense to make it look prettier, and possibly display the events on the Deployment page for all its associated instances.
It's a little bit of a tangent, but it'd be really swell if we could either brush up the Instances page and make it a first-class citizen again, or move the requisite functionality into something else in the new UI. Right now we find the instances page handy but there's no way of getting there.
Yup. It makes sense to display the info there. But I think we should put it into the deployment details page rather than creating a whole other UI area with lots of tabs, buttons, etc.
If I understand it correctly, the current goal is to present deployments as the atomic unit that users work with inside Conductor.
Since they are not atomic by their nature, we do have to display the information about individual instances, but we want the users to think in terms of deployments, not instances.
Whether that goal still makes sense or not is a different question, but I think that's what the UX people should decide by talking to and observing our users.
- Make the event log persistent
When an instance gets deleted now, all events get deleted with it. What we want to do instead is to hide the instance from the UI but keep it and the events around so that we can still generate correct reports.
Matt Wagner suggested a couple of gems[2][3] that should handle this pretty transparently. They're worth looking into as doing this by ourselves could get tricky.
I've used acts_as_paranoid and acts_as_trashable on a few projects in the past. acts_as_paranoid adds a deleted_at and overrides AR's .find, and seems to work flawlessly. (You can do find_with_deleted or find_only_deleted when you actually want to bring back deleted records.)
acts_as_trashable actually deletes the row from the database, but dumps the object as YAML or something and stuffs it into a "trash" table for archival.
They both sound great. I would prefer starting with acts_as_paranoid as it sounds easier to use for generating reports.
I fear it may mean a performance hit (I've no experience with that, someone who has, please speak up), but since we don't really lose any data, we can always switch to acts_as_trashable later on and just run a migration.
We could probably go the other way around as well, but it sounds more difficult.
Please correct me if I'm wrong in that regard, or if there are compelling reasons not to use acts_as_paranoid from the beginning.
Generating And Exporting Status Reports
Right now, this will mean creating a simple CSV file that can be downloaded.
Possibly something like:
GET /conductor/reports
It would be nifty to implement this in a respond_to block:
    respond_to do |format|
      format.html { ... }
      format.csv  { ... }
    end
Then we can have /conductor/reports/foo give a nice HTML table, and /conductor/reports/foo.csv serve up the CSV file.
Excellent point. I agree completely.
Features & Tasks
... snip ...
- Periodically update uptime values so that they're always semi up-to-date
We could actually just periodically sync a timer on the page, and use that on-page control to keep the timers running continually. I'm not sure if having the times increment every second would be more useful than distracting, though.
Yeah, that would be distracting. But I was thinking that updating it, say, every minute or five would be great.
I'm imagining a scenario when you launch a massive deployment and while the page still shows everything in pending, you leave the page up and go to lunch.
When you get back, it shows everything as running/error and an uptime that is not horribly behind.
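Server-side, that only needs a small status endpoint the page can poll every minute or so. A sketch with invented names (the route and the state/uptime attributes are assumptions):

    class DeploymentsController < ApplicationController
      # GET /conductor/deployments/:id/status.json
      def status
        deployment = Deployment.find(params[:id])
        render :json => { :state  => deployment.state,
                          :uptime => deployment.uptime.to_i }
      end
    end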
As an admin, I want to get the full reports even when the instances have been deleted
- Make the event logs persistent even on instance deletion
- Preserve the data associated with the deleted object for events/reports
How far through associations do we want to go? If I launch an instance, stop it, and then delete the deployment it was in when I'm done, should we save the instance record? The deployment? The deployable XML? (I suspect the answer is that we want to save everything, but thought I'd ask to make sure.)
Yeah, I think we want to save everything unless there's a compelling reason not to.
When we do have all the data, we can always delete or archive it later, should we need to. But if we don't have the data and realize that we want them, it's too late.
Thanks for your great comments and suggestions!
Overall, this looks great!
-- Matt
Tomas,
good work - couple refinements inline.
On 07/27/2011 11:50 AM, Tomas Sedovic wrote:
Hey everyone,
One of the proposed features for this iteration is Status Reporting. There are four main components to it:
<snip>
Matahari And Other External Services
Again, this is outside of this iteration's scope.
I disagree - if we leave this outside of this iteration's scope we have 3 nearly full-time engineers blocked not doing any work. We are ready to start tackling these integration issues now.
As we'll want to get notification from Matahari (and maybe other services) we'll need a way to get notified about events that should be logged but that Conductor itself cannot detect.
From talking to Matahari folks, it seems they need Conductor to let them know when an instance was started and stopped.
If I understand it correctly, the whole notion of Config Server is to have something that communicates with the in-instance agents (matahari). Thus, we may want to hijack that and send the messages through Config Server rather than directly.
This will save us the trouble of creating yet another communication interface and discovering the matahari agents. Rather, we'll piggyback on something that's already planned anyway.
Managing Matahari discovery and state is in the domain of pacemaker cloud. If Aeolus integrates with Matahari directly, developers will be reimplementing pacemaker-cloud in some fashion and spend several months doing it.
As for other services notifying Conductor, this can be as simple as:
POST /conductor/instances/3141/events

So, like, fyi, that thingy over here has changed.
Possibly with a timestamp in the request body so that we can log when the event happened vs. when it was received.
However, this must be tied into whatever way we implement cross-module authentication and authorization, otherwise everyone will be able to create phoney events.
This is one advantage of using QMF for events - authentication is free in this model.
Is QMF a nonstarter for aeolus?
Features & Tasks
As a user, I want to see the uptime of my instances and deployments
- Display uptime of instances in the UI
- Fix the uptime calculation for deployments
As a user, I want to see Conductor always show the current status without having to press <F5> every time
- Implement Backbone.js or some other system that will make this easy for us
- Dynamically update the instance/deployment status across Conductor
- Dynamically update the scoreboards in the Monitor section
- Dynamically show/hide errors as they happen on the backend
- Periodically update uptime values so that they're always semi up-to-date
As an admin, I want to see a report of everything that happened to instances
- Generate a report of all the Events in a machine-processable format
- Add a download link for the report into the UI
- Make this available to administrators only
As an admin, I want to get the full reports even when the instances have been deleted
- Make the event logs persistent even on instance deletion
- Preserve the data associated with the deleted object for events/reports
As an admin, I want to be able to query the status report to get to the info I need
- note: this is probably out of scope for Iteration 4.
- Filter by a specific timeframe
- Filter by a specific user
- Filter by a specific pool
As a user, I want to be notified on changes that happen inside of an instance
- note: this is out of scope for Iteration 4.
- Implement a way of communicating with Matahari
- Provide an API for getting event notifications from external services
As a user, I want to be able to see and query events of objects that I have access to
- note: this is out of scope for Iteration 4.
- Display the status reports filtered by user's permissions
Please see our commitments for this iteration: https://fedorahosted.org/pipermail/aeolus-devel/2011-July/003602.html
On 28/07/11 12:02, Steven Dake wrote:
Matahari And Other External Services
Again, this is outside of this iteration's scope.
I disagree - if we leave this outside of this iteration's scope we have 3 nearly full-time engineers blocked not doing any work. We are ready to start tackling these integration issues now.
Hi Steven,
Are the integration issues, and the tasks which are going to address them, defined yet? Once the required changes in Conductor are agreed, we can put them on the backlog and schedule them.
Would it be fair to say that we need to start with some scoping discussions during the first sprint, so that we can work out:
* what the Conductor UI is going to need to provide in terms of:
  ** additional user input to pass to Policy Engine
  ** the monitoring metrics we can expect to have relayed to Conductor
* the APIs and transport between Conductor and Policy Engine
If those aren't quite the right considerations, and especially if they're miles off, we should put some time into planning the details. Agreed?
Angus
On 07/28/2011 01:02 PM, Steven Dake wrote:
Tomas,
good work - couple refinements inline.
On 07/27/2011 11:50 AM, Tomas Sedovic wrote:
Hey everyone,
One of the proposed features for this iteration is Status Reporting. There are four main components to it:
<snip>
Matahari And Other External Services
Again, this is outside of this iteration's scope.
I disagree - if we leave this outside of this iteration's scope we have 3 nearly full-time engineers blocked not doing any work. We are ready to start tackling these integration issues now.
That's great. I wasn't aware that we have people who can help us with this (I have to shamefully admit that I've only managed to read the HA thread earlier today).
As we'll want to get notification from Matahari (and maybe other services) we'll need a way to get notified about events that should be logged but that Conductor itself cannot detect.
From talking to Matahari folks, it seems they need Conductor to let them know when an instance was started and stopped.
If I understand it correctly, the whole notion of Config Server is to have something that communicates with the in-instance agents (matahari). Thus, we may want to hijack that and send the messages through Config Server rather than directly.
This will save us the trouble of creating yet another communication interface and discovering the matahari agents. Rather, we'll piggyback on something that's already planned anyway.
Managing Matahari discovery and state is in the domain of pacemaker cloud. If Aeolus integrates with Matahari directly, developers will be reimplementing pacemaker-cloud in some fashion and spend several months doing it.
Makes sense. Code that doesn't have to go into Conductor is good code. I wasn't that familiar with pacemaker cloud but I was hoping we could leverage something like that.
I was thinking about the Config Server, which will be communicating with the instances, but if Pacemaker Cloud is designed for this, great.
As for other services notifying Conductor, this can be as simple as:
POST /conductor/instances/3141/events

So, like, fyi, that thingy over here has changed.
Possibly with a timestamp in the request body so that we can log when the event happened vs. when it was received.
However, this must be tied into whatever way we implement cross-module authentication and authorization, otherwise everyone will be able to create phoney events.
This is one advantage of using QMF for events - authentication is free in this model.
Is QMF a nonstarter for aeolus?
Given the nature of Conductor (a Rails webapp), it feels more natural to communicate via HTTP APIs whenever possible.
That said, if it would make more sense to use QMF, we could manage.
Personally, I have zero experience with QMF, but I know some Conductor folks do.
Jay, could you share your thoughts on this?
Features & Tasks
As a user, I want to see the uptime of my instances and deployments
- Display uptime of instances in the UI
- Fix the uptime calculation for deployments
As a user, I want to see Conductor always show the current status without having to press <F5> every time
- Implement Backbone.js or some other system that will make this easy for us
- Dynamically update the instance/deployment status across Conductor
- Dynamically update the scoreboards in the Monitor section
- Dynamically show/hide errors as they happen on the backend
- Periodically update uptime values so that they're always semi up-to-date
As an admin, I want to see a report of everything that happened to instances
- Generate a report of all the Events in a machine-processable format
- Add a download link for the report into the UI
- Make this available to administrators only
As an admin, I want to get the full reports even when the instances have been deleted
- Make the event logs persistent even on instance deletion
- Preserve the data associated with the deleted object for events/reports
As an admin, I want to be able to query the status report to get to the info I need
- note: this is probably out of scope for Iteration 4.
- Filter by a specific timeframe
- Filter by a specific user
- Filter by a specific pool
As a user, I want to be notified on changes that happen inside of an instance
- note: this is out of scope for Iteration 4.
- Implement a way of communicating with Matahari
- Provide an API for getting event notifications from external services
As a user, I want to be able to see and query events of objects that I have access to
- note: this is out of scope for Iteration 4.
- Display the status reports filtered by user's permissions
Please see our commitments for this iteration: https://fedorahosted.org/pipermail/aeolus-devel/2011-July/003602.html
Thanks for the comments, Steven.
We should definitely discuss this further. I'll try to read up more on Pacemaker Cloud and Matahari tomorrow.
In the meantime, what do you see as the main things that need to happen so that the reporting flows between Conductor and the instances it manages?
That is:
* What (if any) data should conductor be sending to pacemaker
* What will pacemaker send back to conductor
* What protocol and format does pacemaker use. QMF?
* Suppose that pacemaker crashes or something. Conductor will then be cut off from some of the instances' information. Would it make sense for conductor to talk to Matahari directly, then? If so, could we reuse the pacemaker cloud code so as not to duplicate the effort?
* Anything else I'm missing?
Thomas
On 07/28/2011 09:47 AM, Tomas Sedovic wrote:
On 07/28/2011 01:02 PM, Steven Dake wrote:
Tomas,
good work - couple refinements inline.
On 07/27/2011 11:50 AM, Tomas Sedovic wrote:
Hey everyone,
One of the proposed features for this iteration is Status Reporting. There are four main components to it:
<snip>
Matahari And Other External Services
Again, this is outside of this iteration's scope.
I disagree - if we leave this outside of this iteration's scope we have 3 nearly full-time engineers blocked not doing any work. We are ready to start tackling these integration issues now.
That's great. I wasn't aware that we have people who can help us with this (I have to shamefully admit that I've only managed to read the HA thread earlier today).
As we'll want to get notification from Matahari (and maybe other services) we'll need a way to get notified about events that should be logged but that Conductor itself cannot detect.
From talking to Matahari folks, it seems they need Conductor to let them know when an instance was started and stopped.
If I understand it correctly, the whole notion of Config Server is to have something that communicates with the in-instance agents (matahari). Thus, we may want to hijack that and send the messages through Config Server rather than directly.
This will save us the trouble of creating yet another communication interface and discovering the matahari agents. Rather, we'll piggyback on something that's already planned anyway.
Managing Matahari discovery and state is in the domain of pacemaker cloud. If Aeolus integrates with Matahari directly, developers will be reimplementing pacemaker-cloud in some fashion and spend several months doing it.
Makes sense. Code that doesn't have to go into Conductor is good code. I wasn't that familiar with pacemaker cloud but I was hoping we could leverage something like that.
I was thinking about the Config Server, which will be communicating with the instances, but if Pacemaker Cloud is designed for this, great.
As for other services notifying Conductor, this can be as simple as:
POST /conductor/instances/3141/events

So, like, fyi, that thingy over here has changed.
Possibly with a timestamp in the request body so that we can log when the event happened vs. when it was received.
However, this must be tied into whatever way we implement cross-module authentication and authorization, otherwise everyone will be able to create phoney events.
This is one advantage of using QMF for events - authentication is free in this model.
Is QMF a nonstarter for aeolus?
Given the nature of Conductor (a Rails webapp), it feels more natural to communicate via HTTP APIs whenever possible.
That said, if it would make more sense to use QMF, we could manage.
Personally, I have zero experience with QMF, but I know some Conductor folks do.
Jay, could you share your thoughts on this?
Features & Tasks
As a user, I want to see the uptime of my instances and deployments
- Display uptime of instances in the UI
- Fix the uptime calculation for deployments
As a user, I want to see Conductor always show the current status without having to press <F5> every time
- Implement Backbone.js or some other system that will make this easy for us
- Dynamically update the instance/deployment status across Conductor
- Dynamically update the scoreboards in the Monitor section
- Dynamically show/hide errors as they happen on the backend
- Periodically update uptime values so that they're always semi up-to-date
As an admin, I want to see a report of everything that happened to instances
- Generate a report of all the Events in a machine-processable format
- Add a download link for the report into the UI
- Make this available to administrators only
As an admin, I want to get the full reports even when the instances have been deleted
- Make the event logs persistent even on instance deletion
- Preserve the data associated with the deleted object for events/reports
As an admin, I want to be able to query the status report to get to the info I need
- note: this is probably out of scope for Iteration 4.
- Filter by a specific timeframe
- Filter by a specific user
- Filter by a specific pool
As a user, I want to be notified on changes that happen inside of an instance
- note: this is out of scope for Iteration 4.
- Implement a way of communicating with Matahari
- Provide an API for getting event notifications from external services
As a user, I want to be able to see and query events of objects that I have access to
- note: this is out of scope for Iteration 4.
- Display the status reports filtered by user's permissions
Please see our commitments for this iteration: https://fedorahosted.org/pipermail/aeolus-devel/2011-July/003602.html
Thanks for the comments, Steven.
We should definitely discuss this further. I'll try to read up more on Pacemaker Cloud and Matahari tomorrow.
In the meantime, what do you see as the main things that need to happen so that the reporting flows between Conductor and the instances it manages?
That is:
- What (if any) data should conductor be sending to pacemaker
pacemaker-cloud needs the deployable and assembly information, preferably in XML format. Prior to launching a deployable, aeolus would tell pacemaker-cloud about it. Today this happens with a QMF call into our system. We are not locked into QMF at this interface - this is only what our prototype provides today.
- What will pacemaker send back to conductor
pacemaker-cloud generates QMF events when state changes occur in deployables or assemblies. We currently have a format for these events, but are open to formatting changes. We are not locked into QMF at this interface and open to protocol changes.
- What protocol and format does pacemaker use. QMF?
yes - matahari requires QMF and that isn't likely to change in the long term. In terms of our "external interfaces", they are currently implemented using QMF and XML but QMF could be changed to something more rails friendly if required.
- Suppose that pacemaker crashes or something. Conductor will then be
cut off from some of instances' information. Would it make sense for conductor to talk to Matahari directly, then? If so, could we reuse the pacemaker cloud code not to duplicate the effort?
suppose conductor fails....
All software fails, but precautions have been taken to protect this component:
+ The code has recovery to protect from unplanned stop failures (such as a crash).
+ The developers have applied high availability development principles to maximize MTBF. See slide 14:
http://www.redhat.com/summit/2011/presentations/summit/whats_ne/thursday/dak...
+ The project code footprint is extremely small (python/sh are project test cases):
Totals grouped by language (dominant language first):
  cpp:     2432 (46.26%)
  sh:      1124 (21.38%)
  python:   975 (18.55%)
  ansic:    726 (13.81%)
Total Physical Source Lines of Code (SLOC) = 5,257
compare for yourself to the code footprint of other components (yum install sloccount; sloccount projectdir).
The way to reuse the pacemaker cloud codebase is to integrate with it. Communicating with matahari via conductor solves a subset of the problems solved by pacemaker cloud.
- Anything else I'm missing?
shutdown and diagnostics.
Key tasks:
+ aeolus sends deployable and assembly information to pacemaker-cloud prior to launching an instance.
+ constraint that assembly UUIDs given to this call match the Matahari instance id for the assembly.
+ aeolus sends deployable shutdown information to pacemaker-cloud
+ agree on event format and feedback API.
+ implement feedback API.
+ document diagnostic capabilities.
+ sort out image building to include Matahari including uuid and qpid authentication credentials.
Thomas
On Thu, 2011-07-28 at 14:58 -0700, Steven Dake wrote:
On 07/28/2011 09:47 AM, Tomas Sedovic wrote:
On 07/28/2011 01:02 PM, Steven Dake wrote:
As for other services notifying Conductor, this can be as simple as:
POST /conductor/instances/3141/events

So, like, fyi, that thingy over here has changed.
Possibly with a timestamp in the request body so that we can log when the event happened vs. when it was received.
However, this must be tied into whatever way we implement cross-module authentication and authorization, otherwise everyone will be able to create phoney events.
This is one advantage of using QMF for events - authentication is free in this model.
I suspect that whatever we choose, there will at the very least be some coordination needed here. Perhaps this should be included in the authentication/encryption features so we consider the needs of all the pieces at the same time?
Is QMF a nonstarter for aeolus?
Given the nature of Conductor (a Rails webapp), it feels more natural to communicate via HTTP APIs whenever possible.
Agree with this sentiment for Conductor. However, that in no way means we cannot use qmf as a notification mechanism. I'll outline 2 basic scenarios here, but the first (either variation) is what I think is the better path. Perhaps there is a 3rd option I have not thought of, in which case, more ideas welcome.
== Option 1: QMF-HTTP Gateway/Bridge ==
* On the previous version of the aeolus suite, we had a daemon called aeolus-connector. This particular daemon was using the Imagefactory console that we wrote, but the concept could easily be extended to any console. The basic idea is that there is a dedicated 'gateway' that is an http service on one side, and talks to one or more qmf consoles on the other. This allows any service to:
  * Send events via qmf and not have to worry about converting those events into http requests to be sent to the api of a web app. The http request/put/whatever is done by the gateway.
  * Send requests from a web app _to_ a service via a standard http request, which is already easily done from the web app. The request is processed with a thin wrapper to call the appropriate method on the agent via the embedded console.
This effectively allows 2-way communication between any web app and any qmf-exposed service, while keeping that gateway configurable to be clustered/proxied/whatever-is-needed-by-sysadmin. There are 2 variations on how this could be deployed (and we may not even have to decide this right away):
1. Set up this gateway with conductor (ie, on same box or network). This would have a level of http auth (cert, krb, w/e), and a level of auth/config needed for qmf (similar auth options). The possible added complication here would be configuring the qpid domain so the console is able to find an agent and register for events (say, if the agent is on some sandboxed network location). Again, this may be a minor config issue, as I know you can do all kinds of fancy things with qmf for these kinds of scenarios.
2. Set up the gateway inside the pacemaker cloud (different box/network from conductor). All the auth bits should be the same here, but different config - conductor would need to be reachable in this case via http in some way, so I can envision environments where this might not work.
=== What would need to be done? ===
* Bring back and enhance the aeolus-connector web service
* Write a console for the pacemaker agent
* Add that console and appropriate endpoints into web service/gateway.
* Conductor would need to both call this gateway and add api entrypoints for events to be posted to.
== Option 2: Console inside Conductor ==
This would be the same console needed for the previous option, but running directly in the web app. While theoretically possible, I think this is not the best path, for the following reasons (not counting more that I am surely not thinking of at this second):
* As ruby doesn't have real threads, when we played with this idea in the past, issues were encountered in this scenario, where the console method would block the web app or vice versa. Perhaps some of this could be alleviated with a better threading model (like jruby), but that would add other complications (such as the qmf ruby lib being C, which may not be usable from the jvm)
* This console would have to either write its updates directly to the db (which we have been trying to avoid, multiple writers, everything should go through conductor api), or call the conductor api, in which case we are doing almost everything the previous suggestion would do.
* If the console were to hang or crash, the entire web app would also need to be restarted.
* It just plain doesn't really fit inside the web app. My feeling is that we should let the conductor stay a web app and not pollute it with additional pieces that would be better on their own (yes, this one may be highly opinionated).
- This console would have to either write its updates directly to the db
(which we have been trying to avoid, multiple writers, everything should go through conductor api), or call the conductor api, in which case we are doing almost everything the previous suggestion would do.
This point has always confused me... Multiple writers to a db is a perfectly normal/logical thing as long as each writer has its own segmented space to write. If it's that big of a concern, then why not create a separate database on the same DB server (if you're not comfortable with table segmentation)?
For example, status updates for vms/services coming from Matahari could go to a table that Conductor ONLY reads from, while Pacemaker Cloud has read/write access. This eliminates all of the hoop jumping that I'm seeing in the above two hacky workarounds for using a database properly.
Also, if the intent is that the conductor api is the only interface that is acceptable, then why not just provide REST api calls that Pacemaker Cloud can use directly? Yes, it's a bit of glue code for the Pacemaker Cloud guys to write, but it seems a LOT simpler than the generic HTTP/QMF bridge, which IMO smacks of over-engineering.
Thoughts?
Perry
On 07/29/2011 12:24 PM, Perry Myers wrote:
- This console would have to either write its updates directly to the db
(which we have been trying to avoid, multiple writers, everything should go through conductor api), or call the conductor api, in which case we are doing almost everything the previous suggestion would do.
This point has always confused me... Multiple writers to a db is a perfectly normal/logical thing as long as each writer has its own segmented space to write. If it's that big of a concern, then why not create a separate database on the same DB server (if you're not comfortable with table segmentation)?
For example, status updates for vms/services coming from Matahari could go to a table that Conductor ONLY reads from, while Pacemaker Cloud has read/write access. This eliminates all of the hoop jumping that I'm seeing in the above two hacky workarounds for using a database properly.
Also, if the intent is that the conductor api is the only interface that is acceptable, then why not just provide REST api calls that Pacemaker Cloud can use directly? Yes, it's a bit of glue code for the Pacemaker Cloud guys to write, but it seems a LOT simpler than the generic HTTP/QMF bridge, which IMO smacks of over-engineering.
Jason,
I read both of your proposals.
I prefer Perry's suggestion of a standard conductor API if direct QMF integration is not possible. This allows each application to write their own glue code for conductor and maintain it directly. It appears direct QMF integration in conductor is blocked on the threading/non event driven nature of QMF as stated in your last email.
One question I have with this model is the HTTP transmission model. Sending events to this conductor API is easy - just write the http data. Retrieving new events (specifically deployable start + xml data) is where I see problems - we don't want to poll in our app for new deployable information. Pacemaker cloud devs prefer event-driven architectures rather than polling.
Is that possible with a http interface?
Regards -steve
Thoughts?
Perry
On Fri, 2011-07-29 at 13:21 -0700, Steven Dake wrote:
On 07/29/2011 12:24 PM, Perry Myers wrote:
- This console would have to either write its updates directly to the db
(which we have been trying to avoid, multiple writers, everything should go through conductor api), or call the conductor api, in which case we are doing almost everything the previous suggestion would do.
This point has always confused me... Multiple writers to a db is a perfectly normal/logical thing as long as each writer has its own segmented space to write. If it's that big of a concern, then why not create a separate database on the same DB server (if you're not comfortable with table segmentation)?
For example, status updates for vms/services coming from Matahari could go to a table that Conductor ONLY reads from, while Pacemaker Cloud has read/write access. This eliminates all of the hoop jumping that I'm seeing in the above two hacky workarounds for using a database properly.
Also, if the intent is that the conductor api is the only interface that is acceptable, then why not just provide REST api calls that Pacemaker Cloud can use directly? Yes, it's a bit of glue code for the Pacemaker Cloud guys to write, but it seems a LOT simpler than the generic HTTP/QMF bridge, which IMO smacks of over-engineering.
Jason,
I read both of your proposals.
I prefer Perry's suggestion of a standard conductor API if direct QMF integration is not possible. This allows each application to write their own glue code for conductor and maintain it directly. It appears direct QMF integration in conductor is blocked on the threading/non event driven nature of QMF as stated in your last email.
One question I have with this model is the HTTP transmission model. Sending events to this conductor API is easy - just write the http data. Retrieving new events (specifically deployable start + xml data) is where I see problems - we don't want to poll in our app for new deployable information. Pacemaker cloud devs prefer event-driven architectures rather than polling.
Is that possible with a http interface?
That is in fact the main purpose of the previous suggestion: conductor would issue events to pacemaker as they occurred. Pacemaker maintains its event-driven nature without having to care how the client/consumer wants to interact with it. That said, if from your side you would prefer to just write the glue yourselves as Perry suggested, no skin off my back, less work in fact. Conductor has to provide the API anyway, and if there is a good solution (I know webhooks was floated, could be an option) for pacemaker to receive http updates, then yay.
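For what it's worth, the conductor-side push could be as small as this -- the endpoint URL and payload are invented for illustration:

    require 'net/http'
    require 'time'
    require 'uri'

    # POST each event to a hypothetical pacemaker-cloud webhook as it occurs.
    def notify_pacemaker(event)
      uri = URI.parse("http://pacemaker-cloud.example.com/events")
      Net::HTTP.post_form(uri, "summary" => event.summary,
                               "time"    => event.event_time.iso8601)
    end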
On 07/29/2011 02:09 PM, Jason Guiditta wrote:
On Fri, 2011-07-29 at 13:21 -0700, Steven Dake wrote:
On 07/29/2011 12:24 PM, Perry Myers wrote:
- This console would have to either write its updates directly to the db
(which we have been trying to avoid, multiple writers, everything should go through conductor api), or call the conductor api, in which case we are doing almost everything the previous suggestion would do.
This point has always confused me... Multiple writers to a db is a perfectly normal/logical thing as long as each writer has its own segmented space to write. If it's that big of a concern, then why not create a separate database on the same DB server (if you're not comfortable with table segmentation)?
For example, status updates for vms/services coming from Matahari could go to a table that Conductor ONLY reads from, while Pacemaker Cloud has read/write access. This eliminates all of the hoop jumping that I'm seeing in the above two hacky workarounds for using a database properly.
Also, if the intent is that the conductor api is the only interface that is acceptable, then why not just provide REST api calls that Pacemaker Cloud can use directly? Yes, it's a bit of glue code for the Pacemaker Cloud guys to write, but it seems a LOT simpler than the generic HTTP/QMF bridge, which IMO smacks of over-engineering.
Jason,
I read both of your proposals.
I prefer Perry's suggestion of a standard conductor API if direct QMF integration is not possible. This allows each application to write their own glue code for conductor and maintain it directly. It appears direct QMF integration in conductor is blocked on the threading/non event driven nature of QMF as stated in your last email.
One question I have with this model is the HTTP transmission model. Sending events to this conductor API is easy - just write the http data. Retrieving new events (specifically deployable start + xml data) is where I see problems - we don't want to poll in our app for new deployable information. Pacemaker cloud devs prefer event-driven architectures rather than polling.
Is that possible with a http interface?
That is in fact the main purpose of the previous suggestion: conductor would issue events to pacemaker as they occurred. Pacemaker maintains its event-driven nature without having to care how the client/consumer wants to interact with it. That said, if from your side you would prefer to just write the glue yourselves as Perry suggested, no skin off my back, less work in fact. Conductor has to provide the API anyway, and if there is a good solution (I know webhooks was floated, could be an option) for pacemaker to receive http updates, then yay.
Our only interest is getting something functional and simple. We had planned to assist in development of the API (ie write patches to make it happen; you won't have to go it alone).
My initial reaction to your original proposal was that there would essentially be a process that contains multiple consoles from different projects. I believe multiple QMF consoles in one process is not possible.
Did I correctly interpret your proposal?
Regards -steve
On Fri, 2011-07-29 at 14:25 -0700, Steven Dake wrote:
On 07/29/2011 02:09 PM, Jason Guiditta wrote:
On Fri, 2011-07-29 at 13:21 -0700, Steven Dake wrote:
On 07/29/2011 12:24 PM, Perry Myers wrote:
- This console would have to either write its updates directly to the db
(which we have been trying to avoid, multiple writers, everything should go through conductor api), or call the conductor api, in which case we are doing almost everything the previous suggestion would do.
This point has always confused me... Multiple writers to a db is a perfectly normal/logical thing as long as each writer has its own segmented space to write. If it's that big of a concern, then why not create a separate database on the same DB server (if you're not comfortable with table segmentation)?
For example, status updates for vms/services coming from Matahari could go to a table that Conductor ONLY reads from, while Pacemaker Cloud has read/write access. This eliminates all of the hoop jumping that I'm seeing in the above two hacky workarounds for using a database properly.
Also, if the intent is that the conductor api is the only interface that is acceptable, then why not just provide REST api calls that Pacemaker Cloud can use directly? Yes, it's a bit of glue code for the Pacemaker Cloud guys to write, but it seems a LOT simpler than the generic HTTP/QMF bridge, which IMO smacks of over-engineering.
Jason,
I read both of your proposals.
I prefer Perry's suggestion of a standard conductor API if direct QMF integration is not possible. This allows each application to write their own glue code for conductor and maintain it directly. It appears direct QMF integration in conductor is blocked on the threading/non event driven nature of QMF as stated in your last email.
One question I have with this model is the HTTP transmission model. Sending events to this conductor API is easy - just write the http data. Retrieving new events (specifically deployable start + xml data) is where I see problems - we don't want to poll in our app for new deployable information. Pacemaker cloud devs prefer event-driven architectures rather than polling.
Is that possible with a http interface?
That is in fact the main purpose of the previous suggestion: conductor would issue events to pacemaker as they occurred. Pacemaker maintains its event-driven nature without having to care how the client/consumer wants to interact with it. That said, if from your side you would prefer to just write the glue yourselves as Perry suggested, no skin off my back, less work in fact. Conductor has to provide the API anyway, and if there is a good solution (I know webhooks was floated, could be an option) for pacemaker to receive http updates, then yay.
Our only interest is getting something functional and simple. We had planned to assist in development of the API (ie write patches to make it happen; you won't have to go it alone).
Reading back, I realize my reply was a bit harsher sounding than I intended - all I meant to say was that if the other approach didn't suit the situation, I would not lose sleep over it, especially if I was not having to implement it. /me makes mental note not to reply quickly late on a friday.
My initial reaction to your original proposal was that there would essentially be a process that contains multiple consoles from different projects.
This is one possible scenario, yes, though for performance reasons, I am not sure this would be ideal. I was thinking more that there would be one of these bridges configured/proxied/load balanced for one production installation, or perhaps even each cloud. If there are more than one set of components at a time that needed to make use of this in that scenario, then sure, I could envision extra consoles being added to a bridge.
I believe multiple QMF consoles in one process is not possible.
When you spin up a new console and run it, it starts a new process from the C side (so a real thread). There may still be a bug or two around this on the qmf side (there were issues launching this thread correctly from a ruby call in recent memory), so we would need to watch for performance degradation, but that should be pretty easy to spot and file a bz for. I may be wrong as I have not had occasion to do this yet, but I am pretty sure it is not expected to be an issue.
Did I correctly interpret your proposal?
Regards -steve
On Fri, 2011-07-29 at 15:24 -0400, Perry Myers wrote:
- This console would have to either write its updates directly to the db
(which we have been trying to avoid, multiple writers, everything should go through conductor api), or call the conductor api, in which case we are doing almost everything the previous suggestion would do.
This point has always confused me... Multiple writers to a db is a perfectly normal/logical thing as long as each writer has its own segmented space to write. If it's that big of a concern, then why not create a separate database on the same DB server (if you're not comfortable with table segmentation)?
The issue isn't so much the multiple writers to the DB (since as you point out that's the bread-and-butter of RDBMS) but the processing that needs to happen before you hit the DB, since you want to avoid implementing the same business logic in two places.
With ovirt, we tried to tease apart the protocol-specific processing (HTTP vs. QMF) and the strict business logic with a 'service layer'; it was workable but awkward.
There is definitely room for a generic QMF/HTTP bridge: something that takes in QMF events on a console (stats/property changes) and turns them into HTTP requests. The other way around (making QMF calls from a webserver) fits much better with the general flow of a webapp, and would come down to issues similar to those DB connection handling causes, i.e. nothing too dramatic.
David
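To make the 'same business logic in two places' concern concrete, here is a minimal sketch of the service-layer split (all class and method names are hypothetical - this is not the actual ovirt or conductor code):

    # The service layer owns the business logic, written exactly once;
    # the HTTP and QMF entry points are thin protocol adapters.
    module InstanceService
      def self.stop(instance_id)
        # validation, state transition, event logging all live here
        instance = Instance.find(instance_id)            # hypothetical model
        raise 'already stopped' if instance.state == 'stopped'
        instance.update_attributes(:state => 'stopped')
      end
    end

    # Both adapters then collapse to one-liners, e.g.:
    #   HTTP (controller action):   def stop; InstanceService.stop(params[:id]); end
    #   QMF (agent method handler): when 'stop' then InstanceService.stop(args['instance_id'])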
== Option 1: QMF-HTTP Gateway/Bridge ==
- In the previous version of the aeolus suite, we had a daemon called aeolus-connector. This particular daemon used the Imagefactory console that we wrote, but the concept could easily be extended to any console. The basic idea is that there is a dedicated 'gateway' that is an http service on one side and talks to one or more qmf consoles on the other. This allows any service to:
- Send events via qmf and not have to worry about converting those events into http requests to be sent to the api of a web app. The http request/put/whatever is done by the gateway.
- Send requests from a web app _to_ a service via a standard http request, which is already easily done from the web app. The request is processed with a thin wrapper that calls the appropriate method on the agent via the embedded console.
This effectively allows 2-way communication between any web app and any qmf-exposed service, while keeping the gateway configurable to be clustered/proxied/whatever-is-needed-by-the-sysadmin. There are 2 variations on how this could be deployed (and we may not even have to decide this right away):
- Set up this gateway with conductor (ie, on the same box or network). This would have a level of http auth (cert, krb, w/e), and a level of auth/config needed for qmf (similar auth options). The possible added complication here would be configuring the qpid domain so the console is able to find an agent and register for events (say, if the agent is in some sandboxed network location). Again, this may be a minor config issue, as I know you can do all kinds of fancy things with qmf for these kinds of scenarios.
- Set up the gateway inside the pacemaker cloud (different box/network from conductor). All the auth bits should be the same here, but different config - conductor would need to be reachable in this case via http in some way, so I can envision environments where this might not work.
=== What would need to be done? ===
- Bring back and enhance the aeolus-connector web service
- Write a console for the pacemaker agent
- Add that console and appropriate endpoints into web service/gateway.
- Conductor would need to both call this gateway and add api entrypoints for events to be posted to.
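As a rough sketch of the gateway shape described above (a little sinatra app; 'QmfConsoleWrapper', the routes, and the conductor endpoint are all invented for illustration - the real console API differs):

    # Hypothetical two-way gateway: http on one side, qmf on the other.
    require 'rubygems'
    require 'sinatra'
    require 'json'
    require 'net/http'
    require 'uri'

    console = QmfConsoleWrapper.connect('amqp://localhost:5672')   # invented

    # web app -> agent: forward an http request as a qmf method call
    post '/agents/:agent/:method' do
      args = JSON.parse(request.body.read)
      JSON.generate(console.call(params[:agent], params[:method], args))
    end

    # agent -> web app: on each qmf event, POST it to conductor's api
    console.on_event do |event|                                    # invented
      uri = URI.parse('http://conductor.example.com/api/events')
      Net::HTTP.post_form(uri, 'event' => JSON.generate(event))
    end

With something like this in place, a web app drives an agent with a plain POST (say, POST /agents/pacemaker/stop_resource), and conductor only needs a single events endpoint to receive the pushed updates.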
Looking at this more generically... Jay, there may be uses for this QMF/REST bridge outside of Conductor if it could be made generic and lightweight.
For the general matahari systems management use cases, we'll be running QMF Agents on the guest (or host) with a local qpid broker. But say you would rather just access the matahari APIs via REST calls instead of setting up a QMF console? This QMF/REST bridge could sit alongside the matahari-broker and matahari-agents to expose their APIs via REST calls.
What would need to be done to make this generic and small-footprint dependency-wise?
Also, let's say you run this bridge with an Agent 'Foo' which exports method bar(). But later you add a new method baz(): would the bridge itself need to be updated to handle the new method, or is that done transparently somehow (maybe via dynamic translation from QMF to REST by introspection/reading schemas)?
Or am I completely off base?
Perry
On Fri, 2011-07-29 at 17:38 -0400, Perry Myers wrote:
== Option 1: QMF-HTTP Gateway/Bridge ==
[snip]
What would need to be done to make this generic and small-footprint dependency-wise?
Also, let's say you run this bridge with an Agent 'Foo' which exports method bar(). But later you add a new method baz(): would the bridge itself need to be updated to handle the new method, or is that done transparently somehow (maybe via dynamic translation from QMF to REST by introspection/reading schemas)?
This is precisely what I had in mind. The initial version of this, as I mentioned, was geared specifically toward the imagefactory qmf agent, but the intent was to make it more generic over time. (Since it got temporarily removed, that plan was somewhat delayed.) However, given that qmf provides the schema mechanism, and ruby is very good at dynamically building methods based on that kind of thing, my hope is to be able to make this into a very simple wrapper that can generically call to / respond from any qmf agent. I would like to see only an agent(s) config and endpoint(s) for the return handler needed to start the thing up and have it 'just work'. If any custom work is needed, there is a handler mechanism in place, so anyone can just subclass that and add the custom behavior. The hardest technical bits, I think, are actually the parts where we have to handle qmf errors, as they tend to be wrapped in so many layers, but I am cautiously optimistic that will be doable as well.
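As a sketch of that dynamic-building idea (the schema accessors here are stand-ins for whatever introspection qmf actually exposes):

    # Hypothetical proxy that routes any unknown method to the agent if
    # the agent's advertised schema lists it -- so a newly added baz()
    # works as soon as it appears in the schema, with no change to the
    # bridge itself. schema_method_names and call_method are invented.
    class AgentProxy
      def initialize(agent)
        @agent = agent
      end

      def method_missing(name, *args)
        if @agent.schema_method_names.include?(name.to_s)
          @agent.call_method(name.to_s, *args)
        else
          super
        end
      end
    end

    # proxy = AgentProxy.new(foo_agent)
    # proxy.bar('x' => 1)   # defined in today's schema
    # proxy.baz('y' => 2)   # picked up automatically once baz() is added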
On the dependency side, they should be fairly light (and already are). If you were installing this via rpm, the biggest path decision, imo, is how to pull in just the console(s) a particular bridge needs. This could be done with a metapackage, or just docs saying to install x, y, and z for this particular scenario. However, this is a packaging decision, I think, and should not impact the code directly. Basically, this would just be a little sinatra webapp that could be proxied/load-balanced/whatever, with a runtime dep on the needed qmf consoles and a lib to read and write xml - that should be the bulk of it.
On Mon, Aug 01, 2011 at 03:28:38PM -0400, Jason Guiditta wrote:
[snip]
Sounds like a nice addition to whatever QMF's upstream project is...
--H
On 08/01/2011 04:08 PM, Hugh Brock wrote:
[snip]
Sounds like a nice addition to whatever QMF's upstream project is...
Agreed, thanks for the info Jay. The question, of course, is what the timeline for doing this is, and whether it makes sense for us to start integration between Pacemaker Cloud and Conductor with something simpler to get things working. Then eventually, when this generic QMF/REST bridge matures, we can integrate that as well.
QMF is listed on the front page there :)