Hi,
This expands on some of the notes Jan provided in other RFCs. delayed_job and resque appear to be the most commonly deployed solutions.
I listed what I thought should be the requirements for a background processing solution. For each requirement I then added some detail on how well delayed_job and resque could satisfy it.
Resque contains most of the features we need. It requires Redis, which is an open source project sponsored by VMware. Redis is available in Fedora, but I don't see Redis available in RHEL, and getting it into RHEL is the big question mark.
https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Background_Proces...
---
Background Processing
# Summary
The two most common solutions are delayed_job and resque. There is a good write-up on GitHub comparing other background processing solutions and explaining why they eventually steered towards delayed_job and then resque: https://github.com/blog/542-introducing-resque.
The primary differences between delayed_job and resque are:
At the moment, delayed_job doesn't have support for recurring jobs. Resque does support recurring jobs through the resque-scheduler extension/gem.
resque provides a Sinatra app to monitor the queue. delayed_job doesn't provide monitoring tools out of the box, but we can potentially build something on top of Rails or simply look at the contents of the database table.
resque requires multiple components and could potentially be more difficult to support. It requires a second gem called resque-scheduler. It also uses Redis as its backend, and Redis is currently not available in RHEL. This may be the deal breaker.
# Requirements
1. Bucket jobs into different queues. A long-running job to check instance status for 1000 instances should not hold up other jobs. The solution should also support multiple workers, which would minimize the impact of longer-running jobs, but using different queues offers finer-grained control.
* delayed_job: supports multiple queues through named queues starting with version 3.0. Can start up multiple workers for all queues or for specific queues.
* resque: supports multiple queues and workers.
2. Jobs should persist in some way. If a crash occurs, we should be able to restart the system and continue processing incomplete jobs in the queue.
* delayed_job: Jobs persist as objects stored in ActiveRecord entries.
* resque: Jobs persist as JSON objects in Redis entries. Using JSON objects instead of serialized Ruby objects, which may have advanced to a different version, potentially makes updating the application easier.
3. Recurring jobs.
* delayed_job: Not available; in development.
* resque: Supported through the resque-scheduler extension.
* whenever: A potential alternative for cron-style scheduling [6].
4. Alerts. Failures should be presented to the user in some way (email, Conductor UI) so that appropriate actions can be taken.
* delayed_job: Supports code hooks for different stages in the process. Hooks can be added for error, failure, and success (see the sketch after this list). By default workers will retry a job 25 times; we should use a lower number, since there is no sense in retrying that many times and holding up the queue if there is a hard failure somewhere in the system. By default it also deletes failed jobs, but it can be configured to leave them in the queue with a flag to indicate failure.
* resque: Failed jobs can go through additional processing using different failure backends: Redis, syslog, custom, etc.
5. A mechanism to requeue a failed job once the underlying issue has been resolved. For example, if an instance start job fails because of a network failure to a provider, we should have the ability to requeue those jobs once the network is back online. Not sure if this should be automated or if this should be a button somewhere a user can use to manually requeue all or selected failed jobs.
* custom
6. Monitor job status. We should have some way to see what is in the queue.
* delayed_job: The queue can only be viewed through ActiveRecord database entries. There is no UI, so it is more difficult to see what is going on.
* resque: Provides a Sinatra app to monitor queues, jobs, and workers.
7. Should not enqueue duplicate jobs.
* custom
8. Ability to remove jobs from the queues and to pause queues or jobs.
* custom
9. Supportable in Fedora and RHEL
* delayed_job: We used it in the past. Will need to carry the gem.
* resque: Will need to carry the gem. In addition it requires Redis as the backend. Redis is available in Fedora but not in RHEL. Redis is an open source project sponsored by VMware [4].
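To make requirements 1 and 4 concrete, here is a rough sketch of what a delayed_job payload class could look like. The class, queue, and model names are hypothetical, and the hook names, max_attempts override, and :queue option reflect my understanding of the delayed_job 3.x API; treat this as an illustration, not a design.

  # Hypothetical payload object for delayed_job (names are placeholders).
  class InstanceStatusCheckJob < Struct.new(:provider_account_id)
    def perform
      account = ProviderAccount.find(provider_account_id)
      # ... query the cloud provider for this account and update instance
      # states here (Conductor-specific logic, omitted in this sketch)
    end

    # Requirement 4: retry far fewer times than delayed_job's default of 25.
    def max_attempts
      3
    end

    # Hooks for alerting (signatures per the delayed_job 3.x README).
    def error(job, exception)
      Rails.logger.warn("instance status check error: #{exception.message}")
    end

    def failure
      # the job has permanently failed after max_attempts; raise an alert here
    end
  end

  # Requirement 1: enqueue onto a named queue (delayed_job >= 3.0) and run
  # workers only for that queue, e.g. QUEUES=maintenance rake jobs:work
  Delayed::Job.enqueue(InstanceStatusCheckJob.new(account.id), :queue => 'maintenance')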
# Use Cases
1. Dbomatic replacement for instance and realm checking and RHEV instance start.
Each RHEV instance that is created will also lead to a job that is enqueued to start that instance.
Create a new job to perform instance status checks. Create a status check job for each provider account. Allow the status check job to be disabled/enabled per provider account.
Create a new job to sync realms for all providers. This can be broken up to a job per provider if needed.
Create two queues: one for managing the instance lifecycle, and a second queue for all other jobs. Start with two workers per queue. Make the number of workers configurable so that it can be adjusted when needed. (A short sketch follows this list.)
2. LDAP syncing
3. Generic instance start and stop
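To make the queue split in use case 1 concrete: the job class and queue names below are placeholders consistent with the earlier sketch, and the worker-script flags are from memory of the delayed_job 3.x command script, so verify them against the gem's documentation.

  # Hypothetical queue layout for use case 1.
  Delayed::Job.enqueue(StartInstanceJob.new(instance.id),      :queue => 'instance_lifecycle')
  Delayed::Job.enqueue(InstanceStatusCheckJob.new(account.id), :queue => 'maintenance')
  Delayed::Job.enqueue(RealmSyncJob.new(provider.id),          :queue => 'maintenance')

  # Two workers per queue via delayed_job's worker script (flags from memory;
  # double-check against the delayed_job 3.x README):
  #   RAILS_ENV=production script/delayed_job --queue=instance_lifecycle -n 2 start
  #   RAILS_ENV=production script/delayed_job --queue=maintenance -n 2 start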
# Reference
[1] https://github.com/collectiveidea/delayed_job/wiki/Named-Queues-Proposal
[2] https://github.com/blog/542-introducing-resque - discusses GitHub's use of different background job solutions
[3] https://github.com/bvandenbos/resque-scheduler
[4] http://redis.io/
[5] http://blog.railsupgrade.com/2011/08/replace-delayedjob-with-resque.html
On Mon, Apr 09, 2012 at 05:44:45PM -0700, Richard Su wrote:
Hi,
This expands on some of the notes Jan provided in other RFCs. delayed_job and resque appear to be the most commonly deployed solutions.
I have worked with delayed_job a little bit in the past, and was pretty happy with it.
I listed what I thought should be the requirements for a background processing solution. For each requirement I then added some details on how well delayed_jobs and resque could satisfy it.
Resque contains most of the features we need. It requires Redis, which is an open source project sponsored by VMware. Redis is available in Fedora, but I don't see Redis available in RHEL, and getting it into RHEL is the big question mark.
Redis is really cool. But the idea of pulling it in as a dependency just for a queuing system feels like overkill to me. (IMHO)
https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Background_Proces...
Background Processing
# Summary
The two most common solutions are delayed_jobs and resque. There is a good write up on github comparing other background processing solutions and why they eventually steered towards delayed_jobs and then resque, https://github.com/blog/542-introducing-resque.
When we first started using DelayedJob on a previous project, I was really nervous about the idea of using our already-busy database for it, and about how it would scale as our number of jobs grew. It sounds like GitHub _did_ hit scaling issues, but on the project I worked on using DJ, we never ran into any problems even as we enqueued thousands of jobs and processed several hundred a minute. (Sending out customized emails, etc.)
The primary differences between delayed_jobs and resque are:
At the moment, delayed_job doesn't have support for recurring jobs. Resque does support recurring jobs through the resque-scheduler extension/gem.
I've only taken a quick look so far, but it looks like resque-scheduler leverages https://github.com/jmettraux/rufus-scheduler for its cron-like functionality. I wonder if we can take advantage of that?
resque provides a Sinatra app to monitor the queue. delayed_job doesn't provide monitoring tools out of the box, but we can potentially build something on top of Rails or simply look at the contents of the database table.
So I actually view this as a plus for DJ. Resque comes with a standalone Sinatra app. DelayedJob can be integrated into our app cleanly by treating the table like it's ActiveRecord. My memory is slightly hazy, but I seem to recall that we set up a Job model based on DJ with some named scopes, so we could easily do "Job.failed.count" or "Job.pending.count" for quick statistics, and we built a tiny little admin controller for paginating through the list of jobs.
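For illustration, a minimal version of that kind of wrapper might look something like the following (the model and scope names are guesses for the sake of the example, Rails 3 syntax):

  # Thin ActiveRecord wrapper over delayed_job's table, purely for reporting.
  class Job < ActiveRecord::Base
    self.table_name = 'delayed_jobs'   # use set_table_name on Rails < 3.2

    scope :failed,  where('failed_at IS NOT NULL')
    scope :pending, where(:failed_at => nil, :locked_at => nil)
    scope :running, where('locked_at IS NOT NULL AND failed_at IS NULL')
  end

  Job.failed.count    # quick statistics, as described above
  Job.pending.count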
resque requires multiple components and could potentially be more difficult to support. It requires a second gem called resque-scheduler. It also uses Redis as its backend, and Redis is currently not available with RHEL. This may be the deal breaker.
# Requirements
- Bucket jobs into different queues. A long running job to check
instance status for 1000 instances should not hold up other jobs. The solution should also support multiple workers which would minimize impact of longer running jobs. But using different queues will offer finer grain control.
- delayed_job: supports multiple queues through named queues starting
with version 3.0. Can start up multiple workers for all queues or for specific queues.
- resque: supports multiple queues and workers.
- Jobs should persist in some way. If a crash occurs, we should be
able to restart the system and continue with processing incomplete jobs in the queue.
- delayed_job: Jobs persists as objects stored in activerecord entries.
- resque: Jobs persists as json objects in redis entries. Using json
objects instead of actual objects which may have advanced to a different version makes updating the application potentially easier.
- Recurring jobs.
- delayed_jobs: Not available, in development.
- resque: Through resque-scheduler extension.
- whenever: A potential alternative to do cron style scheduling [6].
See also rufus-scheduler, linked above.
It's not clear to me if "whenever" actually lets you define new jobs while running or not.
- Alerts. Failures should be presented to the user in some way
(email, conductor UI) so that appropriate actions can be taken.
- delayed_jobs: Support code hooks for different stages in the
process. Hooks can be added for error, failure, and success. By default workers will retry a job 25 times. We should use a lower number. No sense in retrying that number of times and holding up the queue if there is a hard failure somewhere in the system. By default it also deletes failed jobs, but can be configured to leave them in the queue with a flag to indicate failure.
- resque: Failed jobs can go through additional processing using
different failure backends. redis, syslog, custom, etc..
- A mechanism to requeue a failed job once the underlying issue has
been resolved. For example, if an instance start job fails because of a network failure to a provider, we should have the ability to requeue those jobs once the network is back online. Not sure if this should be automated or if this should be a button somewhere a user can use to manually requeue all or selected failed jobs.
I'm slightly uneasy about this, but perhaps I'm just not thinking it through fully enough. If launching an instance fails, I think, per Jan's robust image launching stuff, we want to just move on and try somewhere else, rather than having the instance potentially pop up an hour later. I think the general idea is good, though.
- custom
- Monitor job status. We should have some way to see what is in the queue.
- delayed_jobs: Can only view queue through activerecord database
entries. There is no UI so it is more difficult to see what is going on.
- resque: Provides a sinatra app to monitor queues, jobs, and workers.
Though as I mentioned above, I would put a different spin on it -- DJ lets us use our existing ActiveRecord interfaces to query this data easily and integrate it into our app. Resque comes with a standalone Sinatra app.
- Should not enqueue duplicate jobs.
- custom
- Ability to remove jobs from the queues and to place a pause on the
queues or jobs.
- custom
- Supportable in Fedora and RHEL
- delayed_jobs: We used it in the past. Will need to carry the gem.
- resque: Will need to carry the gem. In addition it requires Redis
as the backend. Redis is available in Fedora but not in RHEL. Redis is an open source project sponsored by VMware [4].
# Use Cases
- Dbomatic replacement for instance and realm checking and RHEV
instance start.
Each RHEV instance that is created will also lead to a job that is enqueued to start that instance.
Create a new job to perform instance status check. Create a status check job for each provider account. Allow status check job to be disabled/enabled per provider account.
Create a new job to sync realms for all providers. This can be broken up to a job per provider if needed.
Create two queues. One for managing instance lifecycle. And a second queue for all other jobs. Start with two workers per queue. Make the number of workers configurable so that it may be adjusted when needed.
I think this is a good approach, though it bears mention that if each worker has a copy of the Rails runtime, this can end up gobbling up a lot of memory.
-- Matt
On 04/10/2012 08:10 AM, Matt Wagner wrote:
Thanks for the feedback, Matt. Let's plan on going with delayed_job, unless someone dissents.
We'll need to build some additional infrastructure around delayed_job to make it more usable. First would be a way to view the contents of the queues from the command line, maybe in the form of a rake task or through aeolus-cli. I also see us adding the ability to start/stop the queue and to adjust the number of workers through the command line. Viewing the queue from Conductor would be nice, but that feels like follow-on work, especially if we foresee further structural changes to the UI.
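For the command-line view, something as small as the rake task below would probably do as a starting point. The task name and output format are placeholders, and it assumes the ActiveRecord backend for delayed_job.

  # lib/tasks/jobs.rake -- hypothetical task for listing queue contents
  namespace :jobs do
    desc "List delayed_job entries and their state"
    task :status => :environment do
      Delayed::Job.order(:run_at).each do |job|
        state = job.failed_at ? 'failed' : (job.locked_at ? 'running' : 'pending')
        puts "#{job.id}\t#{job.queue}\t#{state}\tattempts=#{job.attempts}\trun_at=#{job.run_at}"
      end
    end
  end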
To plug the recurring job hole in delayed_job, bringing in a scheduler like rufus-scheduler to queue jobs is a good idea. We'll need to think about how to handle alerts with recurring jobs. With dbomatic, errors are buried in the log. It is good that they are logged, but a person needs to dig around to figure out that there is a problem. If we add alerts, the recurring job would produce a stream of alerts/emails each time it failed, possibly flooding a person's inbox. We could add some logic to skip an alert if the same error occurred during the last run.
Speaking of alerts, is there a way to flag an alert with Conductor? I see there is an "Alerts" box in the Monitor overview page. How does that work exactly?
On Thu, Apr 12, 2012 at 01:01:33PM -0700, Richard Su wrote:
Thanks for the feedback, Matt. Let's plan on going with delayed_job, unless someone dissents.
We'll need to build some additional infrastructure around delayed_job to make it more usable. First would be a way to view the contents of the queues from the command line, maybe in the form of a rake task or through aeolus-cli.
This much should be pretty easy, since it's all ActiveRecord objects. The only tricky part is that if we store jobs in delayed_job as blobs of some form, we might want to parse them out and show something user-friendly.
I also see us adding the ability to start/stop the queue and to adjust the number of workers through the command line.
This sounds like it's more controlling the workers than the queue itself, though I don't disagree with the objective at all.
Viewing the queue from Conductor would be nice, but that feels like follow-on work, especially if we foresee further structural changes to the UI.
As with the rake task, this should really be pretty easy to implement, though it doesn't seem like we have any equivalent pages right now.
To plug the recurring job hole in delayed_job, bringing in a scheduler like rufus-scheduler to queue jobs is a good idea.
I will disclaim that I haven't worked with rufus-scheduler in the past, but it looks like it's the type of thing we're looking for.
We'll need to think about how to handle alerts with recurring jobs. With dbomatic, errors are buried in the log. It is good that they are logged, but a person needs to dig around to figure out that there is a problem. If we add alerts, the recurring job would produce a stream of alerts/emails each time it failed, possibly flooding a person's inbox. We could add some logic to skip an alert if the same error occurred during the last run.
Having worn a sysadmin hat in the past, I think a tidal wave of emails to indicate a failure is really pretty common. (And it certainly gets your attention more quickly than a single email about a potential problem... Up until the mailserver crashes.)
I think an easy work-around might be to only send alerts if the type of task and the class of the exception hadn't previously been emailed in the past N hours. The downside to this is that it isn't immediately obvious if it was a single occurrence of a freak failure, or if it happened 2,500 times in the past hour.
Suppose we created an Alerts table and logged everything there? We could use it for deciding whether to send an email, but it would also allow an admin to log in, see "47,500 new notifications", and review them. The header designs from the UX team showed a notification counter, but we commented it out since we don't currently have an analogous concept in Aeolus. This could implement it.
Speaking of alerts, is there a way to flag an alert with Conductor? I see there is an "Alerts" box in the Monitor overview page. How does that work exactly?
Right now, I'm pretty sure that "Alerts" just lists Instance.failed.all or something along those lines. It seems to me that we could implement an Alert class, and we could (a) log it to the database, and (b) have an after_create hook that decided if we should send an email or not. Does that sound like it would work?
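Roughly what I have in mind, with the N-hour throttle folded into the callback. The Alert class, its columns, and the mailer are all made up for the sake of the sketch:

  # Hypothetical Alert model; assumes an alerts table with subject_type,
  # error_class, message and created_at columns, plus an AlertMailer.
  class Alert < ActiveRecord::Base
    after_create :deliver_email_if_new

    private

    def deliver_email_if_new
      # Only email if the same task type / exception class hasn't alerted recently.
      recent = Alert.where(:subject_type => subject_type, :error_class => error_class).
                     where('created_at > ? AND id <> ?', 4.hours.ago, id)
      AlertMailer.failure_notice(self).deliver if recent.empty?
    end
  end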
-- Matt
On 04/12/2012 10:41 PM, Matt Wagner wrote:
On Thu, Apr 12, 2012 at 01:01:33PM -0700, Richard Su wrote:
To plug the recurring job hole in delayed_job, bringing in a scheduler like rufus-scheduler to queue jobs is a good idea.
I will disclaim that I haven't worked with rufus-scheduler in the past, but it looks like it's the type of thing we're looking for.
It seems that rufus-scheduler is based only on threads - it creates one master thread which is just a while-true loop with a sleep delay, spawning threads to do the jobs. It's not a standalone process, so we would have to write a daemon anyway or run it from our Rails process. Overall my impression is that it would be a step back compared to the current dbomatic, which at least uses separate processes.
Delayed job doesn't support recurring jobs, so if we decide on Delayed job now, what will be the next step in Dbomatic improvements? Try to use Delayed job for recurring jobs too (by re-enqueueing jobs)? Or integrate another background tool for running recurring jobs?
Using two background tools (one for delayed jobs and one for recurring jobs) doesn't sound right to me - delayed and recurring jobs are very similar to each other.
Using Delayed job for recurring jobs isn't right either. As mentioned in another thread, re-enqueueing jobs is hacky and might be unsafe, though this option still sounds better than having two separate background tools. I'm a little bit afraid that the logic which will have to be written around recurring jobs will not be better than the current Dbomatic, but I could be wrong.
Because of the above, I was most attracted to resque-scheduler or anything else which supports both delayed and recurring jobs natively. As I understand it, the major problems with resque-scheduler are that Redis is not in RHEL and it would be a problem to get it in, and that this tool is too heavyweight? And there is no other similar tool?
On 04/13/2012 03:58 AM, Jan Provazník wrote:
It seems that rufus-scheduler is based only on threads - it creates one master thread which is just a while-true loop with a sleep delay, spawning threads to do the jobs. It's not a standalone process, so we would have to write a daemon anyway or run it from our Rails process. Overall my impression is that it would be a step back compared to the current dbomatic, which at least uses separate processes.
We wouldn't use rufus-scheduler or another scheduler to run jobs. We would use the scheduler to queue jobs into delayed_job. The jobs would then be picked up by delayed_job workers. The number of delayed_job workers is configurable, and each worker runs in its own process and has its own Rails environment.
The scheduling would need to be smart enough to not queue duplicate jobs if one is already queued or running. That is something we would add.
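As a rough sketch of that combination, assuming rufus-scheduler 2.x (Rufus::Scheduler.start_new) and the hypothetical job class from the wiki sketch; the duplicate check against the serialized handler is the custom piece we would add:

  require 'rufus/scheduler'

  scheduler = Rufus::Scheduler.start_new

  scheduler.every '2m' do
    ProviderAccount.all.each do |account|
      job = InstanceStatusCheckJob.new(account.id)
      # Skip enqueueing if an identical payload is already queued or running;
      # delayed_job stores the payload as YAML in the handler column.
      already_there = Delayed::Job.where(:failed_at => nil, :handler => job.to_yaml).exists?
      Delayed::Job.enqueue(job, :queue => 'maintenance') unless already_there
    end
  end

  scheduler.join   # keep the scheduling daemon alive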
A benefit I see with moving from dbomatic to jobs is that it gives us an opportunity to break up the different pieces of functionality in dbomatic, which will make them easier to test. It also gives us the opportunity to plug in alerts. Right now, unless you look at dbomatic.log, it is hard to know whether a problem occurred.
Delayed job doesn't support recurring jobs, so if we decide on Delayed job now, what will be the next step in Dbomatic improvements? Try to use Delayed job for recurring jobs too (by re-enqueueing jobs)? Or integrate another background tool for running recurring jobs?
The thinking is that combining a scheduler with delayed_job will get us recurring jobs.
Using two background tools (one for delayed jobs and one for recurring jobs) doesn't sound right to me - delayed and recurring jobs are very similar to each other.
Using two background tools is not the goal.
Using Delayed job for recurring jobs isn't right either. As mentioned in another thread, re-enqueueing jobs is hacky and might be unsafe, though this option still sounds better than having two separate background tools. I'm a little bit afraid that the logic which will have to be written around recurring jobs will not be better than the current Dbomatic, but I could be wrong.
With resque, you don't get recurring jobs unless you add resque-scheduler, a separate gem. So I think our approach here is similar to what you would get with resque.
I don't think re-queueing jobs is necessarily a bad idea; people have implemented it. It is a matter of trade-offs. But I think having a scheduler would be less work from a maintenance perspective.
Until delayed_job supports recurring jobs natively, I think using the scheduler + delayed_job approach would be fine.
Because of the above, I was most attracted to resque-scheduler or anything else which supports both delayed and recurring jobs natively. As I understand it, the major problems with resque-scheduler are that Redis is not in RHEL and it would be a problem to get it in, and that this tool is too heavyweight? And there is no other similar tool?
Yes, that is the general conclusion.
I don't oppose using resque; it just feels like there would be more roadblocks to make it happen, whereas we can run with delayed_job on day one.
I've updated the wiki page to give a bit more information about the tasks being proposed for this feature.
https://www.aeolusproject.org/redmine/projects/aeolus/wiki/Background_Proces...
- Richard
On 04/13/2012 10:22 PM, Richard Su wrote:
We wouldn't use rufus-scheduler or another scheduler to run jobs. We would use the scheduler to queue jobs into delayed_job. The jobs would then be picked up by delayed_job workers.
OK, sounds like a plan.
On 04/16/2012 09:28 AM, Jan Provaznik wrote:
I think we should take into consideration the fact that in Rails 4 there is going to be some unified queuing API: https://github.com/rails/rails/commit/adff4a706a5d7ad18ef05303461e1a0d848bd6...
Imre