The current scheduler works almost purely at the recipe level. The extent to which it pays attention to recipesets pretty much amounts to ensuring all recipes in a recipe set are scheduled on the same lab controller.
This creates some interesting problems with multi-host testing:

- a recipe set with strict host requirements for only some systems may hold on to common systems for a long time while waiting for rare ones (the addition of dynamic virt support opens the door for a recipe set to hold on to dynamic virt resources while waiting for physical hardware for other recipes)
- recipe sets scheduled for unique systems may deadlock if a high priority job is competing with a previously queued low priority job which has already claimed some resources
To better explain the latter problem, consider a lab with only 2 systems, A and B, containing a particular piece of hardware, and a multi-host recipe set that needs both of them. Queue a low priority version of that job (Job 1) while a test is running on system A, and Job 1 will immediately claim system B for one recipe, while the other recipe remains in the queue. If a high priority copy of the job (Job 2) is added before the test running on system A completes, then system A will be claimed by Job 2 once it frees up. This leaves the two jobs in a classic ABBA deadlock: Job 1 has system B and is waiting for system A, while Job 2 has system A and is waiting for system B.
Part of the metrics support being added in 0.11 is actually aimed at measuring the overall impact of the first problem (by seeing what proportion of their time systems spend in the Scheduled state).
For other reasons, to do with being able to effectively partition the scheduling task between multiple schedulers (each handling the systems managed by a particular lab controller), I've been considering proposing the addition of a "Claimed" state in the recipe lifecycle. The "Claimed" state would fit between "Queued" and "Scheduled", and would indicate that the recipe had been assigned to a specific lab controller, but not yet to a specific system (at the moment, this state change is handled implicitly through setting "recipe.recipeset.lab_controller" when the first recipe in the recipe set is scheduled).
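To make the proposed lifecycle concrete, a minimal sketch of the transitions might look like the following (the helper functions and attribute names are purely illustrative, not Beaker's actual model code):

    # Hypothetical sketch of the proposed state transitions, not Beaker's real model.
    QUEUED, CLAIMED, SCHEDULED = 'Queued', 'Claimed', 'Scheduled'

    def claim_recipe(recipe, lab_controller):
        # Queued -> Claimed: pin the recipe to a lab controller, no system yet.
        assert recipe.status == QUEUED
        recipe.lab_controller = lab_controller
        recipe.status = CLAIMED

    def schedule_recipe(recipe, system):
        # Claimed -> Scheduled: assign a concrete system on that lab controller.
        assert recipe.status == CLAIMED
        assert system.lab_controller == recipe.lab_controller
        recipe.system = system
        recipe.status = SCHEDULED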
Furthermore, the scheduler would be updated to work on a *cached* copy of the System status data. This is needed to avoid the current problem where there's a race condition with system status changes occurring during a scheduling pass leading to recipes jumping the queue (I'm interested in hearing about relatively clean ways to do this with SQLAlchemy, though: http://stackoverflow.com/questions/13983067/cached-reads-immediate-writes-wi...)
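One relatively simple way to get "cached reads, immediate writes" might be to snapshot the relevant status columns into plain Python data at the start of the pass and consult only the snapshot when making decisions, while writes still go through the session as usual. A rough sketch, where the model/column names and the candidate_systems/assign_recipe_to_system helpers are assumptions rather than Beaker's actual code:

    # Rough sketch only: snapshot system availability at the start of a pass so
    # mid-pass status changes can't let later recipes jump the queue.
    def snapshot_system_status(session):
        # Read each system's status once and keep it in memory for this pass.
        return dict(session.query(System.id, System.status))

    def run_scheduling_pass(session, queued_recipes):
        cached_status = snapshot_system_status(session)
        for recipe in queued_recipes:
            for system in candidate_systems(recipe):   # hypothetical helper
                # Decisions use the cached status...
                if cached_status.get(system.id) != 'Free':
                    continue
                # ...but the assignment itself is written through immediately.
                assign_recipe_to_system(session, recipe, system)   # hypothetical helper
                cached_status[system.id] = 'Busy'   # keep the snapshot honest
                break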
In combination, these two changes would allow Claimed recipes to be given priority over Queued recipes on subsequent passes, preventing the deadlock problem and theoretically also improving system utilization.
One social challenge with addressing this is that we don't want to enable or encourage queue jumping for rare systems (by pairing a recipe that needs a rare system with one that will be scheduled quickly in the same recipe set), but I'm not sure we can solve that at the technical level.
Cheers, Nick.
> Furthermore, the scheduler would be updated to work on a *cached* copy of the System status data. This is needed to avoid the current problem where there's a race condition with system status changes occurring during a scheduling pass leading to recipes jumping the queue (I'm interested in hearing about relatively clean ways to do this with SQLAlchemy, though: http://stackoverflow.com/questions/13983067/cached-reads-immediate-writes-wi...)
Do you mean https://bugzilla.redhat.com/show_bug.cgi?id=872187 ?
I've worked around it by using a transaction for the whole scheduling loop (not just per-recipe as it was before). This is sort of "cached" data, as the transaction's reads are consistent no matter what gets written during the transaction. I'm ordering the recipes by recipe set priority, recipeset.id and recipe.id (in that order) and then they get scheduled.
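For reference, that ordering might look something like the following in SQLAlchemy terms (the exact model attributes, whether priority sorts ascending or descending, and the try_to_schedule helper are assumptions on my part):

    # Illustrative only: order queued recipes by recipe set priority, then
    # recipeset.id, then recipe.id, inside one transaction for the whole pass.
    queued = (session.query(Recipe)
              .join(RecipeSet)
              .filter(Recipe.status == 'Queued')
              .order_by(RecipeSet.priority.desc(), RecipeSet.id, Recipe.id)
              .all())
    for recipe in queued:
        try_to_schedule(recipe)   # hypothetical helper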
I haven't seen any deadlocks since then (with tasks having the same priority).
Cheers, J.
On 12/21/2012 09:53 PM, Jaroslav Kortus wrote:
> I've worked around it by using a transaction for the whole scheduling loop (not just per-recipe as it was before). This is sort of "cached" data, as the transaction's reads are consistent no matter what gets written during the transaction. I'm ordering the recipes by recipe set priority, recipeset.id and recipe.id (in that order) and then they get scheduled.
> I haven't seen any deadlocks since then (with tasks having the same priority).
Yeah, but lumping everything into one giant transaction has its own problems (mainly to do with state consistency with external systems like RHEV-M and the filesystem).
What I realised over the Christmas break is that many of these problems can be resolved by moving towards a more event based scheduling system, with two key scheduling events:
1. When a new recipe is submitted, attempt to assign it to a dynamic virtual system or to a system from the idle pool.
2. When a system completes its current task, attempt to assign it a recipe from the queue *before* placing it back in the idle pool (in the case of dynamic virt, see if any of the recipes that previously failed dynamic virt allocation can now be allocated a dynamic VM).
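A minimal sketch of how those two events might be wired up, where every helper function named here is a placeholder rather than an existing Beaker API:

    # Sketch of the two proposed scheduling events; all helpers are hypothetical.
    def on_recipe_submitted(recipe):
        # Event 1: try to place a brand new recipe straight away.
        system = provision_dynamic_virt(recipe) or find_idle_system(recipe)
        if system is not None:
            assign_recipe_to_system(recipe, system)
        else:
            add_to_queue(recipe)

    def on_system_released(system):
        # Event 2: a system finished its recipe; offer it more work before
        # returning it to the idle pool.
        recipe = pick_queued_recipe_for(system)
        if recipe is not None:
            assign_recipe_to_system(recipe, system)
        else:
            return_to_idle_pool(system)
        # For dynamic virt, this is also the point to retry recipes that
        # previously failed dynamic VM allocation.
        retry_failed_dynamic_virt_allocations()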
The current scheduling loop would then become a cleanup loop (e.g. looking for dead recipes that need to be aborted for various reasons)
With separate scheduling events, different prioritisation rules can be applied to the two kinds of scheduling:
New recipes would use the current recipe based scheduling: filter and order the available systems according to the preferences of the user submitting the job and the requirements expressed in the recipe.
Free systems would use system based scheduling: order the queued recipes according to the preferences of the system owner and the priorities of the queued recipes.
The latter scheduling algorithm can deal with the deadlock problem by prioritising recipes that are part of a recipe set that already has some resources allocated on the relevant lab controller over those which are just part of the general queue.
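As a rough illustration of that prioritisation, the sort key used when a system frees up might look something like this (the attribute names and helper functions are made up for the example, and I'm assuming a larger priority value means a more important job):

    # Illustrative sort key for picking a queued recipe when a system frees up.
    # Recipes whose recipe set already holds resources on this lab controller
    # come first, so partially scheduled multi-host sets can finish instead of
    # deadlocking; job priority and age break the remaining ties.
    def recipe_sort_key(recipe, lab_controller):
        set_already_started = (recipe.recipeset.lab_controller == lab_controller)
        return (0 if set_already_started else 1,   # finish in-progress sets first
                -recipe.recipeset.priority,        # then higher priority jobs
                recipe.id)                         # then oldest first

    def pick_queued_recipe_for(system):
        candidates = [r for r in queued_recipes() if can_run_on(r, system)]
        candidates.sort(key=lambda r: recipe_sort_key(r, system.lab_controller))
        return candidates[0] if candidates else None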
The one downside is that users would be able to exploit this in order to jump the queue for access to rare resources, by also scheduling a recipe in the same recipe set that can run on readily available hardware. While we likely can't prevent such abuse, we should be able to provide tools to help detect it and leave it up to organizational "acceptable use" policies to deal with it.
Cheers, Nick.
----- Original Message -----
From: "Nick Coghlan" ncoghlan@redhat.com To: beaker-devel@lists.fedorahosted.org Sent: Wednesday, January 2, 2013 12:08:40 PM Subject: Re: [Beaker-devel] Scheduling recipe sets rather than recipes
> New recipes would use the current recipe based scheduling: filter and order the available systems according to the preferences of the user submitting the job and the requirements expressed in the recipe.
> Free systems would use system based scheduling: order the queued recipes according to the preferences of the system owner and the priorities of the queued recipes.
I'd be curious to see what kind of preferences we're talking about.
When it comes to resources used, this would make more sense on a shared system, where the recipe is running in the background on a system that is performing other concurrent work. In such a case it makes sense for the system owner to describe preferences such as memory, disk space and CPU usage. In Beaker, though, nothing else is running on a system, so resource usage is a non-issue for the owner.
It does make sense to have the recipe run time as a preference (owners may occasionally want to take control of their machine and not want to accept recipes that run for a week), but the problem is that we allow recipe owners to dynamically change the run time of a recipe.
I guess group/user based preferences make the most sense. The owner may favour some groups/users over others, which is perfectly legitimate.
On 01/02/2013 12:57 PM, Raymond Mancy wrote:
> I guess group/user based preferences make the most sense. The owner may favour some groups/users over others, which is perfectly legitimate.
Yep, the focus is user/group preferences. We want to make it easier for a system owner to say "If any of my team have jobs in the queue, run those, otherwise run someone else's job". At the moment, system owners that want to prioritise their own team's jobs effectively have to opt out of contributing to the public pool at all.
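In sort-key terms, that kind of owner preference could be as simple as the following (purely illustrative; the attribute names and helper structure are assumptions, not a proposed schema):

    # Purely illustrative sketch of "run my team's jobs first, otherwise anyone's".
    def choose_recipe_for(system, queued_recipes):
        team_jobs = [r for r in queued_recipes
                     if r.job.owner in system.owner.group_members]
        pool = team_jobs or queued_recipes
        if not pool:
            return None
        # Within the chosen pool, fall back to job priority, then age.
        return min(pool, key=lambda r: (-r.recipeset.priority, r.id))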
The flipside is finally exposing the owner/group/public pool preference in recipe definitions and in the user preferences. That preference will apply when the recipe is first checked against the pool of idle systems; if there's no appropriate system available immediately, the recipe will enter the queue and be assigned to the first system that wants to run it.
It wouldn't surprise me if we eventually go down the Condor path of allowing system owners to prefer jobs that make better use of a machine's capabilities, but there are plenty of things ahead of that on the to-do list.
Cheers, Nick.