Beaker provisioning without Cobbler
by Dan Callaghan
Dear list,
I've been working on removing Beaker's dependence on Cobbler for
provisioning systems. This mail is to describe the approach I am
proposing, and to seek feedback on it.
At present, Beaker requires cobblerd to be running on each lab
controller. When a new distro comes along, it must be imported into
Cobbler. From there, a script is run to register new Cobbler distros
with the Beaker server. (Bill has been doing some work on this side of
things separately.)
When it comes time to reboot or power off a system -- either because a
user has manually requested it, or the scheduler is starting/stopping a
job -- the Beaker server makes a series of XML-RPC calls to cobblerd on
the lab controller, requesting that it execute the appropriate power
control script. The parameters for the power script are based on the
system's power settings stored in Beaker.
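Roughly, today's server-side flow looks like the sketch below. The Cobbler XML-RPC method names (login, background_power_system, get_task_status) are my recollection of Cobbler's remote API and should be treated as assumptions, as should the arguments:

    import xmlrpclib

    # Sketch only: the method names are assumptions about Cobbler's
    # remote API, and the arguments are illustrative.
    cobbler = xmlrpclib.ServerProxy('http://lab1.example.com/cobbler_api')
    token = cobbler.login('beaker', 'secret')
    # Ask cobblerd to run the power script as a background task...
    task = cobbler.background_power_system(
            {'systems': ['host1.example.com'], 'power': 'off'}, token)
    # ...then the Beaker server must keep polling for the outcome.
    state = cobbler.get_task_status(task)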
Similarly, provisioning a distro on a system means making a series of
XML-RPC calls to cobblerd to configure netbooting for the system and
then rebooting it.
As a first step towards removing Cobbler, I am tackling the power
command side of things. Since version 0.6.14, Beaker has had a
per-system queue of power commands, to handle the fact that some systems
take a very long time to power on and off. Right now a dedicated thread
runs in beakerd (on the Beaker server), processing new power commands
and checking the status of running ones. In each case it has to make an
XML-RPC call back to cobblerd on the lab controller.
We can flip this relationship on its head by making the lab controller
poll the Beaker server for new power commands. When it sees one, it
executes the power script and reports the result back. This is similar
to the way the existing beaker-watchdog daemon works: it polls the
server periodically for new and expired watchdog records and acts on
them.
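To make that concrete, the new daemon's main loop would look something
like the sketch below. The XML-RPC method names are placeholders, not
the interface the patch actually defines:

    import time
    import xmlrpclib

    # Sketch of the proposed polling loop; method names are placeholders.
    server = xmlrpclib.ServerProxy('https://beaker.example.com/RPC2')

    def run_power_script(cmd):
        # Placeholder: look up cmd['action'] and the system's power
        # settings, then execute the matching power script.
        return 0

    while True:
        cmd = server.get_queued_command()            # placeholder name
        if not cmd:
            time.sleep(20)                           # poll interval
            continue
        result = run_power_script(cmd)
        server.report_command_result(cmd['id'], result)  # placeholder name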
The main advantage of this approach is that we no longer need
bi-directional communication between lab controllers and the Beaker
server. Instead, all requests go from the lab controllers to the
server. If a lab controller goes down, or becomes unreachable, errors do
not pile up on the Beaker server. The queue for systems in that lab will
simply not progress until the lab controller comes back online, at which
point it should be able to recover gracefully. This also allows labs to
sit behind NAT. Plus, it is more efficient to have the lab controller
report back when a power script finishes than to have the Beaker server
poll cobblerd for the status of the command.
I have a proof-of-concept patch which implements power command handling.
You can view and comment on the patch here:
http://gerrit.beaker-project.org/912
In this patch I have used gevent, a library for event-driven
asynchronous programming using "greenlets". I wanted to avoid using
threads for supervising the power scripts, because in a large lab with
hundreds of test systems and many power commands running concurrently,
having a thread per command (actually at least two: one to read from
stdout and one to read from stderr) would waste a lot of memory. I chose
gevent over Twisted because a lot of existing code can be used as-is,
without porting all of it to Twisted. You can read more about gevent
here:
http://www.gevent.org/
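For a taste of what this looks like, here is a minimal sketch of
supervising one power script with greenlets. It assumes gevent >= 1.0,
whose gevent.subprocess module provides a cooperative Popen, and it
previews the timeout handling from the first todo item below:

    import gevent
    from gevent import subprocess

    def supervise(args, timeout=300):
        # gevent.subprocess.Popen cooperates with the event loop, so each
        # reader below is a cheap greenlet rather than an OS thread.
        proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out_reader = gevent.spawn(proc.stdout.read)
        err_reader = gevent.spawn(proc.stderr.read)
        # Don't let a wedged power script run forever (passing False
        # suppresses the Timeout exception when it expires).
        with gevent.Timeout(timeout, False):
            proc.wait()
        if proc.poll() is None:
            proc.kill()
            proc.wait()
        return proc.returncode, out_reader.get(), err_reader.get()

    # Hundreds of these can run concurrently in a single OS thread:
    job = gevent.spawn(supervise, ['/bin/sleep', '2'])
    rc, out, err = job.get()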
Still on my todo list for this patch:
* implement timeouts for the power scripts, so that they can't run
forever and never return
* add optional support for receiving power commands over AMQP (as an
optimisation over polling), like the beaker-watchdog daemon currently
has
* make the daemon shut down cleanly: if there are any power scripts
running, they should be allowed to complete and report their results
before being killed (sketched below)
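For the clean shutdown item, something along these lines should work (a
rough sketch; the workers list and the poll loop stand in for whatever
state the daemon actually keeps):

    import signal
    import gevent

    workers = []             # greenlets supervising in-flight power scripts
    shutting_down = [False]  # mutable flag the signal handler can flip

    def on_sigterm(signum, frame):
        shutting_down[0] = True

    signal.signal(signal.SIGTERM, on_sigterm)

    while not shutting_down[0]:
        # ... poll for a command, gevent.spawn() a supervisor into
        # workers ...
        gevent.sleep(1)

    # Stop polling, but let running power scripts complete and report
    # their results before the daemon exits.
    gevent.joinall(workers)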
As a next step towards removing Cobbler, we can expand the command queue
to include provisioning commands and have the new beaker-provision
daemon process those also. I will be working on this next.
--
Dan Callaghan <dcallagh@redhat.com>
RHEVM Integration with beaker
by Bill Peck
Hi Steve,
I'd like to discuss how we can best integrate Beaker with RHEVM.
Goals:
1 - Dynamically create systems based on the requirements from the Beaker recipe.
- Attempt to schedule on RHEVM first.
2 - Quick provisioning by using pre-built images
3 - Provide images for operating systems that are difficult to automate
(Windows)
Implementing all of the goals at once may be too much. I'd rather see
us tackle 1 and then move on to 2 and then 3.
The big question is how do we do this efficiently? Do we want to be
able to support multiple RHEVM servers?
Thinking out loud here, this is how we currently process a recipe:
- A new recipe comes in that requires an x86_64 system with 4GB of RAM
and 20GB of disk space on Distro X.
- We select all systems that could possibly match, even ones in labs
that don't have Distro X (it may show up later).
- If the recipe is a multi-host recipe, we then remove bad choices,
since all recipes need to run in one lab.
- We schedule the recipe when a join condition from the recipe matches
one or more systems.
- Finally, when all recipes in a multi-host recipe set are scheduled, we
move all of them to Running and kick off PXE installs.
Because we want to create the system dynamically, we can't rely on SQL
to alert us that a match is available. I'm concerned about scalability:
I don't know if you remember this, but the old RHTS attempted to match
every queued recipe on every pass through the loop. It was horrendous!
The more queued recipes you had, the longer one loop took. The SQL
method works wonderfully because we join on the system being available,
so we never see recipes that can't be serviced. So my concern is: how do
we implement this with RHEVM without creating a bottleneck?
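To illustrate why the join scales (with made-up table and column names,
not Beaker's actual schema): the database only ever hands back
(recipe, system) pairs where a free system satisfies the requirements,
so recipes with no candidate system cost nothing per pass.

    # Toy schema, not Beaker's. Queued recipes with no free matching
    # system simply never appear in the query result.
    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class Recipe(Base):
        __tablename__ = 'recipe'
        id = Column(Integer, primary_key=True)
        status = Column(String(20))
        arch = Column(String(20))
        memory = Column(Integer)        # MB required

    class System(Base):
        __tablename__ = 'system'
        id = Column(Integer, primary_key=True)
        status = Column(String(20))
        arch = Column(String(20))
        memory = Column(Integer)        # MB installed

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    matches = (session.query(Recipe, System)
            .filter(Recipe.status == 'Queued')
            .filter(System.status == 'Free')
            .filter(System.arch == Recipe.arch)
            .filter(System.memory >= Recipe.memory)
            .all())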
I'm also wondering whether RHEVM will be able to tell us the reason it
can't create a host. For example, when we ask to create a 32GB system
and RHEVM replies that it can't, is that because it doesn't currently
have enough free RAM (other guests are running), or because the RHEVH
box only has 16GB of RAM and will never be able to satisfy the request?
If no host can ever satisfy it, we should abort the recipe.
I think you had some ideas on this and I'm hoping we can work through
them here. :-)
scheduling deadlock
by Jun'ichi Nomura
Hello,
I've observed a scheduling deadlock a few times in my Beaker instance.
It's a typical AB-BA deadlock, and it seems to arise because Beaker
schedules per Recipe when it should, I think, schedule per RecipeSet.
What do you think?
Steps to reproduce are as follows.
Suppose I have 3 jobs below, each contains a recipeset with 2 recipes:
job1: RS:1
R:11 requires hostA
R:12 requires hostB
job2: RS:2
R:21 requires hostB
R:22 requires hostC
job3: RS:3 (with high priority)
R:31 requires hostB
R:32 requires hostC
and I submit them with a certain interval in between:
submit job1
=> R:11 and R:12 take hostA and hostB, respectively.
submit job2
=> R:22 takes hostC.
R:21 is queued, waiting for hostB.
submit job3
=> R:31 and R:32 are queued.
When job1 completes and hostB is freed,
R:31 takes hostB, since it has higher priority than R:21.
Now RS:3 holds hostB while waiting for hostC, and RS:2 holds hostC
while waiting for hostB: an AB-BA deadlock between RS:2 and RS:3 ...
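If systems were granted per RecipeSet instead, the cycle could not
form, because a recipe set would take nothing until all of its recipes
can be satisfied at once. A toy sketch of that idea (not Beaker code):

    # Toy illustration only -- not Beaker's scheduler. A recipe set takes
    # systems only when every one of its recipes can be satisfied at the
    # same time, so it never holds one host while waiting for another.
    def try_schedule(required_hosts, free_systems):
        if all(host in free_systems for host in required_hosts):
            for host in required_hosts:
                free_systems.remove(host)
            return True
        return False    # take nothing, rather than hold hostB and wait

    free = set(['hostB'])                 # job1 done; hostC still busy
    assert not try_schedule(['hostB', 'hostC'], free)  # RS:3 takes nothing
    assert try_schedule(['hostB'], free)  # so R:21 can proceed, no deadlock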
--
Jun'ichi Nomura, NEC Corporation