Adding resource management to Beaker
by Bill Peck
Hello Beaker,
One of the jobs we run uses shared storage. For example:
HostA with Driver1
HostB with Driver2
HostC with Driver3
All connected to StorageX.
We can't run the test on more than one host at a time since it would
conflict.
The goal is to be able to specify the following in the job XML:
<recipeSet>
<recipe/> <!-- Require HostA -->
<resource/> <!-- Require StorageX -->
</recipeSet>
<recipeSet>
<recipe/> <!-- Require HostB -->
<resource/> <!-- Require StorageX -->
</recipeSet>
<recipeSet>
<recipe/> <!-- Require HostC -->
<resource/> <!-- Require StorageX -->
</recipeSet>
I know there is a resource type for recipe, but this doesn't seem like a
good solution. I did try an experiment where I created a StorageX host
in Beaker with no power control and scheduled three recipesets like the
above. But this hack won't work, because a watchdog with a NULL expire
time is set, which means it will sit there forever. And even if we worked
around that, perhaps by creating a dummy power type, we would still end
up with an abort.
I don't think it would take much work to support this type of
multi-host/resource requirement.
Right now we have the following model:
Object Recipe, which has the following attributes: distro_tree,
resource, rendered_kickstart, watchdog, systems, dyn_systems, tasks,
dyn_tasks, tags, repos, rpms, logs, custom_packages, ks_appends.
Object MachineRecipe, which inherits from Recipe and adds a guests
attribute.
Object GuestRecipe, which inherits from Recipe.
A ResourceRecipe object wouldn't need most of these attributes, probably
just the following: resource, systems, and dyn_systems (no watchdog; it
would just return when the recipeSet completes or aborts).
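To make that concrete, here is a rough sketch of the shape such a class
might take, assuming the existing Recipe class in bkr.server.model; the
is_finished() method and the recipeset relationship used here are
illustrative, not a worked-out design:

    from bkr.server.model import Recipe

    class ResourceRecipe(Recipe):
        # Hypothetical recipe type that only reserves a shared resource.
        # Of the attributes Recipe provides, only resource, systems, and
        # dyn_systems would be meaningful here; there is no watchdog,
        # kickstart, or task list.

        def is_finished(self):
            # With no tasks of its own, a ResourceRecipe is done as soon
            # as the enclosing recipe set completes or aborts.
            return self.recipeset.is_finished()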
And resource currently maps to a RecipeResource which can be one of the
following:
SystemResource
VirtResource
GuestResource
I think adding another object here would make sense, but the term
Resource is already overused and it's a little confusing right now.
If done correctly we could re-use the filtering code in needproperty,
but I think the only things we could search on for resources would be
the following: name, lab_controller, and key_values. (Storing how these
shared resources are connected would be another option, I suppose, but
maybe too complicated for a first implementation?)
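As a rough illustration of both points (the extra resource object and
how little a first cut of the filtering might need), here is a
hypothetical sketch; the SharedResource class, its attributes, and the
filter semantics are all invented for illustration:

    class SharedResource(object):
        # Hypothetical shared-resource record, e.g. a storage array.
        def __init__(self, name, lab_controller, key_values=None):
            self.name = name
            self.lab_controller = lab_controller
            self.key_values = key_values or {}  # e.g. {'STORAGE': 'netapp'}

    def filter_resources(resources, name=None, lab_controller=None, **keys):
        # Mirrors what a needproperty-style filter might do for
        # resources: match on name, lab controller, and key/value pairs.
        for r in resources:
            if name is not None and r.name != name:
                continue
            if lab_controller is not None and r.lab_controller != lab_controller:
                continue
            if any(r.key_values.get(k) != v for k, v in keys.items()):
                continue
            yield r

For example, filter_resources(all_resources, name='StorageX') would be
the programmatic equivalent of the <resource/> requirement in the job
XML above.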
I'm sure there are other things to consider; for example, we could
support setting up and tearing down shared resources (almost like a
power on/off command). I want to be mindful of things like that, but I
don't want to be paralysed by what-ifs and never get anything done.
I know you guys have been working hard on the groups features and want
to re-do the scheduler as well. Maybe this would be something for after
the scheduler re-write? I don't see any mention of this on the roadmap
[1] (although the roadmap is quite full already!).
In any event, I'm interested in hearing your ideas and feedback.
Thanks!
1 - http://beaker-project.org/dev/tech-roadmap.html
Beaker 0.16.2 released
by Dan Callaghan
The Beaker 0.16.2 maintenance release is now available from
beaker-project.org [1]. (It was actually published two weeks ago; my
apologies for the delay in this announcement.)
This release fixes a number of minor bugs. It also defines a new
kickstart metadata variable, beah_no_ipv6, which will cause Beah to
avoid using IPv6 even when it is available. You should set this variable
(for example, ks_meta="beah_no_ipv6" on the <recipe> element) if your
recipe performs destructive network testing which affects IPv6
connectivity. The previous workaround for this situation, setting
beah_rpm=beah-0.6.48, is no longer necessary with the new option.
Updated versions of beah, rhts, and /distribution/reservesys have also
been released.
A complete description of the bug fixes in this release can be found in
the updated release notes [2]. The detailed list of all changes made
since Beaker 0.16.1 is also available [3].
[1] https://beaker-project.org/releases/
[2] https://beaker-project.org/docs/whats-new/release-0.16.html#beaker-0-16-2
[3] https://git.beaker-project.org/cgit/beaker/log/?qt=range&q=beaker-0.16.1....
--
Dan Callaghan <dcallagh(a)redhat.com>
Software Engineer, Hosted & Shared Services
Red Hat, Inc.
Fwd: beah components and their interaction
by Amit Saha
I have found myself referring to these notes fairly regularly while
working on Beah. Would it be a good idea to put them in the Beah
developer guide? I can't promise that it would be any better than what
is below.
> During a test run, several Beah components interact over TCP/IP within
> the system itself.
>
> Beah local servers
> ==================
>
> When you log in to a test system (say, when "reservesys" is running), you
> will see the following Beah-specific servers listening:
>
> beah-srv 9353 root 10u IPv4 28367 0t0 TCP *:12432 (LISTEN)
> beah-srv 9353 root 11u IPv6 28368 0t0 TCP localhost:12432 (LISTEN)
> beah-srv 9353 root 13u IPv4 28370 0t0 TCP *:12434 (LISTEN)
> beah-srv 9353 root 14u IPv6 30919 0t0 TCP localhost:12434 (LISTEN)
> beah-rhts 10898 root 7u IPv4 31905 0t0 TCP localhost:7085 (LISTEN)
> beah-rhts 10898 root 11u IPv6 31906 0t0 TCP localhost:7085 (LISTEN)
>
> Note that this is the IPv6-capable harness that we plan to release soon,
> hence each of the servers is listening on both IPv4 and IPv6 interfaces.
>
> The 'beah-srv' process corresponds to the function "start_server" in
> beah/wires/internals/twserver.py
> and it basically starts the TaskListener and BackendListener, whose presence
> you can usually see in
> the console logs:
>
> 2014-01-29 00:55:48,719 beah start_server: INFO Controller: BackendListener listening on 127.0.0.1:12432
> 2014-01-29 00:55:48,720 beah start_server: INFO Controller: BackendListener listening on ::1:12432
> 2014-01-29 00:55:48,720 beah start_server: INFO Controller: BackendListener listening on /var/beah/backend12432.socket
> 2014-01-29 00:55:48,722 beah start_server: INFO Controller: TaskListener listening on 127.0.0.1:12434
> 2014-01-29 00:55:48,723 beah start_server: INFO Controller: TaskListener listening on ::1:12434
> 2014-01-29 00:55:48,723 beah start_server: INFO Controller: TaskListener listening on /var/beah/task12434.socket
>
> These servers exist throughout a recipe run on the test system. The
> corresponding "client" programs live
> in beah/wires/internals/twbackend.py and beah/wires/internals/twtask.py.
>
>
> The beah-rhts-task (beah/tasks/rhts_xmlrpc.py:main()) starts a server *per
> task*; this is the result server, and it exits on task completion.
>
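> To make the dual IPv4/IPv6 plus Unix socket listening concrete, here is a
> minimal self-contained Twisted sketch; this is not the actual start_server
> code, and the echo protocol and socket path are placeholders:
>
>     from twisted.internet import protocol, reactor
>
>     class Echo(protocol.Protocol):
>         # Placeholder protocol; the real listeners speak Beah's own
>         # command protocol.
>         def dataReceived(self, data):
>             self.transport.write(data)
>
>     factory = protocol.ServerFactory()
>     factory.protocol = Echo
>
>     # Listen the way beah-srv does: on all IPv4 interfaces, on the
>     # IPv6 loopback, and on a Unix domain socket.
>     reactor.listenTCP(12432, factory, interface='0.0.0.0')
>     reactor.listenTCP(12432, factory, interface='::1')
>     reactor.listenUNIX('/tmp/backend12432.socket', factory)
>     reactor.run()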
>
> Beah daemons
> ============
>
> The following beah daemons are started at system boot:
>
> beah-fwd-backend
> ================
>
> This handles the communication during multi-host jobs.
>
> Source: beah/beaker/backends/forwarder.py
>
>
> beah-beaker-backend
> ===================
>
> Talks to the lab controller's beaker-proxy process over XML-RPC.
>
> Source: beah/beaker/backends/beakerlc.py
>
>
> beah-srv
> ========
>
> The main daemon process we saw above.
>
> Source: beah/bin/srv.py
>
> As you can see from the following status, the beah-rhts-task is spawned by
> beah-srv:
>
> # systemctl status beah-srv
> beah-srv.service - The Beaker Harness server.
> Loaded: loaded (/usr/lib/systemd/system/beah-srv.service; enabled)
> Active: active (running) since Wed 2014-01-29 00:55:47 EST; 20min ago
> Main PID: 9353 (beah-srv)
> CGroup: /system.slice/beah-srv.service
> ├─ 9353 /usr/bin/python /usr/bin/beah-srv
> └─10898 /usr/bin/python /usr/bin/beah-rhts-task
>
>