Future directions for the Beaker test harness
by Nick Coghlan
A few weeks back, Amit put together a proof of concept for running the
test harness in a container, rather than directly on the host
(http://gerrit.beaker-project.org/#/c/3199).
That proof of concept relies on restraint, the new reference harness,
which is intended to eventually replace beah
(https://beaker-project.org/dev/proposals/reference-harness.html).
At the same time, I don't think restraint is currently getting the level
of review and testing that it needs to mature into a plausible
replacement for beah as the default harness.
I think Amit's proposed patch provides a possible way forward:
1. Accept the initial approach where restraint is the *only* supported
harness when running inside a container. Specifying both
"contained_harness" and "harness" as ks_meta variables should be an
error at this point (see the sketch after this list). Side note:
"harness" should also be documented along with all the other ks_meta
variables, with a link to
https://beaker-project.org/docs/alternative-harnesses/.
2. Recommend publishing both beah *and* restraint in the harness repos
for Beaker installations. This will not only make restraint available
for container-based testing, but also make it readily available via
"harness=restraint" for normal testing, without needing to add a custom
repo definition.
3. Once we have container based testing working reliably with restraint,
drop the restriction against using alternative harnesses in containers.
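For point 1, the check could be as simple as the following rough sketch.
It assumes the kickstart metadata is available as a plain dict at
kickstart generation time; the function and exception here are purely
illustrative, not existing Beaker APIs.

    def check_harness_settings(ks_meta):
        """Reject recipes that set both 'contained_harness' and 'harness'."""
        # While restraint is the only harness supported inside containers,
        # overriding 'harness' alongside 'contained_harness' is an error.
        if 'contained_harness' in ks_meta and 'harness' in ks_meta:
            raise ValueError(
                "'harness' cannot be combined with 'contained_harness'; "
                "restraint is currently the only supported harness in "
                "containers")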
The priority at the moment, though, is to get something working that can
run on an Atomic Host and still provide a relatively normal execution
environment for the executed tasks. Supporting alternative harnesses
*inside* containers is a nice-to-have that can wait until later - by
flatly disallowing it, we ensure we don't have to spend any time working
on container-related issues that don't impact restraint. For the initial
iteration, we can also ignore the question of choosing the base image
used to run the harness, as well as being able to start and stop other
containers on the host.
I've filed an RFE for 0.18 on that basis:
https://bugzilla.redhat.com/show_bug.cgi?id=1131388
As part of this, we may also want to move restraint from Bill's personal
account on GitHub to the main Beaker project account, but I don't think
that's particularly urgent at this point.
Regards,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
Adding resource management to beaker
by Bill Peck
Hello Beaker,
One of the jobs we run uses shared storage. For example:
HostA with Driver1
HostB with Driver2
HostC with Driver3
All connected to StorageX.
We can't run the test on more than one host at a time since it would
conflict.
The goal is to be able to specify something like the following in the
job XML:
    <recipeSet>
      <recipe/>   <!-- Require HostA -->
      <resource/> <!-- Require StorageX -->
    </recipeSet>
    <recipeSet>
      <recipe/>   <!-- Require HostB -->
      <resource/> <!-- Require StorageX -->
    </recipeSet>
    <recipeSet>
      <recipe/>   <!-- Require HostC -->
      <resource/> <!-- Require StorageX -->
    </recipeSet>
I know there is a resource type for recipes, but this doesn't seem like
a good solution. I did try an experiment where I created a StorageX host
in Beaker with no power control and scheduled three recipe sets like the
above. That hack won't work, though, because a watchdog with a NULL
expire time is set, which means the recipe will sit there forever. And
even if we worked around that, perhaps by creating a dummy power type,
we would still end up with an abort.
I don't think it would take much work to support this type of
multi-host/resource requirement.
Right now we have the following model:

- Recipe, which has the following attributes: distro_tree, resource,
  rendered_kickstart, watchdog, systems, dyn_systems, tasks, dyn_tasks,
  tags, repos, rpms, logs, custom_packages, ks_appends
- MachineRecipe, which inherits from Recipe and adds a guests attribute.
- GuestRecipe, which inherits from Recipe.
A ResourceRecipe object wouldn't need most of these attributes, probably
just the following: resource, systems, and dyn_systems (no watchdog; it
would just return when the recipe set completes or aborts).
And resource currently maps to a RecipeResource, which can be one of the
following:

- SystemResource
- VirtResource
- GuestResource

I think adding another object here would make sense, but the term
"Resource" is overused and it's a little confusing right now.
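To make that shape concrete, here is a rough sketch (not working Beaker
code). The class and attribute names follow the existing model described
above, but the stand-in base classes and the SharedResource name are
hypothetical.

    class Recipe(object):
        """Placeholder standing in for the existing Recipe model."""

    class RecipeResource(object):
        """Placeholder standing in for the existing RecipeResource model."""

    class ResourceRecipe(Recipe):
        """A recipe entry that only reserves a shared resource for its
        recipe set: no watchdog and no tasks, it just returns when the
        recipe set completes or aborts."""
        def __init__(self, resource=None, systems=None, dyn_systems=None):
            self.resource = resource          # the reserved RecipeResource
            self.systems = systems or []      # candidate shared resources
            self.dyn_systems = dyn_systems or []

    class SharedResource(RecipeResource):
        """A possible new RecipeResource subclass, alongside
        SystemResource, VirtResource and GuestResource, representing
        something like StorageX."""
        def __init__(self, name, lab_controller=None, key_values=None):
            self.name = name                      # e.g. "StorageX"
            self.lab_controller = lab_controller  # for per-lab filtering
            self.key_values = key_values or {}    # searchable properties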
If done correctly, we could re-use the filtering code in needproperty,
but I think the only things we could search on for resources would be
the following: Name, lab_controller, and key_values? (Storing how these
shared resources are connected would be another option, I suppose, but
that's maybe too complicated for a first implementation.)
I'm sure there are other things to consider; for example, we could
support setting up and tearing down shared resources (almost like a
power on/off command). I want to be mindful of things like that, but I
don't want to be paralysed by what-ifs and never get anything done.
I know you guys have been working hard on the groups features and want
to re-do the scheduler as well. Maybe this would be something for after
the scheduler re-write? I don't see any mention of this on the roadmap
[1] (although the roadmap is quite full already!).
In any event, I'm interested in hearing your ideas and feedback.
Thanks!
1 - http://beaker-project.org/dev/tech-roadmap.html
Simplifying running the inventory task
by Nick Coghlan
Currently, doing a hardware scan on a system requires a command line like the following:
bkr machine-test --inventory --family=RedHatEnterpriseLinux6 --arch=x86_64 --machine=<FQDN>
The "family" part may need to change when using Beaker to manage architectures that RHEL6 doesn't support. However, due to the fact beaker-system-scan currently still depends on smolt for some features, using RHEL7 or Fedora instead isn't ideal if RHEL6 supports the hardware.
In order to eventually add a "Scan this system now" button to the web UI (and, equivalently, a simple "bkr update-inventory <FQDN>" command), I think we may need to move the logic for choosing which distro family to use for the inventory task to the main Beaker server.
Specifically, I'm thinking of a mapping of architecture -> distro family + inventory task, with a default distro and inventory task to use for any architecture not explicitly listed, together with a dedicated server API, *separate* from the normal job submission API, that requests an inventory scan for a particular machine. The task of choosing which distro to use and which inventory task to run would then be handled by the server.
The purpose of this would be to better enable working with architectures that the default approach can't handle yet - standard supported architectures would use the default of /distribution/inventory-on-RHEL-6 (at least until we manage to remove the dependency on smolt, then we can switch to RHEL7 by default), while experimental architectures could use a different distro, or even a different inventory task.
By making this admin-configurable (including the default distro used), we'd also better support instances that don't have RHEL trees loaded at all, but only Fedora and/or CentOS.
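As a rough illustration of the kind of mapping I have in mind (the
structure and key names below are just one possibility, the
aarch64/Fedora entry is hypothetical, and the real thing would live in
admin-editable server configuration):

    # Default used for any architecture without an explicit entry.
    INVENTORY_DEFAULT = {
        'family': 'RedHatEnterpriseLinux6',
        'task': '/distribution/inventory',
    }

    # Per-architecture overrides for hardware the default family can't
    # handle yet.
    INVENTORY_BY_ARCH = {
        'aarch64': {'family': 'Fedora20', 'task': '/distribution/inventory'},
    }

    def pick_inventory(arch):
        """Return the distro family and inventory task to use for 'arch'."""
        return INVENTORY_BY_ARCH.get(arch, INVENTORY_DEFAULT)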
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
Feature requests for system performance monitoring
by Nick Coghlan
I thought we had some open feature requests for being able to monitor
system performance while tasks were running. After seeing the
"Performance Co-Pilot" talk at PyCon AU, I'm wondering whether we might
be able to come up with a task that allows PCP monitoring to be enabled
on the system under test and the logs uploaded for later analysis (we'll
need to be careful with this, since it could easily overwhelm the log
server if used indiscriminately).
However, I can't find the relevant RFEs. That may just be a search
failure on my part, though, so I figured I'd check if anyone else
remembered such a thing.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect