A few weeks back, Amit put together a proof of concept for running the
test harness in a container, rather than directly on the host
(http://gerrit.beaker-project.org/#/c/3199)
That proof of concept relies on restraint, the new reference harness
that is intended to eventually replace beah
(https://beaker-project.org/dev/proposals/reference-harness.html)
At the same time, I don't think restraint is currently getting the level
of review and testing that it needs to mature into a plausible
replacement for beah as the default harness.
I think Amit's proposed patch provides a possible way forward:
1. Accept the initial approach where restraint is the *only* supported
harness when running inside a container. Specifying both
"contained_harness" and "harness" as ks_meta variables should be an
error at this point; a rough sketch of such a check follows this list.
(Side note: 'harness' should also be documented along with all the
other ks_meta variables, with a link to
https://beaker-project.org/docs/alternative-harnesses/)
2. Recommend publishing both beah *and* restraint in the harness repos
for Beaker installations. This will not only make restraint available
for container based testing, but also make it readily available via
"harness=restraint" for normal testing, without needing to add a custom
repo definition.
3. Once we have container based testing working reliably with restraint,
drop the restriction against using alternative harnesses in containers.
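To make the error check in point 1 above a bit more concrete, here's a
rough sketch of the behaviour I'd expect (the function name is
hypothetical, this is not the actual ks_meta handling code):

def check_harness_options(ks_meta):
    # For the initial iteration restraint is the only harness supported
    # inside a container, so specifying 'harness' explicitly alongside
    # 'contained_harness' is treated as a user error rather than being
    # silently ignored or overridden.
    if 'contained_harness' in ks_meta and 'harness' in ks_meta:
        raise ValueError(
            "'harness' cannot be combined with 'contained_harness': "
            "restraint is the only supported harness when running "
            "inside a container")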
The priority at the moment though is to get something working that can
run on an Atomic Host, and still provide a relatively normal execution
environment for the executed tasks. Supporting alternative harnesses
*inside* containers is a nice-to-have that can wait until later - by
flatly disallowing it, we ensure we don't have to spend any time working
on container related issues that don't impact restraint. For the initial
iteration, we can also ignore the question of choosing the base image
used to run the harness, as well as being able to start and stop other
containers on the host.
I've filed an RFE for 0.18 on that basis:
https://bugzilla.redhat.com/show_bug.cgi?id=1131388
As part of this, we may also want to move restraint from Bill's personal
account on GitHub to the main Beaker project account, but I don't think
that's particularly urgent at this point.
Regards,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
Hello Beaker,
One of the jobs we run uses shared storage. For example:
HostA with Driver1
HostB with Driver2
HostC with Driver3
All connected to StorageX.
We can't run the test on more than one host at a time since it would
conflict.
The goal is to be able to specify the following in the job XML:
<recipeSet>
<recipe/> <!-- Require HostA -->
<resource/> <!-- Require StorageX -->
</recipeSet>
<recipeSet>
<recipe/> <!-- Require HostB -->
<resource/> <!-- Require StorageX -->
</recipeSet>
<recipeSet>
<recipe/> <!-- Require HostC -->
<resource/> <!-- Require StorageX -->
</recipeSet>
I know there is a resource type for recipe, but this doesn't seem like a
good solution. I did try an experiment where I created a StorageX host in
Beaker with no power control and scheduled three recipe sets like the
above. But this hack won't work, because a watchdog with a NULL expire
time is set, which means it will sit there forever. Even if we worked
around that, perhaps by creating a dummy power type, we would still end
up with an abort.
I don't think it would take much work to support this type of
multi-host/resource requirement.
Right now we have the following model:
* Recipe, which has the following attributes: distro_tree, resource,
rendered_kickstart, watchdog, systems, dyn_systems, tasks, dyn_tasks,
tags, repos, rpms, logs, custom_packages, ks_appends
* MachineRecipe, which inherits from Recipe and adds a guests attribute.
* GuestRecipe, which inherits from Recipe.
A ResourceRecipe object wouldn't need most of these attributes; probably
just resource, systems, and dyn_systems (no watchdog, it would just be
released when the recipeSet completes or aborts).
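To sketch the lifecycle side of that (the hook and method names here are
made up for illustration; nothing like this exists in the current model
code):

def on_recipe_set_finished(recipe_set):
    # Hypothetical hook, run when a recipe set completes or aborts:
    # any ResourceRecipe reservations in the set are simply released.
    # No watchdog is ever created for them, so this is the only thing
    # that frees the shared resource for the next queued recipe set.
    for recipe in recipe_set.recipes:
        if isinstance(recipe, ResourceRecipe):
            recipe.resource.release()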
And resource currently maps to a RecipeResource which can be one of the
following:
SystemResource
VirtResource
GuestResource
I think adding another Object here would make sense, but the term
Resource is overused and it's a little confusing right now.
If done correctly we could re-use the filtering code in needproperty,
but I think the only things we could search on for resources would be
the following: Name, lab_controller, and key_values? (storing how these
shared resources are connected would be another option I suppose, but
maybe too complicated for a first implementation?)
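For example, something along these lines (the class name and fields are
hypothetical, and as noted above the naming would need some thought so
we don't overload 'Resource' even further):

class SharedResource(RecipeResource):
    # Hypothetical fourth resource type: a shared asset such as StorageX
    # that is reserved for the duration of a recipe set, rather than
    # being provisioned and powered on/off like a system.
    #
    # For a first implementation only a few fields would need to be
    # visible to the needproperty-style filtering code: the resource
    # name, its lab_controller, and perhaps key/value pairs describing
    # how the resource is connected to nearby systems.
    def __init__(self, name, lab_controller):
        self.name = name
        self.lab_controller = lab_controller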
I'm sure there are other things to consider; for example, we could
support setting up and tearing down shared resources (almost like a
power on/off command). I want to be mindful of things like that, but I
don't want to be paralysed by what-ifs and never get anything done.
I know you guys have been working hard on the groups features and want
to re-do the scheduler as well. Maybe this would be something for after
the scheduler re-write? I don't see any mention of this on the roadmap
[1] (although the roadmap is quite full already!).
In any event, I'm interested in hearing your ideas and feedback.
Thanks!
1 - http://beaker-project.org/dev/tech-roadmap.html
We're currently contemplating two paths to a "better scheduler".
1. Kill the bespoke scheduler, and replace it with Mesos.
We still don't know just how big the ramifications of that would be, but
if anyone wants to try out Mesos, this is probably the place to start:
https://timothysc.github.io/blog/2014/09/08/mesos-breeze/
(That may involve waiting until Fedora 21 is actually released, but
that's not too far away)
2. Improve the bespoke scheduler with PyGMO
The heart of the current scheduler is the "schedule_queued_recipes"
function. That essentially treats the recipe queue and the idle systems
as a 2-D matrix, and tries to map one to the other. However, it does so
incrementally on a recipe-by-recipe basis, which makes it difficult to
determine a "best fit" option that tries to get entire recipe sets
running immediately, minimises the amount of unused RAM or disk space, etc.
At PyCon New Zealand, I was introduced to a multi-objective optimiser
library published by the European Space Agency: https://esa.github.io/pygmo/
Whereas switching to Mesos would be a big architectural change, adopting
PyGMO to make the current scheduler *better* might be feasible by
switching "schedule_queued_recipes" to an approach where it:
1. Reads the current recipe queue and idle system sets from the database
2. Organises them into a format suitable for handing over to PyGMO
3. Runs PyGMO over the data set with an appropriate cost function to be
minimised
4. Assigns queued recipes to idle systems based on the results
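To make steps 2 and 3 a little more concrete, here's a very rough sketch
of the kind of cost function I have in mind. It's plain Python with
made-up attribute names for the recipe and system data, and the actual
wrapping of this into a PyGMO problem and optimiser run is deliberately
left out:

HARD_PENALTY = 1e9  # effectively "reject this assignment outright"

def assignment_cost(assignment, queued_recipes, idle_systems):
    # 'assignment' is one candidate solution: assignment[i] is the index
    # of the idle system proposed for queued_recipes[i], or None if that
    # recipe stays queued. (Rejecting solutions that assign the same
    # system twice, or that split a recipe set across labs, is omitted
    # here but would work the same way as the hard penalty below.)
    cost = 0.0
    for recipe, sys_idx in zip(queued_recipes, assignment):
        if sys_idx is None:
            # Leaving a recipe queued costs something, weighted by age,
            # so older recipes are preferred when capacity is scarce.
            cost += recipe.hours_in_queue
            continue
        system = idle_systems[sys_idx]
        if not recipe.host_requires_match(system):
            # Hard constraint: hardware requirements must be satisfied.
            return HARD_PENALTY
        # Soft constraint: prefer assignments that waste less RAM.
        cost += max(0, system.ram_mb - recipe.needed_ram_mb) / 1024.0
    return cost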
Such a "holistic scheduler" approach could avoid various problems we
have seen with the current algorithm, where its incremental
recipe-by-recipe approach can cause scheduling lock ups for multihost
recipe sets and various other problems (only some of which have been
worked around at this point).
However, the PyGMO approach has two big unanswered questions: what kind
of compute resources we would need to use it effectively, and what kind
of cost function would be appropriate. The cost function would need to
address both the hard constraints (satisfying specific hardware
requirements, and keeping entire recipe sets in the same lab) and the
more flexible factors, like minimising wasted RAM and preferring to run
older recipes first.
The "event driven scheduler" idea in the current design proposals is
also still in the mix, but if someone finds the time to investigate
PyGMO and it looks promising, then they'll need some significant updates
to take that into account.
Regards,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
Hi folks,
I am working on bug[1] and have created a task to provision guest recipes with cloud images.
An example job can be found here[2]; it uses a RHEL7 image. There is one
issue: the hostname of the guest recipe is not resolved properly from
the DHCP server. It only shows an IP address, which may require further
investigation. Other than that, everything is looking fine.
The source code is located at my personal repo[3]. If you have any questions, please let me know.
Cheers,
Matt Jia
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1108455
[2] https://beaker-devel.app.eng.bos.redhat.com/jobs/5912
[3] https://github.com/matt8754/beaker-cloud-init
On behalf of the Beaker development team, I'm pleased to announce that
Beaker 0.18.0 is now available from the Beaker web site [1].
As always, the release notes [2] have the full story, but the highlights
in this release are:
* an improved usage reminder e-mail system
* a new --host-filter option for workflow commands, for pre-defined
<hostRequires/> XML snippets
* better support for "custom" distros (that is, Fedora- or
RHEL-compatible distros which are named something else)
If you are dealing with custom distros in your Beaker installation,
please note that there are some changes to the implementation details of
the kickstart templates which may affect any custom templates or
snippets you have. The release notes describe some potential issues; if
you have any other questions, we can help.
The detailed list of all changes made since Beaker 0.17.3 is also
available [3].
[1] https://beaker-project.org/releases/
[2] https://beaker-project.org/docs/whats-new/release-0.18.html
[3] https://git.beaker-project.org/cgit/beaker/log/?qt=range&q=beaker-0.17.3..b…
--
Dan Callaghan <dcallagh(a)redhat.com>
Software Engineer, Hosted & Shared Services
Red Hat, Inc.
From the Docker 1.1 release notes
(http://blog.docker.com/2014/07/announcing-docker-1-1/)
* / is now allowed as source of --volumes. This means you can bind-mount
your whole system in a container if you need to.
This seems like a potentially useful feature when it comes to running
test harnesses in a container.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect