Adding resource management to beaker
by Bill Peck
Hello Beaker,
One of the jobs we run uses shared storage. For example:
HostA with Driver1
HostB with Driver2
HostC with Driver3
All connected to StorageX.
We can't run the test on more than one host at a time since it would
conflict.
The goal is to be able to specify the following in the job XML:
<recipeSet>
<recipe/> <!-- Require HostA -->
<resource/> <!-- Require StorageX -->
</recipeSet>
<recipeSet>
<recipe/> <!-- Require HostB -->
<resource/> <!-- Require StorageX -->
</recipeSet>
<recipeSet>
<recipe/> <!-- Require HostC -->
<resource/> <!-- Require StorageX -->
</recipeSet>
I know there is a resource type for recipe, but that doesn't seem like
a good solution. I did try an experiment where I created a StorageX
host in Beaker with no power control and scheduled three recipe sets
like the above. But this hack won't work, because a watchdog with a
NULL expire time is set, which means the recipe will sit there forever.
And even if we worked around that, say by creating a dummy power type,
we would still end up with an abort.
I don't think it would take much work to support this type of
multi-host/resource requirement.
Right now we have the following model:
- Recipe, with attributes: distro_tree, resource, rendered_kickstart,
watchdog, systems, dyn_systems, tasks, dyn_tasks, tags, repos, rpms,
logs, custom_packages, ks_appends
- MachineRecipe, which inherits from Recipe and adds a guests attribute
- GuestRecipe, which inherits from Recipe
A ResourceRecipe object wouldn't need most of these attributes,
probably just resource, systems, and dyn_systems (no watchdog; it would
just return when the recipeSet completes or aborts).
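To make the shape concrete, here is a rough plain-Python sketch of that
idea (the stub base class and method names are mine, not real Beaker
code; the real thing would be an SQLAlchemy-mapped class in
bkr.server.model):

    class Recipe(object):
        """Stand-in for the existing Recipe base class."""
        def __init__(self):
            self.resource = None
            self.systems = []
            self.dyn_systems = []

    class ResourceRecipe(Recipe):
        """Holds a shared resource (e.g. StorageX) for its recipeSet.

        Needs no distro_tree, tasks or watchdog; it simply releases
        the resource when the enclosing recipeSet completes or aborts.
        """
        def recipe_set_finished(self):
            # Release the shared resource so other recipeSets can use it.
            self.resource = None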
And resource currently maps to a RecipeResource which can be one of the
following:
SystemResource
VirtResource
GuestResource
I think adding another object here would make sense, but the term
Resource is overused and it's a little confusing right now.
If done correctly, we could re-use the filtering code in needproperty,
but I think the only things we could search on for resources would be:
Name, lab_controller, and key_values? (Storing how these shared
resources are connected would be another option, I suppose, but that's
maybe too complicated for a first implementation.)
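For example, a hypothetical filter in the style of hostRequires might
look like this (the resourceRequires element and its children are made
up for illustration):

    <resource>
      <resourceRequires>
        <and>
          <name op="=" value="StorageX"/>
          <lab_controller op="=" value="lab1.example.com"/>
        </and>
      </resourceRequires>
    </resource>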
I'm sure there are other things to consider; for example, we could
support setting up and tearing down shared resources (almost like a
power on/off command). I want to be mindful of things like that, but I
don't want to be paralysed by what-ifs and never get anything done.
I know you guys have been working hard on the groups features and want
to re-do the scheduler as well. Maybe this would be something for after
the scheduler re-write? I don't see any mention of this on the roadmap
[1] (although the roadmap is quite full already!).
In any event, I'm interested in hearing your ideas and feedback.
Thanks!
1 - http://beaker-project.org/dev/tech-roadmap.html
Simplifying running the inventory task
by Nick Coghlan
Currently, doing a hardware scan on a system requires a command line like the following:
bkr machine-test --inventory --family=RedHatEnterpriseLinux6 --arch=x86_64 --machine=<FQDN>
The "family" part may need to change when using Beaker to manage architectures that RHEL6 doesn't support. However, due to the fact beaker-system-scan currently still depends on smolt for some features, using RHEL7 or Fedora instead isn't ideal if RHEL6 supports the hardware.
In order to eventually add a "Scan this system now" button to the web UI, and, equivalently, a simple "bkr update-inventory <FQDN>" command, I think we may need to move the logic for choosing a preferred distro family for the inventory task to the main Beaker server.
Specifically, I'm thinking of a mapping of architecture -> distro family + inventory task, with a default distro and inventory task to use for unspecified architectures, together with a dedicated server API *separate* from the normal job submission API that requests an inventory scan for a particular machine. The task of choosing which distro to use, and which inventory task to run, would then be handled by the server.
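As a sketch of what that mapping might look like (the setting name and
task paths are illustrative, not an existing Beaker option):

    # Hypothetical server-side setting: per-architecture defaults for
    # inventory scans, plus a fallback for unlisted architectures.
    INVENTORY_SCAN_DEFAULTS = {
        'x86_64': ('RedHatEnterpriseLinux6', '/distribution/inventory'),
        'aarch64': ('Fedora20', '/distribution/inventory'),
    }
    FALLBACK = ('RedHatEnterpriseLinux6', '/distribution/inventory')

    def inventory_scan_params(arch):
        # Returns the (distro family, inventory task) pair to use.
        return INVENTORY_SCAN_DEFAULTS.get(arch, FALLBACK)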
The purpose of this would be to better enable working with architectures that the default approach can't handle yet - standard supported architectures would use the default of /distribution/inventory-on-RHEL-6 (at least until we manage to remove the dependency on smolt, then we can switch to RHEL7 by default), while experimental architectures could use a different distro, or even a different inventory task.
By making this admin-configurable (including the default distro used), we'd also better support instances that don't have RHEL trees loaded at all, but only Fedora and/or CentOS.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
Fwd: Beaker distro list is wrong
by Raymond Mancy
Thanks Pavol, Forwarding this to the beaker-dev list.
----- Forwarded Message -----
From: "Pavol Babincak" <pbabinca(a)redhat.com>
To: "Raymond Mancy" <rmancy(a)redhat.com>
Sent: Friday, June 20, 2014 2:32:11 AM
Subject: Beaker distro list is wrong
Hi Raymond,
it seems Beaker's list of Tags & Distros[1] is incorrect.
Is direct e-mail the best way to solve this, or do you have a mailing
list or request tracker for contacting the people who take care of this?
Anyway, here is the list of tags which shouldn't be there:
- RHEL-5.11-Client-
- RHEL-5.11-Server-
- RHEL-5.11-Client-Alpha-1.1
- RHEL-5.11-Server-Alpha-1.1
They've been created by accident during compose promotion.
There are also a couple of distros:
- RHEL5.11-Server-20140606.0.n
- RHEL5.11-Server-20140604.0.n
- RHEL5.11-Server-20140603.0
- RHEL5.11-Server-20140603.0.n
- RHEL5.11-Server-20140602.0
- RHEL5.11-Server-20140601.0.n
- RHEL5.11-Server-20140529.0.n
- RHEL5.11-Server-20140525.0.n
- RHEL5.11-Server-20140522.0.n
- RHEL5.11-Server-20140521.0.n
- RHEL5.11-Server-20140515.0.n
- RHEL5.11-Server-20140508.0
- RHEL5.11-Server-20140430.0.n
- RHEL5.11-Server-20140416.0
- RHEL5.11-Server-20140416.0.n
- RHEL5.11-Server-20140413.0.n
- RHEL5.11-Server-20140411.0.n
- RHEL5.11-Server-20140403.0.n
- RHEL5.11-Server-20140327.5.n
- RHEL5.11-Server-20140307.0.n
- RHEL5.11-Server-20140304.0.n
- RHEL5.11-Server-20140228.0.n
- RHEL5.11-Server-20140221.0.n
- RHEL5.10-Server-20140218.0.n
- RHEL5.10-Server-20140103.0.n
- RHEL5.10-Server-20130910.0
- RHEL5.10-Server-20130805.3.n
which don't exist anymore.
I tried publishing the removal of a couple of those composes over the
message bus, as we do during regular compose removal[2]. But the
composes haven't been removed from those lists.
Can you help me find out what is wrong here?
[1] https://beaker.engineering.redhat.com/reserveworkflow
[2]
http://git.app.eng.bos.redhat.com/git/rcm/utility-scripts.git/tree/remove...
--
Pavol Babincak
Release Engineering, Red Hat
Problems with beaker-import and pulp
by Nick Strugnell
Hi -
I'm using beaker-import (build 0.17.0-1) pointed at a pulp kickstart
repository (on a Satellite 6 beta) and running into some issues:
[root@beaker ~]# beaker-import http://foo.com/pulp/repos/ACME_Corporation/Library/content/dist/rhel/serv...
2014-06-17 16:06:15,113 root CRITICAL No valid importer found for http://foo.com/pulp/repos/ACME_Corporation/Library/content/dist/rhel/serv...
The HTTP log on the pulp server shows:
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.composeinfo HTTP/1.1" 404 295 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.composeinfo HTTP/1.1" 404 295 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
192.168.100.191 - - [17/Jun/2014:17:06:14 +0200] "GET /pulp/repos/ACME_Corporation/Library/content/dist/rhel/server/6/6.5/x86_64/kickstart//.treeinfo HTTP/1.1" 404 292 "-" "Python-urllib/2.6"
Now, the pulp repository does not contain a .composeinfo or a .treeinfo,
but it _does_ contain a treeinfo (no dot) - surely that should be
enough? Am I missing something here?
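(For reference, a quick probe to confirm which metadata files the repo
actually serves; the base URL below is a placeholder:)

    import urllib2

    BASE = "http://foo.com/pulp/repos/.../kickstart/"  # placeholder

    for name in (".composeinfo", ".treeinfo", "treeinfo"):
        try:
            urllib2.urlopen(BASE + name)
            print "%s: found" % name
        except urllib2.HTTPError as e:
            print "%s: HTTP %d" % (name, e.code)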
Cheers,
Nick
--
Principal Architect, Europe
M: +44 7736 665171
nstrug(a)redhat.com
GPG FPR: 9C6C 093C 756A 6C57 49A1 E211 BBBA F5F5 C440 5DE0
Adding RHEL7 to patchbot CI checks
by Nick Coghlan
Patchbot currently runs the CI tests only on RHEL6. With RHEL7
released, we should add it to the CI runs - adding a second recipe set
to the job should be straightforward, and if either run fails, patchbot
will complain on Gerrit. This will chew up more resources on the devel
instance, but I think it's necessary.
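A minimal sketch of the extra recipe set (the distro requirement
elements are standard job XML; the task names are illustrative, not
the actual dogfood job definition):

    <recipeSet>
      <recipe>
        <distroRequires>
          <and>
            <distro_family op="=" value="RedHatEnterpriseLinux7"/>
            <distro_arch op="=" value="x86_64"/>
          </and>
        </distroRequires>
        <hostRequires/>
        <task name="/distribution/install"/>
        <task name="/distribution/beaker/dogfood"/>
      </recipe>
    </recipeSet>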
Before we do that, though, we need to double-check that the dogfood job
actually runs properly on RHEL7.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
Missing dependency beaker-common in beaker-project.org yum repos
by Dan Callaghan
This is just a heads-up that, starting Wednesday (2014-06-11), there
was a problem with the yum repos at <https://beaker-project.org/yum/>:
the beaker-common package (a new subpackage of beaker) was not included
in the repos. As a result, attempting to install or upgrade Beaker
packages would have failed with a dependency resolution error.
Sorry for the mistake. The issue is fixed now. As always, if you notice
any issues with the Beaker web site please report them to this list.
--
Dan Callaghan <dcallagh(a)redhat.com>
Software Engineer, Hosted & Shared Services
Red Hat, Inc.
Beaker 0.17.0 released
by Dan Callaghan
On behalf of the Beaker development team, I'm pleased to announce that
Beaker 0.17.0 is now available from the Beaker web site [1].
Last week Nick gave a good summary of the improvements in this release,
which I'll quote here:
> Some highlights:
>
> - the new "<reservesys/>" element in recipe definitions makes it
> possible to reserve a system even if the recipe is aborted due to a
> timeout, a kernel panic, or installation failure.
> - jobs can now be scheduled on Manual systems by name, so you can run
> a job on a specific system without having to make it available to
> satisfy arbitrary "hostRequires" filters
> - the "bkr machine-test" command has a new "--ignore-system-status"
> option that allows testing of Manual and Broken systems
> - "Removed" systems are now hidden from most of the UI by default (a new
> dedicated page has been added for resurrection of previously removed
> systems)
> - custom theming makes it possible to provide instance specific help links
>
> While it isn't production-ready yet, Beaker 0.17 also includes
> preliminary support for dynamically dispatching jobs that don't
> require real hardware to an associated OpenStack instance.
A number of other enhancements and bug fixes are included in this
release as well. For a complete description, refer to the Beaker 0.17
release notes [2].
The detailed list of all changes made since Beaker 0.16.2 is also
available [3].
[1] https://beaker-project.org/releases/
[2] https://beaker-project.org/docs/whats-new/release-0.17.html
[3] https://git.beaker-project.org/cgit/beaker/log/?qt=range&q=beaker-0.16.2....
--
Dan Callaghan <dcallagh(a)redhat.com>
Software Engineer, Hosted & Shared Services
Red Hat, Inc.
Draft Beaker 0.17 release notes published
by Nick Coghlan
We cut the Beaker 0.17 release candidate for pre-release testing today.
As part of that, the draft release notes are also now available:
https://beaker-project.org/docs-release-0.17/whats-new/release-0.17.html
Some highlights:
- the new "<reservesys/>" element in recipe definitions makes it
possible to reserve a system even if the recipe is aborted due to a
timeout, a kernel panic, or installation failure (see the sketch after
this list)
- jobs can now be scheduled on Manual systems by name, so you can run a
job on a specific system without having to make it available to satisfy
arbitrary "hostRequires" filters
- the "bkr machine-test" command has a new "--ignore-system-status"
option that allows testing of Manual and Broken systems
- "Removed" systems are now hidden from most of the UI by default (a new
dedicated page has been added for resurrection of previously removed
systems)
- custom theming makes it possible to provide instance specific help links
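Here is that reservesys sketch: the new element just sits at the end of
a recipe definition (the family and task names are illustrative):

    <recipe>
      <distroRequires>
        <distro_family op="=" value="RedHatEnterpriseLinux6"/>
      </distroRequires>
      <hostRequires/>
      <task name="/distribution/install"/>
      <!-- the system stays reserved even if the recipe aborts -->
      <reservesys/>
    </recipe>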
While it isn't production-ready yet, Beaker 0.17 also includes
preliminary support for dynamically dispatching jobs that don't require
real hardware to an associated OpenStack instance.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
Testing Solutions Team Lead
OpenStack dynamic VM support status update
by Nick Coghlan
Dan has been working on implementing the OpenStack design proposal, and
a preliminary experimental version will be available in Beaker 0.17.
This initial version allows individual users to supply credentials that
Beaker uses to log into OpenStack on their behalf; it then tries to run
single-host recipe sets submitted by that user on OpenStack rather than
bare metal whenever possible. The scheduler will choose an appropriate
VM flavour (based on hostRequires), dynamically install a distro via
iPXE and then run the recipe on the fresh instance. While the use of
iPXE means this still incurs the Anaconda install time, it will still be
faster than most bare metal systems (since there is no server POST to
run through), and it also means that all the usual kickstart-based
tweaks still work.
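As a rough illustration of the flavour-selection step (this is a
sketch, not the actual scheduler code; the matching rule is an
assumption):

    def pick_flavor(flavors, min_memory_mb):
        # Choose the smallest OpenStack flavour that satisfies the
        # recipe's memory requirement from hostRequires; None means
        # fall back to bare metal scheduling.
        candidates = [f for f in flavors if f.ram >= min_memory_mb]
        if not candidates:
            return None
        return min(candidates, key=lambda f: f.ram)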
The current plan is to get this iPXE-based version working cleanly to
thrash out most of the Beaker/OpenStack integration issues, and only
then start looking at image-based provisioning. Once we have image-based
provisioning working in OpenStack, we can then look at supporting it
natively in Beaker. This approach keeps the number of unknowns in play
at any given point in time at least somewhat manageable. While it's
going to take a while to get there, the end result should provide a nice
flexible system where simple requests without stringent hardware
requirements can be offloaded to generic compute resources in OpenStack,
while more exotic hardware requests stay under the management of
Beaker's relatively fine-grained inventory system.
As we work through the initial integration, I'm expecting three possible
outcomes for various things:
- "still needs work" on the Beaker side
- "configuration constraints" in terms of consequences for particular
design decisions for an associated OpenStack instance
- "OpenStack RFEs" for things that currently seem to be inherent
limitations of OpenStack
At this point, we have the following items in each category:
Still needs work on the Beaker side:
* console log retrieval (although this will be somewhat clunky for the
time being - see the RFE section)
* using the trust delegation model created for Heat rather than storing
raw credentials
* improving the scalability of the OpenStack integration to better
handle simultaneous scheduling of at least dozens (preferably hundreds)
of recipes without stalling the whole scheduling system
* expanding to support multi-host recipes
Configuration constraints:
* Short version: use routable IPs for an OpenStack instance paired with
Beaker rather than relying on the NAT system
* Long version:
* if you don't use routable IPs, you won't get usable SSH access to
dynamic VMs without setting up some kind of SSH proxy infrastructure (we
tried & failed to get this working with floating IPs - that doesn't
work, because the VM doesn't know its own external IP address)
* if you don't use routable IPs, you won't be able to mix-and-match
Beaker systems and OpenStack VMs in multi-host recipes (once we
implement that)
* if you don't use routable IPs, you won't get usable DNS names for
the systems
OpenStack RFEs:
* the current console log access APIs don't support console log
streaming properly - we'll file some specific RFEs once we get a basic
version based on the existing APIs working
* it currently looks like the setup process for trust delegation will be
quite messy from a user experience perspective - we may file some RFEs
about that, depending on how it turns out
* OpenStack Designate ("DNS-as-a-Service") will hopefully address some
of the limitations of non-routable IPs, but in the meantime, we'll just
be saying "if you use non-routable IPs, here's a bunch of things that
won't work for dynamic VMs"
Cheers,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
Testing Solutions Team Lead