To help narrow down a problem that sometimes occurs with lab controller
<-> main server communications, Beaker 0.13.2 is currently scheduled to
completely remove the implicit retries for the affected XML-RPC calls
(see https://bugzilla.redhat.com/show_bug.cgi?id=974352).
However, it occurs to me that there's a potentially big downside to
doing this: currently, we can usually restart the main web service (e.g.
for a maintenance update that doesn't include any database changes)
without significant ill effects, because it will just trip the automatic
retries in the lab controllers for any in-progress calls, and those calls
will generally succeed on the second attempt.
By turning the implicit retries off *completely* for the lab controller
daemons (rather than just reducing the number of attempts and logging
each failed attempt properly), any calls back to the server that are in
flight when it shuts down will actually *fail*.
So while our initial assessment was that the implicit retries were doing
more harm than good, I think this may be a case where we actually need
them. If I'm right, then 0.13.2 isn't shippable in its current form (we
need to either revert that change to the lab controllers, or update it
to still retry at least once, and log the first failure properly).
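For the record, the behaviour I'd prefer is a single bounded retry with
proper logging, roughly along these lines (a sketch only - call_server,
MAX_ATTEMPTS and RETRY_DELAY are placeholder names rather than actual
beaker-proxy code):

    import logging
    import socket
    import time
    import xmlrpclib

    log = logging.getLogger(__name__)
    MAX_ATTEMPTS = 2   # one retry, instead of retrying indefinitely
    RETRY_DELAY = 5    # seconds to wait before the second attempt

    def call_server(method, *args):
        """Invoke an XML-RPC method, retrying once and logging each failure."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                return method(*args)
            except (socket.error, xmlrpclib.ProtocolError) as exc:
                log.warning("XML-RPC call failed (attempt %d of %d): %s",
                            attempt, MAX_ATTEMPTS, exc)
                if attempt == MAX_ATTEMPTS:
                    raise
                time.sleep(RETRY_DELAY)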
Cheers,
Nick.
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)
We are very pleased to announce that Beaker v0.13.1 is now available
from the yum repos on beaker-project.org, and the main site has been
updated accordingly.
In this release we have introduced the first components of the Enhanced User
Groups design proposal [1]. This includes:
- explicit separation of "individual jobs" and "group jobs"
- support for user-defined groups
- optional LDAP integration for group definitions
Following the migration to Sphinx-based documentation, Beaker 0.13
introduces substantial changes to the upstream Beaker User Guide. These
changes should make it easier for users to get started working with an
existing Beaker installation. The greatest changes can be seen in the
section on writing Beaker tasks [2]. Improvements have also been made to
the job submission and execution documentation [3].
Additional details about the above features and smaller changes for this
release can be found in the release notes here:
http://beaker-project.org/docs/whats-new/release-0.13.html
Thank you to all who contributed to this release.
We're currently working on an incremental maintenance release (0.13.2),
while also starting to think seriously about the scope of Beaker 0.14
(which we're currently aiming to release some time in August).
Cheers,
Nick.
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)
Currently, the "System Pools" and "Implicit System Pools" design
proposals are structured such that the pools are initially made
explicit, and then we add a separate mechanism to implicitly associate
them with users and groups.
http://beaker-project.org/dev/proposals/system-pools.html
http://beaker-project.org/dev/proposals/implicit-system-pools.html
I'd like to propose that we instead consider approaching this differently,
and do the *implicit* pools first, using them as a back end
implementation technique for existing Beaker concepts, and then expand
them to additional use cases over time.
With this revised approach, the first increment of the functionality
would look at implicitly associating a system pool with every user. When
a user is the "owner" of a system, that system is considered part
of their implicitly associated pool.
The advantage of this approach is that it allows us to adopt the pool
model for system permissions handling, *without* needing to immediately
introduce new UI concepts for explicit management of the pools. Instead,
users would merely gain the ability to set usage policy for systems they
own.
The second increment of the functionality would then expand the implicit
pool concept to user groups. Associating a system with a group would
then make the system part of that group's pool in addition to being part
of the owner's pool. This increment would need to address some of the
thornier permissions questions that arise when a system is part of two
or more pools (at the very least the owner's pool and the group's pool,
but potentially pools for other groups as well).
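As a rough illustration of the membership rules involved (hypothetical
names only, not actual Beaker model code):

    # Sketch of which implicit pools a given system would belong to
    def implicit_pools(system):
        pools = [system.owner.implicit_pool]    # increment 1: the owner's pool
        for group in system.groups:             # increment 2: each group's pool
            pools.append(group.implicit_pool)
        return pools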
Finally, we would be able to expand the pool functionality to permit the
creation of explicit system pools, separate from the implicit pools,
in order to set special permissions for those systems (or just to ensure
they aren't used accidentally).
This seems like a more natural evolution from Beaker's existing
capabilities to where we want to end up.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)
Currently, the notion of "Submission Delegates" is part of the Enhanced
User Groups design proposal:
http://beaker-project.org/dev/proposals/enhanced-user-groups.html#submissio…
However, looking at some of the feedback that led to the creation of
that feature, I'm wondering whether it actually makes sense for "Submission
Delegates" to be a group-level feature. Since the only permission it
grants is the ability to submit jobs, perhaps it would make more sense
to pull it out into a separate design proposal as a *user*-level
feature?
Then the job header in the XML could just gain a "user=<username>"
attribute (similar to the recently added "group=<groupname>" attribute), and
all we would do on the back end is add an informational
"submitting_user" field to jobs in the database, and allow individual
*users* to nominate other users as submission delegates.
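For example, a delegated submission might end up looking something like
this (illustrative values only - the exact attribute name would be
settled in the separate proposal):

    <job user="some-user">
        ...
    </job>

with the authenticated submitter recorded in the new informational
"submitting_user" field.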
This seems like it would be simpler to implement and manage, and would
also better serve the intended use case: allowing an external automated
service to schedule and submit jobs on behalf of various users.
Cheers,
Nick.
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)
Currently, there's a conflict between the mechanism we use to ensure job
status is tracked correctly (marking jobs as dirty until the entire set
can be updated appropriately), and the mechanism we use to try to avoid
deadlocks between recipe sets that are competing for the same systems
(by always favouring recipe sets that already have some systems assigned).
The problem is that by taking the dirty recipes out of the set of those
considered for scheduling, we recreate the opportunity for the recipe
set deadlock (see [1]). Our instance appears to have enough machines
available that we don't hit it too often, but it looks like it may cause
more problems for smaller instances.
Ray has a suggestion for how we might be able to better cope with that:
> I wonder if we can just accept dirty jobs in the schedule_queued_recipes() method.
> As a dirty recipe it will be in the 'Queued' state, and the only state that would change
> to, by making it clean, would be 'Aborted' (or 'Cancelled' if the user cancels the job).
> So if, while it is dirty, the recipe is assigned a system and then later marked 'Aborted',
> the recipe will be cleaned up and the system released as would normally happen with a completed recipe.
After thinking it through a bit, I think we definitely need to do
*something* along these lines. I initially thought getting started on
some aspects of
http://beaker-project.org/dev/proposals/event-driven-scheduler.html
would be a different way to solve the problem, but it turns out that
needs the *exact* same ability to take queued-but-dirty recipes into
account when attempting to schedule recipes on newly available systems.
The one issue I see is that we really *don't* want to be making more
status changes to recipes that are part of a dirty job, or we're back
to risking status update conflicts - what we really want is a way to
flag those systems as off limits for the remainder of this pass over the
queue, *without* attempting to schedule the dirty recipe immediately.
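In pseudo-code, the kind of thing I'm picturing is roughly this
(hypothetical names, not the real scheduler implementation):

    # Sketch: keep systems matched to dirty recipes off limits for this pass
    def schedule_queued_recipes(queued_recipes, free_systems):
        held_for_dirty = set()
        for recipe in queued_recipes:
            candidates = [s for s in free_systems
                          if s not in held_for_dirty and recipe.matches(s)]
            if not candidates:
                continue
            if recipe.recipeset.job.is_dirty:
                # Don't touch the recipe's status, just make sure nothing
                # else grabs the system during this pass over the queue
                held_for_dirty.add(candidates[0])
            else:
                system = candidates[0]
                recipe.assign_system(system)
                free_systems.remove(system)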
Cheers,
Nick.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=952587
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Test Automation Team Lead
Beaker Development Lead (http://beaker-project.org/)
PulpDist Development Lead (http://pulpdist.readthedocs.org)
One of our two test failures on Fedora 18 is this one:
FAIL: bkr.inttest.server.test_jobs:TestJobsController.test_job_xml_can_be_roundtripped
The reason is a difference in toprettyxml()'s behavior between Python 2.6 and Python 2.7. Using the example from [1], on Python 2.6:
>>> from xml.dom import minidom
>>> d = minidom.parseString('<foo><bar>AAA</bar>BBB<bar>CCC</bar></foo>')
>>> print d.toprettyxml()
<?xml version="1.0" ?>
<foo>
	<bar>
		AAA
	</bar>
	BBB
	<bar>
		CCC
	</bar>
</foo>
And on Python 2.7:
>>> from xml.dom import minidom
>>> d = minidom.parseString('<foo><bar>AAA</bar>BBB<bar>CCC</bar></foo>')
>>> d.toprettyxml()
u'<?xml version="1.0" ?>\n<foo>\n\t<bar>AAA</bar>\n\tBBB\n\t<bar>CCC</bar>\n</foo>\n'
>>> print d.toprettyxml()
<?xml version="1.0" ?>
<foo>
	<bar>AAA</bar>
	BBB
	<bar>CCC</bar>
</foo>
I am not sure, but it looks like this is where the "fix" was committed on the 2.7/3.x branches [2].
[1] http://ronrothman.com/public/leftbraned/xml-dom-minidom-toprettyxml-and-sil…
[2] http://bugs.python.org/issue4147
At this moment, I am not sure what we can do about it, since changing our expected XML file will break the test on 2.6/RHEL 6.
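One option might be to normalize the whitespace that toprettyxml()
inserts before doing the comparison, rather than changing the expected
XML file itself. Something like this (just a sketch I haven't tried
against the actual test - actual_xml and expected_xml are placeholders):

    from xml.dom import minidom

    def normalized(xml_text):
        # Strip leading/trailing whitespace from every text node, so the
        # 2.6 vs 2.7 toprettyxml() differences disappear before comparing
        doc = minidom.parseString(xml_text)
        for element in doc.getElementsByTagName('*'):
            for child in element.childNodes:
                if child.nodeType == child.TEXT_NODE:
                    child.data = child.data.strip()
        return doc.toxml()

    assert normalized(actual_xml) == normalized(expected_xml)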
Best,
Amit.
--
Amit Saha <http://echorand.me>
Infrastructure Engineering and Development
Red Hat, Inc.
Besides adding the Beaker Gerrit remote in your .git/config [1], you can also define a BASH function such as:
# Push a local branch to Gerrit for review against develop
function git-push(){
    git push git+ssh://gerrit.beaker-project.org:29418/beaker "$1":refs/for/develop
}
And then, to push a branch named 908174 for review, simply run: $ git-push 908174
[1] http://beaker-project.org/dev/guide/writing-a-patch.html#submitting-your-pa…
Best,
-Amit.
--
Amit Saha <http://echorand.me>
Infrastructure Engineering and Development
Red Hat, Inc.
Beaker has been doing 0.x releases for a long time. We've been talking
about the release later this month being 1.0, but before we declare
we've reached that milestone, I'd really like us to be at the point
where someone could reasonably expect to grab the Beaker RPMs from
beaker-project.org, and using just our documentation, set up and start
running their own Beaker instance, including writing new tests using
either beah or autotest.
We're not there yet, and we're not going to get there this month, so the
next release will be 0.13 and we'll continue with regular 0.x releases
until we're happy we have something we consider worthy of the "Beaker
1.0" title. I don't think we're all that far away - we may need a 0.14,
but I'll be surprised if we see 0.15.
For planning purposes, I'd like to keep marking the backlog as "1.0" -
the 0.x releases will then just be snapshots that make useful chunks of
functionality available to users, rather than making them wait until we're
done with everything we want to include for the 1.0 milestone.
This will throw out the currently predicted releases in
http://beaker-project.org/dev/proposals/handling-large-installations.html and
the associated design proposals, but I'm not really worried about that.
In some respects, it works out better, as it means we can add the
"Effective Job Priorities" proposal to our targets for Beaker 1.0, and
that's the *real* objective of several of the changes we're currently
working on.
Once we're happy we know what we would like to include in 1.0, then I
can go back and adjust those design proposals and the tech roadmap
accordingly.
Based on the off-list discussions, here's what I would like to aim for
as our 1.0 milestone:
* The Effective Job Priorities design proposal (and its dependencies)
* Full Fedora compatibility (including the test suite)
* A stable alternate harness API (ideally with the corresponding
autotest patches merged on their side)
* A harness independent reservesys mechanism
* Moving any remaining hardcoded Red Hat specific settings into
configuration files
* Documentation improvements, including:
  * an architectural guide that at least explains all of Beaker's
    external interfaces
  * an explanation of the local watchdog hooks
  * more general advice on job design (including effectively using
    features like the job matrix)
* Self tests that exercise beah, the legacy harness API and the new
harness API
* Having a public page giving at least basic info on how Beaker is used
at Red Hat
Cheers,
Nick.
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Test Automation Team Lead
Beaker Development Lead (http://beaker-project.org/)
PulpDist Development Lead (http://pulpdist.readthedocs.org)