A few weeks back, Amit put together a proof of concept for running the
test harness in a container, rather than directly on the host
(http://gerrit.beaker-project.org/#/c/3199).
That proof of concept relies on restraint, the new reference harness
that is intended to eventually replace beah
(https://beaker-project.org/dev/proposals/reference-harness.html).
At the same time, I don't think restraint is currently getting the level
of review and testing that it needs to mature into a plausible
replacement for beah as the default harness.
I think Amit's proposed patch provides a possible way forward:
1. Accept the initial approach where restraint is the *only* supported
harness when running inside a container. Specifying both
"contained_harness" and "harness" as ks_meta variables should be an
error at this point (side note: 'harness' should also be documented
along with all the other ks_meta variables, with a link to
https://beaker-project.org/docs/alternative-harnesses/)
2. Recommend publishing both beah *and* restraint in the harness repos
for Beaker installations. This will not only make restraint available
for container-based testing, but also make it readily available via
"harness=restraint" for normal testing (see the example after this
list), without needing to add a custom repo definition.
3. Once we have container-based testing working reliably with
restraint, drop the restriction against using alternative harnesses in
containers.
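To make point 2 concrete (this is just my own sketch, not something
taken from Amit's patch): once restraint is published in the standard
harness repos, opting in to it for an ordinary recipe should only be a
matter of the existing ks_meta variable, along the lines of

    <recipe ks_meta="harness=restraint">
        <!-- rest of the recipe unchanged; Beaker installs the restraint
             package instead of beah before running the tasks -->
        ...
    </recipe>

with the proposed "contained_harness" variable then being the separate
switch for the container case.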
The priority at the moment though is to get something working that can
run on an Atomic Host and still provide a relatively normal execution
environment for the executed tasks. Supporting alternative harnesses
*inside* containers is a nice-to-have that can wait until later. By
flatly disallowing it for now, we ensure we don't have to spend any
time working on container-related issues that don't impact restraint.
For the initial iteration, we can also ignore the question of choosing
the base image used to run the harness, as well as being able to start
and stop other containers on the host.
I've filed an RFE for 0.18 on that basis:
https://bugzilla.redhat.com/show_bug.cgi?id=1131388
As part of this, we may also want to move restraint from Bill's personal
account on GitHub to the main Beaker project account, but I don't think
that's particularly urgent at this point.
Regards,
Nick.
--
Nick Coghlan
Red Hat Hosted & Shared Services
Software Engineering & Development, Brisbane
HSS Provisioning Architect
I had an interesting email discussion with Adam Williamson in Fedora
QA after he posted to the Fedora qa-devel list about getting OpenQA
set up in Fedora infrastructure to automate some of their previously
manual release testing activities and post the results back to the
Fedora wiki.
With Fedora 23 almost out the door, it seems to me that earlyish in
the Fedora 24 cycle might be a good time to experiment further with
the instance at beaker.fedoraproject.org, but that would depend on
resolving the remaining Fedora integration issues.
Is there a status update available on those at all?
Cheers,
Nick.
--
Nick Coghlan
Fedora Environments & Stacks
Red Hat Developer Experience, Brisbane
Software Development Workflow Designer & Process Architect
Hi folks,
The public Beaker road map docs haven't been updated in a while:
* https://beaker-project.org/dev/tech-roadmap.html
* https://beaker-project.org/dev/release-roadmap.html
With Beaker 21 out the door, would it be possible to get those updated
to better reflect current status and plans?
Cheers,
Nick.
--
Nick Coghlan
Fedora Environments & Stacks
Red Hat Developer Experience, Brisbane
Software Development Workflow Designer & Process Architect
For some time yesterday the SSL certificate for beaker-project.org was
expired. If you were trying to access anything there, your
browser/client would (I hope!) have given you an SSL error.
I've deployed a new valid certificate this morning. Sorry for any
inconvenience.
--
Dan Callaghan <dcallagh(a)redhat.com>
Senior Software Engineer, Products & Technologies Operations
Red Hat, Inc.
In several previous retrospectives, the issue of long wait times for
Gerrit reviews has come up. To quantify the problem, I have written a
little script which looks at patch sets submitted to Gerrit in the last
year and measures how long each one waited for a review.
https://git.beaker-project.org/cgit/beaker-administrivia/commit/?id=0ce9db1…
The "time to first review" is defined as the period of time between the
patch set's creation in Gerrit and the first comment written on it by
someone other than the patch author or Jenkins.
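To make that definition concrete, here is a rough sketch (not the
script linked above, and simplified to treat the change as a whole) of
how that wait could be computed from a change fetched via Gerrit's REST
API with its messages included; the exact field names and timestamp
handling here are assumptions:

    # Rough sketch only. Assumes `change` is a dict as returned by
    # Gerrit's REST API with the MESSAGES option, i.e. containing
    # 'created', 'owner' and 'messages' entries.
    from datetime import datetime

    GERRIT_TS = '%Y-%m-%d %H:%M:%S.%f'

    def parse_ts(value):
        # Gerrit timestamps carry nanosecond precision; trim to
        # microseconds so strptime can parse them.
        return datetime.strptime(value[:26], GERRIT_TS)

    def time_to_first_review(change, ignored=('jenkins',)):
        # Wait between creation and the first comment from someone
        # other than the owner or Jenkins; None if not yet reviewed.
        created = parse_ts(change['created'])
        owner = change['owner'].get('username')
        for message in sorted(change['messages'], key=lambda m: m['date']):
            author = message.get('author', {}).get('username')
            if author and author != owner and author not in ignored:
                return parse_ts(message['date']) - created
        return None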
For now, you can see the output of the script running each night here:
https://beaker-project.org/~dcallagh/gerritstats/
We can find a more permanent home for the graph in future if it turns
out to be useful.
--
Dan Callaghan <dcallagh(a)redhat.com>
Senior Software Engineer, Products & Technologies Operations
Red Hat, Inc.