ResultsDB 2.0 - DB migration on DEV
by Josef Skladanka
So, I have performed the migration on DEV - it ran out of memory a few
times, so I had to tweak it a bit (please have a look at D1059; that is
what I ended up using by hot-fixing on DEV).
There still is a slight problem, though - the migration on DEV took about
12 hours total, which is a bit unreasonable. Most of the time was spent in
`alembic/versions/dbfab576c81_change_schema_to_v2_0_step_2.py`, lines 84-93
in D1059. The code takes about 5 seconds to change 1k results, which would
mean at least 15 hours of downtime on PROD - and that, I think, is
unrealistic.
And since I don't know how to make it faster (tips are most welcome), I
suggest that we archive most of the data in STG/PROD before we go forward
with the migration. I'd make a complete backup, and then delete all but the
data from the last 3 months (or any other reasonable time span).
We can then populate an "archive" database and migrate it on its own,
should we decide it is worth it (I don't think it is).
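For illustration, a very rough sketch of the trim step, assuming a
PostgreSQL backend and guessing at the table/column names ("result",
"submitted") - these would have to be adjusted to the actual 1.x schema:

    # Sketch only: keep just the last ~3 months of results before running
    # the slow schema migration. Table/column names are guesses.
    # Full backup first, e.g.: pg_dump resultsdb > resultsdb-pre-2.0.sql
    from datetime import datetime, timedelta
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://resultsdb@localhost/resultsdb")
    cutoff = datetime.utcnow() - timedelta(days=90)

    with engine.begin() as conn:
        # Dependent tables (result_data etc.) would need the same
        # treatment, or ON DELETE CASCADE on the foreign keys.
        conn.execute(text("DELETE FROM result WHERE submitted < :cutoff"),
                     {"cutoff": cutoff})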
What do you think?
J.
New ExecDB
by Josef Skladanka
With the ResultsDB and Trigger rewrites done, I'd like to get started on
ExecDB.
The current ExecDB is more of a tech preview that was meant to show it's
possible to consume the push notifications from Buildbot. The thing is
that the code doing it is quite a mess (mostly because the notifications
are quite a mess), and it's directly tied not only to Buildbot, but quite
probably to the one version of Buildbot we currently use.
I'd like to change the process to a style where ExecDB provides an API,
and Buildbot (or possibly any other execution tool we use in the future)
just uses that to switch the execution states.
ExecDB should be the hub where we can go to search for the execution state
and statistics of our jobs/tasks. The execution is tied together via a
UUID, provided by ExecDB at Trigger time. The UUID is passed through the
whole stack, from Trigger to ResultsDB.
The process, as I envision it, is (a rough sketch of the API calls follows
the list):
1) Trigger consumes FedMsg
2) Trigger creates a new Job in ExecDB, storing data like FedMsg message
id, and other relevant information (to make rescheduling possible)
3) ExecDB provides the UUID, marks the Job as SCHEDULED, and Trigger then
passes the UUID, along with other data, to Buildbot.
4) Buildbot runs runtask (and sets the ExecDB job to RUNNING)
5) Libtaskotron is provided the UUID, so it can then be used to report
results to ResultsDB.
6) Libtaskotron reports to ResultsDB, using the UUID as the Group UUID.
7) Libtaskotron ends, creating a status file in a known location
8) The status file contains machine-parsable information about the
runtask execution - either "OK" or a description of the fault (network
failed, a package to be installed did not exist, koji did not respond...
you name it)
9) Buildbot parses the status file, and reports back to ExecDB, marking
the Job either as FINISHED or CRASHED (+ details)
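To make the flow a bit more concrete, here is a minimal sketch of the
Trigger side of steps 2 and 3. The ExecDB endpoint and payload fields are
made up for illustration - no such API exists yet:

    # Hypothetical ExecDB API call made by Trigger; names are invented.
    import requests

    EXECDB = "http://execdb.example.org/api/v1"

    def schedule_job(fedmsg_id, item, item_type, taskname):
        # 2) create a new Job in ExecDB, storing enough data to make
        #    rescheduling possible
        resp = requests.post(EXECDB + "/jobs", json={
            "fedmsg_id": fedmsg_id,
            "item": item,
            "item_type": item_type,
            "taskname": taskname,
        })
        resp.raise_for_status()
        # 3) ExecDB replies with the UUID and marks the Job as SCHEDULED;
        #    Trigger passes the UUID on to Buildbot along with the rest
        return resp.json()["uuid"]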
This will need changes in the Buildbot steps - a step that switches the
job to RUNNING at the beginning, and a step that handles the
FINISHED/CRASHED switch. The way I see it, this can be done via a simple
curl or HTTPie call from the command line. No big issue here.
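A sketch of what such a step could call, written in Python here for
readability - in the Buildbot step it would just be the equivalent
curl/HTTPie one-liner. The endpoint and state names are assumptions:

    # Hypothetical state switch, roughly what the curl call would do:
    # the step at the start sets RUNNING, the step at the end sets
    # FINISHED or CRASHED based on the status file.
    import requests

    def set_job_state(execdb_url, uuid, state, details=None):
        resp = requests.post("%s/jobs/%s/state" % (execdb_url, uuid),
                             json={"state": state, "details": details})
        resp.raise_for_status()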
We should make sure that ExecDB stores data that:
1) show the execution state
2) allow job re-scheduling
3) describe the reason the Job CRASHED
(1) is obviously the state. (2) can, I think, be satisfied by storing the
FedMsg message ID and/or the Trigger-parsed data, which are passed to
Buildbot (a rough sketch of such a record follows below).
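Put together, the Job record could look roughly like this (just a sketch;
field names and types are placeholders, not a proposed schema):

    # Rough shape of a Job record covering 1) and 2); whatever we store
    # for 3) (the crash description) is discussed below.
    from sqlalchemy import Column, DateTime, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Job(Base):
        __tablename__ = "job"

        uuid = Column(String(36), primary_key=True)  # given out at Trigger time
        state = Column(String(16))       # 1) SCHEDULED/RUNNING/FINISHED/CRASHED
        fedmsg_id = Column(String(256))  # 2) enough to re-trigger the job
        trigger_data = Column(Text)      # 2) Trigger-parsed data passed to Buildbot
        created = Column(DateTime)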
Here I'd like to focus on 3:
My initial idea was to have SCHEDULED, RUNNING and FINISHED states, plus
four crashed states to describe where the fault was:
- CRASHED_TASKOTRON for when the error is on "our" side (minion could not
be started, the git repo with the task could not be cloned...)
- CRASHED_TASK to use when there's an unhandled exception in the Task code
- CRASHED_RESOURCES when the network is down, etc.
- CRASHED_OTHER whenever we are not sure
The point of the crashed "classes" is to be able to act on the different
kinds of crashes - notify the right party, or even automatically
reschedule the job in the case of a network failure, for example.
After talking this through with Kamil, I'd rather do something slightly
different. There would only be one CRASHED state, but the job would contain
additional information to
- find the right person to notify
- get more information about the cause of the failure
To do this, we came up with a structure like this:
{state: CRASHED, blame: [TASKOTRON, TASK, UNIVERSE], details:
"free-text-ish description"}
The "blame" classes are self-describing, although I'd love to have a better
name for "UNIVERSE". We might want to add more, should it make sense, but
my main focus is to find the right party to notify.
The "details" field will contain the actual cause of the failure (in the
case we know it), and although I have it marked as free-text, I'd like to
have a set of values defined in docs, to keep things consistent.
Doing this, we could record that "Koji failed, timed out" (and blame
UNIVERSE, and possibly reschedule), or "DNF failed, package not found"
(blame TASK if it was in the formula, and notify the task maintainer), or
"Minion creation failed" (and blame TASKOTRON, notify us, I guess).
Implementing the crash classification will obviously take some time, but
it can be gradual, and we can start handling the "well known" failures
soon, for the bigger gain (kparal had some examples, IIRC).
So - what do you think about it? Is it a good idea? Do you feel like there
should be more blame targets (I can't really imagine there being fewer) -
like NETWORK, for example - and if so, why, and which? How about the
details - should we go with a pre-defined set of values (because enums are
better than free-text, but adding more would mean DB changes), or is
free-text + docs fine? Or do you see some other, better solution?
joza
Enabling "new koji build" Taskotron checks on scratch builds
by Jeremy Cline
Hi everyone,
I recently started maintaining the-new-hotness[0]. In case anyone isn't
familiar with it, it's responsible for filing bugs[1] when a new
release is made upstream. One of its components, rebase-helper,
currently tries to run a set of tests on packages when upstream
releases a new version.
There are a lot of problems with this process and for the most part it
does not work. I started a discussion on the issue tracker about
removing rebase-helper[2] as a dependency. Kamil Páral mentioned that
there has been discussion about running the "new koji build" Taskotron
checks for scratch builds. This would be great for the-new-hotness, since
it would do everything rebase-helper currently does with respect to
testing, and more.
How do people feel about this? Are there any obstacles?
Thanks!
[0] https://github.com/fedora-infra/the-new-hotness/
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1404680
[2] https://github.com/fedora-infra/the-new-hotness/issues/145
--
Jeremy Cline
XMPP: jeremy(a)jcline.org
IRC: jcline
Proposal to CANCEL: 2016-12-19 Fedora QA Devel Meeting
by Tim Flink
Most of the regular folks will be absent this week and I'm not aware of
anything urgent to cover so I propose that we cancel the weekly Fedora
QA devel meeting.
If there are any topics that I'm forgetting about and/or you think
should be brought up with the group, reply to this thread and we can
un-cancel the meeting.
Tim
2016-12-15 @ 17:00 UTC - Outage for qadevel.cloud replacement
by Tim Flink
I realize this is a little last minute but there's no telling how much
longer the current auth system will continue to work.
I'm planning to take qadevel down (phabricator, some docs etc.)
tomorrow so that I can finally replace it with an instance that has
working auth among other improvements.
This is going to be a rather large change and I expect that it will
take at least 4 hours. If this is going to be a huge problem, please
let me know soon.
The big changes will be:
- new hostname:
  *.qadevel.cloud.fedoraproject.org will become *.qa.fedoraproject.org
- better cert handling:
  no more errors when http:// is used
- new auth system:
  using fedora systems, no longer relying on persona
- newer version of phabricator
- lots of other boring changes under the hood :)
Please Test Staging Phabricator
by Tim Flink
As support for the Persona system has wound down, we finally have a
new method for logging into our phabricator instance (one that should
also get rid of all those 500s on login).
My goal has been to set up the migration so that there's no account
fiddling needed to use the new auth system. Things are working in my
testing but I'd like to see more people test out the new auth method
before deploying all of this to production.
If you have the time, please try logging in to
https://phab.qa.stg.fedoraproject.org/
I've seen some errors from Ipsilon about "Transaction expired, or
cookies not available"; if that happens, click on "Try to login again"
and everything should work.
If you run into problems, please let me know. There are a few accounts
which will need tweaking by hand (phabricator username doesn't match
FAS username so my script didn't work) but I wanted to make sure this
was working for more than just me before finishing things up.
Tim
Release validation NG: planning thoughts
by Adam Williamson
Hi folks!
We should probably set up some projects and so on for this so we can
use issue trackers, but I thought before committing to any structure we
could have at least a short mailing list discussion for planning the
'release validation NG' work.
For anyone who forgot / didn't know - 'release validation NG' is my
nickname for the project to write a dedicated system for manual release
validation testing result submission, using resultsdb for storage. The
goal is to make manual validation testing result submission easier and
less error-prone, and also to allow for improved analysis of results
and integration of manual results with results from other systems
(taskotron, openQA, autocloud etc). This would be designed to replace
the system of editable wiki pages that I call 'Wikitcms':
https://fedoraproject.org/wiki/Test_Results:Current_Installation_Test (etc.)
https://fedoraproject.org/wiki/Wikitcms
The latter page is a broad overview of how I see the Wikitcms 'system'
working at present. It's that system we'd be replacing, so it may help
you to read through that page to get some context and background on how
we got here and why 'release validation NG' might be a good idea :)
We have a ticket open with the design team:
https://pagure.io/design/issue/483
where kathryng is helping us with design mock ups based on my initial
rough sketches, which is great. Please do take a look at the mockups
and discussion there and add thoughts if you have any.
My very initial thought on architecture is that we could have two main
components, a webui component and a validator/resultsdb submitter
component.
The webui component would be exactly that, the actual web UI for users
to interact with and submit their results to. It would query the
validator/submitter component to find out what relevant 'test events'
were available, and what tests and environments and so forth for each
event, and then present an appropriate UI to the user for them to fill
in their results.
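For example (everything here is invented, just to show the shape of the
conversation between the two components):

    # Hypothetical query the web UI could make to the validator/submitter
    # to build the result form; endpoint and field names are made up.
    import requests

    events = requests.get(
        "https://validator.example.org/api/events?status=open").json()
    # e.g. [{"compose": "Fedora-26-20170101.n.0",
    #        "testcases": ["QA:Testcase_Boot_default_install", ...],
    #        "environments": ["x86_64 BIOS", "x86_64 UEFI", ...]}]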
The validator/submitter component would be responsible for watching out
for new composes and keeping track of tests and 'test environments' (if
we keep that concept); it would have an API with endpoints you could
query for this kind of information in order to construct a result
submission, and for submitting results in some kind of defined form. On
receiving a result it would validate it according to some schemas that
admins of the system could configure (to ensure the report is for a
known compose, image, test and test environment, and do some checking
of stuff like the result status, the user who submitted the result,
comment content, and so on). Then it'd forward the result to resultsdb.
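Again purely as an illustration (all names and fields invented), a result
submission from the web UI - or relval - to the validator/submitter might
look like:

    # Hypothetical submission; the validator would check it against the
    # configured schema and then forward it to resultsdb.
    import requests

    result = {
        "compose": "Fedora-26-20170101.n.0",
        "testcase": "QA:Testcase_Boot_default_install",
        "environment": "x86_64 BIOS",
        "status": "pass",
        "user": "adamwill",
        "comment": "",
    }
    resp = requests.post("https://validator.example.org/api/results",
                         json=result)
    resp.raise_for_status()   # rejected if it doesn't pass validation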
This is just an idea, though. There are a few reasons I thought it
might make sense to separate these two elements:
* It gives us flexibility in a few important respects:
* The validator/submitter could accept results from other things, not
just the webUI - e.g. relval
* The validator/submitter could send results to other things, not
just ResultsDB - e.g. the wiki
* The validator/submitter could be written to allow expansion to
cover things other than release validation results, e.g. Test Day
results, so a future rewrite of the 'testdays' webapp could use it
* It should help with splitting up the work between people; different
people can work on the web UI and the validator/submitter without
blocking each other too often
So these are just my very early thoughts on the project - it'd be great
to know what other folks think! If we can agree on a basic architecture
and plan we could start setting up projects (I think I'd suggest we do
this in Pagure, but we can also consider Phabricator) and tickets for
the initial work.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net