Task Result Dashboards
by Lukas Brabec
Hi team,
I would like to open discussion on the topic "Task Result Dashboards". I'm
posting here in order to avoid the long off-the-grid discussions we had last
time regarding the docker testing stuff.
There is a tracking ticket in phab [1] that links to tflink's initial ideas [2].
* What is the motivation, what do we want to achieve with such dashboards and
who is the 'non-technical audience'?
* Runnable once a day or once per hour at minimum; does this imply a static,
periodically refreshed page? If so, what is the motivation for a
static website?
After a brief discussion with jskladan, I understand that ResultsDB would be
able to handle requests from a dynamic page; a sketch of such a query is below.
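If a dynamic page is an option, the dashboard could query the ResultsDB API
directly. A minimal sketch, assuming the ResultsDB v2.0 HTTP API; the instance
URL is an assumption, and the exact query parameters would depend on how test
plans map to testcases and items:

import requests

RESULTSDB_URL = "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0"  # assumed URL

def latest_results(item, testcase, limit=1):
    """Fetch the most recent results for one item/testcase pair."""
    resp = requests.get(
        RESULTSDB_URL + "/results",
        params={"item": item, "testcases": testcase, "limit": limit},
    )
    resp.raise_for_status()
    return resp.json()["data"]

# e.g. for the LAMP test plan from the example config further down
for item in ("mariadb", "httpd"):
    for task in ("rpmlint", "depcheck"):
        for result in latest_results(item, task):
            print(item, task, result["outcome"])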
* I'm not sure what exactly is meant by 'item tag' in the examples section.
* Would the YAML configuration look something like this:
url: link.to.resultsdbapi.org
overview:
  - testplan:
    - name: LAMP
    - items:
      - mariadb
      - httpd
    - tasks:
      - and:
        - rpmlint
        - depcheck
      - or:
        - foo
        - bar
Is there going to be any additional grouping (for example, based on arch) or
some kind of more precise outcome aggregation (only warn if part of the
testplan is failing, etc.)? A sketch of what I mean by aggregation follows.
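To make the aggregation question concrete, here is a minimal sketch of how
the and/or grouping from the config above could be reduced to a single outcome
per test plan; the outcome values and the reduction rules are my assumption,
nothing that has been decided:

def aggregate(outcomes, mode="and"):
    """Reduce a list of task outcomes to a single dashboard outcome.

    'and' passes only if every task passed, 'or' if at least one did.
    """
    passed = [o == "PASSED" for o in outcomes]
    ok = all(passed) if mode == "and" else any(passed)
    return "PASSED" if ok else "FAILED"

# aggregate(["PASSED", "FAILED"], mode="and") -> "FAILED"
# aggregate(["PASSED", "FAILED"], mode="or")  -> "PASSED"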
* Are we going to generate the dashboard for the latest results only, and/or
some kind of summary over a given period in history?
* How are Task dashboards related to Static dashboards [3]?
[1] https://phab.qa.fedoraproject.org/T725
[2] https://bitbucket.org/tflink/taskdash
[3] https://phab.qa.fedoraproject.org/T738
Enabling running Taskotron tasks on Koji scratch builds
by Martin Krizek
Hi team,
we have received a few requests to run tasks on koji scratch builds. Unfortunately, it is not as straightforward as it would seem, since "real" koji builds and scratch builds are not treated the same. That's why "real" builds are downloaded with "koji download-build" while scratch builds are downloaded with a separate tool called "koji-download-scratch", which is part of the fedora-review package.
The way the "real" koji builds work is that they are referenced by NVRs since NVRs are unique (I think there can be multiple builds with the same NVR, many with "failed" status and just one with the "completed" status and only the completed ones are searched, but that's just my guess). In taskotron's koji tasks, we feed NVR's to libtaskotron's koji directive which downloads build's RPMs.
In the case of scratch builds, they are not referenced by NVR's but rather just with task ID's.
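To illustrate the difference, here is a minimal sketch using the koji Python
API; the hub URL, NVR and task ID are made-up examples, not anything we have
in the codebase:

import koji

session = koji.ClientSession("https://koji.fedoraproject.org/kojihub")

# a "real" build is addressable by its NVR...
build = session.getBuild("httpd-2.4.25-1.fc25")
print(build["task_id"], build["state"])

# ...while a scratch build only exists as a task, so all we have is its ID
task = session.getTaskInfo(12345678)
print(task["method"], task["state"])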
With that being said, we can't just write a trigger that listens for the scratch build fedmsgs [1] and triggers a libtaskotron job, because there's no NVR in the fedmsg, and even if there was, we wouldn't be able to download the rpms.
During a discussion with Kamil, a few solutions were mentioned (none of them is pretty):
1. We can ask koji developers if there is a way to add a method that would return all koji scratch builds for a given NVR - we would then take the latest one and work with it.
2. We can use "koji download-task", which works for both. That would mean the koji_build item type would eat koji task IDs instead of NVRs. This would lead to having koji task IDs in resultsdb instead of NVRs, which kills readability. Unless libtaskotron finds the NVR from the koji task ID in the case of a "real" build and stores it in a field in resultsdb... Also, we need NVRs for fedmsgs, so we would have to add code to the fedmsg layer that would take care of somehow adding an NVR to the fedmsg of completed scratch build tasks (a rough sketch of that lookup is below, after the list)...
3. We can add a new action to the koji directive, "download_task", and then have two formulas for each koji task: one for scratch builds (using "download_task") and one for "real" builds (using "download_build"). This would require writing code to support multiple formulas for one task and would result in having two almost identical formulas...
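For option 2, the fedmsg-layer piece could in principle derive the NVR from the
task itself. A rough sketch of that idea, assuming the scratch build was
submitted from an SRPM whose filename carries the NVR (which would not hold for
SCM-based scratch builds):

import os
import koji

def nvr_from_scratch_task(session, task_id):
    """Best-effort NVR lookup for a scratch build task."""
    request = session.getTaskRequest(task_id)
    source = request[0]  # for build tasks, the first element is the source
    if isinstance(source, str) and source.endswith(".src.rpm"):
        return os.path.basename(source)[:-len(".src.rpm")]
    return None  # SCM URL or something else we can't map to an NVR

session = koji.ClientSession("https://koji.fedoraproject.org/kojihub")
print(nvr_from_scratch_task(session, 12345678))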
Before proceeding with 1. and asking the koji folks if they are able to help with that, I wanted to check here if anyone has any thoughts on this.
Thanks!
Martin
[1] https://apps.fedoraproject.org/datagrepper/id?id=2017-738c5d32-b2b9-40a3-...
New ExecDB
by Josef Skladanka
With ResultsDB and Trigger rewrite done, I'd like to get started on ExecDB.
The current ExecDB is more of a tech preview that was meant to show that it's
possible to consume the push notifications from Buildbot. The thing is
that the code doing it is quite a mess (mostly because the notifications
are quite a mess), and it's directly tied not only to Buildbot, but quite
probably to the one version of Buildbot we currently use.
I'd like to change the process to a style where ExecDB provides an API,
and Buildbot (or possibly any other execution tool we use in the future)
will just use that to switch the execution states.
ExecDB should be the hub, in which we can go to search for execution state
and statistics of our jobs/tasks. The execution is tied together via UUID,
provided by ExecDB at Trigger time. The UUID is passed through the whole
stack, from Trigger to ResultsDB.
The process, as I envision it, is:
1) Trigger consumes FedMsg
2) Trigger creates a new Job in ExecDB, storing data like FedMsg message
id, and other relevant information (to make rescheduling possible)
3) ExecDB provides the UUID, marks the Job as SCHEDULED, and Trigger then
passes the UUID, along with other data, to Buildbot.
4) Buildbot runs runtask (sets the ExecDB job to RUNNING)
5) Libtaskotron is provided the UUID, so it can then be used to report
results to ResultsDB.
6) Libtaskotron reports to ResultsDB, using the UUID as the Group UUID.
7) Libtaskotron ends, creating a status file in a known location
8) The status file contains machine-parsable information about the
runtask execution - either "OK" or a description of the "Fault" (network
failed, package to be installed did not exist, koji did not respond... you
name it)
9) Buildbot parses the status file and reports back to ExecDB, marking the
Job either as FINISHED or CRASHED (+details)
This will need changes in the Buildbot steps - a step that switches the job to
RUNNING at the beginning, and a step that handles the FINISHED/CRASHED
switch. The way I see it, this can be done via a simple curl or HTTPie call
from the command line. No big issue here.
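Just to sketch what those steps could call - shown with Python's requests for
readability, but it could equally be a one-line curl; the endpoint path and
payload are made up, since the API does not exist yet:

import requests

EXECDB_URL = "http://execdb.example.org"  # hypothetical instance

def set_job_state(uuid, state, details=None):
    """Tell ExecDB that a job changed state (endpoint and payload are assumed)."""
    payload = {"state": state}
    if details is not None:
        payload["details"] = details
    resp = requests.post("%s/jobs/%s/state" % (EXECDB_URL, uuid), json=payload)
    resp.raise_for_status()

job_uuid = "00000000-0000-0000-0000-000000000000"  # in reality, provided by Trigger

# step at the beginning of the run
set_job_state(job_uuid, "RUNNING")
# step at the end, after parsing the status file
set_job_state(job_uuid, "CRASHED", details="koji did not respond")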
We should make sure that ExecDB stores data that:
1) show the execution state
2) allow job re-scheduling
3) describe the reason the Job CRASHED
1 is obviously the state. 2 I think can be satisfied by storing the Fedmsg
Message ID and/or the Trigger-parsed data, which are passed to Buildbot.
Here I'd like to focus on 3:
My initial idea was to have SCHEDULED, RUNNING, FINISHED states, and four
crashed states, to describe where the fault was:
- CRASHED_TASKOTRON for when the error is on "our" side (minion could not
be started, git repo with task not cloned...)
- CRASHED_TASK to use when there's an unhandled exception in the Task code
- CRASHED_RESOURCES when network is down, etc
- CRASHED_OTHER whenever we are not sure
The point of the crashed "classes" is to be able to act on different kinds
of crashes - notify the right party, or even automatically reschedule the
job in the case of a network failure, for example.
After talking this through with Kamil, I'd rather do something slightly
different. There would only be one CRASHED state, but the job would contain
additional information to
- find the right person to notify
- get more information about the cause of the failure
To do this, we came up with a structure like this:
{state: CRASHED, blame: [TASKOTRON, TASK, UNIVERSE], details:
"free-text-ish description"}
The "blame" classes are self-describing, although I'd love to have a better
name for "UNIVERSE". We might want to add more, should it make sense, but
my main focus is to find the right party to notify.
The "details" field will contain the actual cause of the failure (in the
case we know it), and although I have it marked as free-text, I'd like to
have a set of values defined in docs, to keep things consistent.
Doing this, we could record that "Koji failed, timed out" (and blame
UNIVERSE, and possibly reschedule) or "DNF failed, package not found"
(blame TASK if it was in the formula, and notify the task maintainer), or
"Minion creation failed" (and blame TASKOTRON, notify us, I guess).
Implementing the crash classification will obviously take some time, but it
can be gradual, and we can start handling the "well known" failures soon,
for the bigger gain (kparal had some examples, IIRC).
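As an illustration, that gradual handling could start as a simple lookup of
well-known failure patterns; the patterns and blame values below are made-up
placeholders for the kind of mapping I have in mind:

# made-up examples of "well known" failures and how they could be classified
KNOWN_FAILURES = {
    "koji: connection timed out": "UNIVERSE",
    "dnf: no package found": "TASK",
    "minion creation failed": "TASKOTRON",
}

def classify(fault_text):
    """Turn the free-text Fault from the status file into a crash record."""
    for pattern, blame in KNOWN_FAILURES.items():
        if pattern in fault_text.lower():
            return {"state": "CRASHED", "blame": blame, "details": fault_text}
    return {"state": "CRASHED", "blame": "UNIVERSE", "details": fault_text}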
So - what do you think about it? Is it a good idea? Do you feel like there
should be more blame targets (I can't really imagine there being fewer), like
NETWORK, for example, and if so, why, and which? How about the details -
should we go with a pre-defined set of values (because enums are better than
free text, but adding more would mean DB changes), or is free text + docs
fine? Or do you see some other, better solution?
joza
2017-01-09 @ 15:00 UTC - Fedora QA Devel Meeting
by Tim Flink
# Fedora QA Devel Meeting
# Date: 2017-01-09
# Time: 15:00 UTC (note time change)
(https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
# Location: #fedora-meeting-1 on irc.freenode.net
It's been a while since we last met and there are a couple of things
worth discussing.
https://phab.qa.fedoraproject.org/w/meetings/20170109-fedoraqadevel/
If you have any additional topics, please reply to this thread or add
them in the wiki doc.
Tim
Proposed Agenda
===============
Announcements and Information
-----------------------------
- Please list announcements or significant information items below so
the meeting goes faster
Tasking
-------
- Does anyone need tasks to do?
Potential Other Topics
----------------------
- Documentation
- Dist-Git Task Storage Proposal (and test case docs)
- Rebuilding Taskotron instances
- Phabricator Maintenance
- Projects
Open Floor
----------
- TBD
2017-01-05 @ 18:00 UTC - Outage for qadevel.cloud replacement (for real this time)
by Tim Flink
I realize this is a little last minute but persona has been completely
shut down now (so auth is no longer possible) and the last issue that
was preventing migration was taken care of yesterday.
I'm planning to take qadevel down (phabricator, some docs etc.)
today so that I can finally replace it with an instance that has
working auth among other improvements.
This is going to be a rather large change and I expect that it will
take at least 4 hours. If this is going to be a huge problem, please
let me know soon.
The big changes will be:
- new hostname
*.qadevel.cloud.fedoraproject.org will become
*.qa.fedoraproject.org
- better cert handling
no more errors when http:// is used
- new auth system
using fedora systems, no longer relying on persona
- newer version of phabricator
- lots of other boring changes under the hood :)
Enabling "new koji build" Taskotron checks on scratch builds
by Jeremy Cline
Hi everyone,
I recently started maintaining the-new-hotness[0]. In case anyone isn't
familiar with it, it's responsible for filing bugs[1] when a new
release is made upstream. One of its components, rebase-helper,
currently tries to run a set of tests on packages when upstream
releases a new version.
There are a lot of problems with this process and for the most part it
does not work. I started a discussion on the issue tracker about
removing rebase-helper[2] as a dependency. Kamil Páral mentioned that
there has been discussion about running the "new koji build" Taskotron
checks for scratch builds. This would be great for the-new-hotness, since
it would do everything that rebase-helper currently does with respect to
testing, and more.
How do people feel about this? Are there any obstacles?
Thanks!
[0] https://github.com/fedora-infra/the-new-hotness/
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1404680
[2] https://github.com/fedora-infra/the-new-hotness/issues/145
--
Jeremy Cline
XMPP: jeremy(a)jcline.org
IRC: jcline
Proposal to CANCEL: 2017-01-02 Fedora QA Devel Meeting
by Tim Flink
Monday is a holiday for me and I suspect that it is also a holiday for
many other folks. I'm not aware of anything urgent which needs
discussion so I'm proposing that we cancel our normally scheduled QA
devel meeting.
If there are any topics that I'm forgetting about and/or you think
should be brought up with the group, reply to this thread and we can
un-cancel the meeting.
Tim