With the ResultsDB and Trigger rewrites done, I'd like to get started on ExecDB.

The current ExecDB is more of a tech preview that was meant to show it's possible to consume the push notifications from Buildbot. The thing is that the code doing it is quite a mess (mostly because the notifications themselves are quite a mess), and it's directly tied not only to Buildbot, but quite probably to the one specific version of Buildbot we currently use.
I'd like to change the process so that ExecDB provides an API, and Buildbot (or any other execution tool we might use in the future) just uses that to switch the execution states.

ExecDB should be the hub we go to for the execution state and statistics of our jobs/tasks. Each execution is tied together via a UUID, provided by ExecDB at Trigger time and passed through the whole stack, from Trigger to ResultsDB.

The process, as I envision it, is:
1) Trigger consumes FedMsg
2) Trigger creates a new Job in ExecDB, storing data like FedMsg message id, and other relevant information (to make rescheduling possible)
3) ExecDB provides the UUID, marks the Job as SCHEDULED, and Trigger then passes the UUID, along with other data, to Buildbot.
4) Buildbot runs runtask (and sets the ExecDB job to RUNNING)
5) Libtaskotron is provided the UUID, so it can then be used to report results to ResultsDB.
6) Libtaskotron reports to ResultsDB, using the UUID as the Group UUID.
7) Libtaskotron ends, creating a status file in a known location
8) The status file contains machine-parsable information about the runtask execution - either "OK" or a description of the "Fault" (network failed, package to be installed did not exist, Koji did not respond... you name it) - see the sketch right after this list
9) Buildbot parses the status file and reports back to ExecDB, marking the Job either as FINISHED or CRASHED (+ details)
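
To make steps 7/8 a bit more concrete, the status file could be a small JSON document. A minimal sketch - the path, key names and helper function are entirely made up, just to show the shape:

    # hypothetical sketch -- path, keys and helper name are placeholders only
    import json

    STATUS_FILE = "/var/tmp/taskotron/%s/status.json"  # assumed per-job location, keyed by UUID

    def write_status(uuid, ok, fault=None):
        """Written at the end of the runtask execution.

        ok: True when runtask finished cleanly
        fault: documented cause string otherwise, e.g. "koji: call timed out"
        """
        data = {"uuid": uuid, "result": "OK" if ok else "FAULT", "fault": fault}
        with open(STATUS_FILE % uuid, "w") as statusfile:
            json.dump(data, statusfile)

Buildbot would then just read that file back and translate it into the FINISHED/CRASHED call described in step 9.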

This will need changes in the Buildbot steps - a step that switches the job to RUNNING at the beginning, and a step that handles the FINISHED/CRASHED switch. The way I see it, this can be done via a simple curl or HTTPie call from the command line. No big issue here.
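
Roughly, the two extra steps could look like this in the Buildbot config - take it as a sketch only: the ExecDB endpoints, the "uuid" property and the execdb-report helper are all made up, "factory" is the usual BuildFactory we already have, and the exact step API depends on the Buildbot version we end up running:

    # hypothetical extra steps for the runtask builder
    from buildbot.steps.shell import ShellCommand
    from buildbot.process.properties import Interpolate

    # before runtask: flip the ExecDB job to RUNNING
    factory.addStep(ShellCommand(
        name="execdb set running",
        command=["curl", "-X", "POST",
                 Interpolate("http://execdb.example.org/jobs/%(prop:uuid)s/running")]))

    # after runtask: a small helper script (to be written) that parses the status
    # file and POSTs FINISHED or CRASHED (+ blame/details) back to ExecDB
    factory.addStep(ShellCommand(
        name="execdb report",
        command=["execdb-report", "--uuid", Interpolate("%(prop:uuid)s"),
                 "--statusfile",
                 Interpolate("/var/tmp/taskotron/%(prop:uuid)s/status.json")]))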

We should make sure that ExecDB stores data that:
1) show the execution state
2) allow job re-scheduling
3) describe the reason the Job CRASHED

1 is obviously the state. 2 can, I think, be satisfied by storing the fedmsg message ID and/or the Trigger-parsed data that are passed to Buildbot. Here I'd like to focus on 3.
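
(To make 1-3 a bit more tangible, this is roughly what I imagine a Job record storing - an SQLAlchemy-style sketch, with all column names and types being placeholders:)

    # rough sketch only; nothing about the schema is decided
    from datetime import datetime
    from sqlalchemy import Column, DateTime, String, Text
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Job(Base):
        __tablename__ = 'job'

        uuid = Column(String(36), primary_key=True)   # handed out by ExecDB at Trigger time
        state = Column(String(16), nullable=False)    # SCHEDULED / RUNNING / FINISHED / CRASHED
        fedmsg_id = Column(String(255))               # (2) enough to re-schedule the job
        trigger_data = Column(Text)                   # (2) the Trigger-parsed data passed to Buildbot
        blame = Column(String(16))                    # (3) who to notify - see below
        details = Column(Text)                        # (3) cause of the crash
        t_triggered = Column(DateTime, default=datetime.utcnow)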

My initial idea was to have SCHEDULED, RUNNING, FINISHED states, and four crashed states, to describe where the fault was:
 - CRASHED_TASKOTRON for when the error is on "our" side (minion could not be started, git repo with task not cloned...)
 - CRASHED_TASK to use when there's an unhandled exception in the Task code
 - CRASHED_RESOURCES when network is down, etc
 - CRASHED_OTHER whenever we are not sure

The point of the crashed "classes" is to be able to act on different kinds of crashes - notify the right party, or even automatically reschedule the job, in the case of a network failure, for example.

After talking this through with Kamil, I'd rather do something slightly different. There would only be one CRASHED state, but the job would contain additional information to
 - find the right person to notify
 - get more information about the cause of the failure
To do this, we came up with a structure like this:
  {state: CRASHED, blame: [TASKOTRON, TASK, UNIVERSE], details: "free-text-ish description"}

The "blame" classes are self-describing, although I'd love to have a better name for "UNIVERSE".

I was thinking about this, and what about "blame: THIRD_PARTY" (or THIRDPARTY)? I think that best describes the distinction between us (taskotron authors), them (task authors) and anyone else (servers, networks, etc.).

I'd also like to add "blame: UNKNOWN" to distinguish third parties we can identify (Koji, Bodhi) from errors where we have no idea what caused them. This will allow us to spot new or infrequent crashes more easily. Alternatively, the "blame" field could be null/none with the same meaning, but "unknown" is probably more descriptive (and "none" can be converted to "unknown" when saving this to the database).
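
Put together, the blame values would then be something like this (just a sketch, the names are obviously up for debate):

    # sketch of the proposed blame values; nothing here is final
    class Blame(object):
        TASKOTRON = 'TASKOTRON'      # our side: minion not started, task repo not cloned...
        TASK = 'TASK'                # unhandled exception in the task code
        THIRD_PARTY = 'THIRD_PARTY'  # identifiable outsiders: Koji, Bodhi, network...
        UNKNOWN = 'UNKNOWN'          # no idea what happened; null/none gets stored as this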

We might want to add more blame values, should it make sense, but my main focus is finding the right party to notify.
The "details" field will contain the actual cause of the failure (in case we know it), and although I have it marked as free-text, I'd like to have a set of values defined in the docs, to keep things consistent.

Doing this, we could record "Koji failed, timed out" (and blame UNIVERSE, and possibly reschedule), or "DNF failed, package not found" (blame TASK if it was in the formula, and notify the task maintainer), or "Minion creation failed" (and blame TASKOTRON, notify us, I guess).

Implementing the crash classification will obviously take some time, but it can be gradual, and we can start handling the "well-known" failures soon, for the biggest gain (kparal had some examples, IIRC).

For example: Koji errors, Bodhi errors, OOM errors, out-of-disk errors, killed-by-watchdog errors, minion-timed-out errors, dnf-install-failed errors.
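
Just to sketch how that gradual classification could work - the cause strings and the structure below are purely illustrative, nothing is defined yet:

    # hypothetical mapping of well-known failures to blame + handling;
    # the keys would be the "details" values defined in the docs
    KNOWN_FAULTS = {
        'koji: call timed out':       {'blame': 'THIRD_PARTY', 'reschedule': True},
        'bodhi: call failed':         {'blame': 'THIRD_PARTY', 'reschedule': True},
        'minion: creation failed':    {'blame': 'TASKOTRON',   'reschedule': False},
        'minion: killed by watchdog': {'blame': 'TASKOTRON',   'reschedule': False},
        'dnf: package not found':     {'blame': 'TASK',        'reschedule': False},
    }

Anything not in the mapping would simply get blame UNKNOWN until we add a rule for it.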



So - what do you think about it? Is it a good idea? Do you feel like there should be more blame targets (I can't really imagine there being fewer) - like NETWORK, for example - and if so, why, and which? How about the details - should we go with a pre-defined set of values (because enums are better than free-text, but adding more would mean DB changes), or is free-text + docs fine? Or do you see some other, better solution?

joza