On Mon, 2017-03-06 at 11:50 -0500, Kamil Paral wrote:
> > a) query `results?testcases=compose.*&item=Fedora-Server-dvd-x86_64-
> > Rawhide-20170228.n.0.iso` and then go through all the results, make
> > them unique by eliminating everything that has the same 'scenario',
> > and work with that
>
> This was one of the primary cases I was thinking of, yes. This is the
> case that releng would be using to decide whether to release a compose,
> for instance.
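For the record, that consumer-side flow is roughly this kind of thing (a
sketch only - the instance URL, query parameters and JSON layout here are my
assumptions, not gospel):

# Illustrative only: the instance URL, query parameters and JSON field layout
# are my assumptions about resultsdb, not a definitive consumer implementation.
import requests

RESULTSDB = "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0"
ITEM = "Fedora-Server-dvd-x86_64-Rawhide-20170228.n.0.iso"

resp = requests.get(RESULTSDB + "/results",
                    params={"testcases": "compose.*", "item": ITEM})
resp.raise_for_status()

# resultsdb returns newest results first (I believe), so keeping only the
# first result we see per scenario gives us the latest result for each
# scenario (pagination ignored for brevity)
latest = {}
for result in resp.json()["data"]:
    scenario = (result.get("data", {}).get("scenario") or [""])[0]
    latest.setdefault(scenario, result)

releasable = all(res["outcome"] == "PASSED" for res in latest.values())
print("compose releasable?", releasable)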
> I'll go slightly off-topic here, but consider this: 'scenario' will
> help us identify unique results. But it will not help us decide
> whether "all" results have been submitted (maybe some are still
> running, maybe some have crashed hard and not sent any results). So
> there's a good chance that rel-eng tools will need to have deep
> knowledge of the testcases anyway. Not just a list of all monitored
> testcases, but also that this particular test case needs to be
> performed for bios and uefi (or, that this particular testcase needs
> to be present with these two scenarios).
> So while having scenarios seems definitely helpful in certain cases,
> it might not help us avoid having too much knowledge in the
> gating consumers. Just food for thought.
I have actually been thinking about that problem as well :) And it is
indeed a tricky one. Bodhi already basically ducks the question; it
just shows whatever results are there at the time. We don't have a good
answer for cases like 'can we ship this compose?' or check-compose,
cases where the tool needs to know that testing is complete.
My best idea so far is that we should implement a 'testing complete'
fedmsg with a consistent name for each testing system - so
whatever.test.system.prefix.is.testing.complete or something like that.
Then at least the consumer only has to know which test systems it cares
about, and it can quite trivially trigger whenever it has
'testing.complete' messages from each one for the relevant item. I
really dunno if we can do anything better than this. I suppose if we
also required all systems to send a 'testing.started' message, we could
have some sort of meta-consumer check when it's seen a
'testing.complete' message for each 'testing.started' message and send
out an 'all testing for compose X is complete' message, but even if we
*do* that, it means that any single system can prevent the 'all testing
complete' message going out if it's slow or broken, even if it's one a
given consumer doesn't actually care about. So I don't think consumers
would use such a message even if we managed to build it...
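Just to make the idea concrete, a consumer that only cares about a couple of
systems could do something like this (a sketch with invented topic and field
names, since none of these messages exist yet):

# Sketch of the 'trigger once testing.complete has arrived from every system
# we care about' consumer. The topic names are invented - no such messages
# exist yet - and the 'compose' field is a placeholder for the item ID.
WATCHED_TOPICS = {
    "org.fedoraproject.prod.openqa.testing.complete",
    "org.fedoraproject.prod.taskotron.testing.complete",
}

seen = {}  # compose/item ID -> set of topics we've already seen for it

def on_message(topic, body):
    """Call this for each incoming fedmsg (e.g. from a fedmsg-hub consumer)."""
    if topic not in WATCHED_TOPICS:
        return
    compose = body.get("compose")
    done = seen.setdefault(compose, set())
    done.add(topic)
    if done == WATCHED_TOPICS:
        # every system this consumer cares about has reported completion
        decide_gating(compose)

def decide_gating(compose):
    # placeholder for the actual 'is this compose releasable?' logic
    print("all testing complete for", compose)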
Note that I've already implemented something like this for openQA,
though it doesn't work exactly the way I described at present. Each
openQA 'test complete' message contains a count of currently running or
scheduled tests for the same build, and things like check-compose
trigger when they see a 'job done' message with that count at 0.
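In consumer terms that's roughly (field names from memory - treat them as
assumptions and check the real message schema):

# check-compose style trigger on the openQA fedmsgs: fire once a 'job done'
# message reports no more jobs running or scheduled for the same build.
# 'remaining' and 'BUILD' are my recollection of the field names - verify
# against the real messages before relying on them.
def on_openqa_job_done(body):
    if body.get("remaining") == 0:
        report_compose(body["BUILD"])  # hypothetical downstream hook

def report_compose(build):
    print("openQA testing complete for", build)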
> Well, see the above examples - I was kinda envisioning that you'd
> *discover* the scenario you're interested in from some kind of existing
> result, or just query for all results from two *different* composes,
> then use the 'scenario' key to compare the results for the two composes
> (so you can say 'ok, scenario A passed in compose 1 but failed in
> compose 2'). This is the sort of thing check-compose does - that's how
> it produces a list of 'tests that failed today but passed yesterday',
> etc. It doesn't *know* what the scenarios are before it queries the
> results, but *once it has the results* it calculates the scenarios and
> uses them for comparison.
> Fair enough, for this use case it's really useful. But it will not be
> the primary way of using scenarios, and that was my point, clarifying
> how it's going to be used primarily.
Eh, I dunno if there's much use in denoting one usage as 'primary' and
another as 'not primary'. But sure.
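For what it's worth, the comparison itself boils down to something like this
sketch, assuming you've already built a scenario-to-latest-result mapping for
each compose (as in the earlier query sketch):

# Sketch of the check-compose comparison: results from two composes are paired
# up by scenario, then we report scenarios that regressed between them.
def regressions(yesterday, today):
    """Both arguments are {scenario: result} mappings for one compose each."""
    regressed = []
    for scenario, old in yesterday.items():
        new = today.get(scenario)
        if old["outcome"] == "PASSED" and new and new["outcome"] != "PASSED":
            regressed.append(scenario)
    return regressed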
> Well, the funny thing is, for all the same reasons I come to the other
> conclusion :) I entirely agree that you'd never want to use the
> scenario *without* the test case name...so in that case, saying "you
> should always query for the test case name plus the scenario" seems
> like pointless make-work, right? If we know that there's never any
> point in just using the scenario values *without* the test case name,
> why not just stuff the test case name right into the scenario value so
> you only have to query for 'scenario', rather than always having to
> query for both 'test case name' and 'scenario'?
> To be fair, I can think only of a few minor points:
> * Data duplication: it makes the field longer and harder to read.
This seems pretty minor - honestly there's almost no case for a human
to be manually reading the field ever, except for debugging purposes. I
never actually go and manually inspect what the openQA scenario values
are, there's just no use case for that - it's just a concept that the
tools use to do a job.
> * A possibility for errors. Since tasks are allowed to add arbitrary
> suffixes to their testcase names, a task can create tens of "sub-
> testcases" (e.g. look at dist.rpmgrill*). The scenario field is not
> automatically generated but fully under the task's control. There might
> be copy&paste errors or logical errors in the code, and the scenario
> might not correctly reflect the testcase name used.
Hum, I guess, though it doesn't actually *matter* if the scenario still
does its job of being unique within the results for a single item, and
being attached to 'the same' test across multiple items. We're
definitely not going to want consumers to get into the business of
parsing things out of the scenario value, if they actually want one of
the values that makes up the scenario value they should request it
directly. But sure, it's a possibility.
> * Consistency (even though that can be arguable). I'd like to be
> clear that the testcase+item combination is always the default way to
> consume results. For more complex tests, it might also require
> looking at scenario to distinguish unique results. If I write
> instructions that say that results are identified by item+scenario if
> a scenario exists, otherwise by testcase+item, it seems more complex
> and less obvious to me. (But this is maybe just about phrasing.)
I...dunno if this is going to be a tenable place to stand, to be
honest. Funnily enough I went through the same process in a somewhat
different context. When thinking about relval-ng in my head, I
initially had the idea that we could kill the Wikitcms 'environment'
columns - which, if you think about it, are basically this 'scenario'
concept - by associating all results with a specific image - which is,
if you think about it, the 'item' concept.
NOTE IF YOU'VE NO IDEA WHAT THIS IS ABOUT: we're talking about the wiki
pages Fedora QA uses to store release validation test results, like
this:
https://fedoraproject.org/wiki/Test_Results:Fedora_26_Branched_20170302.n...
Note each row is for one 'test case', and most rows have multiple
columns for results in different 'environments' (i.e. scenarios) for
that test case. 'relval-ng' is the working name of a system we're
proposing to build to replace the wiki system.
So I was effectively thinking the same as you: we can always just
identify a result as the combination of "a test case" and "a tested
item".
But then I thought, nah, it's still not really that simple. In relval-
ng as in openQA, the easiest example is BIOS vs. UEFI: we have some
tests, e.g. Anaconda_User_Interface_Basic_Video_Driver , where we want
to test on both BIOS and UEFI. This involves the same test case and the
same tested 'item' - an x86_64 installer image - but a different
'scenario'. Unless you start stuffing scenario items into the test case
name (basicvideo.uefi, basicvideo.bios?) or item name (foobar.iso
BIOS, foobar.iso UEFI?) - an idea we rejected back at the start of
this thread - I fundamentally don't see a way around this.
There are other examples, though - take the 'Default boot and install'
table, where we consider installing with the same image to a VM and to
bare metal as being different scenarios, and also installing with the
same image written to an optical disc and written to a USB stick.
There's just fundamentally no way around that without invoking the
'scenario' concept, or something very much like it but phrased
differently.
As we've also already noted, the existing package tests have also run
into this problem: "test name + item" is not sufficient to really
define the result, as 'item' is a source package but the same test is
run for all binary package arches and may have different results on
different arches (IIUC).
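Put in data terms - all names and values invented for illustration - what
keeps turning up is pairs like this:

# Two results for the same test case and the same tested item which are
# nonetheless genuinely different results; only the scenario tells them apart.
# All names and values are made up for illustration.
results = [
    {"testcase": "compose.basicvideo",
     "item": "Fedora-Server-dvd-x86_64-Rawhide-20170228.n.0.iso",
     "scenario": "compose.basicvideo.x86_64.bios",
     "outcome": "PASSED"},
    {"testcase": "compose.basicvideo",
     "item": "Fedora-Server-dvd-x86_64-Rawhide-20170228.n.0.iso",
     "scenario": "compose.basicvideo.x86_64.uefi",
     "outcome": "FAILED"},
]
# Keyed only on (testcase, item) these two collapse into one; the 'scenario'
# value is what keeps the BIOS and UEFI runs distinct.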
Basically, my contention is that this 'scenario' concept is going to
just keep on turning up, all over the place, and it's going to be more
realistic to expect to be dealing with a 'scenario' most of the time,
than to expect to be dealing with just 'test case plus item' most of
the time and consider 'scenario' to be a kind of "advanced" thing.
So I think I still slightly prefer the idea of including the test name
in the scenario value, but I really don't have a strong preference
either way. Let's just pick one approach and go with it.
Note if I'm being honest I have a practical reason for this, as openQA
defines the test case name as one of the 'scenario keys', so if we go
with 'test name not in scenario value', in the openQA reporter we'll
have to take the list of 'scenario keys' and then remove the test name
from it. Obviously this is a trivial point, but just to be honest, it's
the real reason why I initially assumed we'd put the test name in the
scenario value: just cos that's how openQA is currently set up to do
it, if you do it the easiest way :)
> But it's really a pretty minor point. The system would work fine either
> way, it just slightly changes what consumers have to do.
>
> Does anyone else have thoughts on this? I'd like to either land the
> 'scenario' change in the openQA reporter, or definitely decide we want
> to do something else, tomorrow or early next week, so I can move
> forward with the 'is this compose releasable?' script and displaying
> openQA update test results in Bodhi...
> Since you're clearly powered by Duracell batteries [1], please go
> ahead and implement this. If we decide to implement this later in
> resultsdb directly, we can always go and simplify the consumer code.
Right. And I've merged the version where the test name is included in
the scenario value, but again, we can change this later if we want to.
So long as we write the consumer code to query for 'scenario plus test
name', even if it happens to hit an older result where the test name
was included in the scenario, no harm is done; the correct result will
still be achieved.
So yeah, openQA results now include a 'scenario' value. For openQA the
scenario value is constructed by joining the values for all keys openQA
considers 'scenario keys' with periods, but we don't actually have to
make this consistent between different systems - the only requirement
is that the value correctly identify the scenario. I can try and send a
patch for Taskotron to do this as well, if you like, or would you
rather do it?
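For reference, the construction is basically just a join over whatever keys
the system treats as 'scenario keys', something like this sketch (the key
names here are examples, not the definitive openQA list):

# Simplified sketch of how a reporter could build the scenario string: join
# the values of its 'scenario keys' with periods. Which keys those are (and
# whether 'test' is among them) is up to each reporting system.
SCENARIO_KEYS = ("distri", "version", "flavor", "arch", "test", "machine")

def scenario_value(settings, include_test_name=True):
    keys = [k for k in SCENARIO_KEYS if include_test_name or k != "test"]
    return ".".join(str(settings[k]) for k in keys)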
I guess for now I'll adjust autocloudreporter to just provide a
hard-coded scenario value, like it provides a hard-coded 'test name',
in case we go with the 'test name included in scenario' approach; if
we don't, I can just take that back out again.
> Btw, if you end up submitting patches to Bodhi and using
> 'type=bodhi_update' queries, please also use the 'since=' argument
> [2] and set it to the update's 'date_modified' timestamp. That
> doesn't really implement the higher reliability of such results (as
> we talked about elsewhere), but it doesn't stress resultsdb as much,
> which is always good (we can't do the same for type=koji_build
> easily, but we can for type=bodhi_update).
I'm planning to do this, but I did want to talk over with you guys what
would be appropriate for the Taskotron results. Bodhi is definitely
going to have to query for 'type=bodhi_update' results where the item
is the update ID in order to find the openQA results. But I don't know
if we should just make it find the Taskotron results in the same way,
or if it's best to have it query for both koji_build and bodhi_update
results and somehow deduplicate them (so it doesn't show Taskotron
results which were reported against both the Koji build and the Bodhi
update twice). But that's probably a separate thread.
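For reference, the kind of Bodhi-side query Kamil is suggesting would be
roughly this (a sketch with illustrative URL handling and field names):

# Sketch of the resultsdb query Bodhi would make for one update, using the
# 'since=' trick above so we only fetch results newer than the update's last
# modification. URL handling and field names are illustrative assumptions.
import requests

def update_results(resultsdb_url, update):
    """'update' is a Bodhi update dict with 'alias' and 'date_modified'."""
    resp = requests.get(
        resultsdb_url + "/results",
        params={
            "type": "bodhi_update",
            "item": update["alias"],
            "since": update["date_modified"],
        },
    )
    resp.raise_for_status()
    return resp.json()["data"]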
> Thanks.
You too!
They're still using the same ad concept these days, though :)
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net