On Wed, 2017-03-01 at 09:36 -0500, Kamil Paral wrote:
>
> Let's complicate it even more. Sometimes it's the same result
> repeated again (e.g. a new depcheck result with an updated repo
> state), but sometimes it's a different tool submitting the result
> for the "same" test case. For example, I assume this is the reason
> why you chose the "compose.base_selinux" testcase name instead of
> "compose.openqa.base_selinux". The idea is that several tools can
> submit the result for the same test case, so openQA, Autocloud or
> even a manual tester can do it. I'm not currently sold on this idea
> (sharing the testcase name instead of having
> "compose.openqa.base_selinux", "compose.autocloud.base_selinux" and
> "compose.manual.base_selinux"), but that seems to be the current
> state. So with this, it's even harder to recognize whether we've
> received two results from openQA (the latter superseding the
> former), or whether we received two results from two different tools
> (and therefore should consider both).
Well, I wasn't really thinking about having multiple test systems
running 'the same' test concurrently. I was only thinking about the
possibility of moving tests between systems. base_selinux is a fairly
good example, as it's a trivial test that isn't terribly tied to
openQA: 'boot a freshly installed system and check SELinux is enabled'.
We certainly might, at some point, move that test from openQA to some
other system. I was envisaging that if we can say with a high degree of
confidence that the new system is truly performing 'the same test', we
could just 'transfer' the name rather than naming it differently.
Of course, now that I combine that thought with this one, it adds an
interesting wrinkle, because if we *do* move tests between systems in
this way, the 'scenario' is likely to differ between the systems. The
'scenario' for an openQA 'base_selinux' test probably wouldn't be the
same as the 'scenario' for a Taskotron 'base_selinux' test. Not sure if
that's a case it's worth worrying about, though.
It was, like, a five-minute thing, though; I hadn't thought it through
that systematically. I'm actually not sure now if even the 'compose.'
prefix was a good idea.
> > There are also
> > other situations in which it's useful to be able to identify 'the same
> > test' for different executions; for instance, `check-compose` needs to
> > do this when it does its 'system information comparison' checks from
> > compose to compose.
> >
> > I guess it's worth noting that this is somewhat related to the similar
> > question for test 'items' (the 'thing' being tested, in ResultsDB
> > parlance) - the question of uniquely identifying 'the same' item within
> > and across composes. At least for productmd 'images', lsedlar and I
> > are currently discussing that in
> > https://pagure.io/pungi/issue/525 .
> We discussed this with Josef a while back on qa-devel. We seemed to
> agree that the item should identify the thing under test well, even
> uniquely if possible, but stay simple. We want to avoid having too
> many pieces of information concatenated into a single string just
> for the purpose of unique identification. Extra data should be used
> for that (structured data, no string parsing). The tradeoff is that
> searching is a bit more difficult (we'd need to allow users to also
> search by extra data in the frontend, and they'd have to know what
> to search for).
>
> For example, for git commits, we don't really like items like
> "pagure#namespace/project#githash". Perhaps we could have just the
> githash as the item (because it's an almost-unique identifier even
> across many projects) and the rest as extra data. This way the item
> stays simple, it's easy to search for manually, and it's easy to
> search for automatically (no string parsing).
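(As an aside, in ResultsDB 2.0 terms I imagine that'd look something
like this; the endpoint, testcase name and repo URL here are made up
for illustration:)

    # Sketch of a submission where the githash alone is the 'item' and
    # everything else is structured extra data; names are illustrative.
    import requests

    payload = {
        "testcase": {"name": "dist.git-test"},  # hypothetical testcase
        "outcome": "PASSED",
        "data": {
            "item": "8f3a1c7",                  # just the githash
            "type": "git_commit",
            "repo": "https://pagure.io/some/project",
        },
    }
    requests.post("https://resultsdb.example.org/api/v2.0/results",
                  json=payload)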
> > Obviously it's more or less a
> > solved problem for RPMs.
> Almost. For upgradepath, yes, the NVR uniquely identifies the result.
> For depcheck, NVR + arch (from the extra data) is the unique
> identification.
> >
> > I can think of two possible ways to handle this: via the extradata, or
> > via the test case name.
> >
> > openQA has a useful concept here. It defines which combination of
> > metadata makes up a unique test scenario, and calls it... well,
> > that - the 'scenario'. There's a constant definition called
> > SCENARIO_KEYS in openQA that you can use to discover the appropriate
> > keys. So I'm going to use the term 'scenario' for this from now on.
> >
> > There's kinda two levels of scenario, now that I think about it,
> > depending on whether you include 'item' identification in the scenario
> > definition or not. For identifying duplicates within the results for a
> > single item, you don't need to, but it doesn't hurt; for identifying
> > the same scenario across multiple composes, you do need to.
> I don't follow here. In order to identify another execution of the
> same scenario, you need at least the testcase name and item to be
> exactly the same (and possibly also some metadata). Do you have some
> examples that show otherwise?
Well, this is all about that "possibly also some metadata". *What*
metadata? How do you, some random releng (or whatever) person trying to
consume arbitrary ResultsDB data, know *what* "possibly also some"
metadata you need to look at to identify 'duplicate' results?
As of right now you have to come ask someone and we say "well, for
depcheck tests do foo, for upgradepath tests do bar, for openQA tests
do moo..."
I'm trying to fix that.
So, concrete examples, okay. Here are two openQA test results:
https://taskotron.fedoraproject.org/resultsdb/results/12461058
https://taskotron.fedoraproject.org/resultsdb/results/12460886
They are both results for testcase 'compose.install_ext3' on item
'Fedora-Server-dvd-x86_64-Rawhide-20170228.n.0.iso'. In this case,
they even have the same arch - 'x86_64'. Does this mean they're dupes
(i.e. the test got restarted for some reason)? No. They're actually
different tests; one was run on a BIOS VM, one on a UEFI VM. In other
words, the 'machine' (in openQA terms) is part of the 'scenario' for
openQA tests. But this is hardly a universal rule; I can't just throw
openQA's 'machine' setting into the results and tell everyone trying to
de-duplicate ResultsDB results to look for the 'machine' value.
The idea is just to be able to say, not "you have to look for the same
test case and item and possibly some metadata", but "you have to look
for the same test case and item and the 'scenario' metadata item".
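To make that concrete on the consuming side, de-duplication could then
boil down to something like this (just a sketch, assuming each result
is a dict shaped roughly like ResultsDB's JSON output, with a flat
'data' dict and a sortable 'submit_time'):

    # Keep only the latest result per (testcase, scenario) for a single
    # item. Results sharing a scenario are dupes/reruns; different
    # scenarios (e.g. BIOS vs. UEFI) are kept separately.
    def latest_per_scenario(results):
        latest = {}
        for res in results:
            key = (res["testcase"]["name"], res["data"].get("scenario"))
            prev = latest.get(key)
            if prev is None or res["submit_time"] > prev["submit_time"]:
                latest[key] = res
        return list(latest.values())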
> > I suppose someone
> > may have a case for identifying 'the same' test against different
> > items; for that purpose, you'd need the lighter 'scenario' definition
> > (not including the item identifier).
> I don't understand this at all; it seems to go against the intended
> meaning of "item".
I just mean, say you want to look at all the results for 'the same'
test but for different tested items; say I want to look at the last
three weeks' worth of all x86_64 BIOS compose.install_ext3 tests, or something
like that. In that case, your 'scenario' does not include the 'item'.
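So the query would be something like this (sketch only; I'd have to
check the ResultsDB API docs for the exact parameter spellings, and the
scenario value here is a hypothetical one built per the format further
down):

    # Fetch recent results for one scenario across all items. URL and
    # parameter names are approximate, not checked against the real API.
    import requests

    resp = requests.get(
        "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0/results",
        params={
            "testcases": "compose.install_ext3",
            "scenario": "fedora.Rawhide.Server-dvd-iso.x86_64.install_ext3.64bit",
            "since": "2017-02-08T00:00:00",
        },
    )
    results = resp.json()["data"]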
> >
> > One thing we could do is make it a convention that each test case (and
> > / or test case name?)
> What's the difference between the two?
I, uh, honestly don't remember what distinction I was trying to draw
there :/. I think it was about the fact that a 'test case' to ResultsDB
is a more complex item than just a name - it has a URL and stuff - so
the 'scenario' properties could possibly be included in something other
than just the test case name.
> > Another possibility would be to make it a convention to include some
> > kind of indication of the test 'scenarios' in the extradata for each
> > result: a 'scenario' key, or something along those lines. This would
> > make it much easier to include the 'item identifier' and 'test
> > scenario' proper separately, and you could simply combine them when you
> > needed the 'complete' scenario.
> I'm not sure what you mean exactly, but having a "scenario" key that
> would list all the other keys which are necessary to understand what
> makes this scenario unique looks like a reasonable idea. For example:
>
>     scenario = [firmware_type, arch]  # testcase name and item are implied
Well, it's a simpler idea than that: just include a key that has all
the necessary *values*. The indirection of having a key that tells you
what other keys to go look up just seems unnecessarily complex. The
idea was simply that there'd be an item like this in the metadata:

    scenario: fedora.Rawhide.Server-dvd-iso.x86_64.server_realmd_join_kickstart.64bit

That's an actual openQA scenario: DISTRI.VERSION.FLAVOR.ARCH.TESTCASE.MACHINE.
Then you can look up 'all results for same item, same scenario' (for
de-duplication) or 'all results for same scenario' (to compare results
for "the same test" across different composes).
I have a diff in review right now that would add this to the openQA reporter:
https://phab.qa.fedoraproject.org/D1155
since we kinda need it right now for the update stuff (so Bodhi can
find the correct results to display).
> The downside is that task authors are required to provide this, and
> therefore it's error-prone. I'm not sure how to do it better, though.
> We could set some reasonable defaults for each type - e.g. for the
> koji_build type, we know we compare testcase name + item (required
> to be an NVR) + arch (if present). For the bodhi_update type, it
> would be testcase name + item (required to be a Bodhi ID) +
> last_updated_timestamp. Etc. Anything above those defaults would
> need to be in "scenario".
Eh, I dunno about having defaults by type. I was thinking that
'testcase name' is kinda the implicit default, though: if a result
doesn't have a 'scenario' item at all, just assume you should use
'testcase name'. For any case where it's more complex than that, the
system that submits the results should provide the scenario info (so
Taskotron should add a 'scenario' key like 'TESTCASE_NAME.ARCH' for
Koji results).
Note that I don't think we should include the 'item' in the scenario,
for all the reasons discussed above; it should just be understood that
for different purposes you might want to query on "scenario + item" or
just "scenario".
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net