Current state of ResultsDB

Josef Skladanka jskladan at redhat.com
Thu Sep 30 11:44:26 UTC 2010


On 09/29/2010 10:38 PM, James Laska wrote:
> Thanks for summarizing Josef.  It's been a while since we got to work
> with resultdb, so this helps keep things fresh!
>
> On Wed, 2010-09-29 at 09:10 -0400, Josef Skladanka wrote:
>> **skipped **
> Interesting, so the logic of how to interpret test results lives on the
> wiki, not in packaged source code.  This is exciting and new! :)

Yes, my idea was to provide 'meaningful' metadata, so we can have a 
'common basic frontend' which shows different testplans based on the 
metadata in the wiki. You have the testplan semantics defined in the 
wiki and the results stored in ResultsDB -> an easy way to see whether a 
package passes different testplans. You just provide the testplan's URL 
and (e.g.) a package name, and the frontend can read the data from 
ResultsDB and visualise it based on the semantics provided on the wiki.

Heh, this description feels a bit overcomplicated, but hopefully you'll 
get the idea.
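
Maybe a skeleton makes it clearer. This is just a sketch of the flow I 
mean - every function and field name below is made up for illustration, 
NOT the real frontend/ResultsDB API:

    def fetch_plan_metadata(testplan_url):
        # would really fetch & parse the testplan wiki page; stubbed here
        return {"mandatory": {"testcases": ["rpmguard"],
                              "on_fail": "FAILED"}}

    def fetch_results(package):
        # would really query ResultsDB for the package's results; stubbed here
        return {"rpmguard": "PASSED"}

    def show_testplan(testplan_url, package):
        plan = fetch_plan_metadata(testplan_url)  # semantics from the wiki
        results = fetch_results(package)          # data from ResultsDB
        # a real frontend would now visualise 'results' according to 'plan'
        print(package, plan, results)

    show_testplan("https://fedoraproject.org/wiki/...", "foo-1.0-1.fc15")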

> One thought, we seem to be mixing a type of structured syntax with wiki
> syntax (bullets).  I wonder if that will lead to syntax-errors and
> confusion from test plan authors.  For example, I imagine for some plan
> authors (installer) the metadata sections would be quite large and we'll
> want an easy way for the test plan maintainer to sanity-check their
> syntax.  Having to manipulate a large chunk of this data might be rough.
> Being able to cut'n'paste it in one big section into a text editor might
> be useful?  Perhaps nesting it all in <pre></pre> tags?  Or perhaps
> there is a syntax parser for this stuff?
>
> This is a bit extreme, but is there a defined syntax?

The syntax on the wiki (bullets & JSON) is used because the library Will 
wrote just assumes this structure of data.

The reason it's like that, IMHO, is that we really need to have only one 
key/value pair per line, and bullets provide a straightforward way to 
visualise that requirement (see the made-up fragment below).

We can surely use/parse data from <pre> tags, but I'd rather talk about 
it with Will first - he might have had other reasons to implement it the 
way it is now.
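
To illustrate the one-key/value-pair-per-line idea, a metadata fragment 
on the wiki could look roughly like this (a made-up fragment - the 
actual field names come from the testplan pages and Will's library, so 
take these as placeholders):

    * mandatory
    ** testcases: ["rpmguard", "rpmlint"]
    ** on_fail: FAILED
    * advisory
    ** testcases: ["rpmlint"]
    ** on_fail: NEEDS_INSPECTION

Each bullet line carries exactly one key/value pair, with JSON used for 
the values that need structure.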

>> You noticed that Rpmlint is in both the "introspection" and "advisory" groups; this is based on the current state
>> of the testplan <https://fedoraproject.org/wiki/User:Kparal/Proposal:Package_update_acceptance_test_plan> - I'm not really
>> sure how to map the "no errors" and "no warnings" criteria onto that based solely on the result, but that's not the real issue for now.
>>
>> This brought me to one more possible thing to think about: are the current "result states" (RUNNING, PASSED,
>> INFO, FAILED, ABORTED, CRASHED, WAIVED, NEEDS_INSPECTION) enough?
>
> As listed, you have states for test case results, but it seems that test
> classes implicitly have a result as well?  Do test cases and test
> classes use the same results?
>
> Or do test classes use one subset: PASSED, FAILED, WAIVED,
> NEEDS_INSPECTION
>
> And test cases use another subset (overlapping): PASSED, FAILED, WAIVED,
> INFO, CRASHED, ABORTED

Well, maybe I described it poorly (or just don't understand your 
question correctly), but:

I suppose that by "test classes" you mean the "mandatory", 
"introspection" and "advisory" groups. If so, then these use the same 
set of results as the testcases do.

This is simply because the groups should define:
1) What the 'allowed subset of results' is for the testcases, i.e. if 
all the testcases have their results in that subset, the overall 
testplan result is PASSED.
2) If any of the tests in the respective group has a result outside the 
allowed subset, then the overall testplan result will be the one 
specified by the 'on_fail' value.
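
In code, that group evaluation could look something like this minimal 
sketch (my assumptions only - the field names echo the description 
above, and the function is hypothetical, not actual ResultsDB code):

    # the result states mentioned in this thread
    ALL_STATES = {"RUNNING", "PASSED", "INFO", "FAILED", "ABORTED",
                  "CRASHED", "WAIVED", "NEEDS_INSPECTION"}

    def overall_result(groups, results):
        # groups:  {name: {"testcases": [...],
        #                  "allowed": acceptable states,
        #                  "on_fail": state to report on violation}}
        # results: {testcase name: state}
        for group in groups.values():
            for testcase in group["testcases"]:
                # 2) a result outside the allowed subset decides the
                #    overall result via the group's 'on_fail' value
                if results.get(testcase) not in group["allowed"]:
                    return group["on_fail"]
        # 1) all testcases are within their groups' allowed subsets
        return "PASSED"

    groups = {
        "mandatory": {"testcases": ["rpmguard"],
                      "allowed": {"PASSED", "INFO", "WAIVED"},
                      "on_fail": "FAILED"},
        "advisory":  {"testcases": ["rpmlint"],
                      "allowed": ALL_STATES - {"CRASHED", "ABORTED"},
                      "on_fail": "NEEDS_INSPECTION"},
    }
    print(overall_result(groups, {"rpmguard": "PASSED",
                                  "rpmlint": "FAILED"}))   # -> PASSED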

>> AND - are we able to sort these according to
>> 'urgency' or 'importance' or 'relevance' or whatever we call it? Let's say
>> we have two tests in a testplan,
>> one is ABORTED and the second is CRASHED - what would you like to see
>> as the 'overall result', ABORTED or CRASHED?
>> I'm not sure if I was able to describe it clearly (I'm really hitting
>> the limits of my descriptive skills here :-D),
>> so feel free to ask as many questions as needed.
>
> Honestly, I have no idea.  I feel like I'd need to get some time behind
> the wheel on this in order to make a determination.  You know, see it in
> action for a while.  If I had to pick now, I'd say that CRASHED or
> ABORTED == FAILED.

Sure thing, I have much the same feeling (even though I think the result 
set is OK at the moment). It's just that Kamil and I talked about it 
some time ago, so I felt the rest of you should also have a chance to 
comment on it :)
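
If we ever do need that 'urgency' ordering, one simple way (just 
sketching the idea, nothing is implemented) would be to rank the states 
by severity and report the worst one:

    # hypothetical severity order, worst first - the exact ranking is
    # precisely what would be up for discussion
    SEVERITY = ["CRASHED", "ABORTED", "FAILED", "NEEDS_INSPECTION",
                "WAIVED", "RUNNING", "INFO", "PASSED"]

    def worst(results):
        return min(results, key=SEVERITY.index)

    print(worst(["ABORTED", "CRASHED"]))   # -> CRASHED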

> If I'm understanding correctly, what seems tremendously challenging in
> what you're creating is that you are designing a mechanism to allow plan
> authors to draft and implement policy on what results will be collected
> (wiki metadata), how results will be presented (front-ends) and what
> actions to take based on the results (bodhi karma etc...)?  Is that
> accurate?

Not really a policy on "which results to collect" - because that mostly 
depends on "what tests do we actually have" - but more like a mechanism 
allowing plan authors to draft/describe the policy in a 
'machine-readable' form, so you can immediately see whether this 
testplan would pass/fail on given package(s), because you already have 
the results stored in ResultsDB.
Maybe the best term to describe it would be a "view" - the metadata you 
provide tell the aforementioned 'generic' frontend which results it 
should grab from the DB, and how the individual test results affect the 
overall testplan result.

So the aforementioned metadata mainly describe the presentation part, 
but are also supposed to provide a mechanism to deduce the testplan's 
result from the individual testcase results.

Hope it's clearer now.

Please ask as many questions as necessary!

Joza

