Current state of ResultsDB

James Laska jlaska at redhat.com
Wed Sep 29 20:38:41 UTC 2010


Thanks for summarizing, Josef.  It's been a while since we got to work
with resultsdb, so this helps keep things fresh!

On Wed, 2010-09-29 at 09:10 -0400, Josef Skladanka wrote:
> Hello gang!
> 
> Currently, there is an instance of ResultsDB core running at <http://test1185.test.redhat.com/wsgi/resultsdb/xmlrpc>
> 
> The tested & working functionality is "storing the data into the DB" using the aforementioned xmlrpc interface.
> 
> At the moment, "using resultsdb as storage" is implemented in the jskladan branch in git
> <http://test1185.test.redhat.com/wsgi/resultsdb/xmlrpc>
> Precisely these patches:
> <http://git.fedorahosted.org/git/?p=autoqa.git;a=commit;h=46b4019b7608a7947bbda27c0219b67698985841>
> <http://git.fedorahosted.org/git/?p=autoqa.git;a=commit;h=e1404be74b646ec5e0a4432c4860cbe2395ce87a>
> 
> As you can see, the only changes are in the AutoQATest base class (lib/python/test.py), and of course the API library
> for ResultsDB is added (lib/python/resultsdb.py or lib/python/logging.py [in the first patch]).
> The rest is simply taken care of automagically.
> 
> This patchset does not make use of the phases, as these would require changes to the sources of individual tests
> and can be postponed until the basic functionality is in master. But the functionality should be pretty much
> at the same level as the mailing list provides (although resultsdb is not storing the "outputs" variable _at the moment_).
> 
> I have also created wiki metadata drafts:
> 
> Testplan:
> https://fedoraproject.org/wiki/User:Jskladan/Sandbox:Package_Update_Acceptance_Test_Plan_Metadata
> 
> Testcases:
> https://fedoraproject.org/wiki/User:Jskladan/Sandbox:Rpmguard_Testcase_Metadata
> https://fedoraproject.org/wiki/User:Jskladan/Sandbox:Rpmlint_Testcase_Metadata
> ...
> 
> As you can see, the main focus is on the Testplan part. The idea behind this wiki page is to provide
> a simple yet complete description of the testplan, so we can build a frontend which takes it and
> visualizes the 'progress' of the testplan for a given envr (which seems to be the common denominator of
> all the arguments given to a test).
> 
> To explain what it all means:
> 
> * "testcases" : _stripped_
>   - this serves two purposes:
>     1) sum up all the tests covered in the testplan
>     2) provide 'aliases' for further use in the metadata - so we can write "Initscripts" instead of the whole URL
> 
> * "testcase_classes" : ["mandatory", "introspection", "advisory"]
>   - this value specifies testcase 'classes' or 'types' - these have purely semantic meaning, and can be _anything_
>     you find meaningful. The idea behind this is to be able to group the tests somehow in the frontend AND
>     to provide some means to bind the results of all the standalone tests into the "testplan result". See below.
> 
> * "mandatory" : {"testcases": ["Package sanity", "Repo sanity", "Conflicts", "Upgrade path"], "pass": ["PASSED", "WAIVED"], on_fail":"FAILED"}
>   - this says, that the ["Package sanity", "Repo sanity", "Conflicts", "Upgrade path"] tests will be considered "passed", if
>     the result stored in ResultsDB is in ["PASSED", "WAIVED"].
>   - If any of theses tests have a different result, the whole testplan is considered to be "failed" (on_fail":"FAILED")
> 
> * "introspection" : {"testcases": ["Rpmlint", "Initscripts"], "pass": ["PASSED", "WAIVED", "INFO"], "on_fail":"FAILED"}
>   - the same as the mandatory tests, but also satisfies with "INFO" as a result of the test.
> 
> * "advisory" : {"testcases": ["Rpmlint", "Rpmguard"], "pass": ["PASSED", "WAIVED", "INFO", "NEEDS_INSPECTION"], "on_fail":"NEEDS_INSPECTION"} 
>   - if any of these testcases' result is out of the "pass" set, mark the testplan as "NEEDS_INSPECTION".

Interesting, so the logic of how to interpret test results lives on the
wiki, not in packaged source code.  This is exciting and new! :)
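
To check my understanding, here's the whole thing written out as a
Python dict (the class entries are copied straight from your mail; the
"testcases" alias section was stripped above, so that part is a
made-up placeholder):

    testplan_metadata = {
        # aliases -> testcase pages; placeholder content, the real
        # list was stripped from the quoted mail above
        "testcases": {
            "Rpmlint": "https://fedoraproject.org/wiki/User:Jskladan/Sandbox:Rpmlint_Testcase_Metadata",
        },
        "testcase_classes": ["mandatory", "introspection", "advisory"],
        "mandatory": {
            "testcases": ["Package sanity", "Repo sanity", "Conflicts",
                          "Upgrade path"],
            "pass": ["PASSED", "WAIVED"],
            "on_fail": "FAILED",
        },
        "introspection": {
            "testcases": ["Rpmlint", "Initscripts"],
            "pass": ["PASSED", "WAIVED", "INFO"],
            "on_fail": "FAILED",
        },
        "advisory": {
            "testcases": ["Rpmlint", "Rpmguard"],
            "pass": ["PASSED", "WAIVED", "INFO", "NEEDS_INSPECTION"],
            "on_fail": "NEEDS_INSPECTION",
        },
    }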

One thought: we seem to be mixing a structured syntax with wiki syntax
(bullets).  I wonder if that will lead to syntax errors and confusion
for test plan authors.  For example, I imagine that for some plan
authors (installer) the metadata sections would be quite large, and
we'll want an easy way for the test plan maintainer to sanity-check
their syntax.  Having to manipulate a large chunk of this data by hand
might be rough.  Being able to cut'n'paste it in one big section into
a text editor might be useful?  Perhaps nesting it all in <pre></pre>
tags?  Or perhaps there is a syntax parser for this stuff?

This is a bit extreme, but is there a defined syntax?
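
To make the sanity-check idea concrete, here's a rough, untested
sketch of what a checker could look like, assuming the metadata ends
up as plain JSON inside a <pre> block (none of this exists yet):

    import json

    REQUIRED_CLASS_KEYS = ("testcases", "pass", "on_fail")

    def check_testplan_metadata(text):
        """Sanity-check a testplan metadata blob pasted from the wiki.

        Returns a list of problems; an empty list means it looks OK.
        """
        try:
            meta = json.loads(text)
        except ValueError as e:
            return ["not valid JSON: %s" % e]
        problems = []
        for cls in meta.get("testcase_classes", []):
            if cls not in meta:
                problems.append("class %r declared but never defined" % cls)
                continue
            for key in REQUIRED_CLASS_KEYS:
                if key not in meta[cls]:
                    problems.append("class %r is missing key %r" % (cls, key))
        return problems

Something like that could run as a wiki-save hook, or just be a script
the plan maintainer runs after editing.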

> You may have noticed that Rpmlint is in both the "introspection" and "advisory" groups; this is based on the current state
> of the testplan <https://fedoraproject.org/wiki/User:Kparal/Proposal:Package_update_acceptance_test_plan> - I'm not really
> sure how to map the "no errors" and "no warnings" distinction onto that based solely on the result, but that's not the real issue for now.
> 
> This brought me to one more thing to think about: are the current "result states" (RUNNING, PASSED,
> INFO, FAILED, ABORTED, CRASHED, WAIVED, NEEDS_INSPECTION) enough?

As listed, you have states for test case results, but it seems that test
classes implicitly have a result as well?  Do test cases and test
classes use the same results?

Or do test classes use one subset: PASSED, FAILED, WAIVED,
NEEDS_INSPECTION

And test cases use another subset (overlapping): PASSED, FAILED, WAIVED,
INFO, CRASHED, ABORTED
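
In code terms, the split I'm picturing (purely a guess on my part):

    # Results a testplan (class) can end up with -- my assumption:
    TESTPLAN_RESULTS = set(["PASSED", "FAILED", "WAIVED",
                            "NEEDS_INSPECTION"])
    # Results an individual testcase can report -- also my assumption:
    TESTCASE_RESULTS = set(["PASSED", "FAILED", "WAIVED", "INFO",
                            "CRASHED", "ABORTED"])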

> AND - are we able to sort these according to their
> 'urgency' or 'importance' or 'relevance' or whatever we call it? Let's say
> we have two tests in a testplan,
> one is ABORTED and the second is CRASHED - what would you like to see
> as the 'overall result': Aborted or Crashed?
> I'm not sure if I described it clearly (I'm really hitting
> the limits of my descriptive skills here :-D),
> so feel free to ask as many questions as needed.

Honestly, I have no idea.  I feel like I'd need to get some time behind
the wheel on this in order to make a determination.  You know, see it in
action for a while.  If I had to pick now, I'd say that CRASHED or
ABORTED == FAILED.
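
Either way, if sorting by "badness" turns out to be the answer to your
ordering question, the whole thing could reduce to a simple ranking;
the ordering below is just my first guess, and RUNNING is left out on
purpose since an unfinished plan probably needs different handling:

    # Later in the list == worse; entirely a strawman ordering.
    SEVERITY = ["PASSED", "INFO", "WAIVED", "NEEDS_INSPECTION",
                "FAILED", "ABORTED", "CRASHED"]

    def overall_result(results):
        """Pick the worst result from a testplan's individual results."""
        return max(results, key=SEVERITY.index)

    # e.g. overall_result(["ABORTED", "CRASHED"]) -> "CRASHED"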

If I'm understanding correctly, the tremendously challenging part of
what you're creating is that you're designing a mechanism that allows
plan authors to draft and implement policy on which results will be
collected (wiki metadata), how results will be presented (front-ends),
and what actions to take based on the results (bodhi karma, etc.).  Is
that accurate?

Thanks,
James