Hello gang,
Last week, I made a simple proof of concept using the resultdb database schema https://fedoraproject.org/wiki/AutoQA_resultsdb_schema and the XML-RPC interface http://rajcze.homelinux.net/resultdb/xmlrpc.py. This can only start/stop a testrun (you can try it like this: http://rajcze.homelinux.net/resultdb/example.txt - you can invent any test name/version combination; if it's not already in the database http://rajcze.homelinux.net/resultdb/frontend/simple_php/?action=show_tests, a new test will be created. Watch the state here: http://rajcze.homelinux.net/resultdb/frontend/simple_php/?action=show_testruns), but it made me realize some things I'd like to share:
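To give you an idea of the client side, a session could look roughly like this (the method names below are simplified for illustration - see example.txt for the actual calls):

    import xmlrpclib  # xmlrpc.client on Python 3

    server = xmlrpclib.ServerProxy("http://rajcze.homelinux.net/resultdb/xmlrpc.py")

    # Starting a testrun for a name/version combination; an unknown
    # combination creates a new Test record first.
    testrun_id = server.start_testrun("conflicts", "1.0")
    # ... the test itself runs here ...
    server.end_testrun(testrun_id, "PASSED")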
Tests and Testruns
==================
1) Even though it's not strictly required for storing the results, we certainly need to store some metadata to be able to show the results in a reasonable way (table Test in the schema). For basic usage, I suggest the fields Name, Version, Tested Package and Description. These should make it possible to search the tests in a useful way.
2) We need a way to identify which test is actually executed in the testrun. For now, I use identification based on a $test_name/$test_version scheme, which is converted to a UUID5 [1] in the URL namespace. I'm not sure whether the UUID is duplicate information (since it can be derived from two other values already known in the database), but it seems reasonable at least as a unique identifier in the database. For now, my API uses name/version parameters for identification; maybe we would like to store the UUID inside the test source (even though I'm not a big fan of this solution) and use it directly. (I hope this is not too confusing :) )
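To make the UUID derivation concrete (the exact "name/version" string I feed into uuid5 is just my current choice):

    import uuid

    def test_uuid(test_name, test_version):
        # Deterministic: the same name/version always yields the same
        # UUID, on any machine, with no lookup in the database needed.
        return uuid.uuid5(uuid.NAMESPACE_URL, "%s/%s" % (test_name, test_version))

    print test_uuid("conflicts", "1.0")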
Testplans and Jobs
==================
My starting idea was that we would have a number of standalone Tests (one Test equals one Testrun), and Testplans would be just a set of these Tests, run in a specified order. One would basically create the testplan 'on the fly' from existing Tests (and/or Testplans) using the TCMS-like-thingie, and the rest would be taken care of automatically.
As you can imagine, this could be quite hard to implement using AutoQA, so I talked with wwoods about it, and I believe we agreed that we would love to have this functionality, but it's not a problem to solve *now*.
So how could Testplans work *now*
---------------------------------
1) Testplans will be hand-written and 'hard-coded', using the resultdb only as metadata/results storage.
2) From the AutoQA point of view, a Testplan is just an ordinary test, which will subsequently run each required Test and report the results to the resultdb.
3) At the beginning of executing a Testplan, it will create a new record in the Job table, and will add a record to the _Job-Testrun table for each executed Testrun (aka Test). This way, we'll be able to show overall progress (as James had in his mockup), and we will use this information in the frontends as well - for example, one could want to compare subsequent executions of a given Testplan.
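Put together, the Testplan wrapper's flow would look roughly like this (create_job, add_testrun_to_job and close_job are made-up names for calls the xmlrpc interface would need to grow):

    import xmlrpclib

    server = xmlrpclib.ServerProxy("http://rajcze.homelinux.net/resultdb/xmlrpc.py")

    job_id = server.create_job("rawhide_acceptance")      # new record in Job
    for name, version in [("repoclosure", "1.0"), ("conflicts", "1.0")]:
        testrun_id = server.start_testrun(name, version)  # new Testrun
        server.add_testrun_to_job(job_id, testrun_id)     # row in _Job-Testrun
        # ... execute the test itself ...
        server.end_testrun(testrun_id, "PASSED")
    server.close_job(job_id)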
Questions
=========
1) Are there any tests we would like to use in more than one Testplan? I.e., is there a need to tell a Test apart from a Testplan? (for me, it's certainly a good thing)
2) What do you think about the UUID identification? I'm sure we need some way to tell the tests apart (at least to be able to automatically store the results :-D), but is a UUID generated from name/version better than a "random" UUID or not? (for me, it's better to have name/version, since one could almost automatically re-use the metadata in a simple way when only the test version changes, and generating the UUID from human-readable values makes more sense to me)
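To show the difference I mean:

    import uuid

    # uuid5 is reproducible - anyone who knows the name/version can
    # recompute the identifier without asking the database:
    assert uuid.uuid5(uuid.NAMESPACE_URL, "conflicts/1.0") == \
           uuid.uuid5(uuid.NAMESPACE_URL, "conflicts/1.0")

    # uuid4 is random - two calls will (for all practical purposes)
    # never match, so the name/version mapping has to be stored somewhere:
    assert uuid.uuid4() != uuid.uuid4()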
Links
=====
[1] - http://docs.python.org/library/uuid.html#uuid.uuid5, http://tools.ietf.org/html/rfc4122.html
On Mon, 2010-03-22 at 11:45 +0100, Josef Skladanka wrote:
> Hello gang,
> Last week, I made a simple proof of concept using the resultdb database schema https://fedoraproject.org/wiki/AutoQA_resultsdb_schema and the XML-RPC interface http://rajcze.homelinux.net/resultdb/xmlrpc.py. This can only start/stop a testrun (you can try it like this: http://rajcze.homelinux.net/resultdb/example.txt - you can invent any test name/version combination; if it's not already in the database http://rajcze.homelinux.net/resultdb/frontend/simple_php/?action=show_tests, a new test will be created. Watch the state here: http://rajcze.homelinux.net/resultdb/frontend/simple_php/?action=show_testruns), but it made me realize some things I'd like to share:
I told Josef on IRC, but I love the schema diagrams. I've always been interested in a good way to graphically represent a schema.
> Tests and Testruns
> ==================
> 1) Even though it's not strictly required for storing the results, we certainly need to store some metadata to be able to show the results in a reasonable way (table Test in the schema). For basic usage, I suggest the fields Name, Version, Tested Package and Description. These should make it possible to search the tests in a useful way.
To be honest, this seems like a sound starting point. Do we need more than just...
* what test is it? (aka a test_name and, only if really needed, a test_version)
* what are we testing? - some unique human-readable (and human-created) identifier for a test run? The most obvious choice being a package envra, but also a build stamp for an ISO install test run (e.g. F-13-Alpha-TC0)?
> 2) We need a way to identify which test is actually executed in the testrun. For now, I use identification based on a $test_name/$test_version scheme, which is converted to a UUID5 [1] in the URL namespace. I'm not sure whether the UUID is duplicate information (since it can be derived from two other values already known in the database), but it seems reasonable at least as a unique identifier in the database. For now, my API uses name/version parameters for identification; maybe we would like to store the UUID inside the test source (even though I'm not a big fan of this solution) and use it directly. (I hope this is not too confusing :) )
I would think that UUIDs make good sense for internally referencing data, but I really don't want to be passing around UUIDs in URLs when directing people to test result dashboards. Does this help?
> Testplans and Jobs
> ==================
> My starting idea was that we would have a number of standalone Tests (one Test equals one Testrun), and Testplans would be just a set of these Tests, run in a specified order. One would basically create the testplan 'on the fly' from existing Tests (and/or Testplans) using the TCMS-like-thingie, and the rest would be taken care of automatically.
> As you can imagine, this could be quite hard to implement using AutoQA, so I talked with wwoods about it, and I believe we agreed that we would love to have this functionality, but it's not a problem to solve *now*.
Definitely a cool concept. But I agree with you guys, this is probably outside the scope of what we need to accomplish in the short term.
> So how could Testplans work *now*
> ---------------------------------
> 1) Testplans will be hand-written and 'hard-coded', using the resultdb only as metadata/results storage.
> 2) From the AutoQA point of view, a Testplan is just an ordinary test, which will subsequently run each required Test and report the results to the resultdb.
> 3) At the beginning of executing a Testplan, it will create a new record in the Job table, and will add a record to the _Job-Testrun table for each executed Testrun (aka Test). This way, we'll be able to show overall progress (as James had in his mockup), and we will use this information in the frontends as well - for example, one could want to compare subsequent executions of a given Testplan.
I know it says 'hard-coded', but this doesn't seem bad for now.
> Questions
> =========
> 1) Are there any tests we would like to use in more than one Testplan? I.e., is there a need to tell a Test apart from a Testplan? (for me, it's certainly a good thing)
I think it will be common for a test to live in multiple test plans. For example, we have the Rawhide Acceptance Test Plan [1], which includes a *small* subset of tests to validate that the repo and the install images are sane. I envision some of those tests would be used again (possibly in a slightly different context) in the installation test plan [2].
[1] https://fedoraproject.org/wiki/QA:Rawhide_Acceptance_Test_Plan
[2] https://fedoraproject.org/wiki/QA:Fedora_13_Install_Test_Plan
> 2) What do you think about the UUID identification? I'm sure we need some way to tell the tests apart (at least to be able to automatically store the results :-D), but is a UUID generated from name/version better than a "random" UUID or not? (for me, it's better to have name/version, since one could almost automatically re-use the metadata in a simple way when only the test version changes, and generating the UUID from human-readable values makes more sense to me)
As long as I never have to use the UUID (or using it is an exception). I mean, if it's just an internal representation (like how git uses hash strings for representing commits), that seems fine. But if we expect users/testers to be passing around UUIDs, I don't know if that improves things.
What I love about storing our test cases and plans in the wiki right now is that I don't need to remember some internal unique ID to reference the test. I just call it what it is, for example 'QA:Testcase_Mediakit_ISO_Size'. What is nice is that the wiki takes an optional version parameter in the event you are referencing something other than HEAD (https://fedoraproject.org/w/index.php?title=QA:Testcase_Mediakit_ISO_Size&am...).
Thanks,
James
> Links
> =====
> [1] - http://docs.python.org/library/uuid.html#uuid.uuid5, http://tools.ietf.org/html/rfc4122.html
First of all, I'd like to thank you for this feedback, James.
On 03/22/2010 09:06 PM, James Laska wrote:
> To be honest, this seems like a sound starting point. Do we need more than just...
> * what test is it? (aka a test_name and, only if really needed, a test_version)
> * what are we testing? - some unique human-readable (and human-created) identifier for a test run? The most obvious choice being a package envra, but also a build stamp for an ISO install test run (e.g. F-13-Alpha-TC0)?
I believe that we can easily identify Tests just using the name/version.
Identifying test runs - we talked about this with Kamil yesterday, and it's obvious that we will need to store different information for different types of tests (aka an envr for a package sanity test, a build stamp for an install test) - as you said.
I'm currently thinking about the database schema (take a look at the updated schema, if you please: https://fedoraproject.org/wiki/AutoQA_resultsdb_schema), which would allow us to do so - we'll have a basic Testrun table storing the common data, and some other table(s) to store the specific values.
I'm not sure how to get this specific-values table(s) right - the schema as currently proposed is quite flexible, but it might not provide an easy overview. The other possibility I can think of is one _huge_ sparse table with all the possible extra values - but this seems to have more disadvantages than advantages, at least from the performance point of view.
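To show what I mean by the flexible variant - common columns live in Testrun, type-specific values go into a generic key/value side table (the table and column names here are only a sketch, not the proposed schema itself):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE testrun (id INTEGER PRIMARY KEY, test_id INTEGER,
                              started TEXT, result TEXT);
        CREATE TABLE testrun_data (testrun_id INTEGER REFERENCES testrun (id),
                                   key TEXT, value TEXT);
    """)
    db.execute("INSERT INTO testrun VALUES (1, 42, '2010-03-23', 'PASSED')")
    # A package sanity run stores an envr, an install run a build stamp -
    # same table, different keys:
    db.execute("INSERT INTO testrun_data VALUES (1, 'envr', 'kernel-2.6.32.9-70.fc12')")
    db.execute("INSERT INTO testrun_data VALUES (1, 'arch', 'x86_64')")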
> I would think that UUIDs make good sense for internally referencing data, but I really don't want to be passing around UUIDs in URLs when directing people to test result dashboards. Does this help?
I completely agree - the UUID was intended for use by internal tools, not for humans - I was once again thinking about the future in which we have a TCMS. Then we will certainly need a way to interconnect the (most probably) standalone databases using some common key - and this UUID seems like the easiest way.
In the current db schema, the UUID is kind of redundant information (since it can be generated directly from information stored in the same table), but it seemed like a good idea to put it there, at least so we could discuss it :)
> I know it says 'hard-coded', but this doesn't seem bad for now.
Good to hear that none of us has a problem with this solution being used for now, since it's IMHO the most straightforward way right now.
> I think it will be common for a test to live in multiple test plans. For example, we have the Rawhide Acceptance Test Plan [1], which includes a *small* subset of tests to validate that the repo and the install images are sane. I envision some of those tests would be used again (possibly in a slightly different context) in the installation test plan [2].
> [1] https://fedoraproject.org/wiki/QA:Rawhide_Acceptance_Test_Plan
> [2] https://fedoraproject.org/wiki/QA:Fedora_13_Install_Test_Plan
OK, thank you for this information. Made me happy :)
> As long as I never have to use the UUID (or using it is an exception). I mean, if it's just an internal representation (like how git uses hash strings for representing commits), that seems fine. But if we expect users/testers to be passing around UUIDs, I don't know if that improves things.
As I wrote, I certainly do not want users to use UUIDs themselves - I remember the time Ubuntu switched from device paths to UUIDs in grub.conf and fstab - I've never really known which disk I'm mounting ever since :)
> What I love about storing our test cases and plans in the wiki right now is that I don't need to remember some internal unique ID to reference the test. I just call it what it is, for example 'QA:Testcase_Mediakit_ISO_Size'. What is nice is that the wiki takes an optional version parameter in the event you are referencing something other than HEAD (https://fedoraproject.org/w/index.php?title=QA:Testcase_Mediakit_ISO_Size&am...).
My idea of storing the Tests is quite the same - there will be a history stored in the database, while the latest version (i.e. the one with the highest version number) is considered the HEAD. This concept seems natural and right to me too.
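Picking the HEAD is then just a query for the highest stored version - a sketch with an integer version column (a free-form version string would need smarter ordering):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE test (name TEXT, version INTEGER)")
    db.executemany("INSERT INTO test VALUES (?, ?)",
                   [("conflicts", 1), ("conflicts", 2), ("conflicts", 3)])
    head = db.execute("SELECT name, version FROM test WHERE name = ?"
                      " ORDER BY version DESC LIMIT 1", ("conflicts",)).fetchone()
    print head  # ('conflicts', 3)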
Once again, thank you a lot for your feedback.
Looking forward to hearing from the other people involved as well.
Joza
On Tue, 2010-03-23 at 12:42 +0100, Josef Skladanka wrote:
> First of all, I'd like to thank you for this feedback, James.
> On 03/22/2010 09:06 PM, James Laska wrote:
>> To be honest, this seems like a sound starting point. Do we need more than just...
>> * what test is it? (aka a test_name and, only if really needed, a test_version)
>> * what are we testing? - some unique human-readable (and human-created) identifier for a test run? The most obvious choice being a package envra, but also a build stamp for an ISO install test run (e.g. F-13-Alpha-TC0)?
> I believe that we can easily identify Tests just using the name/version.
> Identifying test runs - we talked about this with Kamil yesterday, and it's obvious that we will need to store different information for different types of tests (aka an envr for a package sanity test, a build stamp for an install test) - as you said.
> I'm currently thinking about the database schema (take a look at the updated schema, if you please: https://fedoraproject.org/wiki/AutoQA_resultsdb_schema), which would allow us to do so - we'll have a basic Testrun table storing the common data, and some other table(s) to store the specific values.
> I'm not sure how to get this specific-values table(s) right - the schema as currently proposed is quite flexible, but it might not provide an easy overview. The other possibility I can think of is one _huge_ sparse table with all the possible extra values - but this seems to have more disadvantages than advantages, at least from the performance point of view.
Definitely deep in the implementation details, which is okay for prototyping. However, let's make sure that whatever choices are made here complement and support the resultsdb use cases (https://fedoraproject.org/wiki/AutoQA_resultsdb_use_cases).
I had a good discussion with Dave Lawrence, who maintains bugzilla.redhat.com. He described several regrets about fine-tuning a database schema with many joins and very workflow-specific tables/columns. While it addresses the needs of the day, it presents a challenge to adjust as the business rules change. He suggested that many of the improvement requests he gets for bugzilla could be addressed by allowing user-maintained metadata structures. One idea was to use a generic tagging concept that allows for data definition by the users, not the db admins. For example, look at mediawiki and how its structure allows for defining wiki content and tagging it (e.g. [[Category:]]). I wonder how mediawiki's schema is structured? I'm sure it's quite complicated as it's a fairly mature software product ... but who knows, maybe something to learn from there.
Could a similar mechanism be used here? Instead of specific tables for different dashboards (e.g. install, package_sanity etc...), could this be made such that the test result reporting software is responsible for providing the metadata (tags or whatever) to organize things correctly? I see a "Tag" table in your schema; it sounds like you're already heading in this direction?
As it does now [1] with rats, the autoqa wrapper would be responsible for reporting test results back to the resultsdb, providing information along these lines (it looks like you've already got some of these concepts in the schema, I'm just thinking out loud here):
* build tested - just a generic "build" concept ... includes:
  * a unique text name
  * any number of user-submitted tags (for example, "compose_time:123151243", "arch:i386", "envra:kernel-2.6.32.9-70.fc12.x86_64" ...)
* For each test result ...
  * build_tested
  * test_name
  * test_result - [pass, fail, warn, info, waived]
  * test_case_url - where to find more details on the test (points to http://fedoraproject.org/wiki/...)
  * test_run_url - where to find more details on the test result (points to the autotest server for job details)
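One way the wrapper could hand all of that over in a single call - the method name and payload layout below are just me thinking out loud as well:

    import xmlrpclib

    server = xmlrpclib.ServerProxy("http://rajcze.homelinux.net/resultdb/xmlrpc.py")
    server.report_result({
        "build_tested":  "kernel-2.6.32.9-70.fc12",
        "tags":          ["compose_time:123151243", "arch:i386",
                          "envra:kernel-2.6.32.9-70.fc12.x86_64"],
        "test_name":     "package_sanity",
        "test_result":   "pass",  # one of pass/fail/warn/info/waived
        "test_case_url": "http://fedoraproject.org/wiki/...",
        "test_run_url":  "http://autotest-server/results/1234",  # made-up URL
    })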
What's missing?
>> I would think that UUIDs make good sense for internally referencing data, but I really don't want to be passing around UUIDs in URLs when directing people to test result dashboards. Does this help?
> I completely agree - the UUID was intended for use by internal tools, not for humans - I was once again thinking about the future in which we have a TCMS. Then we will certainly need a way to interconnect the (most probably) standalone databases using some common key - and this UUID seems like the easiest way.
The unique representation of a test case we use now is a URL (a wiki link). Is that sufficient for now?
> In the current db schema, the UUID is kind of redundant information (since it can be generated directly from information stored in the same table), but it seemed like a good idea to put it there, at least so we could discuss it :)
>> I know it says 'hard-coded', but this doesn't seem bad for now.
> Good to hear that none of us has a problem with this solution being used for now, since it's IMHO the most straightforward way right now.
>> I think it will be common for a test to live in multiple test plans. For example, we have the Rawhide Acceptance Test Plan [1], which includes a *small* subset of tests to validate that the repo and the install images are sane. I envision some of those tests would be used again (possibly in a slightly different context) in the installation test plan [2].
>> [1] https://fedoraproject.org/wiki/QA:Rawhide_Acceptance_Test_Plan
>> [2] https://fedoraproject.org/wiki/QA:Fedora_13_Install_Test_Plan
> OK, thank you for this information. Made me happy :)
>> As long as I never have to use the UUID (or using it is an exception). I mean, if it's just an internal representation (like how git uses hash strings for representing commits), that seems fine. But if we expect users/testers to be passing around UUIDs, I don't know if that improves things.
> As I wrote, I certainly do not want users to use UUIDs themselves - I remember the time Ubuntu switched from device paths to UUIDs in grub.conf and fstab - I've never really known which disk I'm mounting ever since :)
Heh, yeah good example.
>> What I love about storing our test cases and plans in the wiki right now is that I don't need to remember some internal unique ID to reference the test. I just call it what it is, for example 'QA:Testcase_Mediakit_ISO_Size'. What is nice is that the wiki takes an optional version parameter in the event you are referencing something other than HEAD (https://fedoraproject.org/w/index.php?title=QA:Testcase_Mediakit_ISO_Size&am...).
> My idea of storing the Tests is quite the same - there will be a history stored in the database, while the latest version (i.e. the one with the highest version number) is considered the HEAD. This concept seems natural and right to me too.
What if we let test case management be handled by a test case management tool? Is that in the scope of what we need to track here?
> Once again, thank you a lot for your feedback.
> Looking forward to hearing from the other people involved as well.
Thanks,
James