Hi folks!
We should probably set up some projects and so on for this so we can use issue trackers, but I thought before committing to any structure we could have at least a short mailing list discussion for planning the 'release validation NG' work.
For anyone who forgot / didn't know - 'release validation NG' is my nickname for the project to write a dedicated system for manual release validation testing result submission, using resultsdb for storage. The goal is to make manual validation testing result submission easier and less error-prone, and also to allow for improved analysis of results and integration of manual results with results from other systems (taskotron, openQA, autocloud etc). This would be designed to replace the system of editable wiki pages that I call 'Wikitcms':
https://fedoraproject.org/wiki/Test_Results:Current_Installation_Test (etc.)
https://fedoraproject.org/wiki/Wikitcms
The latter page is a broad overview of how I see the Wikitcms 'system' working at present. It's that system we'd be replacing, so it may help you to read through that page to get some context and background on how we got here and why 'release validation NG' might be a good idea :)
We have a ticket open with the design team: https://pagure.io/design/issue/483
where kathryng is helping us with design mockups based on my initial rough sketches, which is great. Please do take a look at the mockups and discussion there and add thoughts if you have any.
My very initial thought on architecture is that we could have two main components, a webui component and a validator/resultsdb submitter component.
The webui component would be exactly that: the actual web UI users would interact with and submit their results to. It would query the validator/submitter component to find out what relevant 'test events' were available, and what tests, environments and so forth were available for each event, and then present an appropriate UI to the user for them to fill in their results.
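To make that a bit more concrete, here's a very rough sketch of the kind of queries I imagine the web UI making (the endpoint names, URL and fields are purely hypothetical, just to illustrate the idea - nothing here is decided):

    import requests

    # hypothetical location of the validator/submitter API
    VALIDATOR_URL = "http://localhost:5000/api/v1"

    # ask which 'test events' (composes) are currently open for testing
    events = requests.get(VALIDATOR_URL + "/events").json()

    for event in events:
        # for each event, fetch the tests and test environments the web UI
        # should present to the user
        tests = requests.get(VALIDATOR_URL + "/events/%s/tests" % event["id"]).json()
        for test in tests:
            print(event["compose"], test["name"], test["environments"])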
The validator/submitter component would be responsible for watching out for new composes and keeping track of tests and 'test environments' (if we keep that concept); it would have an API with endpoints you could query for this kind of information in order to construct a result submission, and for submitting results in some kind of defined form. On receiving a result it would validate it according to schemas that admins of the system could configure (to ensure the report is for a known compose, image, test and test environment, and to do some checking of things like the result status, the user who submitted the result, and the comment content). Then it'd forward the result to resultsdb.
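Again purely as illustration, a minimal sketch of that validate-then-forward flow might look something like this (the report fields, outcome values and the exact ResultsDB payload are all placeholders, and in reality the checks would come from admin-configurable schemas rather than being hard-coded):

    import requests

    # outcome values we'd accept (placeholder list)
    KNOWN_OUTCOMES = ("PASSED", "FAILED", "INFO", "NEEDS_INSPECTION")

    def validate_report(report, known_composes, known_tests):
        """Reject reports that aren't for a known compose, test and
        environment, or that use a bogus result status."""
        if report["compose"] not in known_composes:
            raise ValueError("unknown compose: %s" % report["compose"])
        test = known_tests.get(report["test"])
        if test is None:
            raise ValueError("unknown test: %s" % report["test"])
        if report["environment"] not in test["environments"]:
            raise ValueError("unknown environment: %s" % report["environment"])
        if report["outcome"] not in KNOWN_OUTCOMES:
            raise ValueError("bad outcome: %s" % report["outcome"])

    def forward_to_resultsdb(report, resultsdb_url="http://localhost:5001/api/v2.0"):
        """Hand a validated report on to ResultsDB over its HTTP API.
        (Exact payload format / testcase naming still to be worked out.)"""
        payload = {
            "testcase": {"name": "manual.release-validation." + report["test"]},
            "outcome": report["outcome"],
            "note": report.get("comment", ""),
            "data": {
                "compose": report["compose"],
                "environment": report["environment"],
                "user": report["user"],
            },
        }
        resp = requests.post(resultsdb_url + "/results", json=payload)
        resp.raise_for_status()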
This is just an idea, though. There are a few reasons I thought it might make sense to separate these two elements:
* It gives us flexibility in a few important respects:
  * The validator/submitter could accept results from other things, not just the webUI - e.g. relval
  * The validator/submitter could send results to other things, not just ResultsDB - e.g. the wiki
  * The validator/submitter could be written to allow expansion to cover things other than release validation results, e.g. Test Day results, so a future rewrite of the 'testdays' webapp could use it
* It should help with splitting up the work between people; different people can work on the web UI and the validator/submitter without blocking each other too often
So these are just my very early thoughts on the project; it'd be great to know what other folks think! If we can agree on a basic architecture and plan, we could start setting up projects (I think I'd suggest we do this in Pagure, but we can also consider Phabricator) and tickets for the initial work.