Hi folks!
We should probably set up some projects and so on for this so we can use issue trackers, but I thought before committing to any structure we could have at least a short mailing list discussion for planning the 'release validation NG' work.
For anyone who forgot / didn't know - 'release validation NG' is my nickname for the project to write a dedicated system for manual release validation testing result submission, using resultsdb for storage. The goal is to make manual validation testing result submission easier and less error-prone, and also to allow for improved analysis of results and integration of manual results with results from other systems (taskotron, openQA, autocloud etc.). This would be designed to replace the system of editable wiki pages that I call 'Wikitcms':
https://fedoraproject.org/wiki/Test_Results:Current_Installation_Test (etc.)
https://fedoraproject.org/wiki/Wikitcms
the latter page is a broad overview of how I see the Wikitcms 'system' working at present. It's that system we'd be replacing, so it may help you to read through that page to get some context and background on how we got here and why 'release validation NG' might be a good idea :)
We have a ticket open with the design team: https://pagure.io/design/issue/483
where kathryng is helping us with design mock ups based on my initial rough sketches, which is great. Please do take a look at the mockups and discussion there and add thoughts if you have any.
My very initial thought on architecture is that we could have two main components, a webui component and a validator/resultsdb submitter component.
The webui component would be exactly that, the actual web UI for users to interact with and submit their results to. It would query the validator/submitter component to find out what relevant 'test events' were available, and what tests and environments and so forth for each event, and then present an appropriate UI to the user for them to fill in their results.
The validator/submitter component would be responsible for watching out for new composes and keeping track of tests and 'test environments' (if we keep that concept); it would have an API with endpoints you could query for this kind of information in order to construct a result submission, and for submitting results in some kind of defined form. On receiving a result it would validate it according to some schemas that admins of the system could configure (to ensure the report is for a known compose, image, test and test environment, and do some checking of stuff like the result status, user who submitted the result, comment content, stuff like that). Then it'd forward the result to resultsdb.
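To make that slightly less abstract, here's a very rough sketch of the kind of API surface I imagine for the validator/submitter (a Flask-style pseudo-service; every endpoint name, field and check below is invented, it's just to show the shape, not a design):

    # Rough sketch only - endpoint names, fields and checks are all hypothetical.
    from flask import Flask, jsonify, request, abort

    app = Flask(__name__)

    @app.route('/api/v1/events')
    def list_events():
        # 'test events' the middleware has noticed, e.g. from compose messages
        return jsonify(events=[{'compose': 'Fedora-26-20161128.n.0', 'type': 'nightly'}])

    @app.route('/api/v1/events/<compose>/tests')
    def list_tests(compose):
        # which test cases / environments apply to this event
        return jsonify(tests=[{'testcase': 'QA:Testcase_Boot_default_install',
                               'environments': ['x86_64 BIOS', 'x86_64 UEFI']}])

    @app.route('/api/v1/results', methods=['POST'])
    def submit_result():
        result = request.get_json()
        errors = validate(result)          # check against the configured schema
        if errors:
            abort(400, description='; '.join(errors))
        forward_to_resultsdb(result)       # hand the validated result on to ResultsDB
        return jsonify(status='ok'), 201

    def validate(result):
        # placeholder - real checks: known compose, image, testcase, environment,
        # sane outcome, known reporter, comment content and so on
        return [] if result.get('testcase') else ['testcase is required']

    def forward_to_resultsdb(result):
        pass  # placeholder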
This is just an idea, though. There are a few reasons I thought it might make sense to separate these two elements:
* It gives us flexibility in a few important respects:
  * The validator/submitter could accept results from other things, not just the web UI - e.g. relval
  * The validator/submitter could send results to other things, not just ResultsDB - e.g. the wiki
  * The validator/submitter could be written to allow expansion to cover things other than release validation results, e.g. Test Day results, so a future rewrite of the 'testdays' webapp could use it
* It should help with splitting up the work between people; different people can work on the web UI and the validator/submitter without blocking each other too often
So these are just my very early thoughts on the project, it'd be great to know what other folks think! If we can agree on a basic architecture and plan we could start setting up projects (I think I'd suggest we do this in Pagure, but we can also consider Phabricator) and tickets for the initial work.
On Mon, 2016-11-28 at 09:40 -0800, Adam Williamson wrote:
The validator/submitter component would be responsible for watching out for new composes and keeping track of tests and 'test environments' (if we keep that concept); it would have an API with endpoints you could query for this kind of information in order to construct a result submission, and for submitting results in some kind of defined form. On receiving a result it would validate it according to some schemas that admins of the system could configure (to ensure the report is for a known compose, image, test and test environment, and do some checking of stuff like the result status, user who submitted the result, comment content, stuff like that). Then it'd forward the result to resultsdb.
It occurs to me that it's possible resultsdb might be designed to do all this already, or it might make sense to amend resultsdb to do all or some of it; if that's the case, resultsdb folks, please do jump in and suggest it :)
On Mon, Nov 28, 2016 at 6:48 PM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Mon, 2016-11-28 at 09:40 -0800, Adam Williamson wrote:
The validator/submitter component would be responsible for watching out for new composes and keeping track of tests and 'test environments' (if we keep that concept); it would have an API with endpoints you could query for this kind of information in order to construct a result submission, and for submitting results in some kind of defined form. On receiving a result it would validate it according to some schemas that admins of the system could configure (to ensure the report is for a known compose, image, test and test environment, and do some checking of stuff like the result status, user who submitted the result, comment content, stuff like that). Then it'd forward the result to resultsdb.
It occurs to me that it's possible resultsdb might be designed to do all this already, or it might make sense to amend resultsdb to do all or some of it; if that's the case, resultsdb folks, please do jump in and suggest it :)
That's what I thought when reading the proposal - the "Submitter" seems like an unnecessary layer, to some extent - submitting stuff to resultsdb is pretty easy. What resultsdb is not doing now, though, is the data validation - let's say you wanted to check that specific fields are set (on top of what resultsdb requires, which basically is just testcase and outcome) - that can be done in resultsdb (there is a diff with that functionality), but at the moment only on a global level. So it might not necessarily make sense to set e.g. 'compose' as a required field for the whole of resultsdb, since testday-related results might not even have that.
So if this is what you wanted to do (data validation), it might be a good idea to have that submitter middleware. Or (and I'm not sure it's the better solution) I could try and make that configuration more granular, so you could set the requirements e.g. per namespace, thus effectively allowing setting the constraints even per testcase. But that would need even more thought - should the constraints be inherited from the upper layers? How about when all but one of the testcases in a namespace need to have parameter X, but for that one it does not make sense? (Probably a design error, but it needs to be thought through in the design phase.)
So, even though ResultsDB could do that, it is borderline "too smart" for it (I really want to keep any semantics out of ResultsDB). I'm not necessarily against it (especially if we end up wanting that in more places), but until now we more or less worked with "the client that submits data makes sure all required fields are set", i.e. "it's not resultsdb's place to say what is or is not required for a specific usecase". I'm not against the change, but at least for the first implementation (of the Release validation NG) I'd vote for the middleware solution. We can add the data validation functionality to ResultsDB later on, when we have a more concrete idea.
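Just for reference, a bare ResultsDB submission is really only this much - sketched against the 2.0 HTTP API, with the URL and the extra 'data' keys made up, so take it as illustration rather than gospel:

    # Sketch of a direct ResultsDB 2.0 submission; double-check the URL and field
    # names against the real deployment, the 'data' keys here are invented.
    import requests

    RESULTSDB = 'https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0'

    result = {
        'testcase': 'compose.install_default',    # hypothetical testcase name
        'outcome': 'PASSED',                       # testcase + outcome is all we require
        'note': 'manual result submitted via the new tool',
        'data': {                                  # free-form extras, not validated today
            'compose': 'Fedora-26-20161128.n.0',
            'item': 'Fedora-Server-dvd-x86_64-26-20161128.n.0.iso',
            'firmware': 'UEFI',
        },
    }

    requests.post(RESULTSDB + '/results', json=result).raise_for_status()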
Makes sense?
Joza
On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
So if this is what you wanted to do (data validation), it might be a good idea to have that submitter middleware.
Yeah, that's really kind of the key 'job' of that layer. Remember, we're dealing with *manual* testing here. We can't really just have a webapp that forwards whatever the hell people manage to stuff through its input fields into ResultsDB.
Really there's two kinds of 'validation' going on, if you'd like to think of it that way: we need to tell the web UI 'these are the possible scenarios for which you should prompt users to input results at all' (which for release validation is all the 'notice there's a new compose, combine it with the defined release validation test cases and expose all that info to the UI' work), and we need to take the data the web UI generates from user input, make sure it actually matches up with the schema we decide on for storing the results before forwarding it to resultsdb, and tell the web UI there's a problem if it doesn't.
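For the second half, I'm imagining something as dumb as running every incoming result through a schema before it ever touches ResultsDB - in the spirit of the sketch below (jsonschema purely as an illustration, and the schema contents are invented, not a proposal):

    # Invented schema, just to show the 'validate before forwarding' step.
    import jsonschema

    RELEASE_VALIDATION_RESULT = {
        'type': 'object',
        'required': ['compose', 'image', 'testcase', 'environment', 'outcome', 'reporter'],
        'properties': {
            'compose': {'type': 'string', 'pattern': r'^Fedora-.*\d{8}(\.n)?\.\d+$'},
            'outcome': {'enum': ['PASSED', 'FAILED', 'WARNING', 'NEEDS_INSPECTION']},
            'comment': {'type': 'string', 'maxLength': 2000},
        },
    }

    def check_result(result):
        """Return a list of problems to hand back to the web UI (empty if it's fine)."""
        validator = jsonschema.Draft4Validator(RELEASE_VALIDATION_RESULT)
        return [error.message for error in validator.iter_errors(result)]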
That's how I see it, anyhow. Tell me if I seem way off. :)
On Wed, 2016-11-30 at 02:10 -0800, Adam Williamson wrote:
On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
So if this is what you wanted to do (data validation), it might be a good idea to have that submitter middleware.
Yeah, that's really kind of the key 'job' of that layer. Remember, we're dealing with *manual* testing here. We can't really just have a webapp that forwards whatever the hell people manage to stuff through its input fields into ResultsDB.
I guess another way you could look at it is, this would be the layer where we actually define what kinds of manual test results we want to store in ResultsDB, and what the format for each type should be. I kinda like the idea that we could use the same middleware to do that job for various different frontends for submitting and viewing results, e.g. the webUI part of this project, a CLI app like relval, and a different webUI like testdays...
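As a sketch of what that could look like, the middleware would really just carry a small registry of 'result types', one per workflow, each with its own schema and target namespace (all the names below are invented):

    # Hypothetical registry - the point is only that release validation, Test Days
    # etc. become different entries in the same place, shared by all frontends.
    RESULT_TYPES = {
        'release-validation': {
            'schema': {'type': 'object',
                       'required': ['compose', 'image', 'testcase', 'environment', 'outcome']},
            'resultsdb_namespace': 'compose',
            'frontends': ['relvalng-web', 'relval'],   # web UI and CLI share one definition
        },
        'testday': {
            'schema': {'type': 'object', 'required': ['testday', 'testcase', 'outcome']},
            'resultsdb_namespace': 'testday',
            'frontends': ['testdays-web'],
        },
    }

    def schema_for(result_type):
        return RESULT_TYPES[result_type]['schema']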
On Wed, Nov 30, 2016 at 11:14 AM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Wed, 2016-11-30 at 02:10 -0800, Adam Williamson wrote:
On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
So if this is what you wanted to do (data validation), it might be a good idea to have that submitter middleware.
Yeah, that's really kind of the key 'job' of that layer. Remember, we're dealing with *manual* testing here. We can't really just have a webapp that forwards whatever the hell people manage to stuff through its input fields into ResultsDB.
I guess another way you could look at it is, this would be the layer where we actually define what kinds of manual test results we want to store in ResultsDB, and what the format for each type should be. I kinda like the idea that we could use the same middleware to do that job for various different frontends for submitting and viewing results, e.g. the webUI part of this project, a CLI app like relval, and a different webUI like testdays...
Yes, that IMO makes a lot of sense. Especially if we want to target multiple "input tools". Then it might make sense to have what I was discussing in the previous post (and what you have been, I think, talking about) - a format (two of them, actually) that defines:
1) what testcases are relevant for X (where X is, say, Rawhide nightly testing, Testday for translations, foobar)
2) the required structure (fields, types of the fields) of the response
The question here is whether the "required structure" is better off "per testcase" (i.e. "this testcase always requires these fields") or "per context" (i.e. results for this "thing" always require these fields) or even those combined ("this testcase, in this context, requires X, Y and Z, but in this other context, it only needs FOOBAR").
I would try not to go the third way, because that is really prone to errors IMO, and I'm not sure that "per context" is always right. So for me, the "TCMS" part of the data should be:
1) testcases (with required fields/types of the fields in the "result response")
2) testplans - which testcases, possibly organized into groups. Maybe even dependencies, plus saying "I need testcase X to pass, Y can be pass or warn, Z can be whatever when A passes, for the testplan to pass"
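Something like this is the shape I have in mind - a pure sketch, every name and field below is invented:

    # Illustrative data only, not a format proposal.
    TESTCASES = {
        'QA:Testcase_Boot_default_install': {
            'result_fields': {'image': str, 'firmware': str},   # required in the result
        },
        'QA:Testcase_Anaconda_updates.img_via_URL': {
            'result_fields': {'image': str},
        },
    }

    TESTPLAN = {
        'name': 'rawhide-nightly',
        'groups': {
            'installation': ['QA:Testcase_Boot_default_install',
                             'QA:Testcase_Anaconda_updates.img_via_URL'],
        },
        # the "X must pass, Y may warn" kind of rules
        'criteria': [
            {'testcase': 'QA:Testcase_Boot_default_install', 'allowed': ['PASSED']},
            {'testcase': 'QA:Testcase_Anaconda_updates.img_via_URL',
             'allowed': ['PASSED', 'WARNING']},
        ],
    }

    def testplan_outcome(results, plan):
        """Very naive gating: the plan 'passes' if every criterion is met by at
        least one matching result."""
        def ok(criterion):
            return any(r['testcase'] == criterion['testcase'] and
                       r['outcome'] in criterion['allowed'] for r in results)
        return all(ok(c) for c in plan['criteria'])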
But this is a fairly complex thing, to be honest, and it would be the first and only usable TCMS in the world (from my point of view).
Let's do it!
On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
I would try not to go the third way, because that is really prone to errors IMO, and I'm not sure that "per context" is always right. So for me, the "TCMS" part of the data should be:
1) testcases (with required fields/types of the fields in the "result response")
2) testplans - which testcases, possibly organized into groups. Maybe even dependencies, plus saying "I need testcase X to pass, Y can be pass or warn, Z can be whatever when A passes, for the testplan to pass"
But this is a fairly complex thing, to be honest, and it would be the first and only usable TCMS in the world (from my point of view).
I have rather different opinions, actually...but I'm not working on this right now and I'd rather have something concrete to discuss than just opinions :)
On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
I would try not to go the third way, because that is really prone to errors IMO, and I'm not sure that "per context" is always right. So for me, the "TCMS" part of the data should be:
1) testcases (with required fields/types of the fields in the "result response")
2) testplans - which testcases, possibly organized into groups. Maybe even dependencies, plus saying "I need testcase X to pass, Y can be pass or warn, Z can be whatever when A passes, for the testplan to pass"
But this is a fairly complex thing, to be honest, and it would be the first and only usable TCMS in the world (from my point of view).
I have rather different opinions, actually...but I'm not working on this right now and I'd rather have something concrete to discuss than just opinions :)
We should obviously set goals properly, before diving into implementation details :) I'm interested in what you have in mind, since I've been thinking about this particular kind of thing for the last few years, and it really depends on what you expect of the system.
On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
I would try not to go the third way, because that is really prone to errors IMO, and I'm not sure that "per context" is always right. So for me, the "TCMS" part of the data should be:
1) testcases (with required fields/types of the fields in the "result response")
2) testplans - which testcases, possibly organized into groups. Maybe even dependencies, plus saying "I need testcase X to pass, Y can be pass or warn, Z can be whatever when A passes, for the testplan to pass"
But this is a fairly complex thing, to be honest, and it would be the first and only usable TCMS in the world (from my point of view).
I have rather different opinions, actually...but I'm not working on this right now and I'd rather have something concrete to discuss than just opinions :)
We should obviously set goals properly, before diving into implementation details :) I'm interested in what you have in mind, since I've been thinking about this particular kind of thing for the last few years, and it really depends on what you expect of the system.
Well, the biggest point where I differ is that I think your 'third way' is kind of unavoidable. For all kinds of reasons.
We re-use test cases between package update testing, Test Days, and release validation testing, for instance; some tests are more or less unique to some specific process, but certainly not all of them. The desired test environments may be significantly different in these different cases.
We also have secondary arch teams using release validation processes similar to the primary arch process: they use many of the same test cases, but the desired test environments are of course not the same.
Of course, in a non-wiki based system you could plausibly argue that a test case could be stored along with *all* of its possible environments, and then the configuration for a specific test event could include the information as to which environments are relevant and/or required for that test event. But at that point I think you're rather splitting hairs...
In my original vision of 'relval NG' the test environment wouldn't actually exist at all, BTW. I was hoping we could simply list test cases, and the user could choose the image they were testing, and the image would serve as the 'test environment'. But on second thought that's unsustainable as there are things like BIOS vs. UEFI where we may want to run the same test on the same image and consider it a different result. The only way we could stick to my original vision there would be to present 'same test, different environment' as another row in the UI, kinda like we do for 'two-dimensional test tables' in Wikitcms; it's not actually horrible UI, but I don't think we'd want to pretend in the backend that these were two completely different tests. I mean, we could. Ultimately a 'test case' is going to be a database row with a URL and a numeric ID. We don't *have* to say the URL key is unique. ;)
There are really all kinds of ways you can structure it, but I think fundamentally they'd all boil down to the same inherent level of complexity; some of them might be demonstrably worse than others (like...sticking them all in wikicode and parsing wiki table syntax to figure out when you have different 'test instances' for the same test case! that sounds like a *really bad* way to do it!)
Er. I'm rambling, aren't I? One reason I actually tend to prefer just sitting down and writing something to trying to plan it all out comprehensively is that when I just sit here and try to think out planning questions I get very long-winded and fuzzy and chase off down all possible paths. Just writing a damn thing is usually quite quick and crystallizes a lot of the questions wonderfully...
On Thu, Dec 1, 2016 at 6:04 PM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
I would try not to go the third way, because that is really prone to errors IMO, and I'm not sure that "per context" is always right. So for me, the "TCMS" part of the data should be:
1) testcases (with required fields/types of the fields in the "result response")
2) testplans - which testcases, possibly organized into groups. Maybe even dependencies, plus saying "I need testcase X to pass, Y can be pass or warn, Z can be whatever when A passes, for the testplan to pass"
But this is a fairly complex thing, to be honest, and it would be the first and only usable TCMS in the world (from my point of view).
I have rather different opinions, actually...but I'm not working on this right now and I'd rather have something concrete to discuss than just opinions :)
We should obviously set goals properly, before diving into implementation details :) I'm interested in what you have in mind, since I've been thinking about this particular kind of thing for the last few years, and it really depends on what you expect of the system.
Well, the biggest point where I differ is that I think your 'third way' is kind of unavoidable. For all kinds of reasons.
We re-use test cases between package update testing, Test Days, and release validation testing, for instance; some tests are more or less unique to some specific process, but certainly not all of them. The desired test environments may be significantly different in these different cases.
We also have secondary arch teams using release validation processes similar to the primary arch process: they use many of the same test cases, but the desired test environments are of course not the same.
I think we actually agree, but I'm not sure, since I don't really know what you mean by "test environment" and how it should 1) affect the data stored with the result, 2) affect the testcase itself.
I have a guess, and I base the rest of my response on it, but I'd rather know, than assume :)
Of course, in a non-wiki based system you could plausibly argue that a test case could be stored along with *all* of its possible environments, and then the configuration for a specific test event could include the information as to which environments are relevant and/or required for that test event. But at that point I think you're rather splitting hairs...
In my original vision of 'relval NG' the test environment wouldn't actually exist at all, BTW. I was hoping we could simply list test cases, and the user could choose the image they were testing, and the image would serve as the 'test environment'. But on second thought that's unsustainable as there are things like BIOS vs. UEFI where we may want to run the same test on the same image and consider it a different result. The only way we could stick to my original vision there would be to present 'same test, different environment' as another row in the UI, kinda like we do for 'two-dimensional test tables' in Wikitcms; it's not actually horrible UI, but I don't think we'd want to pretend in the backend that these were two completely different tests. I mean, we could. Ultimately a 'test case' is going to be a database row with a URL and a numeric ID. We don't *have* to say the URL key is unique. ;)
I got a little lost here, but I think I understand what you are saying. This is IMO one of the biggest pain-points we have currently - the stuff where we kind of consider "Testcase FOO" for BIOS and UEFI to be the same, but different at the same time, and I think this is where the TCMS should come into play, actually.
Because I believe that there is a fundamental difference between:
1) the 'text' of the testcase (which basically says 'how to do it')
2) the (what I think you call) environment - aka UEFI vs BIOS, 64bit vs ARM, ...
3) the testplan
And this might be us saying the same things, but we can often end up in a situation where we say stuff like "this test(case?) makes sense for BIOS and UEFI, for x86_64 and ARM, for physical host and virtual machine, ...", and sometimes it would make sense to store the 'variables of the env' with the testcases, and sometimes in the testplan, and figuring out the split is a difficult thing to do.
I kind of believe that the "environment requirements" should be a part of the testplan - we should say that "testplan X needs testcase Y run on Foo and Bar" in the testplan, instead of listing all the different options in the testcase and then just selecting "a version of it" in the testplan.
And this leads to the "context" I was mentioning earlier - it might well happen that resultsdb will hold results for Testcase X with totally different "env values" and/or even "sets of env values". But this is fine, since the testplan knows what kind of values it needs, and will only take into account the results which have them set.
While this is nice and awesome, it might lead to data where an env value with the same "meaning" (like UEFI vs BIOS) is present but named differently (say "boot style" vs "firmware" - does not really make sense, but for the sake of the argument...). But this would/could be a problem even if we kept the env options with the testcases, and can only be handled by firm guidelines.
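To make the 'environment lives in the testplan' idea a bit more concrete, something like this (invented structure, only to show where the environment matrix would hang and how a testplan would filter results):

    # Invented structure - the point is that the environment matrix hangs off the
    # testplan entry, not off the testcase itself.
    TESTPLAN_ENTRY = {
        'testplan': 'rawhide-nightly',
        'testcase': 'QA:Testcase_Boot_default_install',
        'environments': [
            {'firmware': 'BIOS', 'arch': 'x86_64'},
            {'firmware': 'UEFI', 'arch': 'x86_64'},
            {'arch': 'aarch64'},                      # no firmware split needed here
        ],
    }

    def matching_results(results, entry):
        """Pick out only the ResultsDB results whose extra data matches one of the
        environments this testplan cares about; everything else is ignored."""
        def matches(result, env):
            return all(result.get('data', {}).get(k) == v for k, v in env.items())
        return [r for r in results
                if r.get('testcase') == entry['testcase']
                and any(matches(r, env) for env in entry['environments'])]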
Not sure I make much sense, it's too much stuff to just put into one email, I guess...
There are really all kinds of ways you can structure it, but I think fundamentally they'd all boil down to the same inherent level of complexity; some of them might be demonstrably worse than others (like...sticking them all in wikicode and parsing wiki table syntax to figure out when you have different 'test instances' for the same test case! that sounds like a *really bad* way to do it!)
Er. I'm rambling, aren't I? One reason I actually tend to prefer just sitting down and writing something to trying to plan it all out comprehensively is that when I just sit here and try to think out planning questions I get very long-winded and fuzzy and chase off down all possible paths. Just writing a damn thing is usually quite quick and crystallizes a lot of the questions wonderfully...
While I agree, I also know that temporary solutions tend to stick (as did the Wiki, and the old ResultsDB), so I'd like this thing we/you write to be based on at least an informal, but still to some extent thought-through spec. I'm the first one to say that putting together a spec is a boring, sometimes daunting task, and that a good half of my projects are exactly the "let's write something, it will crystallize" kind, but for stuff with (at least the intent of) a bigger impact, it is far better to have the spec. Trust me that once you try to put one together, stuff will crystallize the same way it would if you were actually writing the code. Sure, you might not (and will not) catch'em'all, but at least some of the corners and limitations will be known upfront, so we don't just end up blindly pushing ourselves into said corner.
I highly suggest putting down a few detailed use-cases, as a basis. Helps a lot with figuring out what is actually needed, and serves as a nice doc at the same time.
Joza
On Mon, 2016-12-05 at 10:33 +0100, Josef Skladanka wrote:
I kind of believe that the "environment requirements" should be a part of the testplan - we should say that "testplan X needs testcase Y run on Foo and Bar" in the testplan, instead of listing all the different options in the testcase and then just selecting "a version of it" in the testplan.
Oh! Then yes, we absolutely agree. That's what I think too.
In my head, a 'test case' - to this system - is some kind of resource locator for what's expected to be instructions on testing something, and an id. And that's all it is. Absolutely I agree we don't store any other metadata associated with the test case, but with the test plan.
And yes, you were right in your assumption about what I meant by 'test environment'.
One interesting thought I had, though - should we store the *test cases* in the middleware 'validate/report' thing I've been describing here, or should we store them in ResultsDB?
The 'test plan' stuff should clearly go in the middleware, I think. But it's not so straightforward whether we keep the 'test cases' there or in ResultsDB, especially if they're just super-simple 'here's a URL and some identifiers for it' objects.
Oh, BTW, I definitely was thinking that we should cope with test cases being moved around and having their human-friendly names changed, but still being recognized as 'the same test case'.
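In other words, in my head the whole test case 'record' is about this much - a sketch, not a schema proposal:

    import collections

    # Sketch only: the numeric id is the real identity; name and url can change
    # (renames, page moves) without old results losing their link to the test case.
    TestCase = collections.namedtuple('TestCase', ['id', 'name', 'url'])

    tc = TestCase(id=42,
                  name='QA:Testcase_Boot_default_install',
                  url='https://fedoraproject.org/wiki/QA:Testcase_Boot_default_install')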
On Mon, 2016-12-05 at 11:39 -0800, Adam Williamson wrote:
One interesting thought I had, though - should we store the *test cases* in the middleware 'validate/report' thing I've been describing here, or should we store them in ResultsDB?
Er, sorry, to be clear here, by 'store the test cases' I mean 'the database entries for test cases'. *Not* the actual test case text itself, I don't think that should be in the middleware system or in resultsdb.
On Wed, Nov 30, 2016 at 11:10 AM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
So if this is what you wanted to do (data validation), it might be a good idea to have that submitter middleware.
Yeah, that's really kind of the key 'job' of that layer. Remember, we're dealing with *manual* testing here. We can't really just have a webapp that forwards whatever the hell people manage to stuff through its input fields into ResultsDB.
I'm not sure I'm getting it right, but the people will pass the data through a "tool" (say a web app) which will provide fields to fill in, and will most probably end up doing the data "sanitization" on its own. So the "frontend" could store data directly in ResultsDB, since the frontend would make the user fill in all the fields. I guess I know what you are getting at ("but this is exactly the double validation!"), but it is IMHO actually harder to have a "generic stupid frontend" that gets the "form schema" from the middleware, shows the form, blindly forwards data to the middleware and shows errors back, than either:
1) having a separate app for that, which knows the validation rules
2) it being an actual frontend on the middleware, thus reusing the "check" code internally
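For clarity, this is the "generic stupid frontend" flow I mean - the endpoints are made up, it's only to show what's being compared:

    # Made-up endpoints, nothing here is a real API.
    import requests

    MIDDLEWARE = 'https://relvalng.example.org/api/v1'   # hypothetical

    def submit_via_schema_driven_frontend(filled_in):
        """Fetch the form schema, render it, post the data back blindly, and
        surface whatever errors the middleware returns."""
        schema = requests.get(MIDDLEWARE + '/result-types/release-validation/schema').json()
        # ... render a form from `schema`, let the user fill it in ...
        resp = requests.post(MIDDLEWARE + '/results', json=filled_in)
        if resp.status_code == 400:
            return resp.json().get('errors')    # show these back to the user
        resp.raise_for_status()
        return None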
...we need to tell the web UI 'these are the possible scenarios for which you should prompt users to input results at all'
Agreed
(which for release validation is all the 'notice there's a new compose, combine it with the defined release validation test cases and expose all that info to the UI' work),
That is IMO a separate problem, but yeah.
and we need to take the data the web UI generates from user input, make sure it actually matches up with the schema we decide on for storing the results before forwarding it to resultsdb, and tell the web UI there's a problem if it doesn't.
And this is what I have been discussing in the first part of the reply.
That's how I see it, anyhow. Tell me if I seem way off. :)
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net