On Thu, Dec 1, 2016 at 6:04 PM, Adam Williamson <adamwill@fedoraproject.org> wrote:
On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
> On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <adamwill@fedoraproject.org
> > wrote:
> > On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> > > I would try not to go the third way, because that is really prone to
> > > errors IMO, and I'm not sure that "per context" is always right. So for me, the
> > > "TCMS" part of the data, should be:
> > > 1) testcases (with required fields/types of the fields in the "result
> > > response")
> > > 2) testplans - which testcases, possibly organized into groups. Maybe
> > > even dependencies + saying "I need testcase X to pass, Y can be pass or warn,
> > > Z can be whatever when A passes, for the testplan to pass"
> > >
> > > But this is a fairly complex thing, to be honest, and it would be the first
> > > and only usable TCMS in the world (from my point of view).
> >
> > I have rather different opinions, actually...but I'm not working on
> > this right now and I'd rather have something concrete to discuss than
> > just opinions :)
> >
> We should obviously set goals properly, before diving into implementation
> details :) I'm interested in what you have in mind, since I've been
> thinking about this particular kind of thing for the last few years, and it
> really depends on what you expect of the system.

Well, the biggest point where I differ is that I think your 'third way'
is kind of unavoidable. For all kinds of reasons.

We re-use test cases between package update testing, Test Days, and
release validation testing, for instance; some tests are more or less
unique to some specific process, but certainly not all of them. The
desired test environments may be significantly different in these
different cases.
 
We also have secondary arch teams using release validation processes
similar to the primary arch process: they use many of the same test
cases, but the desired test environments are of course not the same.


I think we actually agree, but I'm not sure, since I don't really know what you mean by "test environment" and how it should
1) affect the data stored with the result
2) affect the testcase itself

I have a guess, and I base the rest of my response on it, but I'd rather know than assume :)

 
Of course, in a non-wiki based system you could plausibly argue that a
test case could be stored along with *all* of its possible
environments, and then the configuration for a specific test event
could include the information as to which environments are relevant
and/or required for that test event. But at that point I think you're
rather splitting hairs...
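
Just to spell that variant out, a rough sketch (all names are invented for illustration, this is not any existing schema or format):

```python
# Rough sketch only - invented names, not an existing format.
# Variant: the test case carries *all* environments it could run in,
# and the test event config picks which of those are relevant/required.
testcase = {
    "name": "Testcase_base_startup",
    "environments": ["BIOS", "UEFI", "x86_64", "ARM"],
}

test_event = {
    "name": "some release validation compose",
    "testcases": {
        "Testcase_base_startup": {
            "relevant": ["BIOS", "UEFI"],   # environments shown for this event
            "required": ["BIOS"],           # environments that must have a result
        },
    },
}
```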

In my original vision of 'relval NG' the test environment wouldn't
actually exist at all, BTW. I was hoping we could simply list test
cases, and the user could choose the image they were testing, and the
image would serve as the 'test environment'. But on second thought
that's unsustainable as there are things like BIOS vs. UEFI where we
may want to run the same test on the same image and consider it a
different result. The only way we could stick to my original vision
there would be to present 'same test, different environment' as another
row in the UI, kinda like we do for 'two-dimensional test tables' in
Wikitcms; it's not actually horrible UI, but I don't think we'd want to
pretend in the backend that these were two completely different tests. I
mean, we could. Ultimately a 'test case' is going to be a database row
with a URL and a numeric ID. We don't *have* to say the URL key is
unique. ;)
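
To make that concrete, a throwaway sketch (the schema and the URL are made up; this is not Wikitcms or resultsdb):

```python
import sqlite3

# Throwaway sketch - not a real schema. The only point: 'url' is an ordinary
# column rather than a unique key, so "same test case text, different
# environment" can simply be two rows with two different numeric IDs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE testcase (
        id          INTEGER PRIMARY KEY,  -- the numeric ID
        url         TEXT NOT NULL,        -- page holding the test case text
        environment TEXT                  -- e.g. 'BIOS' or 'UEFI'
    )
""")
conn.executemany(
    "INSERT INTO testcase (url, environment) VALUES (?, ?)",
    [
        ("https://fedoraproject.org/wiki/QA:Testcase_example", "BIOS"),
        ("https://fedoraproject.org/wiki/QA:Testcase_example", "UEFI"),
    ],
)
```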

I got a little lost here, but I think I understand what you are saying.
This is IMO one of the biggest pain-points we have currently - the stuff
where we kind of consider "Testcase FOO" for BIOS and UEFI to be
the same, but different at the same time, and I think this is where the
TCMS should come into play, actually.

Because I believe, that there is a fundamental difference between
1) the 'text' of the testcase (which says 'how to do it' basically)
2) the (what I think you call) environment - aka UEFI vs BIOS, 64bit vs ARM, ...
3) the testplan

And this might be us saying the same things, but we can often end up in a
situation where we say stuff like "this test(case?) makes sense for BIOS and
UEFI, for x86_64 and ARM, for physical host and virtual machine, ...",
and sometimes it would make sense to store the 'variables of the env' with
the testcases, and sometimes in the testplan, and figuring out the split is
a difficult thing to do.

I kind of believe that the "environment requirements" should be a part of the
testplan - we should say that "testplan X needs testcase Y run on Foo and Bar"
in the testplan, instead of listing all the different options in the testcase and then
just selecting "a version of it" in the testplan.
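
Roughly something like this (a sketch only - the structure, names and outcome values are all made up just for illustration):

```python
# Sketch only - invented structure and names, not an existing format.
# The testplan, not the testcase, says which environments a result must
# come from, and how strict the outcome has to be for the plan to pass.
testplan = {
    "name": "example release validation plan",
    "testcases": [
        {
            "testcase": "Testcase_X",
            "environments": [{"firmware": "BIOS"}, {"firmware": "UEFI"}],
            "accept": ["pass"],           # X has to pass
        },
        {
            "testcase": "Testcase_Y",
            "environments": [{"arch": "x86_64"}],
            "accept": ["pass", "warn"],   # Y can be pass or warn
        },
    ],
}
```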

And this leads to the "context" I was mentioning earlier - it might well happen
that resultsdb will hold results for Testcase X with totally different "env values"
and/or even "sets of env values". But this is fine, since the testplan knows what
kind of values it needs, and will only take into account the results that have them set.
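
In code, the consumer side could look roughly like this (again just a sketch; the field names are assumptions, not the actual resultsdb API):

```python
# Sketch only - 'results' stands in for whatever a resultsdb query returns;
# the field names ('outcome', 'data') are assumptions, not the real API.
def relevant_results(results, required_env):
    """Keep only results carrying all the env values the testplan cares about.

    Results lacking one of the required keys are simply ignored; extra,
    unrelated keys on a result don't matter either way.
    """
    return [
        r for r in results
        if all(r.get("data", {}).get(k) == v for k, v in required_env.items())
    ]

results = [
    {"outcome": "pass", "data": {"firmware": "UEFI", "arch": "x86_64"}},
    {"outcome": "pass", "data": {"firmware": "BIOS"}},
    {"outcome": "fail", "data": {"arch": "armhfp"}},  # no 'firmware' key - ignored
]

print(relevant_results(results, {"firmware": "UEFI"}))  # only the first result
```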

While this is nice and awesome, it might lead to instances of data where
an env value with the same "meaning" (like UEFI vs BIOS) is present, but named
differently (like "boot style" vs "firmware" - does not really make sense, but for
the sake of the argument...). But this would/could be a problem even if we kept
the env options with the testcases, and can only be handled by firm guidelines.

Not sure I make much sense - it's too much stuff to just put into one email,
I guess...

There are really all kinds of ways you can structure it, but I think
fundamentally they'd all boil down to the same inherent level of
complexity; some of them might be demonstrably worse than others
(like...sticking them all in wikicode and parsing wiki table syntax to
figure out when you have different 'test instances' for the same test
case! that sounds like a *really bad* way to do it!)

Er. I'm rambling, aren't I? One reason I actually tend to prefer just
sitting down and writing something to trying to plan it all out
comprehensively is that when I just sit here and try to think out
planning questions I get very long-winded and fuzzy and chase off down
all possible paths. Just writing a damn thing is usually quite quick
and crystallizes a lot of the questions wonderfully...

While I agree, I also know that temporary solutions tend to stick (as did the Wiki, and
the old Resultsdb), so I'd like this thing we/you write to be based on at least
an informal, but still to some extent thought-through, spec.
I'm the first one to say that putting together a spec is a boring, sometimes
daunting task, and that the better half of my projects are exactly the
"let's write something, it will crystallize" kind, but for stuff with (at least the intent of)
a bigger impact, it is far better to have the spec. Trust me, once you
try and put one together, stuff will crystallize the same way it would if
you were actually writing the code. Sure, you might not (and will not)
catch 'em all, but at least some of the corners and limitations will be
known upfront, so we don't just end up blindly pushing ourselves into said
corners.

I highly suggest putting down a few detailed use-cases as a basis. It helps
a lot with figuring out what is actually needed, and serves as a nice
doc at the same time.

Joza