On Tue, Mar 07, 2017 at 12:18:17PM -0500, Kamil Paral wrote:
> > > I'll go slightly off-topic here, but consider this: 'scenario'
> > > will help us identify unique results. But it will not help us
> > > decide whether "all" results have been submitted (maybe some are
> > > still running, maybe some have crashed hard and not sent any
> > > results). So there's a good chance that rel-eng tools will need
> > > to have deep knowledge of the testcases anyway. Not just a list
> > > of all monitored testcases, but also that this particular test
> > > case needs to be performed for BIOS and UEFI (or, that this
> > > particular testcase needs to be present with these two scenarios).
> > >
> > > So while having scenarios seems definitely helpful in certain
> > > cases, it might not help us avoid putting too much knowledge in
> > > the gating consumers. Just food for thought.
> >
> > I have actually been thinking about that problem as well :) And it
> > is indeed a tricky one. Bodhi already basically ducks the question;
> > it just shows whatever results are there at the time. We don't have
> > a good answer for cases like 'can we ship this compose?' or
> > check-compose, cases where the tool needs to know that testing is
> > complete.
> >
> > My best idea so far is that we should implement a 'testing
> > complete' fedmsg with a consistent name for each testing system -
> > so whatever.test.system.prefix.is.testing.complete or something
> > like that.
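As a sketch of what that announcement could look like (the topic and
payload here are invented for illustration; only fedmsg.publish()
itself is the real API):

import fedmsg

# fedmsg prefixes the topic with org.fedoraproject.<env>.<modname>,
# giving something like org.fedoraproject.prod.openqa.testing.complete.
fedmsg.publish(
    topic="testing.complete",          # hypothetical consistent suffix
    modname="openqa",                  # the per-system prefix
    msg={
        "item": "Fedora-26-20170307.n.0",  # what finished testing
        "type": "compose",
    },
)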
> I'm skeptical about this. The implementation is easier for OpenQA,
> because it's a standalone system and you know that the testing is
> "complete" once all the test cases have finished. But for generic
> Taskotron tasks (depcheck, rpmgrill, etc.), we schedule them in the
> trigger based on fedmsg contents and then we don't track them
> anymore. We have no idea if and when all of them have completed. We
> could build a complex system to track that, of course, but it seems
> to me that it's fundamentally wrong anyway. It moves the test plan
> knowledge from the consumer to the producer. So even though the
> rel-eng gating script is the consumer that should decide whether
> we're good to go or not, we would move this logic into OpenQA or
> Taskotron just to be able to send the "testing complete" message.
> That doesn't seem worth it. And it will break once we have multiple
> consumers with different requirements (what happens when one of the
> consumers needs depcheck and the other doesn't - when is the testing
> "complete" then?).
Yeah, me too. Putting the complexity of "completion" in the test
execution systems seems tough. In other environments, Jenkins jobs
won't know about total completion per item, either.
> I'd rather have a definition file in releng's pagure and send PRs
> against that, than emulate the logic in all our testing systems.
Yeah - that's a stopgap that could work.
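To sketch what that stopgap might look like: the gating script keeps
the test-plan knowledge on the consumer side, reading required
testcase/scenario pairs from a file in pagure and comparing them
against what ResultsDB has. The file format, testcase names and query
parameters below are all made up for illustration:

import requests
import yaml

# Hypothetical definition file, maintained via PRs in releng's pagure.
REQUIRED = yaml.safe_load("""
- {testcase: compose.install_default, scenario: bios}
- {testcase: compose.install_default, scenario: uefi}
- {testcase: dist.depcheck, scenario: null}
""")

RESULTSDB = "https://resultsdb.example.org/api/v2.0"  # hypothetical host

def missing_results(item):
    """Return required (testcase, scenario) pairs with no result yet."""
    missing = []
    for req in REQUIRED:
        params = {"item": item, "testcases": req["testcase"]}
        if req["scenario"]:
            params["scenario"] = req["scenario"]
        resp = requests.get(RESULTSDB + "/results", params=params)
        resp.raise_for_status()
        if not resp.json()["data"]:
            missing.append((req["testcase"], req["scenario"]))
    return missing

Testing is then "complete" for this consumer exactly when its own list
is satisfied; another consumer that doesn't care about depcheck just
carries a different file, with no changes on the producer side.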
> So yes, this is a tricky problem indeed. If anyone has a magic
> solution, I'm interested.
Long-term (in the next few months) I want to start work on the
PolicyEngine service I presented at devconf: a service to store
descriptions of what test results are "good enough" at various gate
points, for different content sets. Something kept in a DB, managed
through a web UI - a place for packagers with ACLs to manage specific
requirements for their package sets, but with the ability for Fedora
QE and Fedora rel-eng to overlay that with their own global policies.
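To make that concrete, evaluation at a gate point could boil down to
something like this - every name and structure below is invented for
illustration, not a description of the actual service:

# Packager-managed rules, overlaid by global QE/rel-eng rules.
GLOBAL_POLICY = {"dist.rpmlint", "dist.depcheck"}
PACKAGE_POLICY = {"firefox": {"firefox.pkg_tests"}}

def required_testcases(package):
    # The overlay: packagers can add requirements for their package
    # sets, but cannot remove the global ones.
    return GLOBAL_POLICY | PACKAGE_POLICY.get(package, set())

def good_enough(package, passed_testcases):
    """True when every required testcase has a passing result."""
    return required_testcases(package) <= set(passed_testcases)

print(good_enough("firefox", ["dist.rpmlint", "dist.depcheck"]))
# -> False: the package-specific requirement is still missing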