Hi folks! So I was wondering: what do folks think about running openQA's 'all' mode nightly (on the BOS deployment) instead of the 'current' mode?
'all' mode is the one I wrote a while back that tests each night's Rawhide and Branched composes in addition to the 'current' validation compose. I think it'd be nice to have an idea every day if Rawhide and/or Branched are broken. We could then actually write some kind of 'israwhidebroken.com' thing. Ideally we'd be doing this with fedmsg integration, but doing it nightly isn't too terrible.
So can anyone see a reason we shouldn't? We have more tests now and it's taking longer, but we should still comfortably have enough time to do three complete test runs in a day, which would be the maximum. IIRC the 'current' tests get run first, so the nightly tests would not be delaying the more important validation tests (and if that's not the case it would be easy to change).
The reason I don't do this on the happyassassin deployment is simply bandwidth - I'm already nearly at my monthly cap and downloading a bunch of images nightly would push me over. But I don't think that applies to bos.
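Just to illustrate what the fedmsg version might eventually look like, something along these lines (a totally untested sketch - the topic names are from memory and run_all() is just a stand-in for whatever actually schedules the openQA jobs):

import fedmsg

# Compose-complete topics we'd care about; names are from memory and may
# not be exactly right.
COMPOSE_TOPICS = (
    "org.fedoraproject.prod.compose.rawhide.complete",
    "org.fedoraproject.prod.compose.branched.complete",
)

def run_all(topic, body):
    """Stand-in for whatever actually schedules the 'all' mode openQA jobs."""
    print("would schedule openQA jobs for %s: %s" % (topic, body))

# Block forever, waking up whenever a message we care about comes past.
for name, endpoint, topic, msg in fedmsg.tail_messages():
    if topic in COMPOSE_TOPICS:
        run_all(topic, msg.get("msg", {}))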
On Mon, Aug 17, 2015 at 09:08:31AM -0700, Adam Williamson wrote:
> Hi folks! So I was wondering: what do folks think about running openQA's 'all' mode nightly (on the BOS deployment) instead of the 'current' mode?
+1; do it!
> compose. I think it'd be nice to have an idea every day if Rawhide and/or Branched are broken. We could then actually write some kind of 'israwhidebroken.com' thing. Ideally we'd be doing this with fedmsg
How do we get from "it's broken today" to "and someone should do something about it?"
On Mon, 2015-08-17 at 15:31 -0400, Matthew Miller wrote:
> On Mon, Aug 17, 2015 at 09:08:31AM -0700, Adam Williamson wrote:
> > Hi folks! So I was wondering: what do folks think about running openQA's 'all' mode nightly (on the BOS deployment) instead of the 'current' mode?
> +1; do it!
> > compose. I think it'd be nice to have an idea every day if Rawhide and/or Branched are broken. We could then actually write some kind of 'israwhidebroken.com' thing. Ideally we'd be doing this with fedmsg
> How do we get from "it's broken today" to "and someone should do something about it?"
Electric shock collars?
No, seriously, I'm kinda working on some stuff to do some sort of daily compose status report, which should help. openQA doesn't do automatic bug submission and we probably don't want it to (partly because the failures are sometimes more to do with openQA than with Fedora, and partly to avoid the kind of excessive duplicates you get when libreport's dupe detection fails), so the first thing that needs to happen is that someone (i.e. one of us) goes and looks at the failures and submits bugs if they're actually valid problems.
If the bugs are significant enough they'll wind up as release blockers, which is probably an adequate short-term mechanism.
We're still a reasonably long distance from true CI in the sense of '...and if it's broken, we block the change that broke it', because daily is still far too coarse granularity for that. For that we'd need Dennis' pet project that somehow knows all the things that can possibly influence the behaviours of the images, and we'd need to run the tests every time any of those things changed, and we'd need to do it fast enough to gate the changes. Which is all fairly challenging stuff. This is more 'observe and report' level stuff now and for the medium-term foreseeable future, I'd say.
What we could (probably) do right now is hack up a dumb lil' thing with a hardcoded list of packages which would re-spin an image every time a build of one of those packages happened, and run maybe a subset of the most critical tests on it (to avoid the tests taking too long). But it would probably be a bit of a duct-tape project for now, and it'd be a few days' work at least (and that's an estimate with my 'pretend developer' hat on, so I'm now going to switch into my 'QA' hat and revise that idiot's estimate to 'two weeks at least').
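For the record, the duct-tape version is roughly this shape (untested sketch; the topic name and message fields are from memory, and respin_and_test() is the part that doesn't exist yet):

import fedmsg

# Hardcoded list of packages whose builds should trigger a respin; purely
# an example, the real list would need some thought.
WATCHED = {"anaconda", "lorax", "dracut", "systemd"}
BUILD_TOPIC = "org.fedoraproject.prod.buildsys.build.state.change"
KOJI_COMPLETE = 1  # koji's 'complete' build state

def respin_and_test(nvr):
    """Stand-in: respin the image and run a subset of the most critical tests."""
    print("would respin and run critical tests after %s" % nvr)

for name, endpoint, topic, msg in fedmsg.tail_messages():
    if topic != BUILD_TOPIC:
        continue
    body = msg.get("msg", {})
    if body.get("new") == KOJI_COMPLETE and body.get("name") in WATCHED:
        respin_and_test("%(name)s-%(version)s-%(release)s" % body)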
I'm all for it. BOS is idle most of the time anyway, and if we look at it from time to time, perhaps we will notice that our tests are broken sooner rather than later.
But it raises a question (and I've talked about it with Kamil before): where are we gonna put results of tests that aren't in the test matrices (if we're even planning to put them anywhere else in the first place)? This isn't specific to nightly testing; it applies to other tests as well.
2015-08-17 18:08 GMT+02:00 Adam Williamson adamwill@fedoraproject.org:
> Hi folks! So I was wondering: what do folks think about running openQA's 'all' mode nightly (on the BOS deployment) instead of the 'current' mode?
> 'all' mode is the one I wrote a while back that tests each night's Rawhide and Branched composes in addition to the 'current' validation compose. I think it'd be nice to have an idea every day if Rawhide and/or Branched are broken. We could then actually write some kind of 'israwhidebroken.com' thing. Ideally we'd be doing this with fedmsg integration, but doing it nightly isn't too terrible.
> So can anyone see a reason we shouldn't? We have more tests now and it's taking longer, but we should still comfortably have enough time to do three complete test runs in a day, which would be the maximum. IIRC the 'current' tests get run first, so the nightly tests would not be delaying the more important validation tests (and if that's not the case it would be easy to change).
> The reason I don't do this on the happyassassin deployment is simply bandwidth - I'm already nearly at my monthly cap and downloading a bunch of images nightly would push me over. But I don't think that applies to bos.
> --
> Adam Williamson
> Fedora QA Community Monkey
> IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
> http://www.happyassassin.net
On Tue, 2015-08-18 at 08:26 +0200, Jan Sedlak wrote:
> I'm all for it. BOS is idle most of the time anyway, and if we look at it from time to time, perhaps we will notice that our tests are broken sooner rather than later.
> But it raises a question (and I've talked about it with Kamil before): where are we gonna put results of tests that aren't in the test matrices (if we're even planning to put them anywhere else in the first place)? This isn't specific to nightly testing; it applies to other tests as well.
Well, for now 'just leave 'em in openQA' seems fine to me. It's *meant* to act as a results store - otherwise they wouldn't bother making it look nice in the web UI. It works perfectly well; at least in recent updates the jobs group nicely by 'BUILD' value, so all it takes is for us to have a look at it each day and look into failures. It would be pretty easy to have wikitcms create a validation page for each branch each night and stuff the results there, but I'm not convinced it's of much *value* to do that (except we could then use the testcase_stats stuff to handily plot status over time, but...meh, that's a pretty small win).
In the long term, I think the plan we're kinda looking towards is to try and centralize result storage in resultsdb, but we can look into the details of that as we move along.
As part of this 'compose status' reporting stuff I'm working on ATM, I can include openQA test result status, though of course if the nightly tests are running in BOS we'll have to run the script on a machine behind the firewall. Shouldn't be a problem. The report could, say, list out all test fails and highlight any that passed the previous day ('new fails').
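The 'new fails' logic is nothing clever, just something like this (a toy sketch with made-up result dicts, no openQA API involved):

def new_fails(yesterday, today):
    """Given {test name: result} dicts for two days, return all current
    failures plus the subset that passed the previous day."""
    fails = set(t for (t, r) in today.items() if r == "failed")
    passed_before = set(t for (t, r) in yesterday.items() if r == "passed")
    return fails, fails & passed_before

fails, new = new_fails(
    {"default_install": "passed", "package_set_minimal": "failed"},
    {"default_install": "failed", "package_set_minimal": "failed"},
)
for test in sorted(fails):
    print("FAIL: %s%s" % (test, " (new)" if test in new else ""))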
Heck, if it seems to fit right we can have the openQA dispatcher run the 'compose status' checks and send out the emails. But it doesn't really matter exactly how the bits plug together; the point is it's all stuff we can do relatively easily right now. I'm about to push some new bits to fedfind that implement checking any given release to see if any 'expected important images' are missing, and diffing the images present for any one release against any other.
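Those fedfind checks are just set operations underneath - roughly like this, with made-up image tuples standing in for the real fedfind image objects:

# 'Expected important images' for a hypothetical release; real data would
# come from fedfind, these tuples are purely illustrative.
EXPECTED = set([
    ("Workstation", "live", "x86_64"),
    ("Server", "boot", "x86_64"),
    ("Cloud", "raw-xz", "x86_64"),
])

def missing_images(found):
    """Expected images that didn't show up in the compose."""
    return EXPECTED - found

def diff_images(release_a, release_b):
    """Images only in A and images only in B."""
    return release_a - release_b, release_b - release_a

today = set([("Workstation", "live", "x86_64"), ("Server", "boot", "x86_64")])
print(missing_images(today))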
I'll wait to hear from Josef on the 'all' idea, and if he doesn't mind, I'll poke the cron job (after making sure 'all' still works right of course :>). Thanks for the feedback!