Greetings.
On the devel list recently there was a long thread discussing how we could improve the current https://fedoraproject.org/wiki/Updates_Policy
I gathered up ALL the concrete ideas people put forth into a list for fesco to look into. I'd love feedback from QA/Testers as well.
If you have additional (concrete) ideas to add, please feel free.
Note that these are not my ideas. ;)
The list I have so far:
General:
* Just drop all the requirements/go back to before we had any updates criteria.
* back off current setup until autoqa is ready, see what we want to do after that lands.
* Change FN-1 to just security and major bugfix. Nothing else allowed.
* allow packages with a %check section to go direct to stable.
* setup a remote test env that people could use to test things.
* require testing only for packages where people have signed up to be testers.
* Ask maintainers to provide test cases / test cases in wiki for each package
* have a way to get interested testers notified on bodhi updates for packages they care about.
* reduced karma requirement on other releases when one has gone stable
* aggregated karma across the releases for the same package version.
* PK updates-testing integration of some kind.
* allow anon karma to count.
* setup fedora-qa package or group to more easily bring up more testers.
* Testing is only required for certain packages: those where problems have occurred before, such that fesco or other maintainers affected by the changes deem it necessary to supplement the maintainer's testing with outside help.
- Option: supplement this list with critpath packages where the maintainers desire extra testing. This means that we would no longer be dragging in dependencies immediately... only if updates by the dependency's maintainer to that package are breaking things.
* updates that only modify the spec could have a lower requirement. (ie, to fix a packaging issue, no changes in the upstream software).
Security updates:
* allow security updates to go direct to stable
* ask QA to commit to testing security updates
* allow timeout for security updates before going to stable.
Critpath updates:
* allow critpath timeout for going to stable.
Non critpath/security:
* reduce timeout for non critpath from 7 to 3 days.
* change default autokarma to 2 or 1.
kevin
On Mon, 2010-11-29 at 09:56 -0700, Kevin Fenzi wrote:
Greetings.
On the devel list recently there was a long thread discussing how we could improve the current https://fedoraproject.org/wiki/Updates_Policy
I gathered up ALL the concrete ideas people put forth into a list for fesco to look into. I'd love feedback from QA/Testers as well.
If you have additional (concrete) ideas to add, please feel free.
Note that these are not my ideas. ;)
Thanks for sending these along, Kevin.
The list I have so far:
General:
- Just drop all the requirements/go back to before we had any updates criteria.
Hmm, certainly an idea. I feel like this is definitely a step backward, not forward. Has the initial motivation for an updates policy gone away or changed? Have we encountered problems that didn't yet exist, or weren't as painful, when the policy was first enabled? Are there other problems we need to focus on resolving (I suspect this is the case)?
- back off current setup until autoqa is ready, see what we want to do
after that lands.
Much the same as above. In addition, automation doesn't magically solve problems, it only automates a repeatable process that already exists. If we haven't documented the steps manually, I wouldn't know where to get started. I'd like to see us work out the manual steps required to test stuff ... then we can incorporate automation where sensible.
- Change FN-1 to just security and major bugfix. Nothing else allowed.
I don't have objections, and with repos.fedoraproject.org we have an avenue for maintainers to provide more disruptive packages to existing releases. From a QA standpoint, this would certainly lower the volume of updates that need QA attention. But this definitely seems more like a general Fedora strategy/audience topic than a QA-specific one.
- allow packages with a %check section to go direct to stable.
Interesting, I like the spirit of this idea, but would like to see if we can incorporate this with autoqa karma plans. So, perhaps packages with %check get automated karma? Just the same as with packages that pass automated tests ... they'll eventually get positive karma of some form.
- setup a remote test env that people could use to test things.
I could use more details on this point. Is this talking about setting up QA systems hosted in Fedora infrastructure that any tester could login and use to test updates?
- require testing only for packages where people have signed up to be
testers.
Hmm, I like this idea in part, as it allows maintainers to opt in to testing. I'd stand behind this for packages outside critpath, but for critpath, we need to test them. We've moved forward with the policy without an implementation from QA for hosting/contributing test documentation. I suspect that may be the cause of the pushback against the policy: there isn't enough test feedback. I believe that's in part because we haven't provided clear test instructions for people to follow. It's a *significant* time investment to test an update that has no documented test instructions and that you aren't familiar with.
QA plans to address this starting in Fedora 15 by laying some wiki ground work for documenting test procedures for specific updates.
- Ask maintainers to provide test cases / test cases in wiki for each package
This is always the case. Proper testing comes from all levels, including development. However, as noted above, Fedora QA needs to provide some area/place for test documentation to develop/mature. Currently, we can discuss test guidance and instructions on the test@ mailing list. This practice works well for tests related to release verification. This is a great public forum to knock out the details. Moving forward, we need to capture those instructions on the wiki ... and ideally integrate them with the bodhi update request (e.g. "Click here for test instructions").
- have a way to get interested testers notified on bodhi updates for packages
they care about.
Interesting, almost like a watch-list for bodhi updates. Seems like a feature worth exploring in more detail. I use a similar feature with koji to watch for new builds.
- reduced karma requirement on other releases when one has gone stable
- aggregated karma across the releases for the same package version.
I don't have data to indicate how many updates have been released, and then reverted/obsoleted on only a subset of releases.
- PK updates-testing integration of some kind.
Open to any ideas here ... are you thinking about some PK updates-testing feedback workflow? Like integrating fedora-easy-karma? Something else?
- allow anon karma to count.
Or maybe it counts, but counts less (.5 karma or something).
- setup fedora-qa package or group to more easily bring up more testers.
Certainly not opposed to it; we've been focused on building a larger community of testers (proventesters). Over time, I could see that group breaking out into more specific interest groups. I'm inclined to listen to that request more if it consistently came from proventesters noting it as an obstacle to participation. Meaning, I'm not aware that the lack of component-specific test groups is preventing tester participation.
That said, I like the idea of micro-communities developing to focus on specific use cases or components.
Testing is only required for certain packages: those where problems have occurred before, such that fesco or other maintainers affected by the changes deem it necessary to supplement the maintainer's testing with outside help.
- Option: supplement this list with critpath packages where the maintainers desire extra testing. This means that we would no longer be dragging in dependencies immediately... only if updates by the dependency's maintainer to that package are breaking things.
updates that only modify the spec could have a lower requirement.
(ie, to fix a packaging issue, no changes in the upstream software).
All %obsoletes, %requires, %provides, %files and %patch statements are only recorded in the .spec file. Just because they are in the .spec file doesn't mean they are any less disruptive.
Security updates:
- allow security updates to go direct to stable
Risky
- ask QA to commit to testing security updates
We can't commit to testing without guidance or instructions. Let's commit to documenting repeatable procedures that testers can follow and expand upon.
For some security updates, they may have already been functionally tested upstream. I think it's reasonable to provide proxy karma linking to upstream functional tests. Though, I don't think upstream functional tests alone can bless a security update.
- allow timeout for security updates before going to stable.
Critpath updates:
- allow critpath timeout for going to stable.
I think this will come back to hurt us. These things landed in critpath for a reason. I'd like to work on increasing tester engagement, before we loosen the process around critpath packages.
Non critpath/security:
reduce timeout for non critpath from 7 to 3 days.
change default autokarma to 2 or 1.
No immediate thoughts on these points.
Thanks, James
On Mon, 29 Nov 2010 12:40:25 -0500, James wrote:
- updates that only modify the spec could have a lower requirement.
(ie, to fix a packaging issue, no changes in the upstream software).
All %obsoletes, %requires, %provides, %files and %patch statements are only recorded in the .spec file. Just because they are in the .spec file doesn't mean they are any less disruptive.
True, but (1) it is considerably easier for a project like autoqa to catch bad deps/conflicts/obsoletes than (2) to catch software bugs introduced in a version upgrade. Plus (3) you don't want to do baby-sitting for packagers who are expected to know what they're doing wrt packaging.
Non critpath/security:
reduce timeout for non critpath from 7 to 3 days.
change default autokarma to 2 or 1.
No immediate thoughts on these points.
Bodhi ought to make it impossible for the update submitter to give +1 karma to her own update. It has been abused already.
And there ought to be an _enforced_ minimum number of days in updates-testing for certain packages. They need time to be picked up by the mirror system, and testers need more time to become aware of new test updates and then spend additional time evaluating them. It is completely useless if some testers _skip_ or shorten the updates-testing period by giving +1 for koji builds or within 24 hours. That is what happened with a "mesa" update that didn't see sufficient testing due to the short time it was offered as a test update: https://admin.fedoraproject.org/updates/mesa-7.9-1.fc14
On Mon, 2010-11-29 at 19:04 +0100, Michael Schwendt wrote:
On Mon, 29 Nov 2010 12:40:25 -0500, James wrote:
- updates that only modify the spec could have a lower requirement.
(ie, to fix a packaging issue, no changes in the upstream software).
All %obsoletes, %requires, %provides, %files and %patch statements are only recorded in the .spec file. Just because they are in the .spec file doesn't mean they are any less disruptive.
True, but (1) it is considerably easier for a project like autoqa to catch bad deps/conflicts/obsoletes than (2) to catch software bugs introduced in a version upgrade. Plus (3) you don't want to do baby-sitting for packagers who are expected to know what they're doing wrt packaging.
Certainly, and we intend for autoqa to catch packaging snafus (1). However, my point was that nasty things can happen via small changes to the spec file. A small .spec file change doesn't necessarily mean the update requires less testing. The change could be adding/removing patches or altering the %build or %install logic.
Non critpath/security:
reduce timeout for non critpath from 7 to 3 days.
change default autokarma to 2 or 1.
No immediate thoughts on these points.
Bodhi ought to make it impossible for the update submitter to give +1 karma to her own update. It has been abused already.
And there ought to be an _enforced_ minimum number of days in updates-testing for certain packages. They need time to be picked up by the mirror system, and testers need more time to become aware of new test updates and then spend additional time evaluating them. It is completely useless if some testers _skip_ or shorten the updates-testing period by giving +1 for koji builds or within 24 hours. That is what happened with a "mesa" update that didn't see sufficient testing due to the short time it was offered as a test update: https://admin.fedoraproject.org/updates/mesa-7.9-1.fc14
Re: time - I've seen maintainers request test feedback for updates before the update was even pushed to 'updates-testing'. While pulling down the update from koji is an option, it shouldn't be used for testing, as it bypasses the normal update mechanism and, depending on which packages are downloaded from koji, may not install all multilib packages.
So there is a lag time for mirrors to receive updates. Does anyone know what the average time for mirrors to update is?
Thanks, James
On Mon, 29 Nov 2010 13:44:02 -0500 James Laska jlaska@redhat.com wrote:
...snip...
So there is a lag time for mirrors to receive updates. Does anyone know what the average time for mirrors to update is?
It varies.
I've been doing pushes every day for a while now. I start them in the morning and they usually go out between 1 and 3pm my time (MST). At that point the master mirror has them. kernel.org syncs pretty quickly, so I would say at least some mirrors have the updates later that night.
kevin
On Mon, 2010-12-06 at 16:21 -0700, Kevin Fenzi wrote:
On Mon, 29 Nov 2010 13:44:02 -0500 James Laska jlaska@redhat.com wrote:
...snip...
So there is a lag time for mirrors to receive updates. Does anyone know what the average time for mirrors to update is?
It varies.
I've been doing pushes every day for a while now. I start them in the morning and they usually go out between 1 and 3pm my time (MST). At that point the master mirror has them. kernel.org syncs pretty quickly, so I would say at least some mirrors have the updates later that night.
That's at least something to consider for maintainers asking for feedback on updates that just got pushed, or haven't yet been pushed. Certainly doesn't apply for updates that were pushed days/weeks ago.
Thanks, James
On Mon, 2010-11-29 at 12:40 -0500, James Laska wrote:
- Just drop all the requirements/go back to before we had any updates criteria.
Hmm, certainly an idea. I feel like this is definitely a step backward, not forward. Has the initial motivation for an updates policy gone away or changed? Have we encountered problems that didn't yet exist, or weren't as painful, when the policy was first enabled? Are there other problems we need to focus on resolving (I suspect this is the case)?
As I see it, the thing that everyone agrees is problematic is critpath updates for old releases not getting pushed or taking a very long time to push. It's also generally agreed that the quality of critpath testing can be improved by taking some steps we're already looking at (package-specific test cases).
Things that some people see as problematic are:
* Having to wait a week to push an update if you can't find testing
* Testing being required for packages with automated test suites
* The delay to security updates which is introduced by the testing requirements
- allow packages with a %check section to go direct to stable.
Interesting, I like the spirit of this idea, but would like to see if we can incorporate this with autoqa karma plans. So, perhaps packages with %check get automated karma? Just the same as with packages that pass automated tests ... they'll eventually get positive karma of some form.
Yes, I was going to suggest the same thing. I'd suggest packages with a %check section should get +1 proventester karma. Of course, that relies on the automated test suite actually testing the things proventester testing is meant to cover; do we want to audit the test suites in question?
- setup a remote test env that people could use to test things.
I could use more details on this point. Is this talking about setting up QA systems hosted in Fedora infrastructure that any tester could login and use to test updates?
Yes - it's an idea to make it easier to test older releases, or packages you don't want to / can't install (or configure) on your active systems.
- reduced karma requirement on other releases when one has gone stable
- aggregated karma across the releases for the same package version.
I don't have data to indicate how many updates have been released, and then reverted/obsoleted on only a subset of releases.
Yeah, I'd really like to see some data here. I did ask Luke on the -devel thread, but so far no response. Everyone more or less agrees on the factors here (on the one hand it's hard to get testing for old releases; on the other hand it *is* possible for the 'same' update to work fine on one release but not another), but it's hard to balance these without hard numbers. I think you can possibly draw a distinction between different types of updates here too: as I wrote on the -devel list, I can see the argument for a single leaf package update, but pushing an update to, say, an entire desktop environment and relying on testing from another release seems scary.
- allow anon karma to count.
Or maybe it counts, but counts less (.5 karma or something).
Something else to consider here is to make more people log in; I suspect relatively few people who don't have a FAS account are actually doing testing, but I think we could make the login link more prominent and try harder to get people to log in (have a big scare-step when posting anonymous feedback which says 'your feedback will not count unless you log in!' and requires you to re-confirm to submit the feedback anonymously; a nag screen, basically).
Security updates:
- allow security updates to go direct to stable
Risky
Right. As was pointed out on -devel, the update which caused us to start thinking about an update testing process in the first place - the infamous udev update - was a security update.
- ask QA to commit to testing security updates
We can't commit to testing without guidance or instructions. Let's commit to documenting repeatable procedures that testers can follow and expand upon.
For some security updates, they may have already been functionally tested upstream. I think it's reasonable to provide proxy karma linking to upstream functional tests. Though, I don't think upstream functional tests alone can bless a security update.
That seems like it'd be tricky to automate.
Non critpath/security:
reduce timeout for non critpath from 7 to 3 days.
change default autokarma to 2 or 1.
No immediate thoughts on these points.
I suggested the default auto-push karma change, though really what should change is the linking of auto-push and approval. Right now, whatever you set as the threshold for auto-push is *also* the threshold for approval, which is more of a hack/unintended consequence than intentional design.
On Mon, 2010-11-29 at 10:08 -0800, Adam Williamson wrote:
On Mon, 2010-11-29 at 12:40 -0500, James Laska wrote:
- Just drop all the requirements/go back to before we had any updates criteria.
Hmm, certainly an idea. I feel like this is definitely a step backward, not forward. Has the initial motivation for an updates policy gone away or changed? Have we encountered problems that didn't yet exist, or weren't as painful, when the policy was first enabled? Are there other problems we need to focus on resolving (I suspect this is the case)?
As I see it, the thing that everyone agrees is problematic is critpath updates for old releases not getting pushed or taking a very long time to push. It's also generally agreed that the quality of critpath testing can be improved by taking some steps we're already looking at (package-specific test cases).
Things that some people see as problematic are:
1. Having to wait a week to push an update if you can't find testing
2. Testing being required for packages with automated test suites
3. The delay to security updates which is introduced by the testing requirements
(Note, I've numbered your bullets above to respond to them individually)
#1 - If we don't have testers familiar with the packages, and it's a popular package (in terms of number of installed systems, interest, or critpath), we (maintainers included) should be looking to create/develop testing interest, right? Or, if no engaged testers are found ... do we:
1. Wait X days for exploratory test results, or
2. Push immediately?
#2 - I'd take this case-by-case ... it depends on what is "automated". Certainly, I could see a scenario where %check includes the upstream unittest, and the automated package update acceptance plan passes ... the update is in good shape and ready for additional functional tests ('updates-testing') or stable.
TODO - draft an autoqa test that confirms whether %check ran during build-time, and ensures it passed.
#3 - Seems similar to #1 to me. Testing is *always* a delay. In a way, that's by design. That's not to say we as testers intentionally want to slow a process down to a crawl. So if there are unnecessary delays unrelated to the act of testing ... let's work those. It sounds like the delay here is similar to #1, in that testers aren't providing feedback on security updates?
TODO - draft general security update test procedure
- allow packages with a %check section to go direct to stable.
Interesting, I like the spirit of this idea, but would like to see if we can incorporate this with autoqa karma plans. So, perhaps packages with %check get automated karma? Just the same as with packages that pass automated tests ... they'll eventually get positive karma of some form.
Yes, I was going to suggest the same thing. I'd suggest packages with a %check section should get +1 proventester karma.
Note though that the %check also needs to pass. I've seen plenty of builds that include a %check, but don't fail if the %check fails. There is an AutoQA test in the making here; anyone interested in helping?
Of course, that relies on the automated test suite actually testing the things proventester testing is meant to cover;
http://fedoraproject.org/wiki/QA:Package_Update_Acceptance_Test_Plan
do we want to audit the test suites in question?
No, but I think we would want to at least ensure they pass. I don't think it would be horrible to pull together an AutoQA test that checks for the presence of a %check and whether it passed.
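To make that concrete, here's a very rough sketch of what the build-log half of such a test might look like (the file name and the exact AutoQA hook are made up; it just looks for the line rpmbuild prints when it executes %check, so it's one positive data point, not proof that the spec doesn't swallow test failures):

    # check_ran.py - hypothetical sketch, not an actual AutoQA test.
    # Assumes we already have the build.log from a completed koji/mock
    # build; rpmbuild prints an "Executing(%check):" line when it runs
    # the %check section.
    import re
    import sys

    CHECK_MARKER = re.compile(r'^Executing\(%check\):')

    def check_ran(build_log_path):
        """Return True if the build log shows %check was executed."""
        with open(build_log_path) as log:
            return any(CHECK_MARKER.match(line) for line in log)

    if __name__ == '__main__':
        log_path = sys.argv[1]  # e.g. a build.log downloaded from koji
        if check_ran(log_path):
            # The build finished and %check was executed.  This still
            # can't tell us whether the spec ignores test failures
            # (e.g. "make test || :"), so it's a data point, not a pass.
            print("OK: %check was executed during the build")
        else:
            print("INFO: no %check execution found in the build log")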
- setup a remote test env that people could use to test things.
I could use more details on this point. Is this talking about setting up QA systems hosted in Fedora infrastructure that any tester could login and use to test updates?
Yes - it's an idea to make it easier to test older releases, or packages you don't want to / can't install (or configure) on your active systems.
I don't object to this. We have experience using shared test systems internally at Red Hat for many years. There are a lot of system state issues we'd need to work through, but there are certainly benefits to having public test systems.
I'm inclined to think the more immediate problem is the lack of test instructions, not the lack of hardware. Additionally, over time the lack of hardware will become less of an issue anyway, right (e.g. virt)? But again, no major objections to shared test hardware, other than that it does involve some setup/maintenance and doesn't address the lack of test instructions.
- reduced karma requirement on other releases when one has gone stable
- aggregated karma across the releases for the same package version.
I don't have data to indicate how many updates have been released, and then reverted/obsoleted on only a subset of releases.
Yeah, I'd really like to see some data here. I did ask Luke on the -devel thread, but so far no response. Everyone more or less agrees on the factors here (on the one hand it's hard to get testing for old releases; on the other hand it *is* possible for the 'same' update to work fine on one release but not another), but it's hard to balance these without hard numbers. I think you can possibly draw a distinction between different types of updates here too: as I wrote on the -devel list, I can see the argument for a single leaf package update, but pushing an update to, say, an entire desktop environment and relying on testing from another release seems scary.
Ah, good point.
- allow anon karma to count.
Or maybe it counts, but counts less (.5 karma or something).
Something else to consider here is to make more people log in; I suspect relatively few people who don't have a FAS account are actually doing testing, but I think we could make the login link more prominent and try harder to get people to log in (have a big scare-step when posting anonymous feedback which says 'your feedback will not count unless you log in!' and requires you to re-confirm to submit the feedback anonymously; a nag screen, basically).
Good thinking. Is this something we can get on the bodhi roadmap (https://fedorahosted.org/bodhi/roadmap)? I think Luke monitors this list, perhaps he'll jump in.
Security updates:
- allow security updates to go direct to stable
Risky
Right. As was pointed out on -devel, the update which caused us to start thinking about an update testing process in the first place - the infamous udev update - was a security update.
Yeah.
- ask QA to commit to testing security updates
We can't commit to testing without guidance or instructions. Let's commit to documenting repeatable procedures that testers can follow and expand upon.
For some security updates, they may have already been functionally tested upstream. I think it's reasonable to provide proxy karma linking to upstream functional tests. Though, I don't think upstream functional tests alone can bless a security update.
That seems like it'd be tricky to automate.
Eeew, it certainly would be! I was more just adding some extra flavor on the karma "by proxy" practice.
Non critpath/security:
reduce timeout for non critpath from 7 to 3 days.
change default autokarma to 2 or 1.
No immediate thoughts on these points.
I suggested the default auto-push karma change, though really what should change is the linking of auto-push and approval. Right now, whatever you set as the threshold for auto-push is *also* the threshold for approval, which is more of a hack/unintended consequence than intentional design.
I see what you mean.
Thanks, James
Adam Williamson <awilliam@redhat.com> writes:
- allow anon karma to count.
Or maybe it counts, but counts less (.5 karma or something).
Something else to consider here is to make more people log in; I suspect relatively few people who don't have a FAS account are actually doing testing, but I think we could make the login link more prominent and try harder to get people to log in (have a big scare-step when posting anonymous feedback which says 'your feedback will not count unless you log in!' and requires you to re-confirm to submit the feedback anonymously; a nag screen, basically).
I think a login should always be required. If anonymous karma counts, then some long-term contributors may start depending on that and never create an account. If someone then starts DOS'ing Bodhi and forces anonymous karma and new login accounts to be disabled, these contributors will have to each contact someone to set up an account so they can carry on, which will waste everyone's time and cause a delay in getting packages pushed. If the same thing happens with login required, then only new accounts have to be disabled and existing users are unaffected. Besides, a lot of anonymous karma probably comes from people who don't know that it doesn't count, and would create an account if they did. Requiring a login would generate at least some additional usable karma for that reason - maybe not as much as allowing anonymous karma to count, but without making Bodhi vulnerable.
On 29/11/10 19:08, Adam Williamson wrote:
On Mon, 2010-11-29 at 12:40 -0500, James Laska wrote:
Things that some people see as problematic are:
- Having to wait a week to push an update if you can't find testing
- Testing being required for packages with automated test suites
- The delay to security updates which is introduced by the testing
requirements
Testing would get much easier if packagers could provide some test cases. The packager could send mail to -devel or -testing to get some testers.
- allow packages with a %check section to go direct to stable.
I think this is a bad idea. Just insert a null %check section (the package gets a +1 from provenpackager), add a (pseudo-anonymous) +1 vote, and voila: the package goes directly to stable.
Testing is costly in each case. It seems like testing gets suppressed this way.
Matthias
On Tue, 30 Nov 2010 09:31:31 +0100, Matthias wrote:
On 29/11/10 19:08, Adam Williamson wrote:
On Mon, 2010-11-29 at 12:40 -0500, James Laska wrote:
Things that some people see as problematic are:
- Having to wait a week to push an update if you can't find testing
- Testing being required for packages with automated test suites
- The delay to security updates which is introduced by the testing
requirements
Testing would get much easier if packagers could provide some test cases. The packager could send mail to -devel or -testing to get some testers.
Sounds backwards to me. Given the life cycle of a bug, there is activity in bugzilla prior to the maintainer developing a fix. Plus, bodhi adds update notifications to bugzilla. If I were to expect someone to test the fix, it would be the bug reporter. Even more so if the fix is released by upstream. In that case, coming up with test cases would be a lot of extra (duplicate?) work for package maintainers, especially for version upgrades that contain plenty of fixes and changes.
On 30/11/10 10:51, Michael Schwendt wrote:
On Tue, 30 Nov 2010 09:31:31 +0100, Matthias wrote:
On 29/11/10 19:08, Adam Williamson wrote:
On Mon, 2010-11-29 at 12:40 -0500, James Laska wrote:
Things that some people see as problematic are:
- Having to wait a week to push an update if you can't find testing
- Testing being required for packages with automated test suites
- The delay to security updates which is introduced by the testing
requirements
Testing would get much easier if packagers could provide some test cases. The packager could send mail to -devel or -testing to get some testers.
Sounds backwards to me. Given the life cycle of a bug, there is activity in bugzilla prior to the maintainer developing a fix. Plus, bodhi adds update notifications to bugzilla. If I were to expect someone to test the fix, it would be the bug reporter. Even more so if the fix is released by upstream. In that case, coming up with test cases would be a lot of extra (duplicate?) work for package maintainers, especially for version upgrades that contain plenty of fixes and changes.
The bug reporter will probably verify the fix for his bug. But if the new version breaks other things, that could possibly be covered by a test case (in the best case).
IMHO you should only have to write test cases once (a general sheet of what to test). I see we need to get more testers, but those people need to know what to test, and sometimes even how to test. If we don't provide that information, we will get a lot of feedback like "+1, works for me". What do we really know then? What was tested? Does it prevent us from shipping broken updates? Definitely not.
On Tue, 30 Nov 2010 12:24:32 +0100, Matthias wrote:
The bug reporter will probably verify the fix for his bug. But if the new version breaks other things, that could possibly be covered by a test case (in the best case).
A _new version_ will need to stay in updates-testing to give _more_ testers an opportunity to test the software in daily usage. It will be beneficial to wait for more testers instead of rushing out an update within 24 hours based on possibly superficial testing done by update-freaks and koji-leechers.
=> bodhi karma automatism should be _off_ by default
=> packagers who turn it on must be aware of the consequences
IMHO you should only have to write test cases once (a general sheet of what to test).
What is needed is brave Fedora users who actually use the software and know where/how individual packages fit into the system.
I see we need to get more testers, but those people need to know what to test, and sometimes even how to test. If we don't provide that information, we will get a lot of feedback like "+1, works for me". What do we really know then? What was tested? Does it prevent us from shipping broken updates? Definitely not.
Oh so true ... but that isn't any excuse for lazy/poor test results IMO. Unfortunately, the system so far encourages people to post a quick +1, because that will speed up the release of the update and add to the metrics.
Packagers need to become aware of their accountability with regard to broken updates and think twice before relying on bodhi karma automatism. And the user community needs to understand that some types of updates won't be marked stable unless more users give positive feedback. It's the packager's responsibility to tell what specific test results are still needed.
On Tue, 2010-11-30 at 10:51 +0100, Michael Schwendt wrote:
On Tue, 30 Nov 2010 09:31:31 +0100, Matthias wrote:
On 29/11/10 19:08, Adam Williamson wrote:
On Mon, 2010-11-29 at 12:40 -0500, James Laska wrote:
Things that some people see as problematic are:
- Having to wait a week to push an update if you can't find testing
- Testing being required for packages with automated test suites
- The delay to security updates which is introduced by the testing
requirements
Testing would get much easier if packagers could provide some test cases. The packager could send mail to -devel or -testing to get some testers.
Sounds backwards to me. Given the life cycle of a bug, there is activity in bugzilla prior to the maintainer developing a fix. Plus, bodhi adds update notifications to bugzilla. If I were to expect someone to test the fix, it would be the bug reporter.
Proven tester testing is not really about testing the bug fix contained within an update; it's more about making sure the update doesn't cause regressions, especially regressions that would negatively affect the rest of the system.
On Tue, Nov 30, 2010 at 09:31:31 +0100, Matthias Runge mrunge@matthias-runge.de wrote:
- allow packages with a %check section to go direct to stable.
I think this is a bad idea. Just insert a null %check section (the package gets a +1 from provenpackager), add a (pseudo-anonymous) +1 vote, and voila: the package goes directly to stable.
I think it is reasonable to assume our packagers aren't going to be malicious. While someone could copy over an empty %check section without realizing what it does, this should be caught in the initial package review.
On Tue, 2010-11-30 at 07:48 -0600, Bruno Wolff III wrote:
On Tue, Nov 30, 2010 at 09:31:31 +0100, Matthias Runge mrunge@matthias-runge.de wrote:
- allow packages with a %check section to go direct to stable.
I think this is a bad idea. Just insert a null %check section (the package gets a +1 from provenpackager), add a (pseudo-anonymous) +1 vote, and voila: the package goes directly to stable.
I think it is reasonable to assume our packagers aren't going to be malicious. While someone could copy over an empty %check section without realizing what it does, this should be caught in the initial package review.
We can test for the presence of a %check ... and whether it contains "something" ... with an rpmlint test. But I think the point here is that the presence of %check doesn't mean the package should get a free pass into stable. It can, however, be another positive data point collected while running the package update acceptance test plan [1].
Anyone interested in drafting a quick rpmlint test for this? My initial inspection shows that creating a new python rpmlint test to check for a non-empty %check wouldn't be terribly difficult </famous_last_words>. To get started ... let's file a ticket [2] and start the discussion on autoqa-devel@lists.fedorahosted.org.
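To make the discussion a bit more concrete, here's a rough sketch of the logic such a test could wrap -- standalone here rather than as a real rpmlint plugin, with the script name and section list made up:

    # nonempty_check.py - hypothetical sketch of a "non-empty %check" test.
    # It naively parses the spec text; a real rpmlint check would hook
    # into rpmlint's own spec parsing instead.
    import re
    import sys

    # Section headers that would end a %check body (not an exhaustive list).
    SECTION_RE = re.compile(
        r'^%(check|prep|build|install|files|changelog|description|package|pre|post|preun|postun)\b')

    def has_nonempty_check(spec_path):
        """True if the spec has a %check section containing at least one
        non-blank, non-comment line."""
        in_check = False
        with open(spec_path) as spec:
            for line in spec:
                m = SECTION_RE.match(line)
                if m:
                    in_check = (m.group(1) == 'check')
                    continue
                if in_check and line.strip() and not line.lstrip().startswith('#'):
                    return True
        return False

    if __name__ == '__main__':
        spec = sys.argv[1]
        if has_nonempty_check(spec):
            print(spec + ": non-empty %check section found")
        else:
            print(spec + ": no (or empty) %check section")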
Thanks, James
[1] http://fedoraproject.org/wiki/QA:Package_Update_Acceptance_Test_Plan [2] https://fedorahosted.org/autoqa/newticket
On Mon, 29 Nov 2010 12:40:25 -0500 James Laska jlaska@redhat.com wrote:
...snip...
- setup a remote test env that people could use to test things.
I could use more details on this point. Is this talking about setting up QA systems hosted in Fedora infrastructure that any tester could login and use to test updates?
Yes.
Perhaps something with virtual instances?
* tester requests a base f14 instance
* machine is built and the tester is mailed the info
* tester logs into the machine, applies the update they are testing, and tests
* tester logs out and the machine autoreaps away
I admit this is pie in the sky without a cloud infrastructure in place, but it would be pretty cool. ;)
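Just to sketch the lifecycle part of that (everything here is made up -- the image path and all the web/mail/queue plumbing around it -- it only shows what a transient "use and autoreap" guest could look like with the libvirt python bindings):

    # throwaway_guest.py - pie-in-the-sky sketch, not working infrastructure.
    import libvirt

    BASE_IMAGE = '/var/lib/libvirt/images/f14-updates-testing.qcow2'  # assumed to exist

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>updates-test-scratch</name>
      <memory>1048576</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='%s'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'><source network='default'/></interface>
      </devices>
    </domain>
    """ % BASE_IMAGE

    conn = libvirt.open('qemu:///system')

    # "machine is built": start a transient guest; it has no persistent
    # definition, so nothing is left behind once it's destroyed.
    dom = conn.createXML(DOMAIN_XML, 0)

    # ... mail the tester the guest's address/credentials, let them log in,
    # install the update from updates-testing, and test ...

    # "machine autoreaps away": tear the guest down when the tester is done
    # (or after a timeout enforced by whatever scheduler wraps this).
    dom.destroy()
    conn.close()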
...snip...
- PK updates-testing integration of some kind.
Open to any ideas here ... are you thinking about some PK updates-testing feedback workflow? Like integrating fedora-easy-karma? Something else?
Yes, a fedora-easy-karma type thing. Also, abrt could offer to install a testing package where available when a package crashes?
Thanks for the excellent feedback!
kevin
On Mon, 2010-12-06 at 16:19 -0700, Kevin Fenzi wrote:
On Mon, 29 Nov 2010 12:40:25 -0500 James Laska jlaska@redhat.com wrote:
...snip...
- setup a remote test env that people could use to test things.
I could use more details on this point. Is this talking about setting up QA systems hosted in Fedora infrastructure that any tester could login and use to test updates?
Yes.
Perhaps something with virtual instances?
* tester requests a base f14 instance
* machine is built and the tester is mailed the info
* tester logs into the machine, applies the update they are testing, and tests
* tester logs out and the machine autoreaps away
I admit this is pie in the sky without a cloud infrastructure in place, but it would be pretty cool. ;)
...snip...
- PK updates-testing integration of some kind.
Open to any ideas here ... are you thinking about some PK updates-testing feedback workflow. Like integrating fedora-easy-karma? Something else?
Yes, a fedora-easy-karma type thing. Also, abrt could offer to install a testing package where available when a package crashes?
Hmm, I'm conflicted on this. I'm inclined to think that the problem is more about the lack of test documentation around updates than it is the lack of hardware. For me, if I see libasdf-querty4, the first questions I ask are "what is it, and how can I test it?" -- not "where can I find a disposable virt system to test it?" That's not to say having shared test systems or disposable systems for testing updates wouldn't be valuable. I just don't know if that's the first thing I would consider.
Thanks for the excellent feedback!
Having such a solution would open up some doors for us; I'm thinking more of on-the-fly AutoQA provisioning for destructive tests.
Thanks, James