On Sun, 2022-02-27 at 21:55 +0100, Peter Boy wrote:
> And therefore I'm looking for ways to find such bugs or issues that
> have "slipped through" so far, before the release date (so we can at
> least add yet another Q&D fix to our collection beforehand). I wonder
> whether bugs like the ones described above are conceptually
> detectable at all with automated testing like openQA. As a first step
> we have started work on updating our technical specification and,
> subsequently, our test and release criteria. Perhaps we can expand or
> sharpen our tests on this basis.
I mean, the answer is, broadly speaking, 'yes'. If you can come up with a
definition of a precise set of steps that you want to test, we can
usually implement that as an automated test.
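
To give a sense of what that looks like in practice: Fedora's openQA
tests live in os-autoinst-distri-fedora and are Perl modules built on
the os-autoinst testapi. Here's a minimal sketch, modeled on the shape
of the existing modules; the specific check (that sshd came up) is just
an illustrative assumption, not one of our real test modules:

    # Hypothetical module: log in to a console and verify a service
    # started. The service checked here is made up for illustration.
    use base "installedtest";
    use strict;
    use testapi;

    sub run {
        my $self = shift;
        # switch to a virtual terminal and log in as root
        $self->root_console(tty => 3);
        # run a command and fail the test if it exits non-zero
        assert_script_run "systemctl is-active sshd.service";
    }

    sub test_flags {
        # 'fatal' means the whole test run fails if this module fails
        return { fatal => 1 };
    }

    1;

The "precise set of steps" requirement falls straight out of this
structure: each module is a linear script of actions (type this, click
that) and assertions (this screen or command result must appear).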
The issues are purely resource ones. We need the resources to write the
tests. We need the resources to run them. And we need the resources to
do something if the tests fail: someone needs to review the result and
find out if it's a real bug or a test failure. If it's a test failure
someone needs to fix the test. If it's a real bug someone needs to fix
the bug.
Practically speaking, there are limits on all of these in Fedora. We
only really have two people writing tests for openQA at the moment. We
only have a limited number of machines to run the tests on. And it's
mostly only me reviewing failures and deciding what to do about them at
the moment.
Beyond that, the final limit is the subtlest but maybe the most
important of all. One of the most important things I've picked up doing
this job for over a decade is that a test that runs every day and fails
every day is the most useless thing in software. There is very little
point in implementing a test if, when that test finds a bug, nobody is
going to fix it.
This is a key reason, for me, why it's not a good idea to just write
automated tests for *everything*. I tend to be quite strict about
only adding tests to openQA if I'm very sure that someone is
going to care if the test fails. For a long time we specifically only
ran tests in openQA that validate the release criteria, because it
neatly solves this problem almost entirely: if the test fails, then
that's a release-blocking bug, and somebody *has* to fix it or the
release doesn't go out.
In the last couple of years we have started carefully adding some tests
beyond that scope, but usually only in response to requests from
*developers*. If the FreeIPA or GNOME team comes to us and says, hey,
can we add a test for this feature we think is really important, I have
the confidence to say "yes" because I know that if the test finds a
bug, I can go back to that team and say "hey, this test you told us to
add found a bug, fix it". If someone who *isn't* the developer of Thing
X says "can we add a test for Thing X?", my first question is, "if the
test finds out that Thing X is broken, who do I email to fix it the
next day?"
So, to go back to your message: updating the tech spec and release
criteria is an excellent idea. If we can get broad buy-in that "Thing X
must work" ought to be a release criterion, then I would be very
confident in adding a test for Thing X. In fact, I would very much
*want to have* a test for Thing X, because one of our key goals is to
automate testing for the release criteria as far as we possibly can.
But there does need to be a solid justification for why Thing X working
should be in the release criteria, and we need to have a "throat to
choke" to fix Thing X when it breaks.
Hope that makes sense!
--
Adam Williamson
Fedora QA
IRC: adamw | Twitter: adamw_ha
https://www.happyassassin.net