[Beaker-devel] Integration tests vs unit tests

Raymond Mancy rmancy at redhat.com
Fri Sep 6 04:49:55 UTC 2013



----- Original Message -----
> From: "Dan Callaghan" <dcallagh at redhat.com>
> To: "beaker-devel" <beaker-devel at lists.fedorahosted.org>
> Sent: Friday, September 6, 2013 1:22:53 PM
> Subject: Re: [Beaker-devel] Integration tests vs unit tests
> 
> Excerpts from Nick Coghlan's message of 2013-09-06 12:27:54 +1000:
> > Just dumping a transcript of an IRC conversation between rmancy and me
> > about unit tests vs integration tests (in the context of the patch that
> > makes the createrepo command configurable:
> > http://gerrit.beaker-project.org/#/c/2208/).
> > 
> > At the moment, Beaker's unit testing is fairly minimal, so we don't have
> > a good, quick, confidence-building set of tests to run before pushing to
> > Gerrit, just the full set of integration tests (which can take half an
> > hour or more to run, depending on the details of your system).
> > 
> > It's going to take some time for us to improve Beaker's unit testing
> > story; this just struck me as an expedient way to get it on record
> > that this is the direction we're likely to be heading :)
> 
> One thing I don't see discussed below at all is the maintenance cost of
> the tests, which is something that concerns me a lot. Tests are written
> once but then maintained forevermore. Every time you write a test, you
> are trading the cost of writing it, plus maintaining it forever, against
> the benefit of the potential bugs it will catch in future.
> 
> I think at least 75% of my development time has gone towards writing and
> maintaining tests, both on Beaker and earlier projects. You probably
> have a similar experience.
> 
> The reason I mention it is that you might think intuitively that unit
> tests have a lower maintenance cost than functional tests, because unit
> tests are faster. But in my experience unit tests have a much *higher*
> maintenance cost because they are invariably tied to implementation
> details, so any "refactoring" (changing implementation
> details without affecting the behaviour from an external point of view)
> means rewriting or rearranging huge swathes of unit tests, whereas
> a good functional test should require no changes at all.
> 
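To make that concrete, here is a minimal sketch (the code under test and
the test names are invented for illustration; none of this is Beaker's
actual code). The unit-style test is pinned to an internal helper and
breaks if that helper is renamed, inlined, or changes its return type;
the functional-style test exercises only the public behaviour and
survives any such refactoring:

    from unittest import mock
    import unittest

    def _by_group(memberships):
        # Internal helper: a refactoring might rename or inline this.
        groups = {}
        for user, group in memberships:
            groups.setdefault(group, []).append(user)
        return groups

    def group_owners(memberships):
        # Public behaviour: alphabetically first member owns each group.
        return {group: sorted(users)[0]
                for group, users in _by_group(memberships).items()}

    class TestGroupOwners(unittest.TestCase):

        def test_unit_style(self):
            # Pinned to the implementation detail _by_group(): inlining
            # or renaming the helper breaks this test even though the
            # externally visible behaviour is unchanged.
            with mock.patch(__name__ + '._by_group',
                            return_value={'admins': ['bob', 'alice']}):
                self.assertEqual(group_owners([]), {'admins': 'alice'})

        def test_functional_style(self):
            # Exercises only the public contract, so it needs no changes
            # across any behaviour-preserving refactoring.
            memberships = [('bob', 'admins'), ('alice', 'admins')]
            self.assertEqual(group_owners(memberships),
                             {'admins': 'alice'})

    if __name__ == '__main__':
        unittest.main()
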
> Also, I think we should be clear about our terminology here. Beaker has
> "integration tests" (the directory is called IntegrationTests/) but only
> in the sense that they require an external service (database), which
> makes them unsuitable to run during a package build. The actual tests in
> that directory are a mixture of functional tests (tests which interact
> with the system from an external point of view) and unit tests (tests
> which touch internal implementation details at a low level), with the
> balance tending mainly towards functional tests.
> 
> In my opinion the speed of the whole test suite is irrelevant, because
> it can be run by a robot while humans do other things. What matters to
> me is how much effort I have to expend fixing tests when I want to
> improve something in Beaker. For that reason I would rather have as much
> coverage as possible from small, simple, clear functional tests,
> resorting to "unit tests" (ones which touch internal implementation
> details) only when writing a simple functional test is too hard.
> 
> Sometimes even *no* test is the right choice, because the maintenance
> cost of the test outweighs the likelihood of it finding a real bug.
> 
> Some recent examples of when I think a good functional test is too hard
> to write:
> 
> * testing a database constraint that is already enforced in other ways
>   by the application (a sketch of this case follows after the list)
> * testing failure modes that cannot be triggered by interacting with the
>   system in a normal way
> * tests which do complicated things that a test is not normally expected
>   to do, like restarting the server process in the middle of the test
> 
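For the first case in that list, a minimal sketch of what such a test
tends to look like (the model, names, and schema are invented for
illustration, not Beaker's real tables; assumes SQLAlchemy 1.4+). The
test has to drop below the application layer and re-state what the
schema already declares, which is exactly the implementation coupling
described above:

    import unittest
    from sqlalchemy import (create_engine, Column, Integer, String,
                            UniqueConstraint)
    from sqlalchemy.exc import IntegrityError
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class System(Base):
        __tablename__ = 'system'
        id = Column(Integer, primary_key=True)
        fqdn = Column(String(255), nullable=False)
        __table_args__ = (UniqueConstraint('fqdn'),)

    class TestFqdnUniqueConstraint(unittest.TestCase):

        def test_duplicate_fqdn_rejected_by_database(self):
            # Bypasses the application entirely: the application already
            # refuses duplicate FQDNs through its own validation, so this
            # only re-checks what the schema declares, at the cost of
            # being coupled to that schema.
            engine = create_engine('sqlite://')
            Base.metadata.create_all(engine)
            with Session(engine) as session:
                session.add(System(fqdn='test1.example.invalid'))
                session.commit()
                session.add(System(fqdn='test1.example.invalid'))
                with self.assertRaises(IntegrityError):
                    session.commit()

    if __name__ == '__main__':
        unittest.main()
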

All that a test is normally expected to do is test something. Different tests
have to set up data, mock functions, etc. differently to achieve that one
outcome. We have some tests that do very complicated things (often involving
race conditions), all in the name of actually testing something that should
be tested.

If the code is worth writing, then it's worth testing. Sometimes it can't be
practically tested, and that's fine. But if it can be practically tested and
easily maintained, and does not impact other tests, then doing something that
is 'not normally expected' doesn't seem like a good reason not to test it.
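
As a sketch of the kind of 'complicated' test meant here (the class and
the numbers are invented for illustration, not taken from Beaker's
suite), a test can deliberately drive many threads through a critical
section to show that the locking holds up. Provoking a race this way is
inherently best-effort, but without the lock this test is likely to
fail:

    import threading
    import unittest

    class Counter(object):
        # Invented stand-in for whatever shared state the real code
        # protects.
        def __init__(self):
            self.value = 0
            self._lock = threading.Lock()

        def increment(self):
            with self._lock:  # remove this lock and the test can fail
                self.value += 1

    class TestCounterUnderContention(unittest.TestCase):

        def test_concurrent_increments(self):
            counter = Counter()
            nthreads, iterations = 8, 10000
            # The barrier releases all threads at once to maximise the
            # chance of overlapping increments.
            barrier = threading.Barrier(nthreads)

            def hammer():
                barrier.wait()
                for _ in range(iterations):
                    counter.increment()

            threads = [threading.Thread(target=hammer)
                       for _ in range(nthreads)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            self.assertEqual(counter.value, nthreads * iterations)

    if __name__ == '__main__':
        unittest.main()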

> --
> Dan Callaghan <dcallagh at redhat.com>
> Software Engineer, Infrastructure Engineering and Development
> Red Hat, Inc.
> 
> _______________________________________________
> Beaker-devel mailing list
> Beaker-devel at lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/beaker-devel
> 

