[Beaker-devel] Integration tests vs unit tests

Nick Coghlan ncoghlan at redhat.com
Fri Sep 6 03:50:39 UTC 2013


On 09/06/2013 01:22 PM, Dan Callaghan wrote:
> Excerpts from Nick Coghlan's message of 2013-09-06 12:27:54 +1000:
>> Just dumping a transcript of an IRC conversation between rmancy and me
>> about unit tests vs integration tests (in the context of the patch that
>> makes the createrepo command configurable:
>> http://gerrit.beaker-project.org/#/c/2208/).
> I think at least 75% of my development time has gone towards writing and 
> maintaining tests, both on Beaker and earlier projects. You probably 
> have a similar experience.
> 
> The reason I mention it is that you might think intuitively that unit 
> tests have a lower maintenance cost than functional tests, because unit 
> tests are faster. But in my experience unit tests have a much *higher* 
> maintenance cost because they are invariably tied to implementation 
> details, which means that any "refactoring" (changing implementation 
> details without affecting the behaviour from an external point of view) 
> means rewriting or rearranging huge swathes of unit tests, whereas 
> a good functional test should require no changes at all.
> 
> Also, I think we should be clear about our terminology here. Beaker has 
> "integration tests" (the directory is called IntegrationTests/) but only 
> in the sense that they require an external service (database) which 
> makes them unsuitable to run during a package build. The actual tests in 
> that directory are a mixture of functional tests (tests which interact 
> with the system from an external point of view) and unit tests (tests 
> which touch internal implementation details at a low level), tending 
> towards mainly functional.

Ah, I *finally* understand your antipathy towards the idea of more unit
tests: your perspective seems to be that unit tests are *required* to be
white box tests, which is simply untrue :)

There is no functional vs unit test distinction as you describe it.
Instead, the distinction you're talking about is the one between black
box and white box testing. Both black box and white box tests can be
unit tests, as long as they test the component without relying on any
external services (like a database or web server) being available.
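
To make that distinction concrete, here's a minimal sketch of two unit
tests for the same component (BoundedCache is entirely made up, not
real Beaker code). Neither test needs a database or web server, so
both are unit tests, but only the second is white box:

    import unittest

    class BoundedCache(object):
        """Hypothetical component under test."""
        def __init__(self, size):
            self._size = size
            self._entries = {}  # internal implementation detail

        def put(self, key, value):
            if (key not in self._entries
                    and len(self._entries) >= self._size):
                # Evict an arbitrary entry to respect the size bound
                self._entries.pop(next(iter(self._entries)))
            self._entries[key] = value

        def get(self, key):
            return self._entries.get(key)

    class TestCacheBlackBox(unittest.TestCase):
        # Black box: exercises only the public API.
        def test_put_then_get(self):
            cache = BoundedCache(size=2)
            cache.put('a', 1)
            self.assertEqual(cache.get('a'), 1)

    class TestCacheWhiteBox(unittest.TestCase):
        # White box: reaches into _entries, so it breaks whenever
        # the internal storage changes, even if behaviour doesn't.
        def test_eviction_keeps_size_bounded(self):
            cache = BoundedCache(size=1)
            cache.put('a', 1)
            cache.put('b', 2)
            self.assertEqual(len(cache._entries), 1)

The black box test survives any rewrite of the storage or eviction
logic; the white box one breaks as soon as _entries is renamed, even
though the externally visible behaviour is unchanged.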

Black box testing is definitely preferable to white box testing, as it's
easier to maintain and still ensures the main thing we care about:
avoiding regressions in the behaviour of public APIs.

However, white box testing also has its place: for internal APIs where
exhaustive testing of alternatives through higher level APIs would be
difficult, or for provoking particular failure modes to ensure they're
correctly handled even if higher level checks fail to prevent them.
Appropriate regression testing for internal APIs is also a good way of
future-proofing them, as they're more likely to keep working correctly
even if other components start using them in slightly different ways.
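
The createrepo patch above is a handy illustration of the failure mode
case. A sketch along these lines (the helper function and exception
are invented for illustration, they're not from the actual patch) uses
the mock library to provoke a failure that a functional test could
only trigger by genuinely breaking the system, e.g. by uninstalling
createrepo:

    import subprocess
    import unittest
    import mock  # the standalone mock library

    class RepoGenerationError(Exception):
        pass

    def generate_repo(path, command='createrepo'):
        # Hypothetical internal helper standing in for the code
        # path that shells out to the configured createrepo command.
        try:
            subprocess.check_call([command, path])
        except OSError as e:
            raise RepoGenerationError('%s failed: %s' % (command, e))

    class TestGenerateRepoFailure(unittest.TestCase):
        # White box: patch the subprocess call to simulate the
        # configured command being missing from the system.
        def test_missing_command_raises_repo_error(self):
            with mock.patch('subprocess.check_call',
                            side_effect=OSError('No such file')):
                self.assertRaises(RepoGenerationError,
                                  generate_repo, '/tmp/some-repo')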

White box testing is also sometimes the only way to guard against
particular bugs that are judged to be high enough risk that they should
be checked in the unit tests, rather than just in the integration tests.

> In my opinion the speed of the whole test suite is irrelevant because 
> they can be run by a robot while humans do other things. What matters to 
> me is how much effort I have to expend fixing tests when I want to 
> improve something in Beaker. For that reason I would rather have as much 
> coverage as possible from small, simple, clear functional tests, 
> resorting to "unit tests" (ones which touch internal implementation 
> details) only when it is too hard to write a simple functional test for 
> it.

Right, this is exactly how white box testing should be used: as a last
resort, for things you really want to test, but can't easily provoke
through a published API, or can't easily check for the desired result
without poking around inside the application state. A typical reason to
write a white box test is that for some things you *really* want
"defence in depth", where certain integrity checks are performed at a
higher level during normal execution of the application, but you want to
ensure the lower level API still does the right thing, even if those
higher level checks are skipped. Missing permission checks, for example,
can lead to disastrous security bugs. That's why it's desirable to push
such checks as low in the application model as is feasible, with only
a few highly privileged pieces of code allowed to make changes directly
without first going through the APIs that check for the appropriate
permissions.
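
As a sketch of what that looks like in practice (with made-up model
classes rather than Beaker's real ones), the permission check lives on
the lowest level method, and a white box test calls that method
directly, deliberately bypassing the web layer that would normally
have checked first:

    import unittest

    class Forbidden(Exception):
        pass

    class User(object):
        def __init__(self, is_admin=False):
            self.is_admin = is_admin

    class System(object):
        def __init__(self, owner):
            self.owner = owner
            self.retired = False

        def retire(self, acting_user):
            # The check lives at the lowest level, so it still
            # applies even if a controller forgets to check first.
            if (acting_user is not self.owner
                    and not acting_user.is_admin):
                raise Forbidden('only the owner or an admin '
                                'may retire a system')
            self.retired = True

    class TestRetirePermissions(unittest.TestCase):
        def test_unprivileged_user_cannot_retire(self):
            owner, other = User(), User()
            system = System(owner=owner)
            self.assertRaises(Forbidden, system.retire, other)
            self.assertFalse(system.retired)

        def test_owner_can_retire(self):
            owner = User()
            system = System(owner=owner)
            system.retire(owner)
            self.assertTrue(system.retired)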

> Sometimes even *no* test is the right choice, because the maintenance 
> cost of the test outweighs the likelihood of it finding a real bug.

Agreed. This is especially applicable for bugs that are picked up by
static analysis - for those, we may as well just fix them without adding
a new test, since the static scanner will complain again if they're ever
reintroduced.

> Some recent examples of when I think a good functional test is too hard 
> to write:
> 
> * testing a database constraint that is already enforced in other ways 
>   by the application
> * testing failure modes that cannot be triggered by interacting with the 
>   system in a normal way
> * tests which do complicated things that a test is not normally expected 
>   to do, like restarting the server process in the middle of the test

These are all cases where we could use better unit test infrastructure.
For example, in the last case, we don't currently have an easy way to
check that configuration files are being read and their settings
correctly applied in the application. Until we do, I consider an
integration test
to be an acceptable workaround.

Reading configuration files is a great example of something that you
*really* want to test automatically, as there is always going to be a
high chance of typos, where the name of the config setting is
incorrect either in the config file or in the code that reads it.
Trusting code reviews to catch typos, instead of setting things up so
the computer can do it for you, is almost always going to be a mistake.
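
Sticking with the createrepo example, a test along these lines (the
config layout here is illustrative, not Beaker's actual server.cfg
format) would catch a typo on either side, since the setting has to
make the full round trip from config file to the code that uses it:

    import unittest
    import ConfigParser            # configparser on Python 3
    from StringIO import StringIO  # io.StringIO on Python 3

    # Illustrative config snippet, not Beaker's real settings layout.
    SAMPLE_CONFIG = '[beaker]\ncreaterepo_command = createrepo_c\n'

    def get_createrepo_command(parser):
        # If this setting name were misspelt here *or* in the config
        # file, the assertion below would fail.
        return parser.get('beaker', 'createrepo_command')

    class TestConfigIsApplied(unittest.TestCase):
        def test_createrepo_command_is_read(self):
            parser = ConfigParser.SafeConfigParser()
            parser.readfp(StringIO(SAMPLE_CONFIG))
            self.assertEqual(get_createrepo_command(parser),
                             'createrepo_c')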

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)

