Stuart Dobson

Don't Chase Unit Test Coverage

In the long-running battle between management chasing metrics and developers just trying to get the job done (hi boss!), I wanted to provide some less obvious examples of why chasing code coverage is damaging.

Why Unit Test Code Coverage targets are bad

Code Coverage can be gamed, incentivising low-quality tests

If the coverage percentage is all that matters, there are many ways to increase that number without actually improving the quality or reliability of the code. Having a test doesn’t mean there are no bugs - it just means the code is run by a test, that the pathways through it are exercised by another piece of code.

Tests like these have no link to the importance of the functionality, and no real consideration of whether the code is doing the right thing - only that it’s being exercised.
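To make this concrete, here’s a deliberately bad example (Python and pytest are used purely for illustration, and `calculate_discount` is a made-up function). The test runs every branch, so the coverage tool is satisfied, but it asserts nothing:

```python
# Hypothetical pricing function under test.
def calculate_discount(price: float, customer_tier: str) -> float:
    if customer_tier == "gold":
        return price * 0.8
    if customer_tier == "silver":
        return price * 0.9
    return price


# This "test" executes every branch, so coverage reports 100% for the
# function - but it never asserts on the results, so a wrong discount
# would still pass.
def test_calculate_discount_runs():
    for tier in ("gold", "silver", "bronze"):
        calculate_discount(100.0, tier)
```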

Tests add a maintenance burden

By writing low-value tests purely to satisfy KPIs, developers are actually making more work for themselves in the future.

All code adds a maintenance burden. Tests are code, so adding tests adds to this burden. If they’re not valuable, they’re detrimental.

Chasing coverage leads to more brittle tests

The harder you chase coverage, the more brittle your tests tend to become. To cover every internal path of a method, you end up tightly coupling the tests to its implementation details.

This goes against the spirit of testing. You should be testing the functionality of the code - the inputs, the outputs, the decisions it makes, and the edge cases. You don’t need to be too concerned with how it makes those decisions.

Testing at that level in turn makes you write tests the right way - thinking about the desired functionality instead of the coverage number.
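Here’s a rough sketch of the two styles (the `OrderService` and `TaxCalculator` names are invented, and Python’s built-in `unittest.mock` stands in for whatever mocking library you use). The first test is pinned to how the result is produced; the second only cares about the result:

```python
from unittest.mock import MagicMock

# Invented collaborator and service, for illustration only.
class TaxCalculator:
    def apply_tax(self, amount: float) -> float:
        return amount * 1.2

class OrderService:
    def __init__(self, tax_calculator: TaxCalculator):
        self._tax = tax_calculator

    def total(self, net_amount: float) -> float:
        return round(self._tax.apply_tax(net_amount), 2)


# Brittle: coupled to implementation details. If total() is refactored to
# calculate tax a different way, this breaks even though behaviour is unchanged.
def test_total_calls_apply_tax_exactly_once():
    tax = MagicMock(spec=TaxCalculator)
    tax.apply_tax.return_value = 120.0
    OrderService(tax).total(100.0)
    tax.apply_tax.assert_called_once_with(100.0)


# Robust: checks the observable behaviour - the output for a given input.
def test_total_includes_tax():
    assert OrderService(TaxCalculator()).total(100.0) == 120.0
```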

High coverage gives a false impression of robust code

It should be obvious, but it’s often overlooked: high coverage, achieved through low-quality tests that often don’t test the right things, gives a false sense of security. You may have 80% coverage, but if the missing 20% holds the business-critical functionality or the most vital areas of the code, that number is hiding big problems.

Chasing metrics hides opportunities to improve code quality

Let’s remember why we write tests in the first place: to have confidence that our code meets the requirements, handles edge cases, and is free of bugs. By focusing on coverage, you are missing the opportunity to write tests which do this - tests which actually lead to better quality code.

Thinking about what scenarios you need to cover while you write code encourages you to write that code in a testable way. If you write it in a testable way, you’ll be more likely to write tests - valuable tests. Better still, you’ll be more inclined to write the tests, or at least stubs of them, first (Test-Driven Development).
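What “stubs of them first” might look like is sketched below, using pytest conventions and invented requirements. The point is simply that the scenarios get named before any production code exists:

```python
import pytest

# Test stubs written before the code, one per scenario we care about.
# Each one is a named placeholder for a requirement or edge case; the
# implementation (and the real assertions) come later.

def test_rejects_negative_quantities():
    pytest.skip("TODO: implement order validation")

def test_applies_free_shipping_over_threshold():
    pytest.skip("TODO: implement shipping rules")

def test_handles_empty_basket_edge_case():
    pytest.skip("TODO: decide behaviour for an empty basket")
```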

If you write code and then add tests as an afterthought, you’ll probably focus on coverage instead of functionality, because you’ll be thinking about “adding tests” and increasing coverage - rather than thinking about the original intentions of the code.

Why Code Coverage is good

Despite all these issues, code coverage has a place in software development, and there are some benefits to monitoring it, as opposed to incentivising it.

Identifies areas of low coverage

In combination with your knowledge of where important functionality lies, you can see where tests are needed. Without code coverage metrics, it’s pretty hard to do this.
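As an illustration, most coverage tools will point at the gaps directly. The sketch below uses coverage.py’s Python API (in practice you’d more likely run your test runner’s coverage plugin), with a throwaway `classify` function standing in for real code:

```python
import coverage

# Throwaway function standing in for production code.
def classify(score: int) -> str:
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"

cov = coverage.Coverage()
cov.start()

# Exercise only part of the code - in practice this is the test suite running.
classify(95)

cov.stop()
cov.save()

# Prints per-file coverage with the line numbers that never ran, including the
# un-exercised branches above - the gaps to weigh against how important that code is.
cov.report(show_missing=True)
```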

Instills a culture of continually adding tests as you go

Let’s not forget that we want to be writing tests. We’ll get good coverage just by making it a habit, and by monitoring the coverage we’ll see it go down if we write code that’s untested. This encourages us to keep up with writing tests as we add features, and helps us see when we’re not adding them.

This keeps test writing on everyone’s minds, and influences the culture to make people value tests.

If you’re not measuring, you don’t know if you’re improving!

Conclusion

This article is very much a developer’s perspective, but at the end of the day, code coverage is a return-on-investment decision. Care should be taken over where coverage is increased, and writing tests for existing code should prioritise high-value, low-effort new tests.

Code coverage is a good thing in that it gives a picture of where you are and helps you monitor your testing journey. Incentivising a metric, or even pushing to maintain a particular level, is always problematic - but we should still be mindful of the level of coverage.

Focusing on code coverage detracts from why you should be writing tests in the first place. The important thing is that we build tests around requirements and edge cases - tests which exercise and document real-life scenarios. Tests are not about lines of code covered. Coverage is an indicator - not an incentive.

Writing tests is a very important part of software development. Do it for the right reasons, and it will be done well.
