Many engineers believe that for every public method of every class, they must create a corresponding "unit test."
That isn't what "unit" meant when the term "unit testing" was first used.
The problem with that definition of unit testing is that you are no longer testing behaviour, but implementation.
The tests you write become tightly coupled to the underlying design of your code. Design is constantly evolving, so you not only have to refactor the design of your production code, you also have to change your tests!
In other words, your tests should support refactoring by giving you confidence, but instead they make the work harder and give you no confidence that things still work correctly.
For brevity, I won't even get into mock hell (it's worth a search if you haven't met it).
But instead of abandoning refactoring or unit tests, all you need to do is free yourself from the mistaken definition of "unit testing". Focus on testing behaviours!
Instead of writing unit tests for every public method of every class, write unit tests for every component (e.g., user, product, order, etc.), covering every behaviour of each component and focusing on the public interface of the unit.
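As a concrete sketch (JUnit 5, with a hypothetical Order component; the names and the discount rule are made up for illustration), a behaviour-focused test looks like this:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class OrderTest {

    // The test describes a behaviour of the order component, not a method of
    // a class: how the total is computed internally can be refactored freely.
    @Test
    void appliesTenPercentDiscountWhenTotalExceedsOneHundred() {
        Order order = new Order();                 // hypothetical component
        order.add(new Product("book", 60.00));
        order.add(new Product("pen", 45.00));

        assertEquals(94.50, order.total(), 0.001); // (60 + 45) * 0.9
    }
}
```

Notice there is no test class per production class here; there is a test per behaviour of the component.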
To achieve that, you will need to learn how to structure your code properly. Please don't package your code by technical concerns (controllers, services, repositories...). Senior devs structure their code by domain. Check out my post You are structuring your code wrongly to learn more.
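For a sense of the difference, here is an illustrative layout (the names are made up):

```
Package by technical concern (avoid):
  controllers/   OrderController, UserController
  services/      OrderService, UserService
  repositories/  OrderRepository, UserRepository

Package by domain (prefer):
  order/  OrderController, OrderService, OrderRepository, Order
  user/   UserController, UserService, UserRepository, User
```

With the domain layout, a component like order has a clear public interface, which is exactly what the behaviour-focused unit tests above should target.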
Code coverage
Lines of code covered don't mean the code is ready for real life. Coverage won't tell you that you were supposed to check that String for a leading slash and trim it off, or remind you to check for null values.
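To illustrate that trap with a made-up sketch (normalize() and its test are hypothetical): the test below achieves 100% line coverage of the method, yet the leading-slash behaviour is simply never asked about.

```java
// Bug: a leading "/" should also be stripped, but nothing here checks it.
static String normalize(String path) {
    return path.trim();
}

@Test
void trimsSurroundingWhitespace() {
    // Passes, and every line of normalize() is now "covered"...
    assertEquals("orders", normalize(" orders "));
    // ...but normalize("/orders") would still return "/orders".
}
```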
Code coverage reporting is dangerous. It tends to distract from the use cases that should drive the software development process.
If we are not allowed to ship when code coverage is below 80%, we will add more and more trivial tests that do little to ensure quality or to bring confidence during refactoring.
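This is the kind of test an 80% gate tends to produce (a made-up sketch with a hypothetical User class): it executes code and turns lines green, but it will never catch a regression during refactoring.

```java
@Test
void getterReturnsWhatTheSetterSet() {
    User user = new User();
    user.setName("Alice");
    assertEquals("Alice", user.getName()); // coverage goes up, confidence doesn't
}
```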
Code coverage has nothing to do with code quality; statistically, coverage has shown only an insignificant correlation with system defects.
Kent Beck (author of the book TDD by Example) answered the question of how many unit tests a system should have:
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence...
Martin Fowler answered whether a system can have too many tests:
Can you test too much? Sure you can. You are testing too much if you can remove tests while still having enough
That's not to say coverage doesn't have value; as Martin Fowler points out, it is a good way to identify untested code.
Implementing a Test Automation Strategy
Testing in these short Agile iterations requires a "shift left" approach: testing starts much earlier in the application lifecycle. As a result, developers should own QA, meaning developers write, execute, and maintain tests for the code they produce. This encourages everyone on the team to be engaged in delivering high-quality products to the customer quickly.
Now that we know who should write tests, let's look at my recommended test strategy.
Testing should begin "in the small" and progress toward testing "in the large". Because of that, we should focus most of our effort on unit testing: the aim is to test each part of the software in isolation and check that the component matches the expected behaviour. Remember, the unit here is not the class.
Next comes the module test; the aim is to test different system parts in combination to assess whether they work correctly together. By testing the units in groups, any faults in the way they interact can be identified.
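A sketch of what that might look like (all names hypothetical): a real OrderService wired to real in-memory collaborators, so the interaction between units is what's under test.

```java
@Test
void placingAnOrderPersistsItAndReservesStock() {
    InMemoryOrderRepository orders = new InMemoryOrderRepository();
    InMemoryStockRepository stock = new InMemoryStockRepository();
    stock.add("book", 5);

    OrderService service = new OrderService(orders, stock);
    service.place("customer-1", "book", 2);

    // Faults in how the units interact would surface here.
    assertEquals(1, orders.count());
    assertEquals(3, stock.available("book"));
}
```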
And finally, end-to-end testing; in this phase, different software systems are combined and tested as a group to ensure that the integrated system works correctly.
E2E test plans generally cover user-level stories like:
"A user can log in."
"A user can make a deposit."
"A user can see its balance."
Conclusion
It is essential to understand that we should be writing tests to enable teams to move fast. Code is constantly evolving, and developers need to feel confident adding new features and improving the existing code continually.
Some ideas here can be controversial, and you don't need to agree with me, but leaders must challenge the status quo and look at new ways of doing things.
Don’t blindly accept fads, myths, authorities and established truths. Question everything, collect experience, judge for yourself.
Example
If you want to learn more about how to write efficient tests, you can check out my open source project, where I show how to structure code in a way that enables meaningful unit tests.
I also recorded a video walking through the project, highlighting what we have discussed in this post.
Top comments
Indeed, when tests hinder refactoring more than help, it's probably time to rethink the testing strategy!
Although mocking/stubbing doesn't inherently couple tests to implementation, a lot of people use them in a way that does that. I actually very rarely use stubs nowadays and generally build my own test fakes. Been wanting to write about that for months now but it's a rather complex and probably controversial topic!
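The gist, as a minimal sketch (with a hypothetical OrderRepository port): a fake is a small, real, in-memory implementation that tests can share, instead of per-test stubbing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

interface OrderRepository {
    void save(Order order);
    Optional<Order> findById(String id);
}

// A fake is a working implementation with a shortcut (memory instead of a DB),
// so tests depend on the port's contract rather than on internal call patterns.
class InMemoryOrderRepository implements OrderRepository {
    private final List<Order> orders = new ArrayList<>();

    @Override
    public void save(Order order) {
        orders.add(order);
    }

    @Override
    public Optional<Order> findById(String id) {
        return orders.stream().filter(o -> o.id().equals(id)).findFirst();
    }
}
```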
Unit tests check basic correctness. They don't check how things work together... because then that would be an integration test, and not a unit test.
Unit tests put pressure on the developer to abide by most of the SOLID principles, because what makes code unit testable coincides with SOLID.
Behavior tests, such as those espoused by BDD using something like Gherkin, are great both for specifying requirements and for being exercised to verify the program's behavior matches the requirements. But behavior tests are not unit tests.
Code coverage only ensures that every line of code has been exercised and, as said, has nothing to do with code quality. "Untested code is buggy code", so coverage only shows that the code has been run and doesn't fail a basic correctness check. (Which is still better than, say, never exercising rarely-run error-handling code until an error situation in production reveals the bug.)
My biggest ire against unit tests is not regarding unit testing itself, but it is due to languages that do not support design by contract.
For those languages (which is most languages), unit tests are a means to fill that gap. But unit tests pale in comparison to having contracts. Contracts can alleviate the need for a majority of unit tests and express intent much better, and unlike unit tests, contracts don't "rot over time" (or, as Coplien would phrase it, unit tests become muda because of the effort of maintaining them).
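In languages without contracts, the closest you usually get is assertions; a rough Java approximation (withdraw() is a made-up example, and assert only runs with the JVM's -ea flag):

```java
double withdraw(double balance, double amount) {
    // "in" contract: preconditions the caller must satisfy
    assert amount > 0 : "amount must be positive";
    assert amount <= balance : "amount must not exceed balance";

    double newBalance = balance - amount;

    // "out" contract: postcondition the method guarantees
    assert newBalance >= 0 : "balance can never go negative";
    return newBalance;
}
```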
One of my favorite languages, D — as part of the core language — supports both contracts and unit tests, as first class language constructs.
There's nothing inherently wrong with writing unit tests - as long as you're prepared to throw them away, as soon as they get in the way.
Yes, unit tests have a tendency to test implementation details - during development, that's exactly what I want. Especially in quirky languages like JavaScript, where the language itself doesn't guarantee much of anything. But I feel like unit tests are often a faster and safer way to get a working function.
Once you have functional or E2E tests in place, yes, the unit tests are often redundant - you can either throw them away, as soon as the next layer up is covered by tests, or you can keep them around until they get in the way of refactoring.
I wouldn't discourage anyone from writing them though. Especially on big projects, proving that you have a bunch of working units, before you begin integrating them, definitely has value.