TL;DR
Follow the principle:
Test each thing at the lowest possible level.
Of course, do test all the things (interactions between things are also things).
This will naturally lead to a test pyramid emerging:
End-to-end (E2E) tests: Hard to write, slow & unreliable.
Acceptance tests: Describe features, hard to write without a framework, not so fast.
Unit tests: Testing small pieces of data processing code, easy to write, very fast.
This gets you to the sweet spot between just enough coverage, the number of tests you have to write, and the speed of test execution. It also makes refactoring easier.
Introduction
Writing automated tests is hard. To make it easier, I advise you to write tests before the code is implemented. This is also known as test-first development. Add a bit of thoughtful pacing & cleaning up and you get test-driven development. With some experience in software architecture & attention to the ease (or lack thereof) of writing tests, you get test-driven design.
All of these are related, but they're topics for another article.
The test pyramid is a well-documented concept, and the benefits are real: you can cover all (or almost all) of your software's functionality while keeping the number of tests to a minimum, while still allowing for lots of refactoring without breaking any tests.
Here's how to get there: When developing your software, take small steps and try to adhere to the principles described below. Always know where you are in the codebase and act accordingly.
I know, sounds mysterious. It isn't. Read on.
Onion architecture
The architecture of your system will shape how you write automated tests. The onion architecture is by far the best-suited architecture for thoroughly-tested systems.
There are several variations of this, and I warmly suggest you read this article, which explains them all and summarises the key points.
I like the name "onion architecture" best because the name itself hints at what the architecture looks like.
Here is a diagram of my approach to this architecture:
Only 2 test layers needed
This architecture enables me to have just 2 types (or layers) of functional tests:
- Acceptance tests
- Unit tests
There are also integration tests, but let's ignore them in this article.
Here is a similar diagram, but with tests:
Acceptance tests
These tests have 2 main purposes:
- Describe a feature
- Verify that all the wiring between services and their collaborators is correct
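For illustration, here's a minimal sketch of such a test in TypeScript with a Jest-style API. All the names (RegisterUserService, FakeEmailSender, the testDatabase helper) are hypothetical:

```ts
// All names here are hypothetical; the test talks only to the service (the SUT),
// never to its internal classes directly.
import { RegisterUserService } from "../src/registration/RegisterUserService";
import { FakeEmailSender } from "./doubles/FakeEmailSender";
import { testDatabase } from "./helpers/testDatabase";

describe("Feature: user registration", () => {
  it("stores the new user and sends a welcome email", async () => {
    const emails = new FakeEmailSender(); // hand-written test double for the external
    const service = new RegisterUserService(testDatabase.userRepository(), emails);

    await service.register({ name: "Ada", email: "ada@example.com" });

    const saved = await testDatabase.userRepository().findByEmail("ada@example.com");
    expect(saved?.name).toBe("Ada");                    // wiring to the DB is correct
    expect(emails.sentTo).toContain("ada@example.com"); // wiring to the external is correct
  });
});
```

The test exercises a whole feature through the service, with a hand-written double standing in for the external email adapter; it never reaches into the service's internal classes.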
Unit tests
These are the widely known small & focused tests. Each one tests a single class (sometimes just a function). Ideally, the class under test has no collaborators, since collaborators complicate tests with test doubles. Unit tests are very fast to execute, typically under 10ms per test.
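For example, a unit test for a hypothetical pure pricing function could look like this:

```ts
import { vatAmount } from "../src/pricing/vat"; // assumed pure function: no collaborators, no doubles needed

describe("vatAmount", () => {
  it("applies the standard rate", () => {
    expect(vatAmount(100, 0.2)).toBe(20);
  });

  it("handles fractional prices", () => {
    expect(vatAmount(9.99, 0.2)).toBeCloseTo(2.0, 2);
  });
});
```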
A DB digression
In my variation of the architecture, the database is included in acceptance tests. Queries contain business logic, and acceptance tests should capture behaviour: they cover whole features, not just interactions between classes.
However, notice that the DAO classes have low-level tests. These are similar to unit tests as they focus on small pieces of code with no collaborators. They are not unit tests "by the book" since they use the database. However, they function in the same way:
- If someone breaks a query (which is likely since queries tend to get complex) we will immediately know where the problem is because the query's test will fail.
- They enable us to test-drive SQL (or whatever query language is used). We could test-drive with a database & an SQL client, but this way we also get a test that serves as an executable document proving the query does what it's supposed to. Also, some ORM libraries have their own abstractions over SQL, in which case test-driving is only possible this way.
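Here's a sketch of such a low-level DAO test. The UserDao, its findInactiveSince query and the withTestDb helper (assumed to provide a real database and roll back after each test) are all hypothetical:

```ts
import { UserDao } from "../src/persistence/UserDao"; // hypothetical DAO under test
import { withTestDb } from "./helpers/withTestDb";    // assumed helper: real DB, rolled back per test

describe("UserDao.findInactiveSince", () => {
  it("returns only users whose last login is older than the cutoff", () =>
    withTestDb(async (db) => {
      const dao = new UserDao(db);
      await dao.insert({ email: "old@example.com", lastLogin: new Date("2020-01-01") });
      await dao.insert({ email: "new@example.com", lastLogin: new Date("2024-01-01") });

      const inactive = await dao.findInactiveSince(new Date("2023-01-01"));

      // If someone later breaks the query, this test points straight at the DAO.
      expect(inactive.map((u) => u.email)).toEqual(["old@example.com"]);
    }));
});
```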
Conversely, notice that controllers are covered by neither acceptance nor unit tests. This is because they don't (and shouldn't) contain any business logic. They are glue code and should be tested separately (see the last section of this article).
Top-down
While designing your software, always keep in mind which types of code constructs are available and which one you are currently working on.
This architecture is perfect for a top-down development approach, which also carries the least cognitive load (i.e. it takes up the least head-space).
For each feature:
- Write acceptance tests which describe a feature.
  - Do not focus on a particular class; acceptance tests should be separated from the app's classes by abstraction layers so they can focus on the behaviour of the system.
- Implement service classes as needed, defining their collaborators & wiring them together (piping input & output data between them to get the result).
- Avoid implementing the collaborators just yet (see the sketch after this list):
  - Externals (adapters, see diagram) should be interfaces or abstract classes, to be implemented later.
  - Internals (logic or DAOs) should be stubbed (methods returning dummy data or, better yet, throwing "TODO" errors).
- Test-drive the internals' implementation with unit tests.
- Wire the acceptance test to the SUT (System Under Test), that is, the service class.
  - Write test doubles to implement the externals (don't use mocking libs; that's a topic for another article).
- Work on all the pieces until the acceptance tests pass.
Repeat for next feature and so on.
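To make this concrete, here's a sketch of a feature mid-workflow: the external is still just an interface, the internal is a "TODO" stub, and the service only defines & wires its collaborators. All the names (CheckoutService, PaymentGateway, orderTotalCents) are hypothetical:

```ts
// External (adapter): only an interface at this point; a test double
// implements it for the acceptance test, a real adapter comes later.
export interface PaymentGateway {
  charge(accountId: string, amountCents: number): Promise<void>;
}

// Internal (logic): stubbed first, then test-driven with unit tests.
export function orderTotalCents(items: { priceCents: number; qty: number }[]): number {
  throw new Error("TODO: test-drive me");
}

// Service: defines collaborators and wires them; it is the SUT of the acceptance test.
export class CheckoutService {
  constructor(private readonly payments: PaymentGateway) {}

  async checkout(accountId: string, items: { priceCents: number; qty: number }[]): Promise<void> {
    const total = orderTotalCents(items);         // pipe data through the internal...
    await this.payments.charge(accountId, total); // ...and out to the external
  }
}
```

At this point the acceptance test can already be wired to CheckoutService (with a test double implementing PaymentGateway), while unit tests drive out the real orderTotalCents.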
A very important principle:
What can go down, should
Extract, extract, extract. Whenever you spot a piece of logic, arithmetic or data transformation in the service, try to extract it into a class or function which can then be unit tested.
The number of execution paths through a service (counting only the service code, not collaborators) should be forced down to a minimum.
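Continuing the sketch above: once orderTotalCents is extracted as a pure function, it gets its own fast unit tests, and the service is left with a single trivial path:

```ts
// The arithmetic that used to live inline in the service, now a pure function...
export function orderTotalCents(items: { priceCents: number; qty: number }[]): number {
  return items.reduce((sum, item) => sum + item.priceCents * item.qty, 0);
}

// ...with its own fast unit tests. The service shrinks to pure wiring:
// one happy path, almost nothing left in it to break.
describe("orderTotalCents", () => {
  it("sums price times quantity across items", () => {
    expect(
      orderTotalCents([
        { priceCents: 500, qty: 2 },
        { priceCents: 100, qty: 1 },
      ])
    ).toBe(1100);
  });

  it("returns 0 for an empty order", () => {
    expect(orderTotalCents([])).toBe(0);
  });
});
```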
With that in place, we can apply a second principle:
What can be tested down, should
Code paths going through a service's collaborators should be tested only in unit tests of those collaborators.
The only thing tested in acceptance tests should be wiring of collaborators.
The benefits
Benefit 1: Fast tests
By forcing most of the logic, and with it most of the testing, down to the unit level, we maximise the amount of coverage provided by fast tests. Overall, test coverage is not lowered, but the total duration of test execution is.
Benefit 2: Easy refactoring
By not having any tests between the lowest layer (unit tests) & the highest layer (acceptance tests), we make it easier to refactor services & their usage of collaborators.
Maybe a service has multiple sub-services which talk to units. Maybe one does and the other doesn't.
All this is subject to change as the developer's understanding of the system grows but also when the customer's requirements shift & evolve.
Refactoring is an activity which shouldn't modify functionality, so why should it cause us to modify our functional tests?
The 3rd test layer
Acceptance & unit tests cover neither deployment configuration nor the deployment process itself.
A 3rd test layer is needed: end-to-end tests. They are slow & tricky to write, typically testing through a web UI or a public HTTP API.
The good thing is, we need just a few of them:
After we deploy our app there are some things we need to verify to be sure everything works. These are marked on the diagram as red dots:
- Is our app reachable from the outside world?
- Did the deployment process successfully deploy the latest version?
- Are configuration variables set to correct values?
  - Like the ones used for connecting to the database, 3rd-party & platform services.
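A minimal post-deployment smoke test covering those red dots might look like this; the /health endpoint, its response shape and the environment variables are assumptions about your particular app:

```ts
// Assumed: the app exposes a /health endpoint reporting its version and DB
// connectivity, and STAGING_URL / EXPECTED_VERSION are set by the CI pipeline.
const BASE_URL = process.env.STAGING_URL ?? "https://staging.example.com";

describe("post-deployment smoke test", () => {
  it("app is up, current and correctly configured", async () => {
    const res = await fetch(`${BASE_URL}/health`); // reachable from the outside world?
    expect(res.status).toBe(200);

    const health = await res.json();
    expect(health.version).toBe(process.env.EXPECTED_VERSION); // latest version deployed?
    expect(health.database).toBe("connected"); // config vars point at the right places?
  });
});
```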
These tests should be written whenever a new red dot comes up. Since the app should be deployed as early in the project as possible, end-to-end tests can be added continuously as the system grows.
They are, of course, run against a deployed app on a testing or staging environment, whose configuration needs to be as close to production as possible.
Emergence
The emerging test pyramid:
- Unit tests: lots of them (counting individual methods, not classes), because we're covering all possible execution paths through units.
- Acceptance tests: fewer, because we're covering only one or two paths through each service.
- End-to-end tests: very few, because we're covering only points of configuration.
To be continued
This assumes only one piece of deployed software. What about apps with multiple pieces communicating through APIs? Doesn't that require more end-to-end tests to cover all the API communication?
No. Use API tests for that.