Bruno Paz

#Discuss - API Level Functional / Integration Tests in a Microservices Architecture

Hello.
Consider a project of around 10 microservices, some very simple with no external dependencies and others more complex, depending on 2 or 3 other services.

How would you approach API Level Functional / Integration testing for each service?

Background:

  • All the services are developed by the same development team (for now).
  • The development team is a regular Scrum team (~5 API devs).
  • Right now, we are using a simplified version of Git Flow (feature branch -> develop -> master). The end goal is to get closer to a CI/CD workflow, but that will take time.
  • Unit tests and static analysis run on each PR, blocking the merge if they fail.
  • A basic set of E2E frontend tests covers the critical use cases.

Some possible approaches:

Run the tests on each PR?

Does it make sense for functional tests to also be executed on each PR?

Running the tests on each PR would allow us to detect problems earlier, but it is more complex from an ops perspective, since each run needs to spawn a new instance of the service under test as well as all of its required dependencies, such as databases.
Tools like Kubernetes can help with that, but it is still extra work for the ops team.
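
To make the per-PR option concrete, here is a minimal sketch of what spawning the service under test plus a throwaway database could look like with pytest and the testcontainers library. The image name, port, environment variable, and /health endpoint are assumptions rather than real project values, and in practice the service and database containers need to share a network (docker compose or Kubernetes takes care of that wiring).

```python
# Sketch only: spawn the service under test plus a throwaway Postgres
# instance for a PR build. Image name, port, env var and /health endpoint
# are hypothetical. Requires pytest, requests and testcontainers.
import pytest
import requests
from testcontainers.postgres import PostgresContainer
from testcontainers.core.container import DockerContainer
from testcontainers.core.waiting_utils import wait_for_logs


@pytest.fixture(scope="session")
def service_url():
    with PostgresContainer("postgres:15") as postgres:
        service = (
            DockerContainer("my-service:pr-build")  # image built from the PR branch
            .with_env("DATABASE_URL", postgres.get_connection_url())
            .with_exposed_ports(8080)
        )
        # Note: for the service container to reach Postgres, both containers
        # must share a network; docker compose / Kubernetes handles this.
        with service:
            wait_for_logs(service, "listening", timeout=30)
            host = service.get_container_host_ip()
            port = service.get_exposed_port(8080)
            yield f"http://{host}:{port}"


def test_service_boots_against_fresh_database(service_url):
    # Smoke check that the service started against the fresh database.
    response = requests.get(f"{service_url}/health")
    assert response.status_code == 200
```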

We can use data fixtures or similar to populate databases with test data, but what about service dependencies? Should we mock the dependent services, or spawn a real instance of each one?
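
For the mocking route, the stub does not have to be anything fancy. As a rough sketch (the "orders" dependency, its endpoint, and the assumption that the service under test reads the dependency's base URL from configuration are all made up), a few lines of standard-library Python are enough to stand in for a dependent service:

```python
# Sketch of a throwaway HTTP stub standing in for a dependent service.
# The "orders" service and its /orders/42 endpoint are hypothetical; the
# service under test is assumed to read the dependency URL from config.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class OrdersStub(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/orders/42":
            body = json.dumps({"id": 42, "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet


def start_orders_stub(port=9090):
    # Point the service under test at http://<host>:9090 instead of the
    # real "orders" service for the duration of the test run.
    server = HTTPServer(("0.0.0.0", port), OrdersStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```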

The second option becomes harder when you have many services with their own dependencies, and launching all the required infrastructure on each PR is costly. I also believe we should be able to test each service in isolation.

Using mocks for dependent services allows us to test each service in isolation, but it does not guarantee that all the communication between services works, or that no breaking change was introduced in the API responses. "Contract testing" can minimize that risk, though.
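
For reference, this is roughly what a consumer-side contract test looks like with the pact-python library. The service names, endpoint, and response fields below are made up; the generated pact file would then be verified against the real provider in its own pipeline.

```python
# Consumer-side contract test sketch using pact-python.
# Service names, endpoint and response fields are hypothetical.
import atexit
import requests
from pact import Consumer, Provider

pact = Consumer("checkout-service").has_pact_with(
    Provider("orders-service"), port=1234
)
pact.start_service()
atexit.register(pact.stop_service)


def test_get_order_contract():
    (pact
     .given("order 42 exists")
     .upon_receiving("a request for order 42")
     .with_request("GET", "/orders/42")
     .will_respond_with(200, body={"id": 42, "status": "shipped"}))

    with pact:
        # In a real test the consumer's own HTTP client code would make
        # this call; the pact mock server records and verifies it.
        response = requests.get(f"{pact.uri}/orders/42")
        assert response.json()["status"] == "shipped"
```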

Run the tests on a dedicated "integration tests" environment?

Have a dedicated integration tests environment with all the services running and a set of data fixtures that is "compatible" with all the services.
This is easy to operate from an ops point of view (just one more environment) and could easily catch configuration errors, like a service pointing to the wrong URL of another service.

It would also allow us to detect breaking changes in service responses without the need for contract testing. But in this scenario we are testing all the services together, with a common set of data, and I think each service should also be testable in isolation.

A mix of both

This would probably be the ideal solution, but it could be more complex to maintain and take more time to implement.

  • On each PR, spawn an instance of the service under test plus all the required data stores (e.g., the database).

  • The services that the service under test depends on should be mocked.

  • Have contract testing to detect breaking changes.

  • A failure in these tests blocks the merge of the PR until fixed.

  • After a successful merge, run the tests again in an integration test environment where all services are running.

My main question in this scenario is how to handle data fixtures in the integration test environment. I don't want to have different test suites for isolated functional tests and integration tests; that would be too costly to maintain for such a small team.
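
One way to keep a single suite is to make the tests environment-agnostic and push everything environment-specific (target URL, whether to seed fixtures) into configuration. A rough sketch, where SERVICE_BASE_URL, SEED_FIXTURES and the fixture-loading endpoint are all hypothetical:

```python
# One test suite, two targets: the per-PR instance or the shared
# integration environment. SERVICE_BASE_URL, SEED_FIXTURES and the
# /test-support/fixtures endpoint are hypothetical.
import os
import pytest
import requests

BASE_URL = os.environ.get("SERVICE_BASE_URL", "http://localhost:8080")


@pytest.fixture(scope="session", autouse=True)
def seed_data():
    # Per-PR runs own their database and can load fixtures freely; on the
    # shared integration environment the data is assumed to already exist,
    # so seeding is skipped there.
    if os.environ.get("SEED_FIXTURES") == "1":
        requests.post(f"{BASE_URL}/test-support/fixtures",
                      json={"dataset": "baseline"})


def test_list_products():
    # The same assertion runs unchanged against both environments.
    response = requests.get(f"{BASE_URL}/products")
    assert response.status_code == 200
```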


Let's start the discussion. What do you think would be the best approach?

I know functional tests and integration tests are different kinds of tests, and it would probably make sense to have both (similar to the third scenario), but keep the constraints in mind. What do you think should be the priority: functional or integration? We are not Netflix, and we would like the simplest possible workflow that gives us more confidence in releases, towards the end goal of CI/CD.

Top comments (5)

Alex Rudenko

This is an excellent topic, and I would like to share my thoughts based on my experience.

First, I think it makes sense to classify dependencies of a microservice into three categories: infrastructural (database, Redis instance, etc.), internal (other services developed in your company) and external (services out of your control).

Second, it makes sense to classify (really basic classification) the layers of a typical service architecture:
1) gateway layer (provides access to 3rd-party services via remote calls)
2) data layer (encapsulates the data sources of your microservice; here I mean databases, caches, etc.)
3) service layer (business logic)
4) API layer (connects your APIs with your business layer)

Third, you need to classify your tests. I use the following classification:
1) unit tests. Simple tests that can test the code at any level without mocks/stubs/etc., and without access to the network or databases.
2) integration tests (here I mean the integration with other services and data sources, not how your code pieces integrate with each other). These tests should exercise the gateway and data layers using a real instance of the underlying system. It's ok to use mocks only in cases where no test system is available (for example, a 3rd-party API that provides only a production system and does something important like payments).
3) component tests. These tests should cover your APIs and whatever the essential top-level "components" of your system are. These tests should use mocks for the gateway and data layers.

So with this in mind, I think the following is a reliable (and practical) approach:
1) cover every piece of code of any importance with unit tests. Remember, if you are forced to mock something for a unit test, it's most likely not a good unit test. Unit tests should run without any dependencies on anything except the code itself. That makes them fast, and you should probably run them first. It's good to force unit tests to fail if they try to reach the network or a database.
2) write integration tests for everything in the gateway and data layers. They should run against real (test) instances of the related service or database. For internal services, I suggest having a separate environment where all these services are always running; we call it the development environment. For external services, use the test system provided by the 3rd party. For databases and similar infrastructure, I suggest running a local version using docker compose. Alternatively, you can have a test instance of your data sources running somewhere constantly. Depending on the number of your dependencies, you may choose whether to run the integration tests every time or only when there were changes to the data or gateway layers.
3) write component tests for every API resource and every action you have in your service. Also test error cases. I would say you can skip mocking the data layer because it's too much work; just use the real data layer code with a fresh copy of the data source started via docker (just like in the integration tests; see the sketch after this list). Also, I find it better to populate the data store with test data using the data layer code instead of loading SQL dumps or another form of snapshots. You can choose whether to invest in mocks for the gateway layer depending on the type of service being integrated (external vs. internal) and the nature of the integration.
4) Additionally, have an end-to-end test which runs immediately after your service is deployed to the development environment. The e2e test should act as a real consumer of your service, and it should cover basic use cases.
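
To illustrate point 3, a component test could look roughly like this. create_app, ProductRepository and the /products endpoint are hypothetical stand-ins for your own code (a Flask-style app is assumed), and the data is seeded through the data layer rather than via SQL dumps:

```python
# Component test sketch for point 3: real data layer, fresh database from
# docker, data seeded through the data layer code. create_app,
# ProductRepository and /products are hypothetical stand-ins.
import pytest
from testcontainers.postgres import PostgresContainer

from myservice.app import create_app            # hypothetical app factory
from myservice.data import ProductRepository    # hypothetical data layer


@pytest.fixture()
def client():
    with PostgresContainer("postgres:15") as postgres:
        app = create_app(database_url=postgres.get_connection_url())
        # Seed through the data layer instead of loading SQL dumps.
        ProductRepository(app.db).create(name="test product", price=10)
        yield app.test_client()


def test_list_products_returns_seeded_product(client):
    response = client.get("/products")
    assert response.status_code == 200
    assert any(p["name"] == "test product" for p in response.get_json())
```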

Let me know what you think or if I can clarify some points.

Bruno Paz

Great explanation! Thank you!
Your points 1 and 4 are already covered that way.

Regarding 2 and 3, that is more or less what I was thinking.
My main doubt was about which would be the best to start with: integration or component testing. I would love to have both, but with our current resources we need to focus on what gives the most value and safety on releases, since we are still finding our way to a more continuous delivery approach.

So I think we will start with integration tests running as you describe in point 2.
In our case, we have one service that relies on an external provider; although they have a staging environment, it's not very good, and many test cases won't be testable without mocking it. So we will use mocks for that provider, but everything else will be a real instance of the service.

We have 1 or 2 services that are shared between different projects / teams. For those, we will also try to do some component testing.

edA‑qa mort‑ora‑y

Regardless of the amount of work involved, high-level use-case driven integration tests are the most valuable of all tests. These are the ones that test something the user will do, and don't rely on any short-cuts in the system (though may skip the top-level UI and work on the front-end API instead).

Nobody cares if the modules work on their own if the service as a whole is failing.

Lower-level tests, unit tests, module tests, etc. are all great tools for debugging, and to catch errors early, but are not substitutes for high-level use-case tests.

You can reduce some coverage at the highest level, by only testing a single path through each module, and not testing all variants. This is then backed by more detailed module tests. It's a good way to lower cost without a significant loss of use-case testing.

AlexanderJersem

I think you can use a combination of these methods. On each PR, you can spawn an instance of the service under test plus all the required data structures like the database. The services that the service under test depends on should be mocked. This approach will allow you to test each service in isolation and detect problems earlier. Additionally, you can have contract testing to detect breaking changes, and a failure on these tests would block the merge of the PR until it's fixed. After a successful merge, you can have an integration test environment where all services are running and run the tests there. Regarding data fixtures on the integration test environment, you can consider having a common set of data that should be compatible with all the services.

GodhanYadav

In my opinion, the mix of both approaches seems to be the ideal solution, where you can run functional tests on each PR, mock the dependent services, and use contract testing to detect breaking changes. Also, having an integration test environment where all services are running would allow you to test the services together and catch any configuration errors. You can also check out functional test examples to choose the appropriate testing tools that suit your requirements. Overall, the priority should be on functional testing as it ensures that each service works as expected, and then integration testing should follow to test the interactions between services.