In this post I'll share an upgrade I made to my API tests!
To test our system I use system tests (integration / API / service tests): I seed data in the database, make a request to an endpoint, and then check the results in the database.
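The seed → request → check shape described above can be sketched like this in Python (the "database", the seed helper, and the endpoint are in-memory stand-ins I made up for illustration, not the real system):

```python
# Minimal sketch of a seed -> request -> check system test.
# "database" and "api_confirm_order" are hypothetical stand-ins.

database = {}

def seed_order(order_id, status):
    # Seed: insert the data the test depends on.
    database[order_id] = {"id": order_id, "status": status}

def api_confirm_order(order_id):
    # Stand-in for a real HTTP call to the endpoint under test.
    database[order_id]["status"] = "confirmed"
    return {"status_code": 200, "body": {"id": order_id}}

def test_confirm_order():
    seed_order("order-1", "pending")                      # 1. seed the database
    response = api_confirm_order("order-1")               # 2. call the endpoint
    assert response["status_code"] == 200                 # 3. check the response
    assert database["order-1"]["status"] == "confirmed"   # 4. check the database

test_confirm_order()
```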
The problem: when a test fails, we generally don't have much context about what happened, so we end up re-running the tests locally to collect more info.
- These tests are also great when devs make big refactors that don't change the application rules
My solution: in each test I create a context variable that, during execution, is filled with IDs, responses, and data from the database. After execution, if the test failed, we add this data to the XML report.
How I do this:
Before the test:
- Create a variable named testContext
- Fill it with basic data from our seed (the main data the test handles)
During the test:
- Collect API responses and save them to the testContext
- Collect data from the database and save it to the testContext before the validations
After the test:
- If the test fails, capture the necessary info (test suite and name, failure reason) and add it to the testContext
- Save the testContext as a JSON file
This helps our debugging: when a test fails, we can check this JSON to see whether the initial data is correct, whether the API responses were OK, and how the data was saved to the database, all inside a JSON of about 50 lines hahaha.
With a JSON like this, plus the logs, it's easy to find the reason the test failed.
I will give you an example of this JSON:
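A context file in this style might look something like the following (every name and value here is hypothetical, just to show the shape):

```json
{
  "test": "confirm order",
  "suite": "orders-api",
  "seed": { "orderId": "order-1", "status": "pending" },
  "apiResponses": [
    { "endpoint": "POST /orders/order-1/confirm", "statusCode": 200 }
  ],
  "databaseAfter": { "orderId": "order-1", "status": "pending" },
  "failedReason": "expected status 'confirmed' but found 'pending'"
}
```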
After that, if the test is running in the pipeline, we add the JSON to the test XML report (I will write a post just about the XML handling).
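One way this attachment could work, as a rough sketch (assuming a standard JUnit-style XML report; the report layout and element names may differ per test runner, and the sample data is made up):

```python
import xml.etree.ElementTree as ET

# Hypothetical JUnit-style report with one failing test case.
SAMPLE_REPORT = """<testsuite name="orders-api" tests="1" failures="1">
  <testcase name="confirm order"><failure message="status mismatch"/></testcase>
</testsuite>"""

def attach_context(report_xml, test_name, context_json):
    # Append the context JSON as <system-out> on the failing test case,
    # so CI report viewers show it next to the failure.
    root = ET.fromstring(report_xml)
    for case in root.iter("testcase"):
        if case.get("name") == test_name and case.find("failure") is not None:
            ET.SubElement(case, "system-out").text = context_json
    return ET.tostring(root, encoding="unicode")

updated = attach_context(
    SAMPLE_REPORT, "confirm order", '{"failedReason": "status mismatch"}'
)
```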