I haven't posted here in quite a while. It's been a busy year!
One of the more challenging problems I've run into was getting our API tests to a place where they can run more quickly. I wrote about API testing in an older post here.
In CI, our API tests run with one worker. A simple `npm test` kicks things off, sends results to a log file, and produces a nicely formatted HTML report thanks to jest-html-reporters (it even has dark mode!). However, as the number of tests increases, so does execution time.
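That single-worker run is essentially a sketch like this (the flags and log file name here are illustrative, not our exact setup):

```bash
# Minimal sketch of the single-worker CI step; the log name is made up.
# --runInBand tells Jest to run all tests serially in one process.
npm test -- --runInBand 2>&1 | tee api-tests.log
```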
I describe how we accomplished sharding in our Playwright test pipelines in this other post. Unfortunately, Jest does not make this easy whatsoever.
Even though Jest does support a `--shard` option, similar to Playwright, there doesn't appear to be any out-of-the-box solution for merging reports. That means test results live in isolation in their own shards and in their own pipelines. Playwright offers this feature in a `merge-reports` command that wraps a nice bow on all of the reports generated by each shard.
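For the curious, here's roughly what the two CLIs look like side by side (the shard counts and directory name below are illustrative):

```bash
# Jest (28+): run one slice of the suite. There's no built-in command
# to merge the per-shard reports afterwards.
npx jest --shard=1/4

# Playwright: each shard emits a blob report, and a single command
# stitches them all into one HTML report.
npx playwright test --shard=1/4 --reporter=blob
npx playwright merge-reports --reporter=html ./blob-reports
```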
I won't post too much code for now. Below is the general gist of how we ended up sharding our Jest API tests.
## Giving up on the report merge
No feasible solutions exist to merge HTML reports, so we ended up creating an HTML "landing page" with clickable links to reports generated by each shard. At first, I began to experiment with jest-stare, but my dedication to dark mode in the reports was too much to overcome! At the same time, using this library also meant outputting test results to JSON, then merging those JSON results by hand, then converting those to an HTML report, and then trying to produce some kind of custom dark mode styling. I felt like I was sinking too much time there, and since we were already producing reports with jest-html-reporters, the quickest way out was to just serve up the individual reports.
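Generating that landing page is straightforward. Here's a minimal sketch, assuming four shards and a `shard-N/report.html` naming scheme (both assumptions, not our exact script):

```bash
#!/usr/bin/env bash
# Hypothetical landing page generator; the shard count and report
# paths are assumptions for illustration.
SHARD_COUNT=4

{
  echo "<html><body><h1>API test reports: build $BITBUCKET_BUILD_NUMBER</h1><ul>"
  for i in $(seq 1 "$SHARD_COUNT"); do
    echo "  <li><a href=\"shard-$i/report.html\">Shard $i report</a></li>"
  done
  echo "</ul></body></html>"
} > landing_page.html
```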
So this landing page ends up being uploaded in a step in our pipeline, kind of like this:

```bash
aws s3 cp landing_page.html $S3_BUCKET/$BITBUCKET_BUILD_NUMBER/index.html
```
The individual shard child pipelines are instructed to upload their own results, which can then be accessed via the index page when all is said and done.
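Each shard's upload step looks something like the following; the report path, log name, and `SHARD_INDEX` variable are hypothetical, but they line up with the links on the landing page above:

```bash
# Hypothetical per-shard upload; jest-html-reporters' output path is
# configurable, so the file names here are illustrative.
aws s3 cp html-report/report.html "$S3_BUCKET/$BITBUCKET_BUILD_NUMBER/shard-$SHARD_INDEX/report.html"
aws s3 cp api-tests.log "$S3_BUCKET/$BITBUCKET_BUILD_NUMBER/shard-$SHARD_INDEX/api-tests.log"
```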
## Aggregating test results: Bonkers use of Bash
When pulling test results from a single Jest run, we were using regex to scrape the test results from a log file. For each individual shard, we would download each log file and `cat` them together. There were some seriously awful-looking Bash scripts that had been put together to find the lines prefixed with `Tests:` in the log file and then attempt to aggregate the number of tests passed and failed. That was when I learned about `BASH_REMATCH`, which sounds like an old NES game or something.
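The download-and-concatenate part is the easy bit; a sketch under the same assumed naming scheme:

```bash
# Illustrative only: pull each shard's log back out of S3 and glue
# them together. The shard count and paths are assumptions.
mkdir -p logs
for i in 1 2 3 4; do
  aws s3 cp "$S3_BUCKET/$BITBUCKET_BUILD_NUMBER/shard-$i/api-tests.log" "logs/shard-$i.log"
done
cat logs/shard-*.log > combined.log
```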
Here's a little snippet of some of that nastiness:
```bash
while IFS= read -r line; do
  # Match summary lines that include failed, passed, and total counts,
  # e.g. "Tests:  2 failed, 38 passed, 40 total"
  if [[ "$line" =~ Tests:\ *([0-9]+)\ failed,\ *([0-9]+)\ passed,\ *([0-9]+)\ total ]]; then
    FAILED_TESTS_TOTAL=$((FAILED_TESTS_TOTAL + ${BASH_REMATCH[1]}))
    PASSED_TESTS_TOTAL=$((PASSED_TESTS_TOTAL + ${BASH_REMATCH[2]}))
    TOTAL_TESTS_TOTAL=$((TOTAL_TESTS_TOTAL + ${BASH_REMATCH[3]}))
  fi
done < combined.log
```
To my eyes, this looks pretty awful, but thankfully Jest is consistent about the summary lines it writes to the log. I can't say with confidence that this will keep working across future Jest versions, though.
## Final thoughts, and a small plea for help?
I think it's amazing that Playwright had the foresight to put a lot of work towards reporting, which in my opinion is one of the more vital and underrated pieces of test engineering. But why is this so hard to do in Jest? I know that Jest was originally intended as a quick and lightweight test runner for frontend unit tests, but I feel confident that I'm not the only one using Jest as a test runner for API testing. If anyone out there is also running into scalability challenges in Jest and reporting, give a shout!