Introduction
At Chili Piper, one of our ongoing challenges is ensuring our frontend applications stay in sync with backend changes.
For context: we use auto-generated clients via OpenAPI Codegen to maintain a strongly typed contract between frontend and backend.
A recurring issue we've faced is the backend team deprecating an endpoint or field and removing it after a few weeks. The problem is that the frontend team often doesn't stop using the deprecated field in time, leading to runtime failures on deployment.
This post outlines our approach to addressing this problem using contract tests with TypeScript and OpenAPI Codegen.
Minimal Setup for Contract Tests
To set up contract tests effectively, it's essential to have a mechanism to store the currently deployed versions for each service or application.
In our case, we use a YAML file in one of our repositories. Deployments are triggered when a version is changed through a PR, thanks to our diligent SRE team.
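A file like this can be read with a few lines of TypeScript. The service names and format below are invented for illustration; our real file lives in a separate repository and may look different:

```typescript
// Illustrative sketch: parse a minimal "service: version" YAML file into a map.
// (A real setup would likely use a proper YAML library such as js-yaml.)
function parseDeployedVersions(yaml: string): Map<string, string> {
  const versions = new Map<string, string>();
  for (const line of yaml.split("\n")) {
    // Match lines like "billing-api: 2.14.0"
    const match = line.match(/^([\w-]+):\s*(\S+)\s*$/);
    if (match) versions.set(match[1], match[2]);
  }
  return versions;
}

// Hypothetical contents of the deployed-versions file:
const example = `
billing-api: 2.14.0
scheduling-api: 5.3.1
`;
```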
Steps to Implement Contract Tests
- Determine the deployed versions: Detect the versions of backend services currently deployed in the target environment.
- Generate API clients dynamically: Use OpenAPI Codegen to generate clients based on the detected backend versions.
- Check out frontend versions: For each frontend application, check out the tag corresponding to the deployed version.
- Override API clients: Replace the API clients in the frontend apps with the environment-based generated code.
- Run TypeScript checks: Use TypeScript to validate that no API contracts have been broken.
This process ensures that any breaking change in the backend will immediately fail the TypeScript checks, providing early feedback before deployment.
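To make the last step concrete, here is a toy illustration of how the type check catches the break. The types are invented for this example, not our actual generated client:

```typescript
// Hypothetical generated client types, before and after a breaking change.
interface UserV1 { id: string; legacyId: string }
interface UserV2 { id: string } // the backend removed legacyId

// Frontend code written against the old client:
function renderBadge(user: UserV1): string {
  return user.legacyId.toUpperCase();
}

// After the client is regenerated from the deployed backend, a call site
// passing the new type stops compiling, so CI fails before deployment:
//
//   renderBadge(userV2);
//   //          ~~~~~~ Property 'legacyId' is missing in type 'UserV2'
```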
Performance Considerations
One downside of this approach is its computational expense.
With around 15+ micro-frontend (MFE) applications, running this check across all of them takes about 10 minutes, even when parallelized across three runners.
To improve performance, we implemented several optimizations:
Skipping checks if there is no breaking change: We use openapi-diff to detect whether the deployment contains a breaking change. If none is found, we skip the contract tests, reducing runtime to just 2 minutes.
Caching dependencies: By caching dependencies, we reduce redundant installations and builds.
Grouping tests: If multiple applications share the same monorepo tag, we run their checks together to avoid redundant executions.
Using TypeScript project references with incremental mode: This allows us to reuse previously built code, improving build times.
Skipping frontend checks: Unlike backend deployments, we have no quick way to tell whether a frontend change is breaking (there is no openapi-diff equivalent for frontend code). However, we can detect version bumps and skip runs for unaltered apps.
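The skip and grouping logic can be sketched roughly as follows. This is a simplification with invented names; in practice the conditions also depend on whether the deploy was backend- or frontend-initiated:

```typescript
interface App {
  name: string;
  tag: string;     // monorepo tag this app was released from
  version: string; // currently deployed version
}

// Returns a plan mapping monorepo tag → app names to check, so that:
// 1. everything is skipped when openapi-diff found no breaking change,
// 2. apps whose deployed version did not change are skipped,
// 3. remaining apps sharing a tag are grouped into a single check.
function planChecks(
  backendHasBreakingChange: boolean,
  apps: App[],
  previousVersions: Map<string, string>,
): Map<string, string[]> {
  const plan = new Map<string, string[]>();
  if (!backendHasBreakingChange) return plan; // nothing to do

  for (const app of apps) {
    // Unaltered apps were already validated in a previous run.
    if (previousVersions.get(app.name) === app.version) continue;
    const group = plan.get(app.tag) ?? [];
    group.push(app.name);
    plan.set(app.tag, group);
  }
  return plan;
}
```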
One of the most interesting improvements was leveraging TypeScript project references with incremental mode.
This allows us to reuse compiled output from previous runs, significantly reducing build times.
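For reference, enabling project references boils down to a tsconfig along these lines (the package path is illustrative) and building with `tsc --build`, which reuses the `.tsbuildinfo` state from previous runs:

```json
{
  "compilerOptions": {
    "composite": true,
    "incremental": true,
    "outDir": "./dist"
  },
  "references": [
    { "path": "../shared-package" }
  ]
}
```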
But the best improvements were the ones where we simply skipped checks altogether.
Why Contract Tests Instead of Just Relying on openapi-diff?
While openapi-diff helps detect breaking changes, it doesn't tell us which frontend applications are affected.
Contract tests allow us to:
- Identify which apps are still using deprecated endpoints.
- Detect which apps need to be deployed together. For example, if a breaking change affects app A but not app B, we only need to deploy app A.
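Deciding which apps must ship with the backend change then falls out of the per-app check results. A hypothetical helper, matching the app A / app B example above:

```typescript
// Given per-app contract-check results, list the apps that must be
// deployed together with the backend change (names are illustrative).
interface CheckResult { app: string; passed: boolean }

function appsToDeploy(results: CheckResult[]): string[] {
  return results.filter(r => !r.passed).map(r => r.app);
}
```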
We considered using Pact.io, but it requires maintaining separate contract files, which can be forgotten or become outdated.
With TypeScript, the contract is directly enforced by the code, eliminating the risk of stale contracts.
Handling False Positives in TypeScript Project References
While TypeScript project references help improve performance, they introduce a key issue:
- TypeScript builds and reports errors for all files in a project, even unused ones.
- Example: If app A depends on a shared package B, and package B has an error in an unused file, app A will still fail the build, leading to false positives.
This is especially problematic because our automation blocks deployments, and false positives would require us to deploy more apps than actually needed.
Our Solution
To mitigate this, we created a small wrapper around the TypeScript build tool that suppresses errors from unused files, ensuring that only relevant issues cause test failures.
You can check out our Gist with the implementation here.
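At its core, the filtering step amounts to something like the sketch below. This is a simplification (the real wrapper drives `tsc --build` and collects its diagnostics); the names here are illustrative:

```typescript
// A diagnostic as reported by the TypeScript build, reduced to what we need.
interface Diagnostic { file: string; message: string }

// Keep only diagnostics for files the app actually imports, so errors in
// unused files of shared packages don't fail the check.
function filterDiagnostics(
  diagnostics: Diagnostic[],
  usedFiles: Set<string>,
): Diagnostic[] {
  return diagnostics.filter(d => usedFiles.has(d.file));
}
```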
We initially considered going even further by leveraging AST (Abstract Syntax Tree) analysis to only report type errors for actually imported code. The current file-based filtering approach means that if function A has a type error but your app only uses function B, it will still report an error. With AST-based filtering, we could theoretically ignore errors from unimported code, reducing noise in the checks.
However, after discussing this with a friend who has more experience with ASTs, he pointed out that this would essentially mean rewriting tree shaking, which is far from trivial. Given the complexity involved, we opted to stick with file-based filtering for now. That said, we’re actively keeping an eye out for existing public implementations of tree shaking that we could potentially leverage, such as those used in Webpack or other bundlers.
Conclusion
By combining OpenAPI Codegen, TypeScript, and contract tests, we have built a robust solution to detect breaking API changes before they impact production.
Key takeaways:
- The solution is resource-intensive, but the optimizations described above keep that cost manageable.
- This contract-testing strategy is not the simplest to set up, but it is essentially maintenance-free: you set it up once and forget about it.
- Whatever solution you pick, make sure it is something that will help developers more than block them. In my experience, false positives and long CI times are the main issues, so make sure to address those.
If you're facing similar issues with breaking API changes, I highly recommend setting up a contract testing strategy tailored to your stack!