Richard Zampieri

Unit Test Generation with Early AI

Accelerating Unit Test Generation and Improving Code Quality

Recently, I had the opportunity to take a deep dive into Early, an AI agent designed for automatic unit test generation. As someone who regularly works with TypeScript and the ExpressoTS Framework, I was keen to see how Early could streamline my workflow. I decided to test the VS Code extension they built on @expressots/shared, a new NPM library I was developing.

Initial Impressions

The first thing that struck me about Early was its ability to automatically generate unit tests for my existing codebase. Instead of crafting tests from scratch, I could focus on refining the generated tests and improving my code's robustness and testability. This shift significantly accelerated my development process. Another interesting aspect: roughly 83% of the generated code needed no adjustment at all; it worked out of the box and increased my code coverage. That saved me a huge amount of time.

Time Savings and Increased Coverage

In just 8.5 hours, I managed to:

  • Generate unit tests for approximately 3,000 lines of code.
  • Fix issues and enhance code testability.
  • Achieve a total code coverage of 88% with 96 tests.

The fact that I could accomplish all this in a single day was remarkable. Ideally, unit tests are written while you are developing your functions; I wrote them after the fact, with a library already in place, so some adjustments were necessary to make the code testable.

Positive Outcomes

Automatic Generation of Edge Case Tests. For instance, it generated unit tests for scenarios involving empty strings, even when parameters were required:

// Prints a green success message followed by the component name and a check mark.
export function printSuccess(message: string, component: string): void {
  stdout.write(chalk.green(`${message}:`, chalk.bold(chalk.white(`[${component}] ✔️\n`))));
}

Initially, I wouldn't have created tests for empty strings in such a straightforward function. However, Early's approach promoted defensive programming practices, pushing me to handle edge cases I might have overlooked.
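
To make this concrete, here is a sketch of the kind of empty-string test Early produced for this function. This is my own reconstruction, not its literal output, and the import path is an assumption:

```typescript
import { printSuccess } from "@expressots/shared"; // assumed export path

describe("printSuccess", () => {
  it("still writes a formatted line when message and component are empty", () => {
    // Spy on stdout so nothing is actually printed while the test runs.
    const writeSpy = jest
      .spyOn(process.stdout, "write")
      .mockImplementation(() => true);

    printSuccess("", "");

    // Even with empty inputs, exactly one line containing the empty
    // component brackets should be written.
    expect(writeSpy).toHaveBeenCalledTimes(1);
    expect(writeSpy).toHaveBeenCalledWith(expect.stringContaining("[]"));

    writeSpy.mockRestore();
  });
});
```

Mocking stdout.write keeps the test silent and lets the assertion focus on the formatted payload rather than the terminal output.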

Detection of Potential Issues

While refining the generated tests, I encountered a type mismatch issue:

Problem: jest.fn() returns any, but process.exit returns never, leading to a type mismatch in TypeScript.

Solution: Modify the mock to match the process.exit signature, ensuring type correctness.

This discovery prompted me to adjust my code for better type safety, highlighting how Early can help identify subtle issues that might otherwise go unnoticed.
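
For illustration, here is a minimal sketch of the fix (my own, not Early's generated code): the mock throws, which satisfies process.exit's never return type without any casts to any. The Compiler import path is hypothetical.

```typescript
import { Compiler } from "../src/compiler"; // hypothetical path to the module under test

describe("Compiler.loadConfig", () => {
  it("surfaces process.exit being called with code 1", async () => {
    // Throwing is the only way to honour the `never` return type, so the
    // mock now matches process.exit's real signature.
    const exitSpy = jest.spyOn(process, "exit").mockImplementation((code) => {
      throw new Error(`process.exit() was called with code ${code}`);
    });

    await expect(Compiler.loadConfig()).rejects.toThrow(
      "process.exit() was called with code 1",
    );

    exitSpy.mockRestore();
  });
});
```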

Areas for Improvement

Despite the overall positive experience, I encountered a few challenges that, if addressed, could enhance Early's usability:

  • Library Version Compatibility. Early generated tests using deprecated Jest methods in some cases, for example:

Using Jest 29.7:

// Generated version (deprecated toThrowError)
expect(Compiler.loadConfig()).rejects.toThrowError("process.exit() was called with code 1");

// Corrected version

expect(Compiler.loadConfig()).rejects.toThrow("process.exit() was called with code 1");
  • Customization Options for Test Generation. While generating tests for edge cases was beneficial, in some scenarios it might not be necessary:

Observation: Generating tests for every possible input, including empty strings, can sometimes be overkill.

Suggestion: Introduce options to customize the level of test generation, allowing developers to opt in to defensive programming tests as needed.

  • User Interface Enhancements in the VS Code Extension. Navigating between Early and other tools highlighted some UI limitations:

Test Results Visibility: I had to switch between Early and Jest to see which tests passed or failed.

File Tree State: The project hierarchy in Early collapses when switching back from other applications, requiring me to reopen folders repeatedly.

Suggestion: Improve the UI to display test results within Early, mirroring Jest's structure. Maintaining the state of the file tree would also enhance the user experience.

(Screenshot: Early + Jest view)

  • Mocking and Type Safety. The issue with jest.fn() returning any suggests a need for more precise mocking:

Observation: Using any types in mocks can lead to type mismatches and potentially mask bugs.

Suggestion: Refine mock generation to use accurate signatures, promoting better type safety and reducing the need for manual corrections.
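
As a rough sketch of what more precise generation could look like (assuming the classic @types/jest typings; the generics differ if you import jest from @jest/globals), the generic parameters on jest.fn can pin the mock to process.exit's real signature:

```typescript
// Loosely typed: compiles, but its `any` return type hides the mismatch
// with process.exit's `never` return type.
const looseExit = jest.fn();

// Precisely typed: the return type is `never` and the parameters mirror
// process.exit, so misuse is caught at compile time.
const typedExit = jest.fn<never, Parameters<typeof process.exit>>((code) => {
  throw new Error(`process.exit() was called with code ${code}`);
});
```

If the generated mocks took the second form, the type mismatch I ran into above would never reach the developer.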

Conclusion

Overall, my experience with Early was highly positive. The tool significantly accelerated my unit testing process, allowing me to focus on refining tests rather than writing them from scratch. It also encouraged me to consider edge cases and improve my code's robustness.

The areas for improvement are relatively minor and revolve around enhancing usability and customization. Addressing these would make the tool an even more powerful ally in software development.

Kudos to the Early team for their excellent work! I'm excited to see how the tool evolves and would be happy to continue providing feedback to help refine it further.

Extras (CodeCov + Early)

A great way to enhance your code quality and coverage measurement is by integrating Codecov. Combining Early AI with Codecov creates a powerful first line of defense for ensuring high-quality code from the start.
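
For Codecov to pick up the results, Jest needs to emit an lcov report. Here is a minimal jest.config.ts sketch, reflecting my own typical setup rather than the exact configuration of @expressots/shared:

```typescript
import type { Config } from "jest";

const config: Config = {
  preset: "ts-jest",                   // assumption: TypeScript tests run via ts-jest
  collectCoverage: true,
  coverageDirectory: "coverage",
  coverageReporters: ["lcov", "text"], // lcov feeds Codecov; text prints a summary locally
};

export default config;
```

A CI step can then run `npx jest --coverage` and hand `coverage/lcov.info` to the Codecov uploader.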

Here is the repo in case someone wants to check it out: @expressots/shared

Badges:

(Screenshot: ExpressoTS shared repo badges)

CodeCov:

(Screenshot: CodeCov)

Drilling deeper into CodeCov:

(Screenshot: CodeCov)

You'll have clear insights into which parts of your code were covered and which weren't, making it easier to identify areas for improvement.

(Screenshot: CodeCov)
