Christian Duarte

Lab 7 - Jest Testing

Improvements...

This week my task was to add testing to my project and to update my CONTRIBUTING.md to explain how to run the test suite.

Tools...

The tools I chose this week were Jest and Nock.

Jest

Jest is a JavaScript testing framework, distributed as an npm package, with a companion VS Code extension. It can be used within a project to provide a test suite with sectioned tests for different functions (at least in my use case). I had used Jest before in a previous cloud computing class, but hadn't touched it in almost a year since taking that course. Even so, it was not too hard to get back into. The documentation for Jest can be found here

Nock

I had never used Nock before this assignment, nor did I know what it was. Nock is an "HTTP server mocking and expectations library for Node.js". In this project I used it to simulate my project's connection to the Ollama API inside the test suite, so I could test my main function without running an Ollama instance. The documentation can be found here

Setup...

Jest

For the Jest setup, I ran

bun add jest

which added Jest to my project. I then created a command.test.ts file in the root of my project, corresponding to the command.ts file in my src folder. After that, I installed the Jest extension from the VS Code marketplace, which enables one-click test runs and automatic testing on save.
Image description
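
To give an idea of the structure, here is a simplified sketch of how a command.test.ts file can be sectioned with Jest's describe/test blocks. The helper here is made up for illustration and is not one of my actual functions:

```typescript
// command.test.ts (simplified sketch with a hypothetical helper)
import { describe, expect, test } from "@jest/globals";

// Hypothetical helper: treat a path as a file when it has an extension.
function isFilePath(path: string): boolean {
  return path.includes(".");
}

describe("isFilePath()", () => {
  test("treats a name with an extension as a file", () => {
    expect(isFilePath("README.md")).toBe(true);
  });

  test("treats a name without an extension as a directory", () => {
    expect(isFilePath("src")).toBe(false);
  });
});
```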

Nock

For the Nock setup I only had to run one command and then import the dependency in my test file. The install command was:

bun add nock
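
In the test file itself, that just means importing Nock next to the Jest globals, roughly like this:

```typescript
// Top of command.test.ts
import { describe, expect, test } from "@jest/globals";
import nock from "nock";
```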

LLM response mocking...

When implementing my tests I did use GitHub Copilot a few times when I got stuck, either to explain Jest syntax I no longer remembered or, in some cases, to help set up mock data. The main area where I needed Copilot's help was the init() testing, as I hadn't used Nock before and was having a hard time implementing it from the documentation alone. I console logged the AI response in my init() function to see the structure of the response, then used Copilot to help me structure the Nock interceptor, and was presented with this:
Image description
Now that this is complete, I have to go back and remove that logging, as I realize while typing this that I never did.
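
For anyone curious, the general pattern looks something like the sketch below. It is simplified and not my exact code: the Ollama endpoint, the response shape, and the inlined fetch call are assumptions for illustration, and it assumes a Nock version new enough to intercept Node's built-in fetch (v14+).

```typescript
import { afterEach, describe, expect, test } from "@jest/globals";
import nock from "nock";

describe("mocking the Ollama API with Nock", () => {
  afterEach(() => {
    // Remove any leftover interceptors so tests stay isolated.
    nock.cleanAll();
  });

  test("returns the canned LLM response instead of hitting a real server", async () => {
    // Intercept what would normally be a request to a local Ollama instance.
    nock("http://localhost:11434")
      .post("/api/generate")
      .reply(200, { response: "# Mock README\ngenerated for testing" });

    // In my real test this request happens inside init(); it is inlined here
    // so the sketch stays self-contained.
    const res = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      body: JSON.stringify({ model: "llama3", prompt: "write a README" }),
    });
    const data = await res.json();

    expect(data.response).toContain("Mock README");
  });
});
```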

Tools running from CLI...

As shown in my setup above, adding the scripts allows me to run the tools from the command line. The Prettier script formats all of my files to the Prettier style, and the ESLint script checks only my src/command.ts file for lint errors, since it contains my main logic; my only other file is a single line of code that had no issues when I checked it.
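
For illustration, the commands those scripts boil down to look roughly like this (script and file names assumed, not copied from my repo):

```sh
bun run test                 # runs the "test" script, which invokes jest
bunx prettier --write .      # format every file in the project
bunx eslint src/command.ts   # lint only the main logic file
```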

Writing test cases...

While writing my test cases, the helper functions I created were pretty straightforward to work through. I knew how they worked, I knew how to go about testing them, and I was able to debug my issues fairly easily. The hardest part of the helper functions was finding the right amount of coverage for my use cases. The main thing I got stuck on was init(), as mentioned before. The problem was that my test checked for the README.md file that init() creates from the mock LLM response before init() had actually finished creating it. My "aha!" moment was realizing that not all of my async calls were being awaited, which is why the test was completing before init() finished.

Test case:
Image description

AHA fix:
Image description
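
The general shape of the fix was making sure every async step is awaited before asserting. Here is a rough, self-contained sketch: fakeInit() is a stand-in for my real init(), which also calls the (mocked) Ollama API before writing the file.

```typescript
import { describe, expect, test } from "@jest/globals";
import { readFile, rm, writeFile } from "node:fs/promises";

// Stand-in for init(): async work that eventually writes a README.
async function fakeInit(content: string): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 50)); // pretend work
  await writeFile("README.test.md", content);
}

describe("awaiting async work before asserting", () => {
  test("README only exists once the async function is awaited", async () => {
    // The bug: calling fakeInit(...) without await let the test reach the
    // expect() before the file was written. Awaiting it fixes the ordering.
    await fakeInit("# Mock README");

    const readme = await readFile("README.test.md", "utf8");
    expect(readme).toContain("Mock README");

    await rm("README.test.md"); // clean up the file the test created
  });
});
```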

Bugs and/or edge cases...

In my testing journey on this lab, the only issues I found were things like error cases that could never occur, or conditional branches that could never be entered. For example, in my check-file-path function I had a section that checked whether a filename didn't contain a ".", meaning it's a directory, and then inside that branch did another check for whether it was a directory. This was useless and was hurting my coverage. I was able to fix several issues like this and make my code a little simpler and more effective in the process.
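
To make that concrete, the dead branch looked something like this in spirit (simplified, with made-up names rather than my actual code):

```typescript
// Before: once the outer condition decides "no extension means directory",
// the inner check can never fail, so its error path was impossible to reach.
function classifyPathBefore(path: string): "file" | "directory" {
  if (!path.includes(".")) {
    if (!path.includes(".")) { // always true here
      return "directory";
    }
    throw new Error("unreachable"); // could never occur, killed my coverage
  }
  return "file";
}

// After simplifying: same behaviour, no dead branch dragging coverage down.
function classifyPath(path: string): "file" | "directory" {
  return path.includes(".") ? "file" : "directory";
}
```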

What did I learn?

Overall, from this process I learned (or re-learned) how to introduce Jest testing and a test suite into my project. I also learned how to use the extension, which made running individual tests very simple and let me run tests with one click. I actually enjoyed figuring out how to create tests for this tool, even though I struggled with init() towards the end. In the future I hope to retain what I learned about Jest testing, and maybe even Nock, and to introduce testing into more of my personal projects.
