DEV Community

code plato

[Copilot Extreme Practice Journal] DAY 1: Using Copilot to Generate Unit Tests Without Writing a Single Line of Code

This series is called "Extreme Practice" because I don't know whether this is best practice. I am trying to push ChatGPT to the extreme: almost all actual coding and writing tasks are delegated to it, to see how much efficiency improves, or degrades.

Terminology Introduction

Copilot

Copilot is a chatbot developed by Microsoft that is built on the same OpenAI models that power ChatGPT; you can think of it as a wrapper around ChatGPT. It lets you switch between different models, including Gemini, Claude, and OpenAI's GPT models.

GitHub Copilot

GitHub Copilot is an AI coding assistant that integrates Copilot into GitHub.

GitHub Copilot VSCode Plugin

GitHub has developed a Copilot plugin for code editors. The plugin lets programmers use Copilot inside their IDE by connecting it to GitHub Copilot; VSCode and IntelliJ are currently supported.

Task Requirements

Generating unit tests for UI components is always a painful task. Testing pure logic functions is straightforward—you just pass in parameters and verify the output. However, testing components requires setting up many variables needed for component loading and passing them to the component in different ways.
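To make the contrast concrete, here is a minimal sketch. The function and its name are hypothetical, not taken from the actual component: a pure function needs nothing but a call and an assertion, while a component test must first assemble a mounting environment.

```typescript
// Hypothetical pure logic function: trivial to test.
function formatBenefit(amount: number, currency: string): string {
  return `${currency}${amount.toFixed(2)}`;
}

// Testing it is just "call and check":
console.assert(formatBenefit(10, "$") === "$10.00");

// A component test, by contrast, has to set up props, stubs, and plugins
// before anything can be asserted, e.g. with @vue/test-utils:
//   const wrapper = mount(BenefitsAccordion, { props: {...}, global: {...} });
// That setup is where most of the effort (and pain) goes.
```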

Today is the first day of practice. I attempted to use Copilot to generate a unit test for the Vue component BenefitsAccordion. The requirements are as follows:

  • ⏳ Must be completed within 30 minutes.
  • ✅ The generated unit test must pass.
  • 🏆 The unit test must follow good coding practices.
  • 🙅‍♂️ I should not write any code; the task should be completed through conversation only.

Execution Process

I will use ✅, ❌, and 🤔 to show how each step went:

  • If it works as I expect, I give it a ✅.
  • If it doesn't work as expected, I give it a ❌.
  • If I can't tell whether it worked well or not, I give it a 🤔.

Step 1: Initial Generation of Test Cases ✅

I opened the Copilot Edits panel.

I selected the file and told it:

/tests generate unit test for this file


It generated a test file, but it did not recognize that my codebase is a monorepo (Monorepo vs Multi-repo vs Monolith). The generated test file should be placed in the corresponding sub-repository.

Step 2: Moving the Test Case ❌

I instructed it to move the unit test to apps/<sub-repo>/tests/unit. So I said:

@workspace app/xxx has its own tests folder. Move the unit test you just generated to that folder instead of tests/unit.

It agreed and moved the file, but placed it in the root directory /tests/unit, which is incorrect.

Step 3: Moving the Test Case Again ✅

Since test cases should be placed according to the structure of the System Under Test (SUT), I said:

Don't put it in the root of apps/ms/tests/unit. Place it according to the source code file structure.

This time, it got it right:

Move the unit test file for the BenefitsAccordion.vue component to the apps/ms/tests/unit/components/xxx/xxx folder to match the source code file structure.
But the generated test file still contained many errors.


Step 4: Attempting to Fix the Test Case ❌

I asked it to fix the errors in the unit test file:

Fix the errors in the unit test file BenefitsAccordion.spec.ts.

It responded:

Fix the errors in the unit test file for the BenefitsAccordion.vue component.

However, the errors remained unchanged.

Step 5: Directly Telling It How to Fix the Errors ✅

The errors were basic—missing imports for Vitest functions like describe and beforeEach. I explicitly pointed this out:

The errors in BenefitsAccordion are because you didn't import describe, beforeEach, etc., from Vitest.

This time, it understood:

Fix the errors by importing describe, beforeEach, it, and expect from Vitest.
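For reference, the missing header is just the standard Vitest import, shown here in isolation:

```typescript
// Vitest does not inject these as globals unless `globals: true` is set
// in its config, so every spec file must import the helpers it uses:
import { describe, it, expect, beforeEach } from "vitest";
```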

Step 6: Specifying Which Methods to Test ✅

It generated some unhelpful test cases, such as "it should render ...", which I consider meaningless. So, I told it explicitly:

/tests in BenefitsAccordion.spec.ts, only generate unit tests for methods xxx, xxx, xxx, xxx, xxx, xxx. Remove other tests.

It responded correctly:

Generate unit tests only for the methods xxx, xxx, xxx, xxx, xxx, and xxx. Remove other unit tests.

Step 7: Categorizing Test Methods ✅

A good practice is to group test cases using describe(). I instructed:

Use describe to categorize unit tests by the methods being tested.

It understood:

Organize the unit tests using describe blocks.
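The resulting shape looks like this. The method names are hypothetical (the real ones were redacted as xxx), and local stand-ins for describe/it are defined so the snippet runs without Vitest installed; in the real spec file they come from the Vitest import instead.

```typescript
// Local stand-ins; the real spec imports these from "vitest".
const describe = (name: string, fn: () => void) => fn();
const it = (name: string, fn: () => void) => fn();

// Hypothetical component surface; the real tests mount BenefitsAccordion.
const wrapper = {
  isExpanded: (index: number) => index === 0,
  headerId: (index: number) => `benefit-header-${index}`,
};

// One describe block per method under test:
describe("isExpanded", () => {
  it("is true for the initially open panel", () => {
    console.assert(wrapper.isExpanded(0) === true);
  });
  it("is false for the other panels", () => {
    console.assert(wrapper.isExpanded(2) === false);
  });
});

describe("headerId", () => {
  it("derives the id from the panel index", () => {
    console.assert(wrapper.headerId(1) === "benefit-header-1");
  });
});
```

Grouping by method means a failing case immediately names the method that broke, which is most of the value of describe blocks.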

Step 8: Testing More Logical Paths ✅

It only tested one path per method, but each method had at least two logical paths. I told it:

You only test one path. Test more paths. For example, for xxx, test when xxx is true.

It understood:

Add more test cases to cover different paths of the methods being tested.

Step 9: Manually Specifying a Missing Path 🤔

One test path was still missing, so I explicitly pointed it out:

For planId, you didn’t test when planId is null.

It responded:

Add a test case for planId when planId is null.
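In miniature, the gap looks like this; planLabel is a hypothetical stand-in for the real (redacted) method, which reads planId from component state.

```typescript
// Hypothetical helper with two branches, one per logical path:
function planLabel(planId: string | null): string {
  return planId === null ? "" : `Plan ${planId}`;
}

// The generated suite initially covered only the non-null branch:
console.assert(planLabel("basic-01") === "Plan basic-01");

// Step 9's request adds the branch that was still missing:
console.assert(planLabel(null) === "");
```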

Step 10: Running the Test Cases ❌

I asked it to run the tests, but it said:

❌ Sorry, I can't assist with that.

I ran them manually and found one failed test.

Step 11: Fixing the Test Case ❌

I asked it to fix the failing test:

The unit test "returns empty string when planId is null" failed, correct it.

It responded:

✅ Correct the unit test "returns empty string when planId is null".

However, the error persisted.

Step 12: Attempting to Fix the Test Again ❌

After multiple attempts, it still couldn't fix the test. I had to fix it manually, which violated my goal of not touching the code. Additionally, fixing one test caused five others to fail.


Step 13: Ensuring Test Isolation ❌

To prevent tests from affecting each other, I instructed:

Ensure each test runs with a clean component instance.

It responded:

Ensure each test runs with a new component instance.

However, it did not work, and the errors remained.

Conclusion

The experiment ultimately failed. The tests did not pass, and the situation worsened. I even had to intervene manually. In the end, I deleted the test file. Tomorrow, I will attempt a different approach.
