Introduction
End-to-end tests are great for ensuring application reliability, but they come with a maintenance cost: even minor UI changes can break tests, leaving developers and QA engineers to spend hours debugging.
In this article, I’ll show you how to leverage ChatGPT or Copilot to fix Playwright tests automatically. You’ll learn how to pre-generate an AI prompt for any failing test and attach it to the HTML report. That way, you can easily copy and paste the prompt into AI tools and instantly get suggestions for fixing the test.
Let’s dive in!
Plan
The solution boils down to three steps:
- Detect when a Playwright test fails
- Generate an AI prompt with relevant context:
  - Error message
  - Test code snippet
  - ARIA snapshot of the page
- Attach the prompt to the Playwright HTML report
Step 1: Detecting a Failed Test
Detecting a failed test in Playwright can be done in a custom fixture. This fixture inspects the test result in the teardown phase (after the test finishes). If a test has `testInfo.error` set and will not be retried, a prompt is generated.
Here’s the code snippet:
```ts
import { test as base } from '@playwright/test';

export const test = base.extend<{ fixWithAI: void }>({
  fixWithAI: [async ({ page }, use, testInfo) => {
    await use();
    const willBeRetried = testInfo.retry < testInfo.project.retries;
    if (testInfo.error && !willBeRetried) {
      // ... build a prompt to fix the error
      // ... attach the prompt to the test
    }
  }, { scope: 'test', auto: true }],
});
```
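Spec files then import this extended `test` instead of the stock one. Because the fixture is marked `auto: true`, it runs for every test without being referenced explicitly. Here’s a minimal sketch, assuming the fixture above is saved as `fixtures.ts`:

```ts
// example.spec.ts -- assumes the extended test above is exported from ./fixtures
import { expect } from '@playwright/test';
import { test } from './fixtures';

test('get started link', async ({ page }) => {
  await page.goto('https://playwright.dev');
  // If this click fails, the auto fixture attaches the AI prompt on teardown
  await page.getByRole('link', { name: 'Get started' }).click();
  await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
});
```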
Step 2: Building the Prompt
Prompt Template
I start with a simple proof-of-concept prompt (I’ll refine it later):
```
Fix the error in the Playwright test "{title}".

{error}

Code snippet of the failing test:
{snippet}

ARIA snapshot of the page:
{ariaSnapshot}
```
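In code, the template can live as a plain string constant (a minimal sketch; the placeholder names match the `.replace()` calls used later when assembling the prompt):

```ts
// Prompt template with placeholders that get filled in per failing test
const promptTemplate = `
Fix the error in the Playwright test "{title}".

{error}

Code snippet of the failing test:
{snippet}

ARIA snapshot of the page:
{ariaSnapshot}
`.trim();
```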
Let’s fill the prompt with the necessary data.
Error Message
Playwright stores the error message in `testInfo.error.message`. However, it includes ANSI escape codes used for coloring terminal output (such as `[2m` or `[22m`):
```
TimeoutError: locator.click: Timeout 1000ms exceeded.
Call log:
[2m  - waiting for getByRole('button', { name: 'Get started' })[22m
```
After investigating Playwright’s source code, I found a `stripAnsiEscapes` function that removes these special symbols:

```ts
const clearedErrorMessage = stripAnsiEscapes(testInfo.error.message);
```
Cleared error message:

```
TimeoutError: locator.click: Timeout 1000ms exceeded.
Call log:
  - waiting for getByRole('button', { name: 'Get started' })
```
This cleaned-up message can be inserted into the prompt template.
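Note that `stripAnsiEscapes` lives in Playwright’s internals, so importing it is a bit fragile. A small local equivalent can handle the color codes shown above (a simplified sketch; Playwright’s own regex covers more escape sequences):

```ts
// Removes ANSI SGR color sequences such as "\x1b[2m" and "\x1b[22m".
// A simplified stand-in for Playwright's internal stripAnsiEscapes().
function stripAnsiEscapes(str: string): string {
  return str.replace(/\x1b\[\d+(?:;\d+)*m/g, '');
}
```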
Code Snippet
The test code snippet is crucial for AI to generate the necessary code changes. Playwright often includes these snippets in its reports, for example:
```
   4 | test('get started link', async ({ page }) => {
   5 |   await page.goto('https://playwright.dev');
>  6 |   await page.getByRole('button', { name: 'Get started' }).click();
     |                                                            ^
   7 |   await expect(page.getByRole('heading', { level: 3, name: 'Installation' })).toBeVisible();
   8 | });
```
You can see how Playwright internally generates these snippets. I’ve extracted the relevant code into a helper function, `getCodeSnippet()`, to retrieve the source code lines from the error stack trace:

```ts
const snippet = getCodeSnippet(testInfo.error);
```

The full code for `getCodeSnippet()` is here.
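If you just want a rough idea of what that helper does, here’s a simplified sketch: it parses the failing location out of the error’s stack trace, reads the spec file, and renders a few lines of context with a `>` marker, much like Playwright’s own reports (the real helper handles more stack-frame formats and edge cases):

```ts
import fs from 'node:fs';

// Simplified sketch of getCodeSnippet(): locate the failing line from the
// stack trace and render surrounding source lines with a ">" marker.
export function getCodeSnippet(error: { stack?: string }): string {
  // Match a frame like "    at /path/to/example.spec.ts:6:58"
  const match = error.stack?.match(/at (?:.*\()?(.+?\.[cm]?[jt]s):(\d+):\d+\)?/);
  if (!match) return '';
  const [, file, lineStr] = match;
  const failingLine = Number(lineStr);
  const lines = fs.readFileSync(file, 'utf-8').split('\n');
  // Show a few lines of context around the failing line
  const start = Math.max(0, failingLine - 3);
  const end = Math.min(lines.length, failingLine + 2);
  return lines
    .slice(start, end)
    .map((text, i) => {
      const lineNo = start + i + 1;
      return `${lineNo === failingLine ? '>' : ' '} ${lineNo} | ${text}`;
    })
    .join('\n');
}
```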
ARIA Snapshot
ARIA snapshots, introduced in Playwright 1.49, provide a structured view of the page’s accessibility tree. Here’s an example ARIA snapshot showing the navigation menu on the Playwright homepage:
```yaml
- document:
  - navigation "Main":
    - link "Playwright logo Playwright":
      - img "Playwright logo"
      - text: Playwright
    - link "Docs"
    - link "API"
    - button "Node.js"
    - link "Community"
...
```
While ARIA snapshots are primarily used for snapshot comparison, they are also a game-changer for AI prompts in web testing. Compared to raw HTML, ARIA snapshots offer:
- Small size → Less risk of hitting prompt limits
- Less noise → Less unnecessary context
- Role-based structure → Encourages AI to generate role-based locators
Playwright provides `.ariaSnapshot()`, which you can call on any element. For AI to fix a test, it makes sense to include the ARIA snapshot of the entire page, retrieved from the root `<html>` element:

```ts
const ariaSnapshot = await page.locator('html').ariaSnapshot();
```
Assembling the Prompt
Finally, combine all the pieces into one prompt:
```ts
const errorMessage = stripAnsiEscapes(testInfo.error.message);
const snippet = getCodeSnippet(testInfo.error);
const ariaSnapshot = await page.locator('html').ariaSnapshot();

const prompt = promptTemplate
  .replace('{title}', testInfo.title)
  .replace('{error}', errorMessage)
  .replace('{snippet}', snippet)
  .replace('{ariaSnapshot}', ariaSnapshot);
```
Example of the generated prompt:
```
Fix the error in the Playwright test "get started link".

TimeoutError: locator.click: Timeout 1000ms exceeded.
Call log:
  - waiting for getByRole('button', { name: 'Get started' })

Code snippet of the failing test:
test('get started link', async ({ page }) => {
  await page.goto('https://playwright.dev');
  await page.getByRole('button', { name: 'Get started' }).click();
  await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
});

ARIA snapshot of the page:
- document:
  - region "Skip to main content":
    - link "Skip to main content"
  - navigation "Main":
    - link "Playwright logo Playwright":
      - img "Playwright logo"
      - text: Playwright
...
```
Step 3: Attach the Prompt to the Report
When the prompt is built, you can attach it to the test using `testInfo.attach`:

```ts
await testInfo.attach('🤖 Fix with AI', { body: prompt });
```
Now, whenever a test fails, the HTML report will include an attachment labeled “🤖 Fix with AI.”
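`testInfo.attach` also accepts an optional `contentType`; setting it to `text/plain` makes the intent explicit:

```ts
// Explicitly mark the attachment as plain text (optional)
await testInfo.attach('🤖 Fix with AI', { body: prompt, contentType: 'text/plain' });
```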
Testing
To try out the "Fix with AI" prompt, I created a simple test to validate the Get started link on the Playwright homepage:
```ts
test('get started link', async ({ page }) => {
  await page.goto('https://playwright.dev');
  await page.getByRole('link', { name: 'Get started' }).click();
  await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
});
```
With the correct locator, the test passes:

```
$ npx playwright test

Running 1 test using 1 worker
  1 passed (1.9s)
```
Next, I’ll introduce deliberate errors and see how the AI prompt can help.
Check 1: Changing the Role from `link` to `button`
Suppose we introduce a bug by modifying the test locator’s role from `link` to `button`:
```diff
  test('get started link', async ({ page }) => {
    await page.goto('https://playwright.dev');
-   await page.getByRole('link', { name: 'Get started' }).click();
+   await page.getByRole('button', { name: 'Get started' }).click();
    await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
  });
```
The test now fails. In the Playwright HTML report, you’ll see a new attachment labeled “🤖 Fix with AI”:
You can expand the attachment and copy the prompt by clicking the small button in the top-right corner:
Pasting the prompt into ChatGPT yields a suggested fix:
ChatGPT correctly identifies that the `button` role is incorrect and recommends using the `link` role. After applying the suggestion, the test passes! 🎉
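For reference, the applied fix is simply the reverse of the change we introduced:

```diff
- await page.getByRole('button', { name: 'Get started' }).click();
+ await page.getByRole('link', { name: 'Get started' }).click();
```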
Improving the Prompt
Although ChatGPT gave a detailed explanation, in day-to-day workflows you might prefer a more concise output that focuses on code changes. After many experiments, I arrived at this prompt template:
```
You are an expert in Playwright testing.
Fix the error in the Playwright test "{title}".
- Provide response as a diff highlighted code snippet.
- Strictly rely on the ARIA snapshot of the page.
- Avoid adding any new code.
- Avoid adding comments to the code.
- Avoid changing the test logic.
- Use only role-based locators: getByRole, getByLabel, etc.
- For 'heading' role try to adjust the level first.
- Add a concise note about applied changes.
- If the test may be correct and there is a bug in the page, note it.

{error}

Code snippet of the failing test:
{snippet}

ARIA snapshot of the page:
{ariaSnapshot}
```
With this refined prompt, ChatGPT usually provides a succinct fix. You simply copy the suggested code and paste it back into your test:
I now use this new prompt for the following checks.
Check 2: Adjust Link Text
In this scenario, the UI text has changed, a very common case in real projects. The link text has changed from “Get started” to “Get involved”:
```diff
  test('get started link', async ({ page }) => {
    await page.goto('https://playwright.dev');
-   await page.getByRole('link', { name: 'Get started' }).click();
+   await page.getByRole('link', { name: 'Get involved' }).click();
    await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
  });
```
The test will fail. Using the ARIA snapshot, ChatGPT can detect the text mismatch and suggest updating the locator’s `name` property:
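The suggested change restores the name that actually appears in the ARIA snapshot:

```diff
- await page.getByRole('link', { name: 'Get involved' }).click();
+ await page.getByRole('link', { name: 'Get started' }).click();
```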
It’s important to distinguish actual bugs from legitimate UI text changes. That’s why I prefer to see the code diff first and analyze what’s happening.
Check 3: Remove Link Name
What if the locator matches multiple elements on the page? Let’s test if AI can identify the correct one.
I remove the link’s `name` property, causing the locator to match all links on the page:
```diff
  test('get started link', async ({ page }) => {
    await page.goto('https://playwright.dev');
-   await page.getByRole('link', { name: 'Get started' }).click();
+   await page.getByRole('link').click();
    await expect(page.getByRole('heading', { name: 'Installation' })).toBeVisible();
  });
```
The test fails with:
```
Error: locator.click: Error: strict mode violation:
getByRole('link') resolved to 39 elements:
```
ChatGPT’s suggestion:
This is fantastic! Among 39 links on the page, ChatGPT pinpoints the correct one. Providing the test title and code snippet really helps AI figure out the right fix.
It also suggests adjusting the heading level for the "Installation" check to make the locator more reliable.
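In diff form, the locator fix looks like this (the heading-level tweak mentioned above depends on the actual level in the ARIA snapshot):

```diff
- await page.getByRole('link').click();
+ await page.getByRole('link', { name: 'Get started' }).click();
```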
All these checks confirm that AI-powered prompts significantly reduce the time spent fixing tests. You get direct code suggestions that you can easily apply.
Using Copilot Edits
When using ChatGPT to fix tests, you must manually apply the suggested changes. You can streamline this step by using Copilot. Instead of pasting the prompt into ChatGPT, open the Copilot Edits window in VS Code and paste your prompt there. Copilot will propose code changes that you can review and apply instantly, all within your editor.
Here’s a demo video of fixing a test with Copilot in VS Code:
Further Improvements
This AI-driven approach can be refined further. Here are some ideas I’m excited about:
1. "Fix with AI" Button in the Playwright VS Code Extension
Right now, you have to manually copy and paste the generated prompt. Imagine a button in the Playwright VS Code extension that appears when a test fails, automatically sending the context to your AI tool. That would be ideal!
Here’s a mockup showing where the “Fix with AI” button could appear:
2. HTML Report Enhancements
Similarly, a button in the Playwright HTML report could automatically send a request to a configured AI model and display a suggested fix. This would be especially helpful for team members who work with reports and don’t use an IDE.
Here’s a potential spot for a “Fix with AI” button in the report:
Unfortunately, the Playwright HTML report doesn’t support this level of customization. These issues may pave the way, so feel free to vote for them:
Integrating “Fix with AI” into Your Project
I’ve created a fully working GitHub repository demonstrating the “Fix with AI” workflow. Feel free to explore it, run tests, check out the generated prompts, and fix errors with AI help.
To integrate the “Fix with AI” flow into your own project, follow these steps:
- Ensure you’re on Playwright 1.49 or newer
- Copy the `fix-with-ai.ts` file into your test directory
- Register the AI-attachment fixture:

  ```ts
  import { test as base } from '@playwright/test';
  import { attachFixWithAI } from './fix-with-ai';

  export const test = base.extend<{ fixWithAI: void }>({
    fixWithAI: [async ({ page }, use, testInfo) => {
      await use();
      await attachFixWithAI(page, testInfo);
    }, { scope: 'test', auto: true }],
  });
  ```

- Run your tests and open the HTML report to see the “Fix with AI” attachment under any failed test
From there, simply copy and paste the prompt into ChatGPT or GitHub Copilot, or use Copilot’s edits mode to automatically apply the code changes.
I’d love to hear your thoughts or prompt suggestions for making the “Fix with AI” process even more seamless. Feel free to share your feedback in the comments.
Thanks for reading, and happy testing with AI! ❤️