A Guide for Efficient Prompting in QA Automation

In today’s fast-paced development environment, AI is transforming automation testing by saving valuable time and reducing manual effort, especially for repetitive tasks. In this guide, I’ll walk you through what’s worked in my own testing journey with AI, including how to prompt AI effectively to generate precise test scripts, debug issues, and optimize your QA processes. With the right techniques, I’ve found that AI can be a powerful tool that delivers accurate results with minimal effort.

Let’s dive in!

1. Choose the Right Tool or Model

The first step in leveraging AI for QA automation is selecting the tool or model that best suits your needs, and it’s worth doing some research before diving in. For example, ChatGPT (GPT-4) and GitHub Copilot are excellent choices for generating and debugging code. ChatGPT excels at natural language processing, which allows for more conversational prompts, while GitHub Copilot is a code-specific assistant, perfect for generating boilerplate code and refactoring.

Other models I might consider include Claude 3 Sonnet by Anthropic, which excels at generating high-quality code for complex scenarios and edge cases, and Gemini by Google DeepMind, which combines reasoning and creativity to tackle advanced automation tasks. DeepSeek, on the other hand, is known for providing context-aware coding suggestions, making it particularly effective for deeper codebase integration.

Carefully selecting the right model ensures you’re using the tool best suited to the task at hand.

2. Provide Context About Your Project’s Structure

When requesting test generation, especially if the AI doesn’t have direct access to your codebase, it’s essential to offer context about the structure and patterns of your testing project. This could include naming conventions, folder structures, or specific elements to test. If possible, provide the HTML or components you want to test so the AI can tailor the test script to your project’s unique structure.

Bad Prompt: “Generate automation tests”

Good Prompt: “Generate a UI automation test for a React application using Cypress and TypeScript. The project follows a functional design pattern, utilizes fixtures for reusable data, and includes custom commands for common interactions. Please focus on testing the user login flow, ensuring that both the login form and the error messages are validated.”

This level of detail helps ensure that AI-generated tests are well suited to your project.
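
For illustration, here is a sketch of the kind of test such a prompt might return. The fixture file (users.json) and the custom cy.login() command are placeholder names from my example prompt, not real project code:

```typescript
// cypress/e2e/login.cy.ts: a sketch, assuming a hypothetical
// cypress/fixtures/users.json and a custom cy.login() command
// defined (and typed) in cypress/support/commands.ts.
describe('User login flow', () => {
  beforeEach(() => {
    cy.visit('/login');
  });

  it('logs in with valid credentials', () => {
    cy.fixture('users').then((users) => {
      cy.login(users.valid.email, users.valid.password); // custom command
      cy.url().should('include', '/dashboard');
    });
  });

  it('shows an error message for invalid credentials', () => {
    cy.fixture('users').then((users) => {
      cy.login(users.invalid.email, users.invalid.password);
      cy.contains('Invalid email or password').should('be.visible');
    });
  });
});
```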

3. Be Specific in Your Prompts

Vague or general prompts won’t give you the detailed results you need for QA automation. Automated tests require specificity, whether it’s the functionality, the type of app, the testing environment, or the tools you’re using. I always aim to be clear about the framework and programming language I’m working with.

Additionally, providing context through system prompts can significantly improve the quality of the output. A context/system prompt helps the AI understand the broader environment or the specific task I’m asking it to perform. For example, I can tell the model what type of tests I’m running, the state of the application, or any constraints I’m working within.

Bad Prompt: “Generate a test for the navbar”

Good Prompt: “Generate a UI automation test for the navigation bar of a React app using Cypress and TypeScript. The app uses a BEM naming convention for CSS classes, and all test selectors should be located in the ‘Test IDs’ folder. Text constants like button labels should be stored in fixture files. The test should follow a functional design pattern and include custom commands for any repetitive steps, such as clicking navigation links. Please use the following HTML structure for the navbar: [Insert HTML]. Ensure the test validates that each menu item is clickable and visible.”

The more context we provide, the more tailored and accurate the output will be. What I’ve found helpful is to always review AI-generated output for accuracy; AI models sometimes misinterpret instructions, and a human check ensures the test is solid.
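
To make this concrete, here is one possible shape for the test that prompt could generate. The class names, fixture name, and labels are illustrative placeholders, since the real HTML isn’t included here:

```typescript
// cypress/e2e/navbar.cy.ts: one possible shape for the generated test.
// The BEM selector, fixture name, and labels are illustrative only.
describe('Navigation bar', () => {
  beforeEach(() => {
    cy.visit('/');
  });

  it('shows every menu item and navigates on click', () => {
    // Hypothetical fixture cypress/fixtures/navbar.json: ["Home", "About", "Contact"]
    cy.fixture('navbar').then((labels: string[]) => {
      labels.forEach((label) => {
        cy.contains('.navbar__item', label) // BEM-style class selector
          .should('be.visible')
          .click();
        cy.go('back'); // return to the page before checking the next item
      });
    });
  });
});
```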

4. Use Consistent Templates

When I started using AI, I was often surprised by how different the outputs were, depending on the level of detail in the prompt. By creating templates and sticking to a consistent structure, I found that my tests became more standardized and integrated smoothly into the broader test framework, making it easier to scale and manage in CI/CD pipelines.

Bad Prompt: “Generate API tests”

Good Prompt: “Generate an integration test template for testing API endpoints in a Node.js app using Jest. The test should cover POST requests with JSON data.”

This structured approach enables AI to generate reusable, well-organized test cases that fit seamlessly into our broader testing strategy. We can also use the same principle for generating edge cases by providing examples for more targeted results.
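
As an example, the template below shows the sort of structure I’d expect back. It assumes an Express-style app export and uses supertest as the HTTP client; both are my assumptions for illustration rather than something the prompt requires:

```typescript
// api.post.test.ts: a Jest integration-test template sketch.
// 'supertest' and the '../src/app' export are assumptions for illustration.
import request from 'supertest';
import app from '../src/app';

describe('POST /users', () => {
  it('creates a user from valid JSON data', async () => {
    const response = await request(app)
      .post('/users')
      .send({ name: 'Ada', email: 'ada@example.com' }) // JSON payload
      .set('Content-Type', 'application/json');

    expect(response.status).toBe(201);
    expect(response.body).toMatchObject({ name: 'Ada' });
  });

  it('rejects a payload with missing required fields', async () => {
    const response = await request(app).post('/users').send({});
    expect(response.status).toBe(400);
  });
});
```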

5. Leverage AI for Edge Case Generation

AI can be a valuable tool for generating edge cases—scenarios that I might not immediately think of when writing tests manually. To prompt AI effectively for edge cases, we need to be specific about the conditions we want to test.

Bad Prompt: “Generate edge cases”

Good Prompt: “Generate edge cases for a form that validates email input. Consider variations of valid and invalid emails, including special characters, spaces, and different top-level domains.”

This helps ensure that the AI tests a variety of edge cases and unexpected conditions, which could uncover hidden bugs.
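
A natural way to turn the AI-generated cases into a test is a data-driven loop. The selectors are hypothetical placeholders, and the email lists are the kind of output I’d expect from the prompt above:

```typescript
// Data-driven sketch for AI-generated email edge cases.
// Selectors and error element are hypothetical placeholders.
const validEmails = ['user@example.com', 'first.last@sub.example.co.uk', 'user+tag@example.io'];
const invalidEmails = ['user@', '@example.com', 'user name@example.com', 'user@example', ''];

describe('Email field validation', () => {
  beforeEach(() => cy.visit('/signup'));

  validEmails.forEach((email) => {
    it(`accepts valid email: "${email}"`, () => {
      cy.get('[data-testid="email-input"]').type(email);
      cy.get('[data-testid="email-error"]').should('not.exist');
    });
  });

  invalidEmails.forEach((email) => {
    it(`rejects invalid email: "${email}"`, () => {
      if (email) {
        cy.get('[data-testid="email-input"]').type(email); // cy.type() rejects empty strings
      }
      cy.get('[data-testid="submit"]').click();
      cy.get('[data-testid="email-error"]').should('be.visible');
    });
  });
});
```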

6. Request Optimizations and Refactoring

Over time, tests can become large and difficult to manage, especially in complex projects. AI can assist with refactoring test scripts to make them more efficient. You can ask for suggestions to improve the performance, readability, and maintainability of your test scripts.

Bad Prompt: “Refactor this Cypress test”

Good Prompt: “Refactor this Cypress test to reduce redundancy and improve performance when testing dynamic content on a page. Remove unnecessary waits and optimize the test for speed.”

AI can make your tests cleaner and more efficient, allowing you to focus on more critical tasks.
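
Here is the kind of before-and-after change I typically get from such a prompt. The endpoint and selectors are placeholders; the point is replacing fixed sleeps with Cypress’s built-in retrying and network interception:

```typescript
// Before (inside a test): a fixed sleep that makes the test slow and flaky.
cy.get('.load-more').click();
cy.wait(5000); // arbitrary sleep, always pays the full cost
cy.get('.item').should('have.length', 20);

// After: intercept the request the dynamic content depends on and wait on it.
// The '/api/items*' route is a hypothetical example.
cy.intercept('GET', '/api/items*').as('loadItems');
cy.get('.load-more').click();
cy.wait('@loadItems');                     // waits only as long as needed
cy.get('.item').should('have.length', 20); // assertion retries automatically
```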

7. Use Iterative Refinement for Continuous Improvement

AI might not get everything right on the first try, but it’s highly effective in refining outputs based on feedback. We can start with a basic test case and ask AI to improve it iteratively.

Initial Prompt: “Can you improve this test case to handle when the user is redirected to a ‘welcome’ page after login instead of staying on the dashboard page? Feel free to ask any follow-up questions if something is not clear.”

Follow-up Prompt: “Now, can you modify the test to check if the welcome message is personalized based on the user’s first name?”

This iterative feedback loop can refine test scripts continuously, improving accuracy and effectiveness.
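
In code terms, the two rounds of feedback might evolve the assertions like this; the fixture field is a hypothetical example:

```typescript
// After the initial prompt: assert the redirect to the welcome page.
cy.url().should('include', '/welcome');

// After the follow-up prompt: also assert the personalized greeting,
// pulling the first name from a hypothetical users fixture.
cy.fixture('users').then((users) => {
  cy.contains(`Welcome, ${users.valid.firstName}`).should('be.visible');
});
```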

8. Use AI for Debugging and Troubleshooting

AI can assist with debugging by analyzing failed test cases and suggesting potential causes for failure. We should provide AI with logs, error messages, or failed test details to help pinpoint the issue.

Bad Prompt: “Why is this test failing?”

Good Prompt: “Explain why this Cypress test is failing when checking if an element with class ‘submit-button’ is visible after clicking a dropdown. The error message I’m receiving is: Element not found: submit-button. The test fails at line 15 of the test file, where I’m using cy.get('.submit-button').should('be.visible'). Please suggest fixes, such as possible reasons the element isn’t found or more stable selector strategies.”

Another Example with Specific Code:

“I’m getting an error when running this Cypress test. The test checks if the ‘submit-button’ is visible after a dropdown is clicked. The error message is: Element not found: submit-button. Here’s the code for the failing part:

[Code block]

Can you explain why this might be failing, and suggest any improvements? I suspect it could be an issue with timing or waiting for the dropdown to load fully before checking the button’s visibility.”

By using AI to debug, we can identify and resolve issues faster, without spending excessive time on manual troubleshooting.
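
A fix along the lines the AI usually suggests looks like the sketch below. The data-testid attributes and the aria-expanded check are assumptions about the markup, not something from the original failing test:

```typescript
// Open the dropdown, confirm it actually rendered, then check the button.
// The test ids and the aria-expanded attribute are hypothetical markup.
cy.get('[data-testid="options-dropdown"]').click();
cy.get('[data-testid="options-dropdown"]').should('have.attr', 'aria-expanded', 'true');
cy.get('[data-testid="submit-button"]').should('be.visible'); // retries until visible or timeout
```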

9. Automate More Complex Tasks

AI can also help with more complex tasks such as integrating testing into our CI/CD pipeline, creating automated test schedules, or managing test environments. For instance, I can prompt AI to generate a GitHub Actions pipeline for my test suite.

Bad Prompt: “Help me set up GitHub Actions”

Good Prompt: “Generate an example of integrating a GitHub Actions pipeline for an automation project with Cypress. The project is in the same repo as the frontend. We want the tests scheduled to run once per day and an email notification with the results, whether they pass or fail.”

By automating tasks like this, I can save time and focus on higher-priority work.
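
For reference, a workflow in the spirit of that prompt might look like this sketch. The schedule, action versions, and npm scripts are examples; the email notification step is left out because it depends on a third-party action or SMTP setup specific to your project:

```yaml
# .github/workflows/e2e.yml: a sketch of a scheduled Cypress workflow.
name: Nightly Cypress tests
on:
  schedule:
    - cron: '0 6 * * *' # once per day at 06:00 UTC
  workflow_dispatch:     # also allow manual runs
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: cypress-io/github-action@v6
        with:
          build: npm run build # assumes these scripts exist in package.json
          start: npm start
```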

10. Limitations and Considerations

AI is undeniably powerful, but there are some key limitations and considerations to keep in mind when using it for QA automation:

  • Data Quality: AI output is only as good as the input it’s given. If I provide a test case or prompt that lacks context or contains errors, the AI might return incorrect or incomplete results. I always validate my data before using AI to generate test scripts, reviewing it for completeness, accuracy, and relevance, and running my own tests to confirm it aligns with expected outcomes. This helps catch gaps that AI might overlook.
  • Human Supervision: While AI is an excellent tool for generating and refining test scripts, it should assist, not replace, human judgment. A good practice that I follow is to verify AI-generated tests, especially for complex scenarios or edge cases. AI can’t always understand the full context of a project or anticipate every possible interaction.
  • Confidentiality: When using AI tools, especially cloud-based ones, we need to be mindful of the confidentiality of our codebase and sensitive information. Third-party AI models may not guarantee that our code or data will remain private. Avoid submitting proprietary code or confidential test cases unless you’re sure the platform is secure and respects privacy standards, and always check the terms of service and data handling policies.

11. Quick Recap

Choose the right AI tool: Do some research to select the model that fits your needs, such as ChatGPT, GitHub Copilot, Claude 3 Sonnet, Gemini, or DeepSeek.

Be specific: The more context you provide, the better the output.

Provide structure: Share details about your project’s structure and patterns to help AI generate tailored test cases.

Use templates for consistency: Leverage templates to maintain consistency across different test types.

Generate edge cases: Be specific when asking AI to create edge cases and unpredictable test scenarios.

Optimize and refactor tests: AI can make your tests more efficient, so ask for refactoring suggestions.

Iterative refinement: Use iterative feedback loops to continuously improve test scripts.

Debug with AI: Leverage AI for troubleshooting and identifying the root cause of failed tests.

Automate complex tasks: Use AI to set up CI/CD pipelines and automate testing workflows.

Mind the limitations: Validate your inputs, keep a human in the loop, and protect confidential code.

Conclusion

From my own experience, AI has been an absolute game-changer in my QA automation work. It’s helped me save so much time, reduced manual errors, and significantly improved my test coverage. By applying the strategies I’ve shared here, I’ve been able to fully leverage AI’s potential, making my testing process faster, more efficient, and much more effective.

Looking ahead, I’m genuinely excited to see how AI continues to evolve, and with the right approach, I’m confident it will be an integral part of automation testing for years to come. If you’re looking for a great way to dive deeper, I highly recommend checking out Automatic Test Creation: Cypress AI + Studio on YouTube. It’s a great resource to see AI-powered testing in action.

If you give these tips a try in your own projects, I’d love to hear how they work out for you.

Happy Testing!
