Comparing testing frameworks is the type of news that gets a lot of eyeballs online. Playwright vs. Cypress vs. Selenium vs. WebdriverIO - everyone is interested in seeing which one is faster, more stable, and easier to work with.
Speed and performance are the most heavily scrutinised. It seems like everyone is migrating from Cypress to Playwright because it offers faster test execution. But is that really a goal worth pursuing?
It seems like a no-brainer to switch to the faster framework. If the test execution is faster, why wouldn’t you make the switch?
The problem becomes more apparent once you understand the whole system of test automation. Test automation is never just about script execution. Writing, maintaining, setting up, retrying and a number of other concerns make up the daily life of a test automation engineer. A good test automation flow is one where all of these parts of the process are fast, not just the test execution.
If framework migration evangelists were honest, they would factor setup time into the comparison. I know that borders on a long-term experiment, so no blame here. Test scripting is not the biggest time investment. Setup is.
(I’ll pretend maintenance isn’t a thing for now. That deserves its own article.)
Setting up test environments
When it comes to testing, a big part of what makes a project go slow (or fast) is the system under test. Testing tools are getting faster these days, and it seems that we are no longer limited by the speed of testing tools, but rather by applications under test.
Clicking, typing and interacting with the application under test is a task that most testing frameworks can handle just fine. Any seasoned test automation engineer will probably tell you that the real complexity is not in these interactions, but in everything around them - abstractions, system design, data seeding, environment concerns, and authentication setup, to name a few. All of these things are what makes testing possible, and they make up the real work of test automation.
So what are the main concerns when it comes to setting up well-performing test automation? There are a couple of them. You can think of your test automation script as a real user. Simply starting a user journey might require answers to the following questions:
- What does the user need in order to interact with the system?
- Are there any limitations on when or how a user can interact, such as authorization, authentication, or payment?
- What kind of data is assumed when interacting with the system? (e.g. compare e-commerce vs. internet banking apps)
- Are there any integrations to third party systems that the user needs to have first?
- Does a user interact with other users when using the system?
- How do you make sure you don’t pollute your production analytics with test data?
There are, of course, many questions like these that will shape what the test script will look like, or what kind of setup needs to happen before it runs. To create a good testing system, these problems require well-thought-through solutions. The biggest challenge of testing is that no solution is 100% transferable to other projects.
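To make the questions above concrete, here is a minimal sketch of the setup a single user journey might need before the first click ever happens. Every name here (`prepareJourney`, `createUser`, the `x-test-run` header) is invented for illustration; a real project would call its own backend APIs instead of these in-memory stand-ins.

```typescript
// Hypothetical setup for one user journey. All names are illustrative.
interface TestContext {
  user: { email: string; token: string };
  seededOrderIds: string[];
  headers: Record<string, string>;
}

// Fake backend stand-ins; a real project would hit its own seeding APIs.
const createUser = (email: string) => ({ email, token: `token-${email}` });
const seedOrder = (owner: string, id: number) => `${owner}-order-${id}`;

function prepareJourney(runId: string): TestContext {
  // 1. What does the user need? An account and a valid session.
  const user = createUser(`qa+${runId}@example.com`);
  // 2. What data is assumed? Seed a couple of orders so lists aren't empty.
  const seededOrderIds = [1, 2].map((i) => seedOrder(user.email, i));
  // 3. Keep production analytics clean: tag all traffic as synthetic.
  const headers = { "x-test-run": runId, authorization: `Bearer ${user.token}` };
  return { user, seededOrderIds, headers };
}

const ctx = prepareJourney("run-42");
```

None of this is framework-specific - which is exactly the point: the framework only enters the picture after all of this context exists.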
This brings me back to test framework comparisons. A very big part of test automation is actually not test automation, but preparation for it. A good test automation project is more than just a good script, it’s a good overall experience. After all, test automation serves the goal of shipping faster, with greater confidence.
Test automation in the AI era
The AI wave has influenced test automation as well. Multiple companies now tackle test automation in new, innovative ways. Autonomous testing is on the rise, and bold claims are being made about testing done purely by AI.
But many of these seem to take one part of test automation and execute it well while forgetting the others. Creating an automation script is a task that many autonomous testing companies jump on, and they have achieved it with varying levels of success. Demos of these tools can be quite impressive, but much like the comparisons I mentioned earlier, they cover only part of the ground.
Test automation was never just the problem of creating scripts, but as we established earlier, it’s also a challenge of proper setup and proper context. Autonomous testing solutions need to look at the whole system of test automation related problems and be able to go beyond simple scripting.
Much like prompting an LLM through ChatGPT or Claude, autonomous testing services need a way to be given proper context for the environment under test. Not just the URL of the application under test, but data seeds, environment variables and other settings are what address test automation as a whole, rather than just a part of it.
AI testing tool supporting setup
Setup is a critical challenge. At Octomind, we’ve built a set of features to help you set up testing and run it quickly. We will be expanding these features as we go.
Test portability
Environments are essential for running the same test suite across different stages of deployment. Staging, canary, production - you can create as many environments as you need. Login credentials can be adjusted per environment, and you can define a different authentication method for each stage.
You can easily define custom variables and incorporate them into your test cases. They are created in the default environment, ensuring they appear consistently across all other environments. This allows you to assign different values to a variable depending on the environment, making it easier to maintain test cases across multiple setups.
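The override behaviour described above can be sketched in a few lines of plain TypeScript. This is a generic illustration of the pattern, not Octomind’s actual API: variables defined once in a default environment, with per-environment overrides applied on top.

```typescript
// Illustrative sketch of per-environment variable resolution (not a real API).
type Variables = Record<string, string>;

interface EnvironmentConfig {
  defaults: Variables;                   // values from the default environment
  overrides: Record<string, Variables>;  // per-environment overrides
}

// Start from the defaults, then apply any overrides for the chosen stage.
function resolveVariables(config: EnvironmentConfig, env: string): Variables {
  return { ...config.defaults, ...(config.overrides[env] ?? {}) };
}

const config: EnvironmentConfig = {
  defaults: {
    BASE_URL: "https://staging.example.com",
    USER_EMAIL: "qa@example.com",
  },
  overrides: {
    production: { BASE_URL: "https://example.com" },
  },
};

// Staging inherits everything; production overrides only BASE_URL.
const staging = resolveVariables(config, "staging");
const production = resolveVariables(config, "production");
```

The test script reads only the resolved variables, so the same suite runs unchanged against any stage.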
Test repeatability
To maintain a consistent test environment for each run, setup and teardown strategies are essential. They improve test repeatability and optimize execution time. For example, if a specific element - such as a support ticket in a ticketing system - can be assumed to exist, you can immediately interact with it instead of creating it from scratch for every test.
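One common shape for this is an "ensure it exists" helper: the first run creates the fixture, and every later run reuses it. The sketch below uses an in-memory map and invented names (`ensureTicket`, `ticketStore`) purely for illustration; a real setup step would query the system under test instead.

```typescript
// "Ensure it exists" setup strategy - illustrative names, in-memory stand-in.
interface Ticket {
  id: string;
  subject: string;
}

const ticketStore = new Map<string, Ticket>(); // stand-in for the real backend

function ensureTicket(subject: string): Ticket {
  // Fast path: a previous run (or test) already created this fixture.
  const existing = [...ticketStore.values()].find((t) => t.subject === subject);
  if (existing) return existing;
  // Slow path: create it once; later runs skip this cost entirely.
  const created = { id: `t-${ticketStore.size + 1}`, subject };
  ticketStore.set(created.id, created);
  return created;
}

// First call creates the ticket, the second reuses it.
const first = ensureTicket("login broken");
const second = ensureTicket("login broken");
```

The payoff is that repeated runs pay the setup cost only once, while each test can still assume the ticket is there.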
We are exploring several options to facilitate setup and teardown at the moment. We will keep you posted once they ship.
Daniel Draper
Lead Engineer at Octomind