Those who have been watching the testing landscape for a while might remember the craze that occurred when test automation started going mainstream. Will testers lose their jobs? What happens when we automate everything?
Looking back, those concerns sound almost funny. Years of test automation have shown that despite significant speed improvements, companies still combine manual QAs with test automation engineers to help ship high-quality products.
The recent 'shifting left' trend - pushing testing and quality processes to the earlier stages of development - has increased the focus on test automation. Many companies insist on transforming former manual QA teams into test automation teams and equipping everyone with automation skills. The goal is to automate everything that was previously done manually. While there’s a good argument for broadening the technical skills of QA teams, there’s definitely more to test automation than simply creating scripts to automate test cases.
Automating test cases
Let me state it clearly. Automating manual test cases 1:1 is a bad idea.
This idea usually stems from the desire to keep the same test coverage as when tests are performed manually. Simply put, the idea is:
One test case = one test script
But this does not match the reality of what test automation really is. Manual checks are not only a series of steps but also a series of qualitative assessments: the visual look of the application under test, exploratory side quests, asking developers for additional context, and little experiments. Any tester worth their salt will not mindlessly perform a series of test steps and report back only when they’re unable to perform the next one. That’s what makes manual testing more valuable than a test script. On the other hand, there is much value in test automation that is worth pursuing.
Test automation makes QA much faster and also more reliable. You can run automated tests at any time and more frequently. Since they are automated, you can also include the less important tests without any issue and gain more confidence this way. It’s also a lot cheaper and can run outside working hours.
It requires a different approach than simply scripting scenarios though.
How to automate tests
Whenever a tester runs a manual check, they log in and have a broad goal of what needs to be done. There are no broad goals when it comes to test automation. A testing script needs a precise goal, precise result and precise set of steps. Without those, an end-to-end test will become overly complex, with way too many conditions altering the end result.
This means that test automation needs to be more streamlined. It’s typically a good practice to follow some kind of pattern, so that there’s a standard for the whole test automation team. One of the good ones is the AAA pattern (Arrange, Act, Assert), which follows a set structure.
This structure divides a test automation script into three distinct parts, each fulfilling a different objective.
Arrange
is a test preparation phase. It has the goal of making sure that the data is properly seeded, the application under test is in a proper state and the context is clearly set. Typically, the arrange phase can be something like a login, creating some test data or simply opening the proper subpage.
Act
is the heart of the automation script. It’s a series of steps, closest to what the test case might be. This phase is responsible for getting the application from state A to state B. The series of steps implicitly asserts that the functionality of the application under test is unbroken and allows the user to use the app’s functionality. In a to-do app, this would be creating, editing or deleting a to-do item.
Assert
is the most important step of a test. Without an assertion, the script is merely a series of steps. Assertions are an explicit confirmation that the functionality of the application under test works. In a to-do app, assertion might make sure that a to-do item is visible after it was created, no longer visible after it was deleted and so on.
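To make the three phases concrete, here’s a minimal sketch of what an AAA-structured end-to-end test could look like in Playwright for a hypothetical to-do app (the URL, placeholder text and test id are made up for illustration):

```typescript
import { test, expect } from '@playwright/test';

test('user can create a to-do item', async ({ page }) => {
  // Arrange: open the app in a known, clean state
  await page.goto('https://todo.example.com');

  // Act: drive the app from state A (no item) to state B (one item)
  await page.getByPlaceholder('What needs to be done?').fill('Buy milk');
  await page.keyboard.press('Enter');

  // Assert: explicitly confirm that the functionality works
  await expect(page.getByTestId('todo-item')).toHaveText('Buy milk');
});
```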
Manual vs. automated
You can probably see some resemblance to the AAA pattern in manual testing. Even in manual testing there’s preparation, exploring and assessment. But as we stated earlier, that does not mean that simply automating a test case is going to work well.
Let’s demonstrate this with an example: a cookie consent message that appears on a page.
This is a message that can appear seemingly randomly on a page. If testing this message is not within the scope, then the best course of action is to simply click “accept all” when the cookie consent message appears and move on.
In test automation this poses a challenge. The message might be covering a portion of the page and we need to get it out of the way when it appears. Test automation scripts usually run in a clean browser, which means that we will see this message more often than when testing manually. This can be worked around with a condition in the test script. In pseudo code it might look like this:
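```
// rough sketch: check whether the cookie message is in the way and dismiss it
if (await cookieMessage.isVisible()) {
  await acceptAllButton.click();
}
// ...continue with the actual test steps
```

(Here cookieMessage and acceptAllButton stand in for whatever locators the test would actually use.)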
If you are shaking your head at this, I have to tell you that I have seen this approach many times. This little helper introduces a problem into the test automation suite. The script closes the cookie message anytime it runs, but what if the cookie message never appears, even when it should? And how do we test a case when we want the message to appear?
The key to making a good test automation decision is to have decent technical knowledge. Digging a little bit deeper into the cookie message functionality reveals that when we click the “accept all” button, a setting (in the form of a cookie) is saved into the browser storage. This ensures that we don’t see the message on the same page over and over again.
This is now a functionality we can test! Even better, it’s a functionality that we can control. Since most test automation frameworks open the browser with storage cleaned up, we can control when the cookie message appears by choosing when to inject the consent cookie into the browser.
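With Playwright, for example, this could look roughly like the sketch below. The cookie name, value and domain are assumptions for illustration; the real ones depend on how the application stores the consent setting:

```typescript
import { test, expect } from '@playwright/test';

test('page loads without the cookie banner', async ({ context, page }) => {
  // Arrange: inject the (hypothetical) consent cookie before opening the page,
  // so the banner never appears in this test
  await context.addCookies([
    { name: 'cookie_consent', value: 'accepted', domain: 'todo.example.com', path: '/' },
  ]);
  await page.goto('https://todo.example.com');

  // Assert: the banner is not shown
  await expect(page.getByRole('button', { name: 'accept all' })).toHaveCount(0);
});
```

A test that covers the banner itself simply skips the addCookies call and asserts that the message does appear.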
Making good test automation decisions
There’s a very popular idea in the world of software development known as DRY - don’t repeat yourself. It’s a simple principle stating that whenever you have a piece of code that needs to be used in multiple places, it should not be repeated, but rather abstracted. Many testers apply this principle to their test automation code.
Let’s again demonstrate this in a simple example.
Imagine that there are 500 end-to-end tests written for an application that requires users to log in. For the sake of simplicity we’ll assume that every one of these 500 tests requires the user to be logged in.
Applying the DRY principle to our test code means that we are going to create a login function that will get called at the beginning of each test as the “arrange” section.
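As a sketch, such a shared helper might look like the following; the URL, labels and credentials are placeholders:

```typescript
import { test, expect, type Page } from '@playwright/test';

// shared "arrange" helper, reused by every test that needs a logged-in user
async function login(page: Page, user = 'test-user', password = 'secret') {
  await page.goto('https://todo.example.com/login');
  await page.getByLabel('Username').fill(user);
  await page.getByLabel('Password').fill(password);
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page.getByText('Welcome')).toBeVisible();
}

test('user can open the dashboard', async ({ page }) => {
  await login(page);                                           // arrange
  await page.getByRole('link', { name: 'Dashboard' }).click(); // act
  await expect(page).toHaveURL(/dashboard/);                   // assert
});
```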
And while the code is nicely abstracted, executing the login sequence in 500 tests means that logging in alone takes 25 minutes of the whole test run (given that the login sequence takes 3 seconds). For some tests this creates quite an imbalance in how long each part of the test takes. After all, the reason why we write tests is more connected to the act and the assertion phases.
The good news is that this draws a clear picture on what we need to focus on if we want to optimize.
In this case, we can use the same principle as we did with our cookies. It’s pretty much the same idea as when checking the “remember me” box when logging in to a page. But instead of using it to automatically log in to your favourite social media, you can simulate the same behavior in your tests.
In principle, this approach has the goal of creating relevant contexts and then re-using them. Instead of starting each test from scratch, we want to reuse contexts in as many tests as we can. Login is a great candidate for this, because login does not change from test to test and it’s also rarely affected by other test phases.
Modern test automation tools such as Playwright and Cypress already offer mechanisms for caching login sessions. This approach is not yet widely adopted, but it is the best way of dealing with applications that use login. Octomind provides an option to set up a shared authentication state as well as native support for one-time passwords.
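In Playwright, for instance, the session can be captured once in a setup test and saved via storageState; the file path and login details below are placeholders:

```typescript
import { test as setup, expect } from '@playwright/test';

// runs once; every other test reuses the saved session instead of logging in again
setup('authenticate', async ({ page }) => {
  await page.goto('https://todo.example.com/login');
  await page.getByLabel('Username').fill('test-user');
  await page.getByLabel('Password').fill('secret');
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page.getByText('Welcome')).toBeVisible();

  // persist cookies and local storage for the rest of the suite to reuse
  await page.context().storageState({ path: 'playwright/.auth/user.json' });
});
```

Other tests can then pick the session up with test.use({ storageState: 'playwright/.auth/user.json' }) or via the storageState option in the config.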
This not only optimizes the test performance, but also reduces the number of login attempts. That helps tremendously when testing applications that have rate-limiting or captcha protection against brute force attacks. These can sometimes be a significant hurdle when it comes to test automation.
Improving test actions
Another area of improvement is the “act” section of the test. While it’s not the obvious first candidate, there’s a lot of potential here for making tests faster and therefore making the test automation worthwhile.
There’s a lot that happens in the “act” phase when doing e2e testing. It’s a good practice to ask whether parts of this phase are being reused across tests. Let’s say you are testing a to-do app. There’s a good chance that you need to create an initial todo item (or more) to meet your testing goal. This can potentially bloat your “act” phase.
Usually when following the DRY principle, testers abstract the creation of to-do items into its own function. And this is a good thing to do, because now the action can be reused in multiple tests.
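A sketch of such a reusable action, with made-up selectors, could live in a shared helper module (here called todo-helpers.ts):

```typescript
// todo-helpers.ts
import { type Page } from '@playwright/test';

// reusable action: create a single to-do item through the UI
export async function createTodo(page: Page, title: string) {
  await page.getByPlaceholder('What needs to be done?').fill(title);
  await page.keyboard.press('Enter');
}
```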
[Figure: conceptual graph showing how the "create to-dos" steps repeat across tests of the to-do app]
But if you stop and think about it, isn’t creating todo items the part of our test that is responsible for arranging the test?
It definitely is!
This means that if we want to draw our graph correctly, the "create to-do items" steps belong in the arrange phase rather than the act phase.
It now seems that we have once again made the arrange part of our test too big. But this is once again a situation where we can apply the same principle as with login. Once we understand the parts that create the setup of our tests, we can simply abstract them and reuse and optimize them across different tests.
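One way to express this, assuming Playwright and the hypothetical createTodo helper sketched above, is a fixture that prepares the needed to-do items before the test body runs, so every test can declare its setup instead of repeating it:

```typescript
import { test as base, expect, type Page } from '@playwright/test';
import { createTodo } from './todo-helpers'; // the hypothetical helper from above

// custom fixture: a page that already contains a few seeded to-do items,
// keeping the "arrange" work out of the individual test bodies
const test = base.extend<{ pageWithTodos: Page }>({
  pageWithTodos: async ({ page }, use) => {
    await page.goto('https://todo.example.com');
    for (const title of ['Buy milk', 'Walk the dog']) {
      await createTodo(page, title);
    }
    await use(page);
  },
});

test('user can complete a to-do item', async ({ pageWithTodos }) => {
  await pageWithTodos.getByText('Buy milk').click();                     // act (assumes clicking toggles the item)
  await expect(pageWithTodos.getByText('Buy milk')).toHaveClass(/done/); // assert (class name assumed)
});
```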
Test chaining and dependency structure
Chaining tests using dependencies is a best practice for structuring test automation efficiently. In an automated approach, each test case represents a small task in the user flow you want to achieve. Testing an entire user flow then means executing a sequence of test cases chained together.
Every test case is executed only once, which reduces test runtime significantly. It follows the DRY principle, allowing for easier test maintenance. The separation also minimizes the number of test code adjustments needed when the code in your app changes. This is valid both for hand-written tests in Playwright or Cypress and for tests autogenerated by Octomind.
A dependency structure gives you and your collaborators better oversight as your test suite gets bigger. Over time it gets harder to understand what each test is doing or how much of your app is actually covered.
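In hand-written Playwright suites, one way to express such a dependency structure is through project dependencies in the config, so that setup-style tests run once before the suites that depend on them (the project names and paths here are illustrative):

```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    // runs first and prepares shared state (e.g. the login session from earlier)
    { name: 'setup', testMatch: /.*\.setup\.ts/ },

    // the rest of the suite only runs once the setup project has passed
    {
      name: 'todo-flows',
      dependencies: ['setup'],
      use: { storageState: 'playwright/.auth/user.json' },
    },
  ],
});
```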
Test cases vs. test scripts
This brings us to the final difference between manual testing with test cases and test script automation. Usually, when test cases are put together, they follow a pattern of user behavior. These behaviors are then grouped, so that testing is done within the scope of given features and functionalities. Test scripts, while they should mimic user behavior, do not need to follow the same grouping. The grouping can be done based on the setup each test needs.
---
Testing is highly analytical work and requires good knowledge of the system under test. Test automation requires all of that, but also some analytical work on what needs to be broken down in order to run tests in an optimal way. Many times, the optimization process is like simplifying a mathematical equation, where you can cancel out redundant parts. A manual approach with test cases is full of such parts, and the art of effective test automation decisions lies in identifying them.