Exploring Annotations and Group Tests in Cypress vs Playwright: Unveiling Control Mechanisms in Test Frameworks.
(Cover image from pexels.com by cottonbro studio)
- ACT 1: EXPOSITION
- ACT 2: CONFRONTATION
- ACT 3: RESOLUTION
ACT 1: EXPOSITION
It was about time for the second article of "The Test Drama: Cypress vs Playwright" series, and I thought it should cover something that can truly help you organize your tests and have complete control over your runs.
There is much to say about this, and many different features supported by Cypress and Playwright can assist you with this task. Therefore, I thought it would make a good subject for an article in this series.
Of course, I'm talking about annotations, group tests, tags, and test filters — exploring not just one test framework but two, with Cypress and Playwright side by side (or Playwright and Cypress if you prefer 😉), showcasing their full capabilities and constraints.
During my thorough exploration of this specific subject in Cypress and Playwright, I found that the information was quite extensive. To make it more manageable and easier to follow, I've organized the content into two parts.
- Control Your Tests (Part 1): ANNOTATIONS & GROUP TESTS (this article)
- Control Your Tests (Part 2): TAGS & TEST FILTERS (coming soon)
I highly recommend reading this first article in full and the second part once it's released. This will help you understand the features in great depth, enabling you to discern what you can and cannot do and identify what best suits your needs.
ACT 2: CONFRONTATION
Let's dissect each of these tools to understand how they can provide us with full control over our test framework, whether it's Cypress or Playwright.
📝 ANNOTATIONS
Annotations are special markers or directives used in test scripts to influence test execution. They provide additional instructions or metadata to the testing framework on how to handle certain tests or test cases. Annotations can be used to skip tests, focus on specific tests, and perform other functions.
CYPRESS ANNOTATIONS
According to the official documentation, "Cypress has adopted Mocha's BDD (Behavior Driven Development) syntax, which fits perfectly with both integration and unit testing".
This means that the annotations used to control test executions are those provided by Mocha: .only and .skip.
it.only()
The .only annotation allows you to focus on specific tests or suites (group tests) within a spec file. To run a specific test in Cypress, you simply append .only to the it function. You can use as many .only annotations in a spec file as you want.
// test-only.cy.js
// This will only execute the second test titled 'Age should be 52',
// which will pass.
const theName = 'Caine'
const theAge = 52

it('Name should be John Wick', () => {
  expect(theName).to.be.equal('John Wick')
})

it.only('Age should be 52', () => {
  expect(theAge).to.be.equal(52)
})
In the terminal, it will appear as if there is only a single test. When .only is used in a spec file, all other tests are completely ignored, as if they do not exist in that file during the run:
Similar information will appear in the Cypress Log of the runner, showing only a single test in the spec file:
it.skip()
The .skip annotation in Cypress is used to skip specific tests or suites during execution. This functionality is beneficial when a test is under development, known to fail for reasons being investigated, or not relevant to your current testing efforts.
// test-skip.cy.js
// The first test titled 'Name should be John Wick' will fail (incorrect name).
// The second test titled 'Age should be 52' will be skipped.
const theName = 'Caine'
const theAge = 52

it('Name should be John Wick', () => {
  expect(theName).to.be.equal('John Wick')
})

it.skip('Age should be 52', () => {
  expect(theAge).to.be.equal(52)
})
In the terminal, it will show that one test is pending (it was skipped) and one has failed:
The same result will be displayed in the Cypress Log:
The xit() function serves as an alias to it.skip(), offering an alternative way to implement the same functionality:

// These two tests are exactly equivalent, and both will be skipped
it.skip('Age should be 52 - Option 1', () => {
  expect(theAge).to.be.equal(52)
})

xit('Age should be 52 - Option 2', () => {
  expect(theAge).to.be.equal(52)
})
Workaround for Conditional Annotations
Cypress does not natively support Conditional Annotations. These are introduced in Playwright and are used to control the execution of tests based on specified conditions.
If you want to achieve this in Cypress, you will need to use a workaround by creating conditional execution of the tests based on those conditions.
- You can place the condition to control execution outside the test:
// test-cond-ann-workaround-exterior.cy.js
// Ignore the test with title 'Name should be John Wick' if
// the 'runNameTest' environment variable is not set or is false.
const theName = 'Caine'
const theAge = 52

if (Cypress.env('runNameTest')) {
  it('Name should be John Wick', () => {
    expect(theName).to.be.equal('John Wick')
  })
}

it('Age should be 52', () => {
  expect(theAge).to.be.equal(52)
})
In this case, the test will not appear in the terminal or the Cypress Log. The downside to this approach is that, when viewing the test results, you won't know that the test exists in the spec file and that it was skipped because a condition was not fulfilled.
- Alternatively, you can place the condition to control execution inside the test:
// test-cond-ann-workaround-interior.cy.js
// Do not execute the logic of the test 'Name should be John Wick' if
// the 'runNameTest' environment variable is not set or is false.
const theName = 'Caine'
const theAge = 52

it('Name should be John Wick', () => {
  if (!Cypress.env('runNameTest')) {
    return
  }
  expect(theName).to.be.equal('John Wick')
})

it('Age should be 52', () => {
  expect(theAge).to.be.equal(52)
})
In this scenario, the logic within the test will be bypassed, but the test will show as passed in the terminal and the Cypress Log. This can be misleading, as it makes the test appear to have been fully executed and passed, rather than indicating that the test logic was bypassed. To avoid confusion, you might need to log additional information to inform the user about what actually occurred.
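For instance, a cy.log call before the early return can surface the bypass in the Cypress Log (a minimal sketch of the same test):

it('Name should be John Wick', () => {
  if (!Cypress.env('runNameTest')) {
    // Make the bypass visible in the Cypress Log instead of silently passing
    cy.log('Test logic bypassed: runNameTest is not set')
    return
  }
  expect(theName).to.be.equal('John Wick')
})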
- And now the best workaround: Using the Context of the Running Test, cy.state('runnable').ctx

A few months ago, David Ingraham wrote a very interesting article called Cypress — Simple Custom Command to Conditionally Skip Tests. In this article, he revisits an interesting solution used by the legacy Cypress plugin cypress-skip-test. This old plugin uses the context of the running test, available through cy.state('runnable').ctx, which allows you to apply the .skip annotation directly to that running test:

cy.state('runnable').ctx.skip()
Since this command accesses the state of a running test, it can only be placed inside a test. Therefore, the previous test could be changed to look something like this:
/// <reference types="cypress" />
// test-cond-ann.cy.js
// Skip the running test conditionally using the Test Context
const theName = 'Caine'
const theAge = 52

it('Name should be John Wick', () => {
  console.log(cy.state('runnable'))
  if (!Cypress.env('runNameTest')) {
    cy.state('runnable').ctx.skip()
  }
  expect(theName).to.be.equal('John Wick')
})

it('Age should be 52', () => {
  expect(theAge).to.be.equal(52)
})
In this case, the test will be skipped and registered as such in the terminal:
And the Cypress Log:
You can also place this conditional .skip within a beforeEach hook, so that it will skip all the tests if the condition is met.
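A minimal sketch of that idea, reusing the runNameTest environment variable from the earlier examples (the same pattern appears later for suites):

// Skip every test in this spec when the condition is met
beforeEach(() => {
  if (!Cypress.env('runNameTest')) {
    cy.state('runnable').ctx.skip()
  }
})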
Plugin cypress-expect to check test results (by Gleb Bahmutov)
Cypress does not include a built-in annotation to mark a test as expected to fail, a feature that is available in Playwright. I could not find a Cypress plugin that simulates or provides a workaround for this feature.
✨ If you know of one, please let me know, as it could be really useful, especially when you are including tests that verify certain failures, expecting them to fail. ✨
Recently, I came across Gleb Bahmutov's plugin cypress-expect. With this plugin, when you run your tests in the terminal, you can specify how many tests you expect to fail, pass, or skip, among other options.
Install the Cypress-Expect plugin as a development dependency:
npm i -D cypress-expect
Then, run the tests in the terminal as follows:
npx cypress-expect run --failing <N> --passing <M> --pending <P> ...
Let's consider an example. Using the Cypress framework, the following results were obtained when running tests in the terminal:
From the results, 11 tests have passed, 1 has failed, and 2 are skipped (marked as pending).
By running the cypress-expect command with these same expected numbers, a similar result will be achieved:
npx cypress-expect run --passing 11 --failing 1 --pending 2
But what happens if we mix up these numbers and claim we are expecting 7 tests to pass, 3 to fail, and 5 to be pending? Clearly, none of these numbers align with the actual results:
npx cypress-expect run --passing 7 --failing 3 --pending 5
Notice that beneath the results table, there is a message stating ERROR: expected 3 failing tests, got 1.
Interestingly, even though the numbers for passing and pending tests also did not meet the expected conditions, the message displayed by the Cypress-Expect plugin only highlights the failing ones.
I conducted additional testing with various combinations of passing, failing, and pending tests, both correct and incorrect, and found that the plugin only reports one mismatch at a time, following this priority order: failing → passing → pending.
According to the plugin documentation: "When running in parallel mode where the tests are split, this module would not work, since only a subset of specs will execute on the current machine".
I am also uncertain whether this plugin will function properly when running tests and registering results in Cypress Cloud.
I find this plugin interesting and potentially quite useful if it could report all mismatches at once. Additionally, it would be beneficial if it could intercept expected failures, preventing them from breaking the run either locally or in the CI pipeline.
PLAYWRIGHT ANNOTATIONS
Playwright follows a slightly different approach to annotations. According to the official documentation, it includes five main types: .only, .skip, .fixme, .fail, and .slow.
test.only()
The Playwright .only annotation works in the same way as its Cypress counterpart. It allows you to focus on running exclusively certain tests. To run a specific test in Playwright, simply add .only to the test function.
import { test, expect } from '@playwright/test';
// test-only.spec.ts
// This will only execute the second test titled 'Age should be 52',
// and it will pass.
const theName = 'Caine'
const theAge = 52

test('Name should be John Wick', async ({ page }) => {
  expect(theName).toEqual('John Wick');
});

test.only('Age should be 52', async ({ page }) => {
  expect(theAge).toEqual(52);
});
In the terminal, it shows that three tests ran and all three passed. This indicates that the test titled 'Age should be 52' was executed in the three default browsers (Chromium, Firefox, and WebKit):
The same information will also appear in the default HTML report:
test.skip()
The .skip annotation functions identically to its Cypress counterpart. It allows the skipping of specific tests or suites, disabling a test that is either incomplete or not relevant at the moment.
import { test, expect } from '@playwright/test';
// test-skip.spec.ts
// The first test titled 'Name should be John Wick' will fail (incorrect name).
// The second test titled 'Age should be 52' will be skipped.
const theName = 'Caine'
const theAge = 52

test('Name should be John Wick', async ({ page }) => {
  expect(theName).toEqual('John Wick');
});

test.skip('Age should be 52', async ({ page }) => {
  expect(theAge).toEqual(52);
});
In the terminal, it will show that three tests were skipped (corresponding to 'Age should be 52'), and three tests failed ('Name should be John Wick') across the three browsers.
I believe the output in the terminal when there are errors in Playwright is quite "ugly" (or not very clean, if you prefer). But that will be discussed in a different article. 🙂
The HTML report of the run will show a similar outcome:
You can see exactly which tests were skipped by clicking the Skipped tab in the HTML report:
test.fixme()
In Playwright, the .fixme annotation is used to mark a test that is failing and should not be executed during the test run. This type of annotation does not exist in Cypress.
import { test, expect } from '@playwright/test';
// test-fixme.spec.ts
// The first test titled 'Name should be John Wick' will fail (incorrect name).
// The second test titled 'Age should be 52' will be skipped, as it is marked as fixme.
const theName = 'Caine'
const theAge = 52

test('Name should be John Wick', async ({ page }) => {
  expect(theName).toEqual('John Wick');
});

test.fixme('Age should be 52', async ({ page }) => {
  expect(theAge).toEqual(52);
});
In essence, the .fixme annotation has the same effect on a test as the .skip annotation (it ignores the test).
If you don't believe it 😄, see the results in the terminal:
And in the default HTML report:
Whatever you wish to accomplish with .fixme can also be done with .skip. The only advantage of using the .fixme annotation is semantic and for documentation purposes, as it indicates that the test is inactive due to a failure and requires a fix, whereas .skip is not specific.
test.fail()
The .fail annotation is used to mark tests that are expected to fail. When Playwright runs the test, it confirms that the test actually fails. If the test doesn't fail, Playwright will notify you.
If we run these tests:
import { test, expect } from '@playwright/test';
// test-fail.spec.ts
// The first test will pass because the name is what is expected.
// The second test will also pass because, although the age is incorrect,
// the test is marked as expected to fail.
const theName = 'John Wick'
const theAge = 52

test('Name should be John Wick', async ({ page }) => {
  expect(theName).toEqual('John Wick');
});

test.fail('Age should be 25 - test passes although assertion fails', async ({ page }) => {
  expect(theAge).toEqual(25);
});
All the tests will pass because, in the first test, the assertion is satisfied. In the second test, where the assertion is failing, it is indeed expected to fail.
This is what the terminal shows:
And the default HTML report:
This .fail annotation does not exist in Cypress, but I have to say that I find it really useful. 👨‍🔧 💖 And why is that?
If you really want to know, you'll need to be patient and wait until I lay out my conclusions at the end of the article, especially my fellow Cypress enthusiasts. 😉
test.fail.only()
The .fail.only annotation is very interesting... It's a combination of the two annotations .fail and .only, and it is used to focus on a specific test that is expected to fail, which is useful when debugging a failing test.
import { test, expect } from '@playwright/test';
// test-failonly.spec.ts
// The first test will be ignored because of the .only annotation on the second test.
// The second test will pass because, although the age is incorrect,
// the test is marked as expected to fail.
const theName = 'John Wick'
const theAge = 52

test('Name should be John Wick', async ({ page }) => {
  expect(theName).toEqual('John Wick');
});

test.fail.only('Age should be 25 - test passes although assertion fails', async ({ page }) => {
  expect(theAge).toEqual(25);
});
Notice that in this case, only the second test, with the title 'Age should be 25 - test passes although assertion fails', is executed. Although the assertion fails, the test passes since it was marked as expected to fail.
test.slow()
The .slow annotation is used to mark a test that is expected to be slow, tripling its timeout. This is another annotation that is not supported by Mocha and, consequently, neither by Cypress.
import { test, expect } from '@playwright/test';
// test-slow.spec.ts
// The test is expected to be slow, so we triple its timeout with test.slow().
test('Test for a very slow page', async ({ page }) => {
  test.slow();
  await page.goto('https://www.slowpage.com');
  await expect(page).toHaveTitle('Welcome to the slow page');
});
For me, this .slow annotation is alright, and I understand why it's very convenient. However, I'm not particularly fond of it. I will explain my reasons for this later in the Resolution section.
Conditional Annotations
In Playwright, built-in annotations can be applied conditionally, meaning they take effect when the condition is true. Additionally, multiple annotations can be applied to the same test, each with its own configuration.
Conditional annotations are not supported by Cypress, except through the workaround described previously.
These annotation conditions can also utilize any test fixtures passed within the object provided to the async function.
For this example, we will skip the test when the browser is Firefox:
import { test, expect } from '@playwright/test';
// test-cond-annotation.spec.ts
// Skip the test if the browser is Firefox.
const theName = 'John Wick'

test('Name should be John Wick', async ({ page, browserName }) => {
  test.skip(browserName === 'firefox', 'Still working on it');
  expect(theName).toEqual('John Wick');
});
The terminal reports that one test was skipped and two tests passed:
We can see more details in the default HTML report, where the Passed tab shows the two tests for Chromium and Webkit:
And in the Skipped tab, the test for Firefox (which matched the condition):
You can also combine conditional annotations with other annotations in the tests. In this case, we are combining the .only annotation with a conditional .skip:
import { test, expect } from '@playwright/test';
// test-cond-multiple-annotation.spec.ts
// It will ignore the first test with the title 'Age should be 52'
// and will run only the second test with the title 'Name should be John Wick',
// skipping it if the browser is webkit or chromium.
const theName = 'John Wick'
const theAge = 52

test('Age should be 52', async ({ page }) => {
  expect(theAge).toEqual(52);
})

test.only('Name should be John Wick', async ({ page, browserName }) => {
  test.skip(browserName === 'webkit' || browserName === 'chromium', 'Skip for webkit and chromium');
  expect(theName).toEqual('John Wick');
});
In this last example, the first test is ignored, and only the second test will run due to the .only annotation. However, if the browser is WebKit or Chromium, this second test will be skipped, and it will only be executed when the browser is Firefox.
We obtain similar results in the HTML report:
Note that the first test titled 'Age should be 52' is completely ignored and does not appear in the report.
Annotations in beforeEach Hooks
You can also use annotations, with or without conditions, in beforeEach hooks. In this case, the annotation will apply to all the tests within the scope of the beforeEach.
Example of a conditional .fixme annotation within a beforeEach hook:
import { test, expect } from '@playwright/test';
// test-fixme-beforeeach.spec.ts
// It will skip both tests if we are testing on mobile devices.
// Otherwise, it will visit google.com and check the page title in
// one of the tests and the URL in the other.
test.beforeEach(async ({ page, isMobile }) => {
  test.fixme(isMobile, 'Google page not in mobile yet');
  await page.goto('https://www.google.com');
});

test('Check Google page', async ({ page }) => {
  await expect(page).toHaveTitle('Google')
});

test('Check Google url', async ({ page }) => {
  await expect(page).toHaveURL('https://www.google.com')
});
Notice that when we place the .fixme annotation within the beforeEach hook, it applies to both tests in that block.
Example of a .fail annotation (without a condition) in a beforeEach hook:
import { test, expect } from '@playwright/test';
// test-fixme-beforeeach2.spec.ts
// It will pass both tests because they are expected to fail
// (notice the wrong title and the wrong URL in the assertions).
test.beforeEach(async ({ page, isMobile }) => {
  test.fail();
  await page.goto('https://www.google.com');
});

test('Check Google page fails', async ({ page }) => {
  await expect(page).toHaveTitle('Googleeeeeeee')
});

test('Check Google url fails', async ({ page }) => {
  await expect(page).toHaveURL('https://www.google.commmmmmmm')
});
In this case, because the .fail annotation does not include a condition, all the tests are expected to fail.
Annotate Tests
If you want to label your tests with more detailed information than just a tag, you can achieve this by using the annotation property in the test options object when declaring a test.
An annotation is an object that includes a type and a description for added context, and annotations are accessible via the reporter API. In Playwright, the built-in HTML reporter displays all annotations, except for those whose type begins with an underscore (_).
import { test, expect } from '@playwright/test';
// test-with-annotation.spec.ts
// Annotate the tests with a detailed description of the issue.
const theName = 'WICK-A11Y'

test('Plugin name should be WICK-A11Y', {
  annotation: {
    type: 'issue',
    description: 'Fix issue with plugin name https://github.com/sclavijosuero/wick-a11y/issues',
  },
}, async ({ page }) => {
  expect(theName).toEqual('WICK-A11Y');
});
In the terminal all tests will pass:
When we check the details of the test in the default HTML report, we can observe the annotation information provided in the type and description fields:
Detailed test annotations are not supported in Cypress. I find them very useful for documenting your tests in detail, allowing you to review that information if needed when checking the test results.
You can also assign multiple annotations to the same test:
test('complete report evaluation', {
  annotation: [
    { type: 'bug', description: 'Check details at: https://github.com/microsoft/playwright/issues/23180' },
    { type: 'efficiency', description: 'This test has performance delays.' },
  ],
}, async ({ page }) => {
  // Additional test logic here...
});
Runtime Annotations
You can add annotations dynamically to test.info().annotations, even while the test is in progress.
test('sample test case', async ({ page, browser }) => {
  const version = browser.version();
  test.info().annotations.push({
    type: 'browserDetails',
    description: `Browser version: ${version}`,
  });
  // ...
});
In this test, an annotation containing the current browser version is dynamically created and linked to the test. This ensures that the browser version information is included in the test results report, providing valuable context for analyzing the test outcomes.
💕 GROUP TESTS (aka SUITES)
With group tests (suites) you can organize related tests under a logical name, simplifying identification and management. This naming helps in recognizing their purpose and scope, while facilitating the execution or exclusion of specific test groups. Such organization enhances testing efficiency and maintains clarity in test management.
CYPRESS GROUPS
Cypress also uses Mocha's BDD syntax for grouping tests, employing describe() and context() to define test groups.
describe()
The describe() function in Cypress is used to define a test suite that groups related test cases together. It serves as a container for it() blocks, allowing you to organize and structure tests in a readable and maintainable manner.
describe('Test Suite Name', () => {
  it('Test case 1', () => {
    // Test logic for case 1
  });
  it('Test case 2', () => {
    // Test logic for case 2
  });
});
context()
The context() function is identical to describe(), serving simply as an alias.
Many Cypress QA engineers, myself included, use describe() in a spec file as the outer main test group, and context() to create subgroups of related tests within the main describe(). However, to the best of my knowledge, there isn't a universally accepted convention or well-defined best practice for this usage.
Let's check out this example:
// test-groups.cy.js
describe('User Authentication Suite', () => {
  context('Login Tests', () => {
    it('should log in with valid credentials', () => {
      // Test logic for valid login
    });
    it('should not log in with invalid credentials', () => {
      // Test logic for invalid login
    });
  });
  context('Registration Tests', () => {
    it('should register a new user', () => {
      // Test logic for new user registration
    });
    it('should not register with an existing email', () => {
      // Test logic for duplicate email registration
    });
  });
});
In the spec above, the describe() function is used to define the primary test suite, 'User Authentication Suite'. It serves as a container for multiple related authentication test cases, providing a high-level overview of the functionality being tested.
The context() function is used to create subgroups such as 'Login Tests' and 'Registration Tests', effectively organizing closely related tests for those specific areas.
In the Cypress Log, you can observe that all four tests are executed in a hierarchical representation, illustrating their structured grouping:
Note that describe() and context() also support the use of the Cypress annotations .only and .skip. Let's look at an example of this:
// test-groups-annotation.cy.js
describe('User Management Suite', () => {
  context('User Registration', () => {
    it('should register a new user successfully', () => {
      // Test logic for user registration
    });
    it.skip('should not register with already existing email', () => {
      // Test logic for duplicate email registration
    });
  });
  context('User Login', () => {
    it('should login with valid credentials', () => {
      // Test logic for valid login
    });
    it.only('should not login with invalid credentials', () => {
      // Test logic for invalid login
    });
  });
});
The test suite named 'User Management Suite', created using the describe() function, organizes tests into two subgroups with context(): 'User Registration' and 'User Login'.
Under the 'User Registration' context(), there are two tests: 'should register a new user successfully', and 'should not register with already existing email', which has a .skip annotation.
For the 'User Login' context(), there are also two tests: 'should login with valid credentials', and 'should not login with invalid credentials', the last one carrying a .only annotation.
Which of the four tests do you think will be executed when we run this spec file?
Take a moment to think... 🤔🤔🤔
This is what the execution would look like in the Cypress Log:
Only the second test, 'should not login with invalid credentials', within the second context() 'User Login', will be executed. This is precisely the test marked with the .only annotation!
The rule is that in a test file, only the tests with the .only annotation will run, regardless of how many suites are in the test file or at which nested level the .only annotation is applied.
For Cypress suites (describe or context), the workaround using the context of a running test, cy.state('runnable').ctx, cannot be applied directly, as suites are not running tests. However, a different workaround to support conditional skipping of a suite is possible by placing the skip condition within a beforeEach hook scoped to the group.
Something like this:
describe('Workaround Conditional Skip', () => {
  beforeEach(() => {
    // Skip all tests within the describe if the condition is met
    if (!Cypress.env('runNameTest')) {
      cy.state('runnable').ctx.skip()
    }
  });

  it('test', () => {
    // ...
  });
});
PLAYWRIGHT GROUPS
For more details, see the official documentation on grouping tests: https://playwright.dev/docs/test-annotations#group-tests
test.describe()
The test.describe() function in Playwright is used to create a test suite, grouping related test cases together. It acts as a container for test() blocks, aiding in the organization and structuring of tests.
You can use the Playwright annotations .only, .skip, and .fixme to control the execution of a describe block.
The same rule that applies to Cypress also applies to Playwright: in a test file, only the tests with the .only annotation will run, regardless of how many suites are in the test file or at which nested level the .only annotation is applied.
Also, a describe block can include nested describes to define different scopes within the test file.
import { test, expect } from '@playwright/test';
// test-groups-annotation.spec.ts
test.describe('Main Application', () => {
  test.describe('User Management', () => {
    test.describe('User Registration', () => {
      test.fail('should show error for invalid email', async ({ page }) => {
        expect(true).toBe(false);
      });
      test('should successfully register a new user', async ({ page }) => {
        expect(true).toBe(true);
      });
    });
    test.describe.skip('User Deletion', () => {
      test('should delete user successfully', async ({ page }) => {
        expect(true).toBe(true);
      });
      test('should not delete non-existent user', async ({ page }) => {
        expect(true).toBe(true);
      });
    });
  });
  test.describe('User Authentication', () => {
    test('should log in with valid credentials', async ({ page }) => {
      expect(true).toBe(true);
    });
    test.fixme('should not log in with invalid credentials', async ({ page }) => {
      expect(true).toBe(true);
    });
  });
});
And this is the result: nine tests pass and nine tests are skipped.
Conditional Annotations in test.describe()
You can also apply conditional annotations to groups. Here is an example of conditionally skipping a group of tests:
import { test, expect } from '@playwright/test';
// test-cond-annotation-groups.spec.ts
test.describe('browser-specific tests', () => {
  test.skip(({ browserName }) => browserName === 'firefox', 'Skip on Firefox!');

  test.beforeEach(async ({ page }) => {
    // This hook skips on Firefox.
    await page.goto('https://www.google.com');
  });

  test('test', async ({ page }) => {
    // This test skips on Firefox.
    await expect(page).toHaveTitle('Google')
  });
});
The beforeEach hook and the test will be skipped when the browser is Firefox, since the conditional skip is applied at the top of the describe block.
Lastly, I would like to mention that you can define detailed annotations for groups using the annotation property.
import { test, expect } from '@playwright/test';
// test-annotation-groups.spec.ts
// Annotations belong to the describe block
test.describe('invoice tests', {
  annotation: { type: 'category', description: 'invoice' },
}, () => {
  test('check invoice details', async ({ page }) => {
    // ...
  });
  test('verify complete invoice', async ({ page }) => {
    // ...
  });
});
test.describe.configure()
In Playwright, test.describe.configure() is used to set the configuration for a specific test suite. It can be executed either at the top level of the test file or inside a describe. This method allows you to customize the behavior of tests within that suite, such as setting a timeout, specifying retries, or configuring the execution mode (parallel, serial, or default). In other words, it provides a way to tailor the execution environment or test execution rules for a set of tests grouped under a test.describe block.
In the example below, Playwright runs both describe blocks in parallel, but the tests inside each describe run in order:
test.describe.configure({ mode: 'parallel' });

test.describe('A, runs in parallel with B', () => {
  test.describe.configure({ mode: 'default' });
  test('in order A1', async ({ page }) => {});
  test('in order A2', async ({ page }) => {});
});

test.describe('B, runs in parallel with A', () => {
  test.describe.configure({ mode: 'default' });
  test('in order B1', async ({ page }) => {});
  test('in order B2', async ({ page }) => {});
});
ACT 3: RESOLUTION
You said Cypress VS Playwright, so... Speak up!
OK, as promised I will speak up! 📣
Let's start with Annotations:
We can all agree that all the annotations supported by Cypress (CY) are also supported by Playwright (PW), and these are .only and .skip. However, Playwright offers quite a few more options: .fixme, .fail, and .slow.
To me, some of these additional annotations in Playwright are really interesting (others, not so much)!
The .fixme annotation (PW) essentially does the same as .skip, but the word "FIXME" clearly indicates the actions that still need to be addressed. Although not indispensable, I think it's useful.
Now, the .fail annotation (PW)... WOW! This one is really cool! I mentioned before that I would tell you why, and now is the moment.
When I created my open-source plugin cypress-ajv-schema-validator for JSON schema validation in Cypress API testing, I also developed some tests where the schema should fail. This way, when I do a new release, I can check the results of those tests to ensure the schema errors are flagged correctly. However, as you can imagine, these failures cause the GitHub Actions for the CI/CD to fail. It's okay because that's what it's supposed to do (fail the test), but an annotation like Playwright's .fail would have been (and would be) very beneficial for cases where a test is expected to fail, without affecting the CI/CD pipeline.
In fact, I'm considering creating a new Cypress plugin to support a .fail annotation.
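As a possible starting point (a rough sketch of the idea, not an actual plugin), Cypress's fail event can be used to swallow the error of a test that is expected to fail, since a fail handler that does not re-throw the error keeps the test from failing:

// Hypothetical 'expected to fail' test using the fail event
it('schema validation is expected to fail', () => {
  cy.on('fail', (err) => {
    // Not re-throwing the error prevents Cypress from failing this test
    console.log('Expected failure:', err.message)
  })
  expect('wrong-schema').to.equal('right-schema') // fails, as expected
})

Unlike Playwright's .fail, this sketch would not warn you if the test unexpectedly passes, so a real plugin would need some extra bookkeeping.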
Now it's the turn of the .slow annotation (PW). As I mentioned earlier, this annotation isn't a big deal for me. In fact, I would actually avoid it in Cypress if it existed (at least in the final test code). The reason is that, as many of you know, I'm not a big fan of arbitrary waiting. The .slow annotation triples the timeout of a test, but why not double or quadruple it instead?
There are many reasons why a test can be slow, and these can lead to flaky tests. If a test is flaky, I'm certainly very interested in discovering why. Once identified, I would prefer to resolve the issue in ways other than just increasing the timeout. If I must increase the timeout, I would rather do so only for the element in the test causing the slowness.
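For example, in Playwright you can give extra time to just the slow step instead of the whole test (a minimal sketch; the selector and timeout value are arbitrary examples):

import { test, expect } from '@playwright/test';

test('only the slow step gets extra time', async ({ page }) => {
  await page.goto('https://www.example.com');
  // Raise the timeout only for the assertion on the slow element
  await expect(page.locator('#slow-widget')).toBeVisible({ timeout: 30_000 });
});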
So, for me, the .slow annotation is more of a negative feature than a positive one. However, I totally understand why it can be attractive, especially during the debugging process.
Conditional Annotations... This is definitely a 'super' feature in Playwright! It provides total control over which tests to execute at runtime. Yeah, baby! 🕶️
These conditions can be set to almost anything, including leveraging the powerful Playwright fixtures. Additionally, the capability to include conditional annotations within beforeEach hooks offers an extra level of control over your test execution. There are some workarounds for Cypress, but come on! You really need to know the ins and outs of the context of test executions.
One more thing before we move on from annotations: the ability to annotate tests with more details using the annotation property (PW), even dynamically, might not be critical for many QA engineers, but it would certainly be useful for those who love documenting their tests.
And now let's talk about Group Tests (Suites):
Similar to annotations, everything that Cypress can do regarding suites is supported by Playwright. However, Playwright suites offer additional powerful features, such as conditional annotations and the ability to annotate those suites with more detailed information using the already mentioned annotation property.
Moreover, Playwright supports test.describe.configure, allowing you to set the behavior of suites and tests during execution, such as running them in parallel or serially. This, along with Playwright's capability to run test specs in parallel, will considerably expedite test execution, giving you full control over your runs.
Wrap up
Wow, that's a lot!
In this article, we not only explored the differences between Cypress and Playwright (or Playwright and Cypress if you prefer 😉) concerning Annotations and Group Tests (aka Suites), but we also examined all the nitty-gritty details about what you can and cannot do in each of these test frameworks.
If you want to experiment with all the examples discussed in this article, you can clone the repository sclavijosuero/cypress-vs-playwright-frameworks. 🧪 🧫
This repository also contains the examples from the first article in the series, "The Test Drama (The Opening Salvo): Cypress vs Playwright Installation - The Good, The Bad, and the... Bug-ly!", and it will be updated with examples for all future articles in this series.
Cheers!
I'd love to hear from you! Please don't forget to follow me, leave a comment, or a reaction if you found this article useful or insightful. ❤️ 🦄 🤯 🙌 🔥
You can also connect with me on my new YouTube channel: https://www.youtube.com/@SebastianClavijoSuero
If you'd like to support my work, consider buying me a coffee or contributing to a training session, so I can keep learning and sharing cool stuff with all of you.
Thank you for your support!