Oleksii Lytvynov

How ReportPortal "Made" Pytest Run Twice

A story of how a coincidence led to an unexpected behavior

We used to store our test run results in TestRail, but the time came to switch to something else. Interestingly, this isn’t the first time I’ve seen teams move away from TestRail recently. We considered various options for storing test results and eventually chose ReportPortal; perhaps I’ll cover the other options some other time.

Integrating ReportPortal with Pytest

What surprised me was how simple the integration turned out to be. Unlike TestRail, which required writing an API client, publishing results to ReportPortal only requires specifying a few command-line options.

As you might guess from the title, the tests are written in Pytest. To publish results to ReportPortal, you just need to add the following options to the command that runs the tests:

  • --reportportal: enable publishing
  • --rp-endpoint: the address of your ReportPortal server
  • --rp-api-key: your API key, generated in the User Profile section of ReportPortal
  • --rp-project: the name of the project where results should be published

For example, to run all tests and publish results to ReportPortal, we could use the following command:

pytest --reportportal --rp-endpoint=<RP_SERVER_ADDRESS> --rp-api-key=<RP_API_KEY> --rp-project=<PROJECT_NAME>

If we run tests with results publishing from a local machine, we can define these parameters in the pytest.ini file:

[pytest]
rp_endpoint=<RP_SERVER_ADDRESS>
rp_api_key=<RP_API_KEY>
rp_project=<PROJECT_NAME>
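With these values in pytest.ini, a local run only needs the flag that enables publishing. A minimal example (the tests/ path here is just an assumed project layout):

pytest --reportportal tests/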

ReportPortal provides additional options as well. For example, --rp-rerun to overwrite results in an existing launch or --rp-launch-description to add a description to the test launch.

Note that command-line options use hyphens (-), while pytest.ini parameters use underscores (_).
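For example, the same launch description can be set either way (the description text is arbitrary):

# command line:
--rp-launch-description="Nightly regression"

# pytest.ini:
rp_launch_description = Nightly regression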

We publish results during the regression run, once the final build is ready. Before that point, regression may be run whenever tests are updated, but without recording the results. Regression is executed with GitLab CI/CD on dedicated machines.

Adding GitLab CI/CD Integration

Next, we need to integrate result publishing into the GitLab CI/CD pipeline. Some runs should publish results, while others should not.

We add a dropdown variable to the variables section that defines whether to publish results:

variables:
  IS_RP_RUN:
    value: "no"
    options:
      - "no"
      - "yes"
    description: "Select 'yes' to publish results to Report Portal"

Here is what we get in the GitLab CI/CD UI:

Dropdown on GitLab CI/CD UI

GitLab CI/CD supports text variables and dropdowns. Dropdowns are useful for predefined options like enabling/disabling results publishing or selecting the OS/browser to run tests on.

By default, I choose the value "no", as results should only be published for debugged tests. Depending on the IS_RP_RUN value, we either pass or omit the options required for publishing results.

The following conditional logic is added to the rules section of the job:

  rules:
    - if: $IS_RP_RUN == "no"
      variables:
        RP_OPTIONS: ""
      when: always
    - if: $IS_RP_RUN == "yes"
      variables:
        RP_OPTIONS: '--reportportal --rp-endpoint=<RP_SERVER_ADDRESS> --rp-api-key=<RP_API_KEY> --rp-project=<PROJECT_NAME>'
      when: always
    - when: never

In the script section, we add the command to run all tests:

pytest $RP_OPTIONS

The conditions above pass the publishing parameters when we select "yes". If we don't want to publish results, an empty string is passed instead.

In fact, with the rules specified above, the pipeline will run on every commit. You could add one more variable for the build version and update the condition so that tests run only when the version is specified, or add a condition not to run the pipeline on push events.
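A sketch of such extra rules (BUILD_VERSION is a hypothetical variable; adapt to your pipeline):

  rules:
    # don't start the pipeline on ordinary push events
    - if: $CI_PIPELINE_SOURCE == "push"
      when: never
    # run only when a build version is provided
    - if: $BUILD_VERSION == null || $BUILD_VERSION == ""
      when: never
    # ...the IS_RP_RUN rules shown above go here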

Let's run the pipeline and see what we get in ReportPortal (the run ID is 23 because I ran it several times before).

ReportPortal test launch

We may need to pass other parameters to ReportPortal, so we create an additional text variable, RP_ADDITIONAL_OPTIONS, with an empty string as its default value. When we run tests with results publishing, we can enter extra parameters there.
The test run command then looks like this:

pytest $RP_OPTIONS $RP_ADDITIONAL_OPTIONS
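The variable itself can be declared next to IS_RP_RUN in the variables section (a sketch; the description text is mine):

variables:
  RP_ADDITIONAL_OPTIONS:
    value: ""
    description: "Extra ReportPortal options, e.g. --rp-rerun"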

Now let's run tests again with the --rp-rerun option:

GitLab CI/CD UI with text variable

ReportPortal successfully shows that the tests were rerun (the Rerun mark appears on the same launch #23):

ReportPortal test launch with Rerun label

If I want to add a description to the test launch, I should specify the option --rp-launch-description="Experimental run". But then the first problem occurs: the description will only say "Experimental", and the tests will not run at all.

ReportPortal test launch with cut description

The problem occurs because the expanded variable is split on whitespace, so the second part of the launch description (run") is treated as the next argument: a test module that Pytest tries to collect but cannot find.
It is easy to fix: just quote the variable:

pytest $RP_OPTIONS "$RP_ADDITIONAL_OPTIONS"
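To see the splitting itself, here is a quick shell demonstration, independent of Pytest and ReportPortal:

OPTS='--rp-launch-description="Experimental run"'

printf '[%s]\n' $OPTS     # unquoted: split into two arguments
# [--rp-launch-description="Experimental]
# [run"]

printf '[%s]\n' "$OPTS"   # quoted: passed as a single argument
# [--rp-launch-description="Experimental run"]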

This is what we get now:

ReportPortal test launch with full description

We are already approaching the "double" run :) It is not always necessary to run all tests: while we are debugging, we may want to run a single module. To do this, we add another variable, MODULE_TO_RUN, in which we specify the name of the module with tests. To run all tests, we set test_*.py as the default value (let's say that in this project tests are organized only by modules).
The command to run tests then looks like:

pytest $RP_OPTIONS "$RP_ADDITIONAL_OPTIONS" $MODULE_TO_RUN
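MODULE_TO_RUN can be a plain text variable with the pattern as its default value (a sketch):

variables:
  MODULE_TO_RUN:
    value: "test_*.py"
    description: "Module with tests to run; the default pattern runs everything"

Note that $MODULE_TO_RUN is deliberately left unquoted in the command above, so the shell can expand the test_*.py pattern into actual file names.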

...And now, the double run

I should mention straight away that this issue reproduces only on Linux.

While I was testing runs with additional ReportPortal options, everything was fine. And while I was adding the ReportPortal integration itself, I ran a module with only one test function. Now it was time to run the tests as in a real battle: all tests, and no additional ReportPortal options.

Without reporting, the runtime of all tests was half an hour.
And I began to wait.
And wait.
And wait some more...

But half an hour had already passed. All the tests had already appeared in ReportPortal (it publishes results after each test function, so you see them immediately). And then more tests began to appear in ReportPortal, more than there were in the project!

Let's look at the command that starts the tests this time, with all the values substituted:

pytest --reportportal --rp-endpoint=<RP_SERVER_ADDRESS> --rp-api-key=<RP_API_KEY> --rp-project=<PROJECT_NAME> "" test_*.py

There seems to be nothing criminal here. The only thing that might raise an eyebrow is the empty string in quotes. Let's search for "pytest runs tests twice when contains quotes"; the second search result is an issue in the Pytest bug tracker:

Quotation marks in pytest command collects duplicate tests #7012

In the defect description we can see our case:

pytest --collect-only tests.py ""

If you read the details, you will find that the empty quoted string is interpreted by Pytest as a LocalPath pointing to the current directory (.), so the tests matched by test_*.py are collected a second time.

This problem is not yet solved in Pytest, so how can we fix it on our side? We need some non-empty value in the RP_ADDITIONAL_OPTIONS variable.
Let's look at the list of ReportPortal options and choose one with a default value, for example --rp-mode=DEFAULT. Running the tests now, we see that they run only once.
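With that default in place, the fully substituted command no longer contains an empty argument:

pytest --reportportal --rp-endpoint=<RP_SERVER_ADDRESS> --rp-api-key=<RP_API_KEY> --rp-project=<PROJECT_NAME> "--rp-mode=DEFAULT" test_*.py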


ReportPortal offers many features for improving regression runs, though I didn't find everything I wanted (for example, grouping tests by modules and packages). I also ran into one more interesting peculiarity of ReportPortal, but I'll tell you about it some other time.
