If your project has a large number of tests, they may take a long time to complete on a single machine, especially when you run them in Continuous Integration. Running tests in parallel across many virtual machines can save your team time and money. Since version 3.1.0, Cypress can run recorded tests in parallel across multiple machines. Although parallel tests can technically run on a single machine as well, it is not recommended, because the machine would require significant resources to run your tests efficiently.
In this tutorial we will assume that you already have a project running and recording within Continuous Integration. If you don’t have a project set up yet, please read our tutorial on Continuous Integration. If you intend to run your tests across multiple browsers, we recommend that you go through our Cross Browser Testing tutorial for helpful CI strategies when using parallelization.
Splitting up your test suite
The parallelization strategy of Cypress is file-based. Hence, for you to utilize parallelization, your tests have to be split across separate files.
Based on your balance strategy (discussed later in this tutorial), Cypress will assign each spec file to an available machine.
The run order of the spec files cannot be guaranteed when parallelized, because of the balance strategy.
Turning on parallelization
- First, you will need to refer to your CI provider’s documentation on how to set up multiple machines to run in your CI environment.
- You can then pass the --parallel flag to cypress run so that your recorded tests are parallelized once multiple machines are available within your CI environment.
```
cypress run --record --key=abc123 --parallel
```
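How you provision those machines is provider-specific. As one hedged illustration, GitLab CI can fan a job out with its built-in `parallel` keyword, with every instance invoking the same command; the job name, image tag, and the `CYPRESS_RECORD_KEY` CI variable below are assumptions for the sketch, not values your project will necessarily use:

```yaml
# .gitlab-ci.yml (illustrative sketch; job name and image are assumptions)
cypress-e2e:
  image: cypress/base:14.16.0   # a Cypress-ready base image (assumed tag)
  parallel: 2                   # GitLab starts 2 identical job instances
  script:
    # every instance runs the same command; the Dashboard balances specs
    - npx cypress run --record --key $CYPRESS_RECORD_KEY --parallel
```

Other providers have equivalent mechanisms (build matrices, node counts); the important part is that each machine ends up running the same `cypress run --parallel` command.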
CI parallelization interactions
When you run in parallelization mode, the Cypress Dashboard Service interacts with your CI machines to orchestrate the parallelization and load-balancing of specs across the available CI machines, following this process:
- The CI machines contact the Cypress Dashboard Service to indicate the spec files to run in the project.
- A machine opts in to receiving a spec file to run by contacting Cypress.
- As Cypress receives requests from CI machines, it calculates the estimated duration to test each spec file.
- Based on these estimations, Cypress distributes (load-balances) spec files one-by-one to each available machine in a way that minimizes overall test run time.
- As soon as a CI machine finishes running its assigned spec file, more spec files are distributed to it. This process repeats until all spec files are complete.
- When all spec files have run, Cypress waits for a configurable amount of time before considering the test run fully complete. This is done to support grouping of runs.
In summary, every Test Runner will send a list of the spec files to the Dashboard Service, and the service will send back one spec at a time to each Test Runner to run.
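The load-balancing step can be sketched with a toy greedy balancer. This is an illustration only, not Cypress's actual implementation; the spec names and durations are estimates borrowed from the demo project later in this tutorial:

```shell
# Toy sketch of greedy load-balancing (not Cypress's actual code).
# Specs are listed longest-first; each spec goes to whichever of two
# machines currently has the least accumulated work.
specs="actions:14 waiting:5 traversal:4 misc:4 cypress_api:3 navigation:3"

m1=0; m2=0    # accumulated seconds of work on machine 1 and machine 2
for s in $specs; do
  name=${s%:*}   # spec name before the colon
  dur=${s#*:}    # estimated duration after the colon
  if [ "$m1" -le "$m2" ]; then
    m1=$((m1 + dur)); echo "machine 1 <- $name (${dur}s)"
  else
    m2=$((m2 + dur)); echo "machine 2 <- $name (${dur}s)"
  fi
done
echo "machine 1: ${m1}s total, machine 2: ${m2}s total"
```

The two machines end up with nearly equal totals (17s and 16s), which is the point of starting with the longest specs: the tail of short specs fills in whatever imbalance remains.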
Cypress automatically balances your spec files across the available machines in your CI provider. Cypress will calculate which spec file to run based on the data collected from previous runs. This will ensure that your spec files run as fast as possible, without the need for manual configuration.
As you record more and more tests to the Cypress Dashboard, Cypress will be able to better predict how long a given spec file will take to run. In order to prevent irrelevant data from affecting the duration prediction, Cypress does not use old historical run data regarding the spec file.
Spec duration history analysis
With a duration estimation for each spec file of a test run, Cypress can assign spec files to the available CI resources in descending order of estimated spec run duration. This ensures that the most time-consuming specs start first, which minimizes the overall test run duration.
The example below shows a demo Cypress project and the results of running its tests with and without parallelization.
In this example, a single machine runs a job named demo-1x, in which Cypress runs all 18 specs one by one, alphabetically. It takes 1 minute and 50 seconds (1:50) to complete all the tests.
demo-1x, Machine #1
- actions.spec.js (14s)
- aliasing.spec.js (1s)
- assertions.spec.js (1s)
- connectors.spec.js (2s)
- cookies.spec.js (2s)
- cypress_api.spec.js (3s)
- files.spec.js (2s)
- local_storage.spec.js (1s)
- location.spec.js (1s)
- misc.spec.js (4s)
- navigation.spec.js (3s)
- network_requests.spec.js (3s)
- querying.spec.js (1s)
- spies_stubs_clocks.spec.js (1s)
- traversal.spec.js (4s)
- utilities.spec.js (3s)
- viewport.spec.js (3s)
- waiting.spec.js (5s)
It should be noted that the individual spec run times add up to only 0:54, while the total is 1:50. The extra time in the total run represents the time taken to start the browser, encode and upload the video to the Dashboard, and request the next spec to run.
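A quick sanity check of those numbers shows how much of the run is overhead rather than spec time:

```shell
# 18 specs totalling 0:54 of spec time versus a 1:50 wall-clock run
# leaves the per-run overhead: browser startup, video encoding and
# upload, and requesting the next spec.
spec_seconds=54
total_seconds=$((1 * 60 + 50))      # 1:50 expressed in seconds
overhead=$((total_seconds - spec_seconds))
echo "overhead: ${overhead}s spread across 18 specs"
```

That is 56 seconds of overhead, roughly 3 seconds per spec, which is exactly the kind of fixed cost that parallelization spreads across machines.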
When we run the same project with parallelization, Cypress uses its balance strategy to order the specs based on each spec’s previous run history. To illustrate this, we ran all the tests again with parallelization across 2 machines. The run finished in 58 seconds.
demo-2x, Machine #1, 9 specs
- actions.spec.js (14s)
- traversal.spec.js (4s)
- misc.spec.js (4s)
- cypress_api.spec.js (4s)
- cookies.spec.js (3s)
- files.spec.js (3s)
- location.spec.js (2s)
- querying.spec.js (2s)
- local_storage.spec.js (1s)
demo-2x, Machine #2, 9 specs
- waiting.spec.js (6s)
- navigation.spec.js (3s)
- utilities.spec.js (3s)
- viewport.spec.js (4s)
- network_requests.spec.js (3s)
- connectors.spec.js (2s)
- assertions.spec.js (1s)
- aliasing.spec.js (1s)
- spies_stubs_clocks.spec.js (1s)
There is a clear difference in running times and machines used. Parallelizing our tests across 2 machines saved almost 50% of the total run time, and we could decrease the build time further by adding more machines.
Grouping test runs
You can label and associate multiple cypress run calls with a single run by passing the --group <name> flag, where name is an arbitrary reference label. The group name must be unique within the associated test run.
Note that CI machines are required to share a common CI build ID environment variable in order for multiple runs to be grouped into a single run.
Typically, these CI machines run in parallel or within the same build workflow or pipeline. However, it is not required to use Cypress parallelization to group runs; grouping of runs can be utilized independently of Cypress parallelization.
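Putting the pieces together, two grouped calls might look like the sketch below. The group names and the BUILD_ID value are illustrative (on Jenkins, for example, you might use $BUILD_NUMBER); the key point is that every grouped call passes the same --ci-build-id:

```shell
# Illustrative sketch: two cypress run calls grouped into one Dashboard
# run. BUILD_ID and the group names are hypothetical values.
BUILD_ID="build-1234"
ADMIN_CMD="cypress run --record --group admin --ci-build-id $BUILD_ID"
USER_CMD="cypress run --record --group user --ci-build-id $BUILD_ID"
echo "$ADMIN_CMD"
echo "$USER_CMD"
```

Because both commands share the same CI build ID, the Dashboard links them into a single run with two labeled groups.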
Grouping by browser
Cypress enables you to test your application against different browsers and then view the results under a single run within the Dashboard. In the example below, we will name the groups the same name as the browser being tested:
- The first group will be called Windows/Chrome 83
```
cypress run --record --group Windows/Chrome-83 --browser chrome
```
- The second group will be called Mac/Chrome 84
```
cypress run --record --group Mac/Chrome-84 --browser chrome
```
- The third group will be called Linux/Electron. Electron is the default browser that is used in Cypress runs.
```
cypress run --record --group Linux/Electron
```
Grouping to label parallelization
The power of Cypress parallelization is also available to groups. To demonstrate this, we will run one group that tests against Chrome with 2 machines, one that tests against Electron with 4 machines, and another that tests against Electron with a single machine:
```
cypress run --record --group 1x-electron
cypress run --record --group 2x-chrome --browser chrome --parallel
cypress run --record --group 4x-electron --parallel
```
We have used the 1x, 2x and 4x group prefixes here to indicate the level of parallelism of each run; this naming convention is helpful but not required.
When these tests run, the fastest will be the Electron group with 4 machines, followed by the Chrome group with two machines, and then the Electron group with a single machine.
Grouping by spec context
Consider the case where you have an application with a customer-facing portal, a guest-facing portal, and an administration portal. You can organize and test these three parts of your application within the same run:
- We will call one group package/admin:
```
cypress run --record --group package/admin --spec 'cypress/integration/packages/admin/**/*'
```
- And another will be called package/customer:
```
cypress run --record --group package/customer --spec 'cypress/integration/packages/customer/**/*'
```
- We will call the last group package/guest:
```
cypress run --record --group package/guest --spec 'cypress/integration/packages/guest/**/*'
```
This pattern is especially useful for projects in a monorepo. Each segment of the monorepo can be assigned its own group, and larger segments can then be parallelized to speed up their testing.
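In a monorepo this pattern is easy to script. The sketch below builds one grouped command per package; the package names are the illustrative ones from the example above, not a convention Cypress imposes:

```shell
# Sketch: one group per monorepo package, each scoped to its own spec
# directory. Package names here are illustrative.
for pkg in admin customer guest; do
  cmd="cypress run --record --group package/$pkg --spec 'cypress/integration/packages/$pkg/**/*'"
  echo "$cmd"
done
```

In a real pipeline you would execute each command (on its own CI machine, or with --parallel for the larger packages) instead of echoing it.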
Linking CI machines for parallelization or grouping
To associate multiple CI machines with one test run, you will have to use a CI build ID. This ID is based on environment variables which are unique to each CI build and vary by CI provider. Cypress has out-of-the-box support for most commonly used CI providers, so you will typically not have to set the CI build ID directly via the --ci-build-id flag.
CI Build environment variable by provider
Currently, Cypress uses the following CI environment variables to determine a CI build ID for a test run:
|Provider|Environment variables|
|GitLab|CI_PIPELINE_ID, CI_JOB_ID, CI_BUILD_ID|
It is possible to pass a different value to link agents to the same run. For instance, if you are using Jenkins and think that the environment variable BUILD_TAG is more unique than the environment variable BUILD_NUMBER, you can pass the BUILD_TAG value via the --ci-build-id CLI flag.
```
cypress run --record --parallel --ci-build-id $BUILD_TAG
```
Run completion delay
When you are in parallelization mode or when you are grouping runs, Cypress waits for a specified amount of time before completing the test run in case any more relevant work remains. This is done to compensate for various scenarios where CI machines could be backed-up in a queue.
This waiting period is known as the run completion delay and it starts after the last known CI machine has completed.
By default, this delay is 60 seconds, but it can be configured on the Dashboard project settings page.
Visualizing parallelization and groups in the Dashboard
You can see the result of each of the spec files that ran within the Dashboard Service in the run’s Specs Tab. Specs can be visualized within a Timeline, Bar Chart and Machines View.
Timeline View
The Timeline View will chart your spec files as they ran relative to each other. This is very helpful when you want to visualize how your tests ran chronologically across all the available machines.
Bar Chart View
The Bar Chart View will visualize the duration of your spec files relative to each other.
Machines View
The Machines View will chart spec files by the machines that executed them. This view will enable you to evaluate the contribution of each machine to the overall test run.