Playwright

AIO Tests supports importing Playwright test results through JUnit or Cucumber reports, or via the AIO Tests REST APIs, which can be invoked from the hooks available in Playwright.

Playwright is an open-source Node.js-based framework by Microsoft for web testing and automation. It supports end-to-end cross-browser testing through a high-level API that lets testers drive a wide variety of browsers, including headless browsers.

This document provides an overview on:

  • Generating the JUnit report from Playwright tests and uploading it to AIO Tests.

  • Generating Cucumber reports with Playwright + Cucumber and uploading them to AIO Tests.

  • Using the AIO Tests REST APIs, invoked from Playwright framework hooks, to report results and more.

Playwright + JUnit

The demo example is based on the Getting Started project of Playwright.

Required Playwright Setup

  1. Run npm init playwright@latest. When prompted, select the tests folder.

  2. For reporting results:

    1. For the JUnit report: PLAYWRIGHT_JUNIT_OUTPUT_NAME=results.xml npx playwright test --reporter=junit

    2. For reporting results via the AIO Tests API: npm install axios (axios can be replaced with any library for making API calls).

Mapping and Running Your Tests

In AIO Tests, each case has a unique key of the form PROJKEY-TC-12. Adding this key to the test name lets results be reported against that case in AIO Tests.

The example below, based on the sample test generated by Playwright, shows PROJ1-TC-23 added to the test name. On running the test, the report will contain the case key.

const { test, expect } = require('@playwright/test');

test.beforeEach(async ({ page }) => {
  await page.goto('https://demo.playwright.dev/todomvc');
});

const TODO_ITEMS = [
  'feed the cat',
  'book a doctors appointment'
];

test.describe('New Todo', () => {
  test('PROJ1-TC-23 : should allow me to add todo items', async ({ page }) => {
    // create a new todo locator
    const newTodo = page.getByPlaceholder('What needs to be done?');

    // Create 1st todo.
    await newTodo.fill(TODO_ITEMS[0]);
    await newTodo.press('Enter');

    // Make sure the list only has one todo item.
    await expect(page.getByTestId('todo-title')).toHaveText([
      TODO_ITEMS[0]
    ]);

    // Create 2nd todo.
    await newTodo.fill(TODO_ITEMS[1]);
    await newTodo.press('Enter');

    // Make sure the list now has two todo items.
    await expect(page.getByTestId('todo-title')).toHaveText([
      TODO_ITEMS[0],
      TODO_ITEMS[1]
    ]);
  });
});

To trigger the Playwright tests, use:

npx playwright test

Reporting Results via JUnit File

Using the Native JUnit Reporter

Playwright Test comes with a few built-in reporters for different needs, along with the ability to provide custom reporters. The easiest way to try out built-in reporters is to pass the --reporter command-line option.
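Alternatively, the JUnit reporter can be enabled in the Playwright configuration file so that every run produces the XML report. The snippet below is a minimal, illustrative playwright.config.js; the output file name results.xml is an assumption matching the command above.

// playwright.config.js - enables the built-in JUnit reporter for every run.
// The output file name (results.xml) is illustrative; adjust as needed.
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  reporter: [['junit', { outputFile: 'results.xml' }]],
});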

Running the tests generates the following JUnit XML:

<testsuites id="" name="" tests="1" failures="0" skipped="0" errors="0" time="4.201744999885559">
  <testsuite name="example.spec.js" timestamp="1675073959913" hostname="" tests="1" failures="0" skipped="0" time="2.681" errors="0">
    <testcase name="New Todo PROJ1-TC-23 : should allow me to add todo items" classname="[chromium] › example.spec.js › New Todo › PROJ1-TC-23 : should allow me to add todo items" time="2.681">
    </testcase>
  </testsuite>
</testsuites>

Uploading Results to AIO Tests

Post execution of a suite, the generated JUnit report file (results.xml in the setup above) can be uploaded either via the AIO Tests import screen or via the AIO Tests import results REST API.

Please follow the corresponding documentation pages to continue importing results using either of the options.

Uploading the above file for the first time will:

  1. Create new cases in the system. Each new case is created with:
    - Title as the name value from the <testcase> tag of the JUnit report
    - Automation key as classname.name from the JUnit report
    - Status as Published
    - Automation status as Automated
    - Automation owner as the user uploading the results

  2. Add the newly created case to the cycle being uploaded.

  3. Mark the details of the run.

    1. Execution mode is set to Automated

    2. The duration of the run is set to the Actual Effort

    3. The status of the run is set based on the status mapping table below

    4. Failures and errors are reported as Run Level comments

If the same file is uploaded again, the cases will be identified using the automation key (classname.name) and updated, instead of new cases being created.

Status Mapping JUnit → AIO Tests

JUnit XML | Description | AIO Tests Mapping
No tag inside <testcase> | Passed case | Passed
<skipped/> | Case skipped, e.g. via @Ignore or otherwise | Not Run
<failure> | The test failed explicitly through an assertion mechanism, e.g. assertEquals | Failed
<error> | The test had an unanticipated problem, e.g. an unchecked throwable | Failed

Reporting Results via Playwright Hooks and AIO Tests REST APIs

AIO Tests provides a rich set of APIs for Execution Management, using which users can not only report execution status, but also add effort, actual results, comments, defects and attachments to runs as well as steps.
AIO Tests also provides APIs to create cycles and to add cases to cycles for execution planning.

The basic sample below shows how the Playwright Reporter API can leverage the AIO Tests REST APIs to report results. The onTestEnd method can be used to make an AIO API call.

Playwright onTestEnd Method

onTestEnd(test: TestCase, result: TestResult) {
  console.log(`Finished test ${test.title}: ${result.status}`);
}

Establish a Convention for AIO Tests Case Keys

For the purpose of this example, we have established a convention to map cases: the AIO Tests case key is the prefix of the test title, e.g. test('NVPROJ-TC-11: should login with valid credentials', ...) contains NVPROJ-TC-11, which is the AIO Tests case key.

Any convention can be established, and the code consuming it can cater to that convention. In this case, we use startsWith to identify the case key.

Reporting Result via API

Playwright provides a way to develop custom reporters.

In the example below, a new class AIOReporter (aio-reporter.js) uses the onTestEnd method to make a call to the AIO REST API.
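A minimal sketch of what aio-reporter.js could look like is shown below. The AIO endpoint path, the <PROJECT_KEY>/<CYCLE_KEY> placeholders, the payload fields, and the authorization header are illustrative assumptions; please refer to the AIO Tests REST API documentation for the exact endpoint and request body.

// aio-reporter.js - sketch of a custom Playwright reporter posting results to AIO Tests.
// The endpoint path, payload fields and auth header below are illustrative placeholders.
const axios = require('axios');

class AIOReporter {
  constructor() {
    this.pending = [];
  }

  onTestEnd(test, result) {
    // Convention: the AIO case key is the prefix of the test title,
    // e.g. "PROJ1-TC-23 : should allow me to add todo items".
    // Adjust the prefix to your AIO project key.
    if (!test.title.startsWith('PROJ1-TC')) return;
    const caseKey = test.title.split(':')[0].trim();
    this.pending.push(this.postResults(caseKey, result));
  }

  async onEnd() {
    // Wait for all uploads to settle before the test runner exits.
    await Promise.allSettled(this.pending);
  }

  async postResults(caseKey, result) {
    try {
      await axios.post(
        // Hypothetical URL; replace <PROJECT_KEY> and <CYCLE_KEY> with real values
        // and verify the path against the AIO Tests REST API documentation.
        `https://tcms.aiojiraapps.com/aio-tcms/api/v1/project/<PROJECT_KEY>/testcycle/<CYCLE_KEY>/testcase/${caseKey}/testrun`,
        {
          testRunStatus: result.status === 'passed' ? 'Passed' : 'Failed',
          effort: Math.round(result.duration / 1000),
          // If the case failed, post the error message as a run comment.
          comments: result.error ? [result.error.message] : []
        },
        { headers: { Authorization: '<Auth based on Jira Cloud/Jira Server>' } }
      );
    } catch (err) {
      console.error(`Failed to report result for ${caseKey}: ${err.message}`);
    }
  }
}

module.exports = AIOReporter;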

Register the reporter in the playwright.config file as below:

reporter: './aio-reporter.js',

In the AIOReporter example above, the postResults method uses Axios to make the HTTP call.

  1. It uses the test title to identify the case key (based on the convention established above).

  2. It creates a POST request:

    1. URL: For Jira Cloud, the URL host would be https://tcms.aiojiraapps.com/aio-tcms/api/v1. For Jira Server, it would be the native Jira Server hostname.

    2. Authorization: Please refer to REST API Authentication to understand how to authorize users. The authentication information goes in the headers: {'Authorization': '<Auth based on Jira Cloud/Jira Server>'}.

    3. POST body: The body consists of data from the test and result objects provided by Playwright. If the case has failed, the error is posted as a comment.

    4. If required, this basic example can be extended to upload attachments against the case using the upload attachment API.

The above is a basic example of what can be done with the hooks and AIO Tests APIs. It is recommended to add appropriate error handling and enhance it based on your automation requirements.

Playwright + Cucumber Setup

  1. Run npm init playwright@latest. When prompted, select the tests folder.

  2. npm install @cucumber/cucumber.

Mapping and Running Your Tests

Cucumber generates a cucumber.json report which can be directly imported into AIO Tests.

In AIO Tests, each case has a unique key of the form PROJKEY-TC-12. The key can be added as a scenario tag to report results against that case in AIO Tests. All scenarios that are not tagged will be created as new cases, along with their steps from the JSON report.

The example below maps a scenario to an AIO case.
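A minimal illustration of such a mapping is shown below, assuming a hypothetical case key PROJ1-TC-24 and hypothetical scenario text; the tag simply needs to carry the AIO case key.

Feature: New Todo

  @PROJ1-TC-24
  Scenario: should allow me to add todo items
    Given I open the TodoMVC page
    When I add the todo "feed the cat"
    Then the todo list shows "feed the cat"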

To trigger the Cucumber tests:

./node_modules/.bin/cucumber-js -f json:tmp/cucumber_report1.json --exit

This generates the Cucumber JSON report at tmp/cucumber_report1.json, which can then be uploaded to AIO Tests.

Uploading Results to AIO Tests

Post execution of a suite, the generated Cucumber JSON file (tmp/cucumber_report1.json in this setup) can be uploaded either via the AIO Tests import screen or via the AIO Tests import results REST API.

Please follow the corresponding documentation pages to continue importing results using either of the options.

Uploading the above file for the first time will:

  1. Create new cases in the system. Each new case is created with:
    - Title as the scenario description
    - Automation key as scenario.id from the JSON report
    - Steps from the JSON report
    - Status as Published
    - Automation status as Automated
    - Automation owner as the user uploading the results

  2. Add the newly created case to the cycle being uploaded to

  3. Mark the details of the run

    1. Execution mode is set to Automated

    2. The duration of a run is set to the Actual Effort

    3. The status of a run is set based on the status mapping table below

    4. Failures and errors are reported as Run Level comments

    5. Step-level results are recorded, along with step-level actual results in case of failure

If the same file is uploaded again, the cases will be identified using the automation key (scenario.id) and updated, instead of new cases being created.

Reporting Results via Cucumber Hooks and AIO Tests REST APIs

AIO Tests provides a rich set of APIs for Execution Management, using which users can not only report execution status, but also add effort, actual results, comments, defects and attachments to runs as well as steps.
AIO Tests also provides APIs to create cycles and to add cases to cycles for execution planning.

The basic sample below shows how Cucumber hooks can leverage the AIO Tests REST APIs to report results.
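A minimal sketch of such a hooks file (features/support/handlers.js) is shown below. It assumes the scenario-tag convention illustrated above; the endpoint path, the <PROJECT_KEY>/<CYCLE_KEY> placeholders, the payload fields, and the authorization header are illustrative assumptions to be replaced using the AIO Tests REST API documentation.

// features/support/handlers.js - reports each scenario's result to AIO Tests
// from a Cucumber After hook. Endpoint, payload and auth header are illustrative.
const { After, Status } = require('@cucumber/cucumber');
const axios = require('axios');

After(async function ({ pickle, result }) {
  // Convention: the AIO case key is carried as a scenario tag, e.g. @PROJ1-TC-24.
  // Adjust the prefix to your AIO project key.
  const tag = pickle.tags.find((t) => t.name.startsWith('@PROJ1-TC'));
  if (!tag) return;
  const caseKey = tag.name.substring(1);

  try {
    await axios.post(
      // Hypothetical URL; replace <PROJECT_KEY> and <CYCLE_KEY> with real values
      // and verify the path against the AIO Tests REST API documentation.
      `https://tcms.aiojiraapps.com/aio-tcms/api/v1/project/<PROJECT_KEY>/testcycle/<CYCLE_KEY>/testcase/${caseKey}/testrun`,
      { testRunStatus: result.status === Status.PASSED ? 'Passed' : 'Failed' },
      { headers: { Authorization: '<Auth based on Jira Cloud/Jira Server>' } }
    );
  } catch (err) {
    console.error(`Failed to report result for ${caseKey}: ${err.message}`);
  }
});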

To trigger the Playwright tests with the above file, the handlers file (features/support/handlers.js) needs to be provided to cucumber-js using the following command:
./node_modules/.bin/cucumber-js -f json:./tmp/cucumber_report1.json --require features/support/handlers.js --require features/support/steps.js

 

For further queries and suggestions, feel free to reach out to our customer support via help@aiotests.com.