Behave (Python)
AIO Tests supports importing Behave results through its JUnit report support; alternatively, results can be reported by calling the AIO Tests REST APIs from hooks in the Behave framework.
Behave is a tool for Behaviour-Driven Development (BDD) in Python, very similar to Cucumber in Java/Ruby or SpecFlow for .NET. It uses tests written in a natural-language style, i.e. Gherkin, backed by Python code. Test scenarios are written in Gherkin “.feature” files. Each BDD keyword-based step is glued to a step definition: a Python function decorated with a matching string in a step definition module. The feature files act as test scripts.
Behave provides multiple hooks at various points of execution, into which helper logic for test execution can be inserted. It also provides an out-of-the-box JUnit report, which can be generated simply by passing a flag.
There are two ways to integrate AIO Tests with Behave: via the JUnit report, or via AIO REST API calls in Behave's after_scenario hook. This document provides an overview of:
Generating the JUnit report from Behave tests and uploading it to AIO Tests.
Using the Behave hooks and AIO Tests REST APIs to report results and more.
Required Setup
pip install behave
pip install requests
(Optional: required only if reporting results via the REST APIs)
Reporting Results via JUnit File
For this documentation, we are using a feature file from the Behave tutorial.
Feature: Showing off AIO Tests with behave

  @P0
  Scenario: Run a simple passing test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  @P1
  Scenario: Stronger opponent, which is going to fail
    Given the ninja has a third level black-belt
    When attacked by Chuck Norris
    Then the ninja should run for his life
    And fall off a cliff
Running Your Cases and Generating a JUnit XML Report File
Behave provides two different concepts for reporting the results of a test run:
reporters
formatters
The JUnit reporter produces JUnit XML-like output and can be enabled simply by adding the --junit flag on the behave CLI, as below:
behave --junit
This generates one XML report for each feature file that was selected for the run.
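For example, assuming the feature above lives in features/tutorial.feature (the file and directory names here are illustrative of behave's default TEST-<feature file name>.xml naming):

behave --junit
# writes one report per feature, e.g.:
#   reports/TEST-tutorial.xml

The output directory can be changed with behave's --junit-directory option.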
Mapping Cases with AIO Tests
AIO Tests supports reporting results against existing cases in AIO Tests or creating tests using the reports.
AIO Tests generates a unique key for each test created in AIO Tests, for example AT-TC-1. To map an automated case to an existing AIO Tests case, the test name can carry the generated case key in its description, e.g. AT-TC-299: Login with an existing user.
In the example below,
For scenario 1, the Behave test/scenario maps to one case in AIO Tests.
For scenario 2, a single Behave test/scenario maps to two manual cases in AIO Tests.
Feature: Showing off AIO Tests with behave

  @P0
  Scenario: AT-TC-299 - Run a simple passing test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  @P1
  Scenario: AT-TC-300, AT-TC-301 - Stronger opponent, which is going to fail
    Given the ninja has a third level black-belt
    When attacked by Chuck Norris
    Then the ninja should run for his life
    And fall off a cliff
Running the above creates the following JUnit XML. Note that the key becomes part of the name in the report.
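An illustrative sketch of the report is below; the attribute values and exact structure depend on your behave version, so treat this as an assumption-based reconstruction rather than verbatim output:

<testsuite name="tutorial.Showing off AIO Tests with behave" tests="2" failures="1" errors="0">
  <testcase classname="tutorial.Showing off AIO Tests with behave"
            name="AT-TC-299 - Run a simple passing test" status="passed" time="0.001"/>
  <testcase classname="tutorial.Showing off AIO Tests with behave"
            name="AT-TC-300, AT-TC-301 - Stronger opponent, which is going to fail"
            status="failed" time="0.002">
    <failure message="Assertion Failed: ..."/>
  </testcase>
</testsuite>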
Uploading Results to AIO Tests
Post execution of a suite, the generated TEST-<xxx>.xml file can be uploaded via AIO Tests REST API calls, using multipart form data to upload the file. Please follow the above links to continue importing results.
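A minimal sketch of the multipart upload with Python requests follows. The endpoint path, query parameter, project/cycle keys and multipart field name are assumptions for illustration; check the AIO Tests REST API reference for the exact import-results endpoint:

import requests

# Assumptions: the project key (AT), cycle key (AT-CY-1), endpoint path and
# the AioAuth token scheme are illustrative; adapt them to your instance.
url = ("https://tcms.aiojiraapps.com/aio-tcms/api/v1"
       "/project/AT/testcycle/AT-CY-1/import/results?type=junit")
headers = {"Authorization": "AioAuth <your-api-token>"}

with open("reports/TEST-tutorial.xml", "rb") as report:
    response = requests.post(url, headers=headers, files={"file": report})
response.raise_for_status()  # fail loudly if the import was rejected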
Uploading the above file for the first time will:
- Create new cases in the system if no key is found in classname.name. The new case is created with:
  - Title as the name value from the <testcase> tag of the JUnit report
  - Automation key as classname.name from the JUnit report
  - Status as Published
  - Automation status as Automated
  - Automation owner as the user uploading the results
- Add the newly created case to the cycle being uploaded to.
- Mark the details of the run:
  - Execution mode is set to Automated.
  - The duration of the run is set as the Actual Effort.
  - The status of the run is set based on the status mapping table below.
  - Failures and errors are reported as run-level comments.
If the same file is uploaded again, the cases will be identified by the automation key (classname.name) and updated, instead of new cases being created.
Status Mapping JUnit → AIO Tests
JUnit XML | Description | AIO Tests Mapping |
---|---|---|
No tag inside <testcase> | Passed case | Passed |
<skipped> | Skipped case, either by @Ignore or others | Not Run |
<failure> | Failed case | Failed |
<error> | Case with an error | Failed |
Reporting Results via Behave Hooks and AIO Tests REST APIs
AIO Tests provides a rich set of APIs for Execution Management, using which users can not only report execution status, but also add effort, actual results, comments, defects and attachments to runs as well as steps.
AIO Tests also provides APIs to create cycles and to add cases to cycles for execution planning.
The basic sample below shows how Behave hooks can leverage the AIO Tests REST APIs to report results. In environment.py, the before_scenario and after_scenario hooks can be used to make AIO Tests API calls.
Establish a Convention for AIO Tests Case Keys
Any convention can be established, and the code consuming it can cater to that convention. The example below demonstrates this by adding tags made up of case keys.
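For instance, each scenario could carry its AIO Tests case key as a tag; the exact tag format is one hypothetical choice, and the hook sketch further below assumes tags that look like case keys:

  @P0 @AT-TC-299
  Scenario: Run a simple passing test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!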
Behave environment.py
In the before_scenario hook, the tag can be read and the case can be added to a cycle. In the after_scenario hook, the details of the execution can be added against the run returned in before_scenario.
Below is a basic example of what can be done with the hooks and the AIO Tests APIs. It is recommended to add appropriate error handling and to enhance it based on your automation requirements.
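The sketch below is illustrative only: the endpoint paths, payload field names, project/cycle keys and the AioAuth authorization scheme are assumptions, so verify them against the AIO Tests REST API reference before use.

# environment.py -- minimal sketch of reporting Behave results to AIO Tests
import re
import requests

BASE_URL = "https://tcms.aiojiraapps.com/aio-tcms/api/v1"  # cloud host
PROJECT_KEY = "AT"      # hypothetical Jira project key
CYCLE_KEY = "AT-CY-1"   # hypothetical AIO Tests cycle key
HEADERS = {"Authorization": "AioAuth <your-api-token>"}

# Matches tags that look like AIO Tests case keys, e.g. AT-TC-299
CASE_KEY = re.compile(r"^[A-Z][A-Z0-9]*-TC-\d+$")


def _case_key(scenario):
    """Return the first tag that looks like a case key, or None."""
    return next((t for t in scenario.tags if CASE_KEY.match(t)), None)


def before_scenario(context, scenario):
    case_key = _case_key(scenario)
    if case_key:
        # Add the case to the cycle so a run can be reported against it.
        # (Assumed endpoint and payload.)
        requests.post(
            f"{BASE_URL}/project/{PROJECT_KEY}/testcycle/{CYCLE_KEY}/testcase",
            headers=HEADERS,
            json={"testCaseKey": case_key},
        )


def after_scenario(context, scenario):
    add_run(scenario)


def add_run(scenario):
    """Report the scenario result as a run against the mapped case."""
    case_key = _case_key(scenario)  # identify the case key from the tags
    if not case_key:
        return
    # behave >= 1.2.6 exposes scenario.status as an enum; older versions use a str
    status = getattr(scenario.status, "name", scenario.status)
    payload = {
        "testRunStatus": "Passed" if status == "passed" else "Failed",
        "effort": int(scenario.duration),  # seconds, mapped to Actual Effort
    }
    requests.post(  # assumed endpoint for adding a run to the case in the cycle
        f"{BASE_URL}/project/{PROJECT_KEY}/testcycle/{CYCLE_KEY}"
        f"/testcase/{case_key}/testrun",
        headers=HEADERS,
        json=payload,
    )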
In the example above, the add_run method uses requests to make an HTTP call:
- It uses the scenario tags to identify the case key (based on the convention we established).
- It creates a POST request:
  - URL: For cloud, the URL host would be https://tcms.aiojiraapps.com/aio-tcms/api/v1. For Jira Server, it would be the native Jira server hostname.
  - Authorization: Please refer to REST API Authentication to understand how to authorize users. The authentication information goes in the headers: {'Authorization': '<Auth based on Jira Cloud/Jira Server>'}
  - POST Body: The body consists of data from the test object.
If required, the basic example can be extended to upload attachments against the case using the upload attachment API.
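Reusing the names from the sketch above, such an extension might look like the following; the attachment endpoint and multipart field name are assumptions, so consult the upload attachment API documentation:

def upload_attachment(case_key, path):
    # Assumed endpoint: attach a file to the case's run in the cycle.
    with open(path, "rb") as attachment:
        requests.post(
            f"{BASE_URL}/project/{PROJECT_KEY}/testcycle/{CYCLE_KEY}"
            f"/testcase/{case_key}/attachment",
            headers=HEADERS,
            files={"file": attachment},
        )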
For further queries and suggestions, feel free to reach out to our customer support via help@aiotests.com.