
Behave (Python)

AIO Tests supports importing Behave results through its support for JUnit reports; alternatively, results can be reported by using the Behave framework hooks to call the AIO Tests REST APIs.

Behave is a tool for Behavior-Driven Development (BDD) in Python, very similar to Cucumber in Java/Ruby or SpecFlow in .NET. It uses tests written in a natural-language style, i.e. Gherkin, backed by Python code. Test scenarios are written in Gherkin ".feature" files. Each keyword-based BDD step is glued to a step definition: a Python function in a step-definition module, decorated with a matching string. The feature files act as test scripts.
Behave provides multiple hooks at various points of execution, into which helper logic for test execution can be inserted. It also provides an out-of-the-box JUnit report, which can be generated simply by passing a flag.

There are two ways to integrate AIO Tests with Behave: via the JUnit report, or via AIO Tests REST API calls made in Behave's after_scenario hook. This document provides an overview of:

  • Generating the JUnit report from Behave tests and uploading it to AIO Tests.

  • Using the Behave hooks and the AIO Tests REST APIs to report results, and much more.


Required Setup

  1. pip install behave

  2. pip install requests (optional; required only if reporting results via the REST APIs)

Reporting results via JUnit file

For this documentation, we use a feature file adapted from the Behave tutorial.

Feature: Showing off AIO Tests with behave

  @P0
  Scenario: Run a simple passing test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  @P1
  Scenario: Stronger opponent, which is going to fail
    Given the ninja has a third level black-belt
    When attacked by Chuck Norris
    Then the ninja should run for his life
    And fall off a cliff

Running your cases and generating a JUnit XML report file

behave provides two different concepts for reporting the results of a test run:

  • reporters

  • formatters

The JUnit reporter provides JUnit XML-like output and can be enabled simply by adding the --junit flag on the behave CLI, as below:

behave --junit

This generates one XML file for each feature file selected for the run:

<testsuite name="behaveDemo.Showing off AIO Tests with behave" tests="2" errors="1" failures="0" skipped="1" time="0.0" timestamp="2022-07-26T12:28:25.003017" hostname="Niharikas-MacBook-Pro.local">
  <testcase classname="behaveDemo.Showing off AIO Tests with behave" name="Run a simple passing test" status="skipped" time="0">
    <skipped />
    <system-out><![CDATA[
@scenario.begin

  @P0
  Scenario: Run a simple passing test
    Given we have behave installed ... skipped in 0.000s
    When we implement a test ... skipped in 0.000s
    Then behave will test it for us! ... skipped in 0.000s

@scenario.end
--------------------------------------------------------------------------------
]]></system-out>
  </testcase>
  <testcase classname="behaveDemo.Showing off AIO Tests with behave" name="Stronger opponent" status="failed" time="0">
    <error type="IndexError" message="HOOK-ERROR in before_scenario: IndexError: list index out of range&#10;HOOK-ERROR in after_scenario: AttributeError: 'Scenario' object has no attribute 'aiokey'"><![CDATA[
Traceback:
  File "/Users/varshneyn/PycharmProjects/pythonProject/venv/lib/python3.8/site-packages/behave/runner.py", line 545, in run_hook
    self.hooks[name](context, *args)
  File "features/environment.py", line 15, in before_scenario
    print("AIO Key found " + aioKey[0]);
]]></error>
    <system-out><![CDATA[
@scenario.begin

  @P1
  Scenario: Stronger opponent
    Given the ninja has a third level black-belt ... untested in 0.000s
    When attacked by Chuck Norris ... untested in 0.000s
    Then the ninja should run for his life ... untested in 0.000s
    And fall off a cliff ... untested in 0.000s

@scenario.end
--------------------------------------------------------------------------------
]]></system-out>
  </testcase>
</testsuite>
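By default, behave writes one TESTS-<feature>.xml file per feature into the reports/ directory; if a different output location is needed, the --junit-directory option can be used:

behave --junit --junit-directory=test_reports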

Mapping Cases with AIO Tests

AIO Tests supports reporting results against existing cases in AIO Tests, or creating new cases from the uploaded reports.

AIO Tests generates a unique key for each case created in AIO Tests, e.g. AT-TC-1. To map an automated case to an existing AIO Tests case, include the generated case key in the test name, e.g. AT-TC-299 : Login with existing user.

In the example below,

  • for scenario 1, a single Behave test/scenario maps to one AIO Tests case

  • for scenario 2, a single Behave test/scenario maps to two manual cases in AIO Tests.

Feature: Showing off AIO Tests with behave

  @P0
  Scenario: AT-TC-299 - Run a simple passing test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!

  @P1
  Scenario: AT-TC-300, AT-TC-301 - Stronger opponent, which is going to fail
    Given the ninja has a third level black-belt
    When attacked by Chuck Norris
    Then the ninja should run for his life
    And fall off a cliff

Running the above generates a JUnit XML report in which the case key becomes part of the testcase name.
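For example, the first testcase entry would look roughly like the following (abridged and illustrative; attribute values will vary by run):

<testcase classname="behaveDemo.Showing off AIO Tests with behave" name="AT-TC-299 - Run a simple passing test" status="passed" time="0.001">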

Uploading results to AIO Tests

Post execution of a suite, the generated TESTS-<feature>.xml file can be uploaded either via the AIO Tests import UI or via the import results REST API.

Please follow the respective documentation to import results using either of the options.
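As a minimal sketch of the API option using requests: the project key, cycle key, and import endpoint path below are assumptions to be verified against the AIO Tests import results API documentation.

import os
import requests

# Hypothetical project (AT) and cycle (AT-CY-12) keys; adjust to your instance.
# The endpoint path and query parameter are assumptions -- check the API docs.
url = ("https://tcms.aiojiraapps.com/aio-tcms/api/v1/project/AT"
       "/testcycle/AT-CY-12/import/results?type=JUnit")
# Token format per Rest API Authentication (Jira Cloud shown here)
headers = {"Authorization": "AioAuth " + os.environ["AIO_API_TOKEN"]}

with open("reports/TESTS-behaveDemo.xml", "rb") as report:
    # Multipart upload of the generated JUnit XML
    response = requests.post(url, headers=headers, files={"file": report})
response.raise_for_status()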

Uploading the above file for the first time will:

  1. Create new cases in the system if no key is found in classname.name. Each new case is created with:
    - title as the name value from the <testcase> tag of the JUnit report
    - automation key as classname.name from the JUnit report
    - status as Published
    - automation status as Automated
    - automation owner as the user uploading the results

  2. Add the newly created cases to the cycle being uploaded to

  3. Mark the details of the run:

    1. Execution mode is set to Automated

    2. Duration of the run is set as the Actual Effort

    3. Status of the run is set based on the status mapping table below

    4. Failures and errors are reported as run-level comments

If the same file is uploaded again, the cases will be identified using the automation key (classname.name) and updated, instead of new cases being created.

Status Mapping JUnit → AIO Tests

JUnit XML | Description | AIO Tests Mapping
No tag inside <testcase> | Passed case | Passed
<skipped /> | Skipped case, either by @Ignore or others | Not Run
<failure> | Indicates that the test failed. A failure is a test which the code has explicitly failed by using the mechanisms for that purpose, e.g. via an assertEquals. | Failed
<error> | Indicates that the test errored. An errored test is one that had an unanticipated problem, e.g. an unchecked throwable. | Failed

Reporting results via Behave Hooks and AIO Tests REST APIs

AIO Tests provides a rich set of APIs for Execution Management, using which users can not only report execution status, but also add effort, actual results, comments, defects and attachments to runs as well as steps.
AIO Tests also provides APIs to create cycles and to add cases to cycles for execution planning.

The basic sample below shows how Behave hooks can leverage the AIO Tests REST APIs to report results. In environment.py, the before_scenario and after_scenario hooks can be used to make AIO Tests API calls.

Establish a convention for AIO Tests Case keys

Any convention can be established, and the code consuming it can cater to that convention. The example below demonstrates this by adding tags made up of case keys.
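For example, one possible convention is to tag each scenario with its case key (the @AT-TC-… tag format below is just one hypothetical choice):

Feature: Showing off AIO Tests with behave

  @P0 @AT-TC-299
  Scenario: Run a simple passing test
    Given we have behave installed
    When we implement a test
    Then behave will test it for us!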

Behave environment.py

In the before_scenario hook, the tag can be read and the corresponding case added to a cycle. In the after_scenario hook, the run details for the case identified in before_scenario can be reported.

The below is a basic example of what can be done with the hooks and the AIO Tests APIs. It is recommended to add appropriate error handling and to enhance it based on your automation requirements.
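A minimal sketch of such an environment.py follows; the base URL, project key, cycle key, endpoint paths, and payload field names are assumptions to be verified against the AIO Tests REST API documentation.

# features/environment.py
import os
import requests

AIO_BASE_URL = "https://tcms.aiojiraapps.com/aio-tcms/api/v1"
PROJECT_KEY = "AT"        # hypothetical Jira project key
CYCLE_KEY = "AT-CY-12"    # hypothetical AIO Tests cycle key
HEADERS = {
    # Token format per Rest API Authentication (Jira Cloud shown here)
    "Authorization": "AioAuth " + os.environ["AIO_API_TOKEN"],
    "Content-Type": "application/json",
}


def get_case_key(scenario):
    # Convention established above: one tag per scenario carries the case key
    return next((t for t in scenario.tags if t.startswith("AT-TC-")), None)


def before_scenario(context, scenario):
    context.aio_case_key = get_case_key(scenario)
    if context.aio_case_key:
        # Assumed "add case to cycle" endpoint -- verify in the API docs
        requests.post(
            f"{AIO_BASE_URL}/project/{PROJECT_KEY}/testcycle/{CYCLE_KEY}/testcase",
            json={"testCaseKey": context.aio_case_key},
            headers=HEADERS,
        )


def add_run(scenario, case_key):
    # Assumed "report run" endpoint and payload fields -- verify in the API docs
    payload = {
        "testRunStatus": "Passed" if scenario.status.name == "passed" else "Failed",
        "effort": int(scenario.duration),  # seconds spent executing the scenario
        "isAutomatedExecution": True,
    }
    requests.post(
        f"{AIO_BASE_URL}/project/{PROJECT_KEY}/testcycle/{CYCLE_KEY}"
        f"/testcase/{case_key}/testrun",
        json=payload,
        headers=HEADERS,
    )


def after_scenario(context, scenario):
    if getattr(context, "aio_case_key", None):
        add_run(scenario, context.aio_case_key)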

In the example above, the add_run method uses requests to make an HTTP call.

  1. It uses the scenario tags to identify the case key (based on the convention we established)

  2. It creates a POST request:

    1. URL: For Jira Cloud, the URL host would be https://tcms.aiojiraapps.com/aio-tcms/api/v1. For Jira Server, it would be the native Jira server hostname.

    2. Authorization: Please refer to Rest API Authentication to understand how to authorize users. The authentication information goes in the request headers: {'Authorization': '<Auth based on Jira Cloud/Jira Server>'}

    3. POST Body : The body consists of data from the test object.

    4. If required, the basic example can be extended to upload attachments against the case using the upload attachment API.
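As a sketch of that extension to the environment.py above, a hypothetical helper could take the following shape; the attachment endpoint path and field names are assumptions to be verified against the upload attachment API documentation.

def upload_attachment(case_key, run_id, file_path):
    # Assumed run-level attachment endpoint -- verify in the AIO Tests API docs
    with open(file_path, "rb") as fh:
        requests.post(
            f"{AIO_BASE_URL}/project/{PROJECT_KEY}/testcycle/{CYCLE_KEY}"
            f"/testcase/{case_key}/testrun/{run_id}/attachment",
            headers={"Authorization": HEADERS["Authorization"]},
            files={"file": fh},  # multipart upload, e.g. a screenshot or log
        )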