This document and code samples are concerned with how to get your test results into a format which can easily be digested by Jenkins.
Although Jenkins displays a simple pass/fail status for a job, this is usually not sufficient, because:
- a set of tests in a job may return success if it completed, even when individual tests may have failed
- the Jenkins results page for a job will not show the status of individual tests
One way to solve this is to get the test results into a format which Jenkins understands. The JUnit xml format is a good choice for this.
The JUnit test xml output format
The specification in .xsd format: http://windyroad.org/dl/Open%20Source/JUnit.xsd
More information: http://www.junit.org/node/399
In this context, "Error" means an error completing the test or job, and "Failure" means a test which completes but fails.
If you are using the Python unittest module for your testing (a very simple test harness), you can trivially switch to the xmlrunner test runner, which runs the unit tests as normal and produces an XML results file at the end.
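A minimal sketch of that switch, assuming the third-party xmlrunner package (unittest-xml-reporting) is installed; the test case itself is invented for illustration, and the only change from a plain unittest script is the testRunner argument:

```python
# Sketch: switching a plain unittest suite to xmlrunner so it emits
# JUnit-style XML. The test case below is a made-up example; only the
# testRunner argument differs from an ordinary unittest script.
import unittest

class SampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    try:
        import xmlrunner  # third-party: pip install unittest-xml-reporting
        runner = xmlrunner.XMLTestRunner(output="test-reports")
    except ImportError:
        runner = None  # xmlrunner not installed: fall back to the text runner
    unittest.main(testRunner=runner, exit=False)
```

With xmlrunner installed, this drops one XML results file per test module into the test-reports directory, which Jenkins can then pick up.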
You can use the example script results2junit.py to trivially convert the autotest results directly to JUnit format.
You can use the example script ltp2JUnit to trivially convert the LTP results directly to JUnit format.
Rolling your Own
For ad hoc tests you can use the conversion library JUnit_api.py.
I used the generateDS.py utility to generate a Python wrapper API for the XML data, directly from the .xsd file. This file is named JUnit_api.py, and is used by the examples I've provided.
To process test results when there are multiple sub-tests within a job, you have to parse whatever the test output is and collect data in the following areas:
Properties apply to the entire test job, and can include things like the test command line, system configuration, installed package list, or anything else which has been collected which is of interest.
In the xml file, these appear like this (for example):
<property name="version" value="Linux version 3.0.0-14-generic-pae (buildd@palmer) (gcc version 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3) ) #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011"/>
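If you would rather not use JUnit_api.py, the same properties block can be produced with Python's standard-library ElementTree. This is only a sketch; the property names and values below are invented for illustration:

```python
# Sketch: emitting a JUnit <properties> block with the standard
# library, as an alternative to JUnit_api.py. The property names and
# values here are made up for illustration.
import xml.etree.ElementTree as ET

testsuite = ET.Element("testsuite", name="Autotest tests")
properties = ET.SubElement(testsuite, "properties")
for name, value in [("cmdline", "autotest xfstests"),
                    ("version", "Linux version 3.0.0-14-generic-pae")]:
    ET.SubElement(properties, "property", name=name, value=value)

xml_text = ET.tostring(testsuite, encoding="unicode")
```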
Each test is reported as a test case. Each test case contains identifying information such as the class name, test name, and elapsed time. If a test has an error or failure, the test case carries additional information about that.
An example of a successful test case:
<testcase classname="autotest" name="xfstests.228.ext4" time="30.000000"/>
An example of a test case failure:
<testcase classname="autotest" name="xfstests.241.ext4" time="24.000000">
  <failure message="Test xfstests.241.ext4 is Not Applicable" type="Failure">dbench not found</failure>
</testcase>
The testsuite item in the xml file contains summary information, which typically has to be gathered and accumulated while parsing all the test cases. The summary includes the number of tests, number of errors and failures, time taken by the test, etc.
<testsuite tests="2" errors="0" name="Autotest tests" timestamp="2012-01-10" hostname="Ivy" time="157.000000" failures="1">
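The accumulation can be sketched like this with the standard-library ElementTree rather than JUnit_api.py; the (name, status, seconds) tuples stand in for whatever your parser actually extracts, and the names and messages are invented:

```python
# Sketch: accumulating testsuite summary attributes while collecting
# test cases. The parsed tuples below are invented stand-ins for the
# output of your own log parser.
import xml.etree.ElementTree as ET

parsed = [("xfstests.228.ext4", "PASS", 30.0),
          ("xfstests.241.ext4", "FAIL", 24.0)]

tests = errors = failures = 0
total_time = 0.0
cases = []
for name, status, seconds in parsed:
    tests += 1
    total_time += seconds
    case = ET.Element("testcase", classname="autotest",
                      name=name, time="%f" % seconds)
    if status == "FAIL":
        failures += 1
        failure = ET.SubElement(case, "failure", type="Failure",
                                message="Test %s failed" % name)
        failure.text = "see test log"
    elif status == "ERROR":
        errors += 1
        ET.SubElement(case, "error", type="Error",
                      message="Test %s had an error" % name)
    cases.append(case)

# Summary attributes are only known once every case has been seen.
suite = ET.Element("testsuite", name="Autotest tests",
                   tests=str(tests), errors=str(errors),
                   failures=str(failures), time="%f" % total_time)
suite.extend(cases)
ET.ElementTree(suite).write("output.xml")
```

Note that the testsuite element can only be finalized after the last test case has been parsed, which is why the counters are carried through the loop.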
The JUnit_api.py file implements objects which encapsulate the xml file format, making it relatively easy to create objects and set properties, then export the entire xml file. The hardest part is parsing the results of your tests.
I've provided two examples. One (ltp2JUnit) was done for the ltp test suite, and parses a long log from stdout. The other (results2junit.py) was done for autotest, and reads a bunch of individual files containing data gathered for the test (which become properties), and files for each test case. These are fairly convoluted examples because of the complexity of the results presented by the tests.
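To illustrate just the parsing step in isolation, here is a sketch assuming a hypothetical log format of one "name status seconds" line per test; real ltp and autotest output is far messier than this:

```python
# Minimal illustration of the parsing step, assuming a hypothetical
# "testname status seconds" line format. Real ltp or autotest logs
# require much more elaborate parsing than this.
log = """\
xfstests.228.ext4 PASS 30
xfstests.241.ext4 FAIL 24
"""

results = []
for line in log.splitlines():
    name, status, seconds = line.split()
    results.append((name, status, float(seconds)))
```

Once the results are in a uniform in-memory form like this, generating the XML is the easy part.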
Two output xml files are provided:
- output1.xml - from a run of xfstests under autotest, 66 test cases, all succeeded
- output2.xml - from a run of xfstests under autotest, 2 test cases, one failed