
Test Submission Guidelines

Warning /!\ WORK IN PROGRESS.

As we learn more, these guidelines will be updated and adjusted as needed. Please raise any issues you find in the #ubuntu-testing IRC channel on Freenode, or by emailing ubuntu-quality@lists.ubuntu.com.

Teams will develop test suites for their areas of interest; QA will run them as needed and publish the results on the public Jenkins page. These guidelines provide a high-level overview of what we expect from test developers when handing over test suites. Warning /!\ Test developers are responsible for updating and maintaining their respective test suites.

Basic Requirements

  • All test cases must either succeed or fail, and must set their exit status explicitly.
  • Tests must produce output that Jenkins can digest (see Steve Conklin's contribution for an example).

  • All test suites must include a text document (a README file) describing:
    1. Test target: the system/package under test and what is being tested
    2. Test requirements: package dependencies, environment requirements, etc.
    3. Test execution: how to run the test(s), required configuration files, example invocations, etc.
    4. Explanation of the generated test output
  • Tests should, whenever possible, be Ubuntu packages. These packages can reside in a PPA (one single? Any PPA?)
  • Any dependencies must be installable from the Ubuntu archives.

  • Naming the tests: <package-name>-<test id> when the test is package-specific, or <team name>-<test id> otherwise.
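As a minimal sketch of the first two requirements (explicit exit status and Jenkins-digestible output), the shell function below runs a trivial check and writes JUnit-style XML, one of the formats Jenkins can parse. The suite name mypkg-smoke, the file name results.xml, and the check itself are hypothetical; the guidelines do not prescribe a specific output format.

```shell
#!/bin/sh
# Hypothetical sketch: a test case that sets an explicit exit status and
# writes JUnit-style XML (one format Jenkins can parse). The names
# "mypkg-smoke", "mypkg" and "results.xml" are illustrative only.

run_smoke_test() {
    results="$1"
    status=0
    failure=""

    # The actual check goes here; this trivial example just runs /bin/true.
    if ! /bin/true; then
        status=1
        failure='<failure message="/bin/true returned non-zero"/>'
    fi

    fails=0
    if [ "$status" -ne 0 ]; then fails=1; fi

    # Write a minimal JUnit-style report for Jenkins to pick up.
    cat > "$results" <<EOF
<testsuite name="mypkg-smoke" tests="1" failures="$fails">
  <testcase classname="mypkg" name="smoke">$failure</testcase>
</testsuite>
EOF
    return $status
}

run_smoke_test results.xml
status=$?
echo "smoke test exit status: $status"
# A real test script would end with:  exit $status
```

The key point is that the script never exits with an ambiguous status: success is 0, failure is non-zero, and the XML report carries the per-case detail.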

Submitting the request

Send an email to qa-team@canonical.com; the subject must start with one of:

  • [Test Request]: for a new test suite/case to be added,

  • [Test Removal]: for an existing test suite/case to be discontinued,

  • [Test Update]: for an existing test suite/case to be updated

followed by the test suite name (as above).

The body of the email should have the following entries:

  • Summary -- a high-level description of which packages the test suite exercises and what type of testing it performs (functional, performance, benchmarking, ...)

  • Pre-Reqs -- the list of package dependencies: the packages, applications, or environment configuration that must be in place for the tests to run correctly. The recommendation is to have an initial test case that installs all dependencies and performs all configuration, assuming the tests will run on a freshly installed machine. This may also include specific variants or architectures.

  • Hardware -- any physical hardware requirements

  • Triggers -- the conditions that trigger test execution (ISO availability, SRU, bzr update, daily, weekly, etc.)

  • Duration -- how long the test suite takes to execute on average (for example, "15 minutes")

  • Output example -- a log file or output file showing what is expected after running the tests

  • Repository -- where to get the tests from, ideally a PPA or a branch in Launchpad

  • Notifications -- who needs to be notified of test failures
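Putting the entries above together, the snippet below assembles a hypothetical request email as plain text; the suite name, package, branch, and addresses are all invented for illustration and simply follow the subject and naming conventions described above.

```shell
# Hypothetical [Test Request] email, assembled as plain text. Every value
# (suite name, package, branch, addresses) is invented for illustration.
request_body=$(cat <<'EOF'
Subject: [Test Request]: mypkg-smoke

Summary: functional smoke tests for the mypkg package
Pre-Reqs: mypkg; an initial test case installs and configures all dependencies
Hardware: none (runs in a virtual machine)
Triggers: daily, and on every upload of mypkg
Duration: 5 minutes
Output example: results.xml (JUnit-style XML), attached
Repository: lp:~my-team/+junk/mypkg-smoke
Notifications: my-team@lists.ubuntu.com
EOF
)
printf '%s\n' "$request_body"
```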

Test Rejection Criteria

Test suites that have been accepted must continue to meet certain criteria in order to remain in operation. Execution of a test suite may be suspended if:

  • test results produce false positives or false negatives
  • tests fail to run
  • tests interfere with other test suites
  • tests have not been maintained in sync with the package(s) being tested

Owners will be notified and must correct the issue for execution of their test suite(s) to resume.

QATeam/AutomatedTesting/TestSubmissionGuidelines (last edited 2012-12-17 17:11:29 by 188-221-246-203)