ARMAutomatedTestingFramework


UNDER CONSTRUCTION

Summary

To create an automated testing framework that improves testing of Ubuntu on ARM.

Release Note

No end-user visible change.

Rationale

ARM devices capable of running Ubuntu are available today, but are not yet as pervasive as x86 hardware. Our pool of potential testers is therefore smaller, necessitating a more efficient approach to testing. Automated testing opportunities exist throughout Ubuntu; however, testing has traditionally been approached without consideration of non-x86 devices.

User stories

Peter has a lab with multiple development boards reserved for testing purposes. Peter has configured nightly cron jobs to update the machines and run tests using an automated test framework. The framework handles installing each test suite that is not already installed, executing the tests, and pushing the results to a server running locally in the lab.

Susan is a developer who works from home. Like others on her team, she has a system at home that she can test the latest images on. After installing a new image, she would like to run some benchmarks to see how it compares to previous images. She installs the automated test framework and uses it to run the benchmarks she is interested in seeing. After reviewing the results, she uses the test framework to publish the results to an external server that the rest of her team can see.

Assumptions

  • The system under test may or may not be connected to a network
  • Test systems may exist in a centralized pool, or be distributed across many locations
  • Test systems might only have a text console, or even only a serial console
  • A server for storing results is optional, and may be deployed publicly, or locally if private results storage is required

Design

Standalone Test Client

A standalone client should allow a developer or tester to use a command-line interface to perform basic testing operations such as:

  • Listing available tests
  • Installing tests
  • Executing tests
  • Storing and Listing results
  • Gathering system information and logs
  • Publishing results to a server

A text-based CLI will be provided for 10.10.
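
As an illustration, a client along these lines might expose the operations above as subcommands. The sketch below is hypothetical; the program name "testclient" and the exact command names are placeholders, not part of this specification:

    # Hypothetical sketch of the standalone client's command-line interface,
    # using argparse subcommands for the operations listed above.
    import argparse

    def main():
        parser = argparse.ArgumentParser(
            prog="testclient", description="ARM automated testing client")
        sub = parser.add_subparsers(dest="command", required=True)

        sub.add_parser("list", help="list available tests")

        install = sub.add_parser("install", help="install a test")
        install.add_argument("test", help="name of a test definition")

        run = sub.add_parser("run", help="execute an installed test")
        run.add_argument("test")

        sub.add_parser("results", help="list stored results")

        publish = sub.add_parser("publish", help="push results to a server")
        publish.add_argument("result_id")
        publish.add_argument("--server", default="localhost",
                             help="hostname of the results server")

        args = parser.parse_args()
        print("would dispatch:", args.command)

    if __name__ == "__main__":
        main()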

Test Server

A test server should allow results to be gathered and stored in a fashion that allows them to be associated with specific builds, hardware, or date ranges.

The test server should allow results to be uploaded so that they can be stored for future reference. SSH can be used for access control: when a user publishes results to the server and a known SSH key is not found, the user will be prompted for a password. This way, accounts can optionally be created on the server for a whole team, or for individual users. The server will then pick up the uploaded results and import them into the database.
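
A minimal sketch of how publishing could work under this model, assuming the server accepts uploads into an incoming directory over scp (the server name, user, and path below are placeholders):

    # Hypothetical sketch of publishing results over SSH.  scp falls back to
    # password authentication when no known ssh key is accepted, matching the
    # access-control model described above.
    import subprocess

    def publish(results_file, server="results.example.com", user="tester"):
        # The incoming/ directory is a placeholder; the server would watch
        # it and import any uploaded results into its database.
        subprocess.check_call(["scp", results_file, f"{user}@{server}:incoming/"])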

Implementation

The test client will be written in Python and allow for an extendable command set. It will also include a series of test definition files that encapsulate the details necessary to use each test. At a minimum, the following sections will need to be included in a test definition:

  • Description
  • URL
  • MD5SUM
  • Dependencies
  • Installation script
  • Execution script
  • Results Filtering script

For tests that can be installed as a package from the archive, the URL and MD5SUM fields are not needed, and the packages necessary for installing the test should simply be listed in Dependencies. Likewise, for tests that are built completely from source, the Dependencies field is optional. New tests may be added by adding new test definition files.
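
For illustration, a test definition could be an INI-style file carrying the sections above. The sketch below is hypothetical; the test name, URL, checksum, and exact field syntax are placeholders rather than part of this specification:

    # Hypothetical test definition in an INI-style format, parsed with
    # configparser.  All values below are placeholders.
    import configparser

    SAMPLE = """
    [stream]
    description = Memory bandwidth benchmark
    url = http://example.com/stream.tar.gz
    md5sum = 0123456789abcdef0123456789abcdef
    dependencies = gcc, make
    install = ./install.sh
    run = ./run.sh
    filter = ./parse_results.py
    """

    config = configparser.ConfigParser()
    config.read_string(SAMPLE)
    for name in config.sections():
        test = config[name]
        print(name, "-", test["description"])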

If root permissions are needed for installing or running a test, the command should be run under sudo so that the user is prompted for a password, unless root/sudo privileges are already established.
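
A minimal sketch of that privilege handling, assuming a hypothetical run_step() helper in the client:

    # Minimal sketch of the privilege handling described above: prefix the
    # command with sudo only when root is required and we are not already
    # running as root.  run_step() is a hypothetical helper, not part of
    # this specification.
    import os
    import subprocess

    def run_step(command, needs_root=False):
        if needs_root and os.geteuid() != 0:
            command = ["sudo"] + command  # sudo prompts for a password if needed
        subprocess.check_call(command)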

Publishing extracts the results from a test run and uploads them to the server in a format, such as XML, that allows the server to easily parse the results and store them in a database.
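
For example, results could be serialized with Python's xml.etree.ElementTree before upload; the element and attribute names below are illustrative only, since the spec does not fix a schema:

    # Hypothetical sketch of serializing filtered results as XML for upload;
    # the element and attribute names are illustrative only.
    import xml.etree.ElementTree as ET

    def results_to_xml(test_name, measurements):
        root = ET.Element("testrun", {"test": test_name})
        for name, value in measurements.items():
            m = ET.SubElement(root, "measurement", {"name": name})
            m.text = str(value)
        return ET.tostring(root, encoding="unicode")

    # Example: two hypothetical benchmark measurements from one run.
    print(results_to_xml("stream", {"copy_mb_s": 1234.5, "scale_mb_s": 1180.2}))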

Test/Demo Plan

Test images will be created by our team, and the automated testing framework described here will be used for testing these images.

BoF agenda and discussion

What is needed in an automated testing framework?

  • Automated testing framework directed towards ARM, but not exclusive to ARM
    • Need to be able to integrate automated test results from various sources
    • This test data should be separate from the test client
    • Only include known good test data, i.e. be able to contact the source of a test in the event of bad results
    • Need a default number of runs per test, to be able to see variance (that data point is useful)
    • Authentication of test result submitters is important (e.g., PGP, password)
    • Need to capture information about the hardware platform and configuration along with test results.

Existing Frameworks

  • The Checkbox certification database may need to be extended to include more elaborate test results and performance regression tracking
  • Phoronix uses a PHP interface; however, it may meet the needs in this case
    • Phoronix has the concept of a unit test

Test Runs

  • How should we store data from test runs? Interesting results need to be analyzed on a per-test basis. Checkbox may meet this need? The Checkbox output data format is pretty flexible. The data reporting programs are not open source? Extend Launchpad to include test result data analysis.
  • A lot of the concepts and techniques from DejaGnu are applicable

  • Results need to be relevant to the tests being run. Problem with extra characters or results being added to data that was uploaded
  • Concept of private data in Launchpad? This would be similar to bug report information
  • Re-use concepts of the certification process
  • [ACTION] Introduce the concept of tests and results for a project in Launchpad. There are benefits to starting with Launchpad. Need to be able to extend data analysis based on user requirements, versus extending Checkbox to accomplish data reduction
  • Amount of data can be quite large - need to be able to reference and index data efficiently. Searching lots of unindexed test results would be undesirable. Needs to be designed properly. Can use libraries in Launchpad. Launchpad needs to be extended to analyse test data for our needs
  • Use the Launchpad librarian to record test log data - this won't be routinely queried, so doesn't need to live in the main database.
  • gcc test results - an overview of daily builds - a sample of simple and effective results sharing
  • Results emailed to anybody who wanted the results
  • Results need to be compared on the same platform from day to day
  • [ACTION] Michael Hudson can translate requirements in


CategorySpec