ARMAutomatedTestingFramework

UNDER CONSTRUCTION

Summary

To create an automated testing framework that improves testing of Ubuntu on ARM.

Release Note

No end-user visible change.

Rationale

ARM devices capable of running Ubuntu are available today, but they are not yet as pervasive as x86 hardware. Our pool of potential testers is therefore smaller, which calls for a more efficient approach to testing. Opportunities for automated testing exist throughout Ubuntu; however, automation has traditionally been approached without consideration of non-x86 devices.

User stories

Peter has a lab with multiple development boards reserved for testing purposes. Peter has configured nightly cron jobs to update the machines and run tests using an automated test framework. The framework handles installing any test suites that are not already installed, executing the tests, and pushing the results to a server running locally in the lab.

Susan is a developer who works from home. Like others on her team, she has a system at home on which she can test the latest images. After installing a new image, she would like to run some benchmarks to see how it compares to previous images. She installs the automated test framework and uses it to run the benchmarks she is interested in. After reviewing the results, she uses the framework to publish them to an external server where the rest of her team can see them.

Assumptions

  • The system under test may or may not be connected to a network
  • Test systems may exist in a centralized pool, or be distributed across many locations
  • Test systems might only have a text console, or even only a serial connection
  • A server for storing results is optional; it may be deployed publicly, or locally if private results storage is required

Design

Standalone Test Client

A standalone client should allow a developer or tester to use a command-line interface to perform basic testing operations such as:

  • Listing available tests
  • Installing tests
  • Executing tests
  • Storing and listing results
  • Gathering system information and logs
  • Publishing results to a server

A text-based CLI will be provided for 10.10.
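
As a rough sketch of how such a command set might be organized in Python (the command names and argparse layout are illustrative assumptions, not a final interface):

    # Hypothetical sketch of the client's command-line dispatch.
    import argparse

    def main():
        parser = argparse.ArgumentParser(description='ARM automated test client')
        sub = parser.add_subparsers(dest='command', required=True)
        sub.add_parser('list-tests', help='list available tests')
        install = sub.add_parser('install', help='install a test')
        install.add_argument('test')
        run = sub.add_parser('run', help='execute an installed test')
        run.add_argument('test')
        sub.add_parser('results', help='list stored results')
        publish = sub.add_parser('publish', help='upload a result to a server')
        publish.add_argument('result_id')
        args = parser.parse_args()
        print('would dispatch:', args.command)  # a real client would call a handler here

    if __name__ == '__main__':
        main()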

Test Server

A test server should allow results to be gathered and stored in such a way that they can be associated with specific builds, hardware, or date ranges.

Multiple test servers could be supported eventually, but the current plan is to work with the validation dashboard spec to allow uploading of results to the database used by that project. Test results would be uploaded over a web interface, with authentication provided by the dashboard. Results, as well as hardware and software profile information, will be transmitted as JSON.
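
To make the interface concrete, an upload might look roughly like the sketch below. The real schema is owned by the dashboard project, so every field name and value here is an assumption:

    # Illustrative shape of a results upload; all field names are assumptions,
    # since the actual schema is defined by the validation dashboard.
    import json

    payload = {
        'build': 'ubuntu-10.10-daily',                            # image under test
        'hardware_profile': {'board': 'beagleboard', 'memory_mb': 512},
        'software_profile': {'kernel': '2.6.35'},
        'test': 'stream',
        'results': [
            {'case': 'copy', 'result': 'pass'},
            {'case': 'triad', 'result': 'fail'},
        ],
    }
    print(json.dumps(payload, indent=2))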

The dashboard mentioned here is described in greater detail in ARMValidationDashboard.

Implementation

The test client will be written in Python and allow for an extensible command set. It will also include a series of test definition files, each of which encapsulates the details necessary to install and run a test. The following sections, at a minimum, will need to be included in the test definition:

  • Description
  • URL
  • MD5SUM
  • Dependencies
  • Installation script
  • Execution script
  • Results Filtering script

For tests that can be installed as a package from the archive, the URL and MD5SUM fields are not needed; the packages necessary for installing the test should simply be listed under Dependencies. Likewise, for tests that are built entirely from source, the Dependencies field is optional. New tests may be added by creating new test definition files.
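
As an illustration, a test definition covering those sections might look like the following; the dict-based format, field names, and values are all assumptions, since the file format has not been finalized:

    # Hypothetical test definition for a source-built benchmark.  The format
    # shown (a Python dict) and all values are illustrative assumptions.
    stream_test = {
        'description': 'STREAM memory bandwidth benchmark',
        'url': 'http://example.com/tests/stream.c',       # illustrative URL
        'md5sum': '0123456789abcdef0123456789abcdef',     # placeholder, not a real checksum
        'dependencies': ['gcc'],                          # optional for source-built tests
        'install': 'gcc -O2 -o stream stream.c',
        'run': './stream > stream.log',
        'parse': "grep -E '^(Copy|Triad):' stream.log",   # results filtering
    }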

If root permissions are needed for installing or running a test, the command should be run under sudo so that the user may be prompted for a password, unless root/sudo privileges have already been established.
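
A minimal sketch of that behaviour, assuming a hypothetical helper that prefixes a command with sudo when the client is not already running as root:

    # Sketch of privilege handling; the helper name is hypothetical.
    import os
    import subprocess

    def run_maybe_root(cmd, needs_root=False):
        if needs_root and os.geteuid() != 0:
            # sudo prompts the user for a password unless credentials are cached
            cmd = ['sudo'] + cmd
        return subprocess.call(cmd)

    # e.g. run_maybe_root(['apt-get', 'install', '-y', 'stream'], needs_root=True)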

Publishing a test run will extract its results and upload them to the server in a structured format, such as the JSON described above, that allows the server to easily parse the results and store them in a database.
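
A rough sketch of that publish step follows; the endpoint URL is a made-up placeholder, and authentication is omitted since it is provided by the dashboard:

    # Sketch of uploading parsed results as JSON; the server URL is a
    # placeholder and authentication is omitted for brevity.
    import json
    import urllib.request

    def publish(results, server='http://dashboard.example.com/api/results'):
        req = urllib.request.Request(
            server,
            data=json.dumps(results).encode('utf-8'),
            headers={'Content-Type': 'application/json'},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status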

Test/Demo Plan

Test images will be created by our team, and the automated testing framework described here will be used for testing those images.

BoF agenda and discussion

This section contains the raw meeting notes from UDS. Some of the questions that were raised are answered in this spec, some of them are answered in other specs, and some may simply be extraneous ideas or possible things to consider in the future.

- Automated testing framework directed towards ARM, but not exclusive to ARM
  - Need to be able to integrate automated test results from various sources
  - This test data - should be separate from test client
  - Only include known good test data, i.e. be able to contact source of test
      in the event of bad results
  - Need a default number of runs/test - be able to see variance (that data
      point is useful)
  - Authentication of test result submitters is important (e.g., PGP, password)
  - Need to capture information about the hardware platform and config along
      with test results.
  - Checkbox certification database may need to be extended to include more
      elaborate test results and performance regression
  - Phoronix uses a PHP interface, but may meet the needs in this case;
    Phoronix has the concept of a unit test
  - How should we store data from test runs?  Interesting results need to be
      analyzed on a per test basis.  Checkbox may meet this need?  Checkbox
      output data format is pretty flexible.  Data reporting programs are not
      open source???  Extend launchpad to include test result data analysis.
  - A lot of the concepts and techniques from DejaGnu are applicable
  - Results need to be relevant to the tests being run.  Problem with
    extra characters or results being added to data that was uploaded
  - Concept of private data in launchpad?  This would be similar to bug report
    information
  - Re-use concepts of certificate process
  - Action Item: Introduce concept of tests and results for a project in
    Launchpad.  There are benefits to starting with Launchpad.  Need to be able
    to extend data analysis based on user requirements
    - versus extending Checkbox to accomplish data reduction
  - Amount of data can be quite large - need to be able to reference and index
    data efficiently.  Searching lots of unindexed test results would be
    undesirable.  Needs to be designed properly.  Can use libraries in
    Launchpad.  Launchpad needs to be extended to analyse test data for our
    needs
  - Use launchpad librarian to record test log data - this won't be routinely
      queried, so doesn't need to live in the main database.
  - gcc test results - overview of daily builds - sample of a simple and
    effective results sharing
  - Results emailed to anybody who wanted the results
  - Results need to be compared on same platform from day to day
  - Michael Hudson can translate requirements into specific Launchpad
    requirements.  Paul, Michael, Francis, Joe, and ? to sit down and work out
    specific requirements.  Ex: be careful of swapping, serial communications, etc.


CategorySpec
