ARMAutomatedTestingFramework

UNDER CONSTRUCTION

Summary

We need to decide on, or create, an automated testing framework to improve automated testing in Ubuntu on ARM.

Release Note

No end-user visible change.

Rationale

ARM devices capable of running Ubuntu are available today, but are not yet as pervasive as x86. Our pool of potential testers is therefore smaller, necessitating a more efficient approach to testing. Automated testing opportunities exist throughout Ubuntu; however, automation has traditionally been approached without consideration of non-x86 devices.

User stories

Assumptions

  • The system under test may or may not be connected to a network
  • Test systems may exist in a centralized pool, or be distributed across many locations
  • Test systems might only have a text console, or even only a serial console

Design

Standalone Test Client

A standalone client should allow a developer or tester to use a command-line interface to perform basic testing operations such as:

  • Listing available tests
  • Installing tests
  • Executing tests
  • Storing and listing results
  • Gathering system information and logs
  • Publishing results to a server

The client may or may not have a graphical interface, but a text interface is preferable, since it could also be used in lightweight environments that lack a GUI.
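The command-line operations listed above could be sketched with Python's standard argparse module. Everything here is illustrative: the program name `arm-test`, the subcommand names, and the `--server` option are assumptions, not part of the spec.

```python
import argparse

def build_parser():
    # Hypothetical command set for the standalone test client;
    # names are placeholders, not defined by the spec.
    parser = argparse.ArgumentParser(prog="arm-test")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("list", help="list available tests")
    install = sub.add_parser("install", help="install a test")
    install.add_argument("test")
    run = sub.add_parser("run", help="execute a test")
    run.add_argument("test")
    sub.add_parser("results", help="store and list results")
    sub.add_parser("sysinfo", help="gather system information and logs")
    publish = sub.add_parser("publish", help="publish results to a server")
    publish.add_argument("--server", help="server URL (placeholder)")
    return parser

# Example invocation: "arm-test run memtest"
args = build_parser().parse_args(["run", "memtest"])
print(args.command, args.test)
```

Because argparse subcommands are extendable at parser-construction time, new operations could be registered without changing existing ones, matching the "extendable command set" goal below.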

Test Server

A test server should allow results to be gathered and stored in a fashion that they can be associated with specific builds, hardware, or timestamps.

The test server should allow results to be uploaded to the server so that they can be stored for future reference.
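A minimal sketch of how uploaded results might be associated with builds, hardware, and timestamps, as described above. The `TestResult` and `ResultStore` names, the field set, and the in-memory list are all assumptions for illustration; a real server would persist results to a database.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TestResult:
    # Fields chosen to match the associations the spec asks for:
    # specific builds, hardware, and timestamps.
    test: str
    outcome: str
    build: str
    hardware: str
    timestamp: float = field(default_factory=time.time)

class ResultStore:
    """In-memory stand-in for the test server's result storage."""

    def __init__(self):
        self._results = []

    def upload(self, result):
        # Accept a result uploaded by a client and keep it for future reference.
        self._results.append(result)

    def query(self, build=None, hardware=None):
        # Retrieve stored results filtered by build and/or hardware.
        return [r for r in self._results
                if (build is None or r.build == build)
                and (hardware is None or r.hardware == hardware)]

store = ResultStore()
store.upload(TestResult("memtest", "pass",
                        build="lucid-20100527", hardware="beagleboard"))
print(len(store.query(build="lucid-20100527")))
```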

Implementation

The test client will be written in Python and allow for an extendable command set. It will also include a series of test definition files that encapsulate the details of each test necessary for installation, execution, and filtering of results. New tests may be added by adding new test definition files. Publishing will extract the results from a test run and upload them to the server in a standard format that allows the server to easily parse and store them.
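The spec does not fix a format for test definition files, so the sketch below assumes an INI-style layout read with Python's standard configparser; the `[test]` section and its keys (`name`, `install`, `run`, `result_filter`) are hypothetical.

```python
import configparser

# Hypothetical test definition file content; the keys cover the three
# details the spec names: installation, execution, and result filtering.
DEFINITION = """
[test]
name = memtest
install = apt-get install -y memtester
run = memtester 1M 1
result_filter = (PASS|FAIL)
"""

def load_definition(text):
    """Parse one test definition into a plain dict the client can act on."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return dict(parser["test"])

defn = load_definition(DEFINITION)
print(defn["name"])
```

With this shape, adding a new test really is just adding a new definition file: the client discovers and parses them without code changes.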

Test/Demo Plan

It's important that we are able to test new features, and demonstrate them to users. Use this section to describe a short plan that anybody can follow that demonstrates the feature is working. This can then be used during testing, and to show off after release. Please add an entry to http://testcases.qa.ubuntu.com/Coverage/NewFeatures for tracking test coverage.

This need not be added or completed until the specification is nearing beta.

Unresolved issues

This should highlight any issues that should be addressed in further specifications, and not problems with the specification itself; since any specification with problems cannot be approved.

BoF agenda and discussion

What is needed in an automated testing framework?

  • Automated testing framework directed towards ARM, but not exclusive to ARM
    • Need to be able to integrate automated test results from various sources
    • This test data should be separate from the test client
    • Only include known good test data, i.e. be able to contact the source of a test in the event of bad results
    • Need a default number of runs per test, to be able to see variance (that data point is useful)
    • Authentication of test result submitters is important (e.g., PGP, password)
    • Need to capture information about the hardware platform and configuration along with test results.

Existing Frameworks

  • The Checkbox certification database may need to be extended to include more elaborate test results and performance regression data
  • Phoronix uses a PHP interface, but may meet the needs in this case
    • Phoronix has the concept of a unit test

Test Runs

  • How should we store data from test runs? Interesting results need to be analyzed on a per-test basis. Checkbox may meet this need; its output data format is pretty flexible. Data reporting programs are not open source??? Extend Launchpad to include test result data analysis.
  • A lot of the concepts and techniques from DejaGnu are applicable

  • Results need to be relevant to the tests being run. Problem with extra characters or results being added to data that was uploaded
  • Concept of private data in Launchpad? This would be similar to bug report information
  • Re-use concepts from the certification process
  • [ACTION] Introduce the concept of tests and results for a project in Launchpad. There are benefits to starting with Launchpad: being able to extend data analysis based on user requirements, versus extending Checkbox to accomplish data reduction
  • Amount of data can be quite large; need to be able to reference and index data efficiently. Searching lots of unindexed test results would be undesirable. Needs to be designed properly. Can use libraries in Launchpad. Launchpad needs to be extended to analyse test data for our needs
  • Use the Launchpad librarian to record test log data; this won't be routinely queried, so it doesn't need to live in the main database.
  • gcc test results: an overview of daily builds, a sample of simple and effective results sharing
  • Results emailed to anybody who wanted the results
  • Results need to be compared on the same platform from day to day
  • [ACTION] Michael Hudson can translate requirements in


CategorySpec

Specs/M/ARMAutomatedTestingFramework (last edited 2010-06-17 22:29:38 by 75-27-138-126)