MetricsBasedTesting

Summary

All our current testing is binary pass/fail, but we would also like to track the evolution of certain parameters, such as boot speed and power usage, that may not have a clear pass/fail threshold but for which historical data is desirable. We will extend our infrastructure to collect and display such data.

Rationale

Work to improve performance requires access to rich performance data rather than a binary pass/fail result.

User stories

Design

Gather non-binary data during various checkbox tests and feed it on to the developers who need it for their work.
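
As a rough sketch, assuming a checkbox-style test script that wraps an existing command, times it, and records the number for later submission (names, paths, and the JSON format are illustrative, not part of the spec):

    import json
    import subprocess
    import time

    def run_metric_test(name, command, output_path):
        """Run `command` and record its wall-clock time as a metric."""
        start = time.time()
        status = subprocess.call(command)
        elapsed = time.time() - start
        with open(output_path, "w") as f:
            json.dump({"test": name, "value": elapsed,
                       "unit": "seconds", "exit_status": status}, f)
        return elapsed

    # e.g. a crude I/O write probe (cf. the examples below)
    run_metric_test("io-write",
                    ["dd", "if=/dev/zero", "of=/tmp/metric-test", "bs=1M", "count=64"],
                    "/tmp/io-write.json")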

Examples

  • Tracking boot speed (using bootchart)
  • Power usage
  • I/O throughput
  • Amount of memory being used
    • polled during install, application testing, immediately after boot, etc. (see the sketch after this list)
  • Webcam framerate
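
As one concrete case, the memory figure could be polled with a small helper like the sketch below, which reads /proc/meminfo and uses the common MemTotal minus MemFree, Buffers, and Cached approximation (the helper name is made up for illustration):

    def memory_used_kb():
        """Approximate used memory in kB from /proc/meminfo using the
        MemTotal - MemFree - Buffers - Cached estimate."""
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])   # values are in kB
        return (fields["MemTotal"] - fields["MemFree"]
                - fields["Buffers"] - fields["Cached"])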

Implementation

The initial pass should be implementable quickly and provide value to the groups who need this data.

Phase I (Karmic)

  • Gather metric data as attachments (files/blobs)
  • When a test successfully gathers and submits its metrics data, the test is marked as a Pass (see the sketch after this list)
  • Provide access to gathered metric data for developers
  • Do not parse it in the certification system or results tracker -- just gather and regurgitate
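
A sketch of that flow for the bootchart case, assuming bootchart leaves its logs under /var/log/bootchart (the helper, paths, and naming are illustrative, not the actual checkbox or certification API):

    import os
    import subprocess
    import sys
    import time

    def gather_bootchart(machine, outdir="/tmp/metrics"):
        """Archive bootchart output as an attachment named by machine and timestamp."""
        if not os.path.isdir(outdir):
            os.makedirs(outdir)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        archive = os.path.join(outdir, "%s-%s.tgz" % (machine, stamp))
        status = subprocess.call(["tar", "czf", archive, "/var/log/bootchart"])
        return archive if status == 0 else None

    if __name__ == "__main__":
        # the test Passes exactly when the data was gathered successfully
        sys.exit(0 if gather_bootchart(os.uname()[1]) else 1)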

Provide access to the data without doing any parsing

  • Have a directory containing the data
  • .../bootchart/$machinename/$timestamp.tgz or .../$machinename/metrics/bootchart/$timestamp.tgz
    • linked from the certification 'librarian' so there is no need to copy the files around
  • Have a file containing metadata about the machine available from this view (see the sketch after this list)
    • this data is already available from the website and can be retrieved programmatically
    • information such as processor speed, ram, etc.
  • Only publish data for publicly released hardware
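
The spec notes this data can already be retrieved programmatically from the website; purely for illustration, the sketch below collects the same kind of facts from the machine itself (field names and the JSON output format are assumptions):

    import json

    def machine_metadata():
        """Collect basic machine facts such as processor model and RAM size."""
        meta = {}
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("model name"):
                    meta["processor"] = line.split(":", 1)[1].strip()
                    break
        with open("/proc/meminfo") as f:
            # first line reads e.g. "MemTotal:  2059516 kB"
            meta["ram_kb"] = int(f.readline().split()[1])
        return meta

    def write_metadata(path):
        with open(path, "w") as f:
            json.dump(machine_metadata(), f, indent=2)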

Phase II (Later)

  • Store metric data in database as numeric value
    • Do not yet parse or analyze heuristically
  • Create a new test result state 'Data' to be used instead of Pass/Fail/Skip
  • Provide simple access for ad-hoc reporting
  • Generate alerts when metrics change by a sufficiently large amount or rise above a threshold (see the sketch after this list)
  • Separate public and private results with access controls
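
The alert rule could start out as simply as the sketch below; the 20% delta and the optional absolute threshold are placeholder values, not numbers from the spec:

    def needs_alert(previous, current, threshold=None, max_delta=0.20):
        """Flag a metric that crossed an absolute threshold, or that moved
        by more than max_delta (as a fraction) since the previous run."""
        if threshold is not None and current > threshold:
            return True
        if previous and abs(current - previous) / previous > max_delta:
            return True
        return False

    # e.g. boot time regressing from 28s to 41s trips the delta rule:
    # needs_alert(28.0, 41.0)  ->  True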


CategorySpec
