MobileInstallTesting

Summary

The number of images to test is increasing rapidly, and some images require specific hardware that is not widely available in the community. These factors make exhaustive testing of install images impractical. However, adequate coverage must still be achieved in order to ensure the quality of milestone images.

Release Note

No user visible change in the distribution.

Rationale

Ubuntu image installation testing is currently handled mainly through the ISO tracker; however, this is aimed mainly at smoke testing before a milestone. More comprehensive testing of install images is needed before the final release. Tests should be established that provide good coverage while limiting the number of tests required.

User stories

Lucy only has one XYZ development board to test with, and the candidates for the final image are now posted so she also has a limited amount of time. She wants to cover as many installation options as possible, but installs are time consuming. Using pairwise methods, she works out a plan that covers all of the options she wants, using only a few installs.

Assumptions

Most bugs become apparent when combinations of two or fewer factors are tested.

Design

This spec proposes extending our QA install testing process in a way that will give us considerably higher test coverage of our release products/images, while preserving the ability to test images at high frequency during pre-milestone testing.

To ensure the latter, this spec confirms the established approach of running install tests with all-options coverage, as done in the past. The rationale for not extending the test set used for verifying pre-milestones is that the frequency of image respins can be quite high, which makes verifying more combinations unfeasible.

However, while pre-milestone periods have the unique time constraints mentioned above, we do not face the same constraints during milestone development phases. This spec proposes using that time to run additional test option combinations generated with the all-pairs method.

A bit of background on pairwise testing can be found here:

Example - generating tests using all-pair approach

For example, say you have 2 configuration options (call them 1 and 2), each of which can be either on or off, giving two states per option (call the states a and b). To test each option state at least once, you would only need 2 tests:

 1a 2b
 1b 2a

In order to test all pairs of option states in conjunction with one another, it would take 4 tests:

 1a 2b 
 1b 2a 
 1a 2a 
 1b 2b

This gets more interesting as the number of options grows. For example, if you had 7 configuration options with 2 states each, testing each option state at least once would still take only 2 tests, and testing all pairs of option states would take 8 tests. However, testing them all exhaustively would require 2^7 = 128 tests. For something like installs, where the test itself is time consuming, this would be impractical.
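For illustration, the all-pairs idea above can be sketched as a small greedy generator: repeatedly pick the candidate test that covers the most still-uncovered value pairs. This is only a minimal sketch (real tools such as jenny use more refined algorithms), and the option names are placeholders:

```python
from itertools import combinations, product

def pairwise_tests(factors):
    """Greedy all-pairs generation: from the full cartesian product,
    repeatedly pick the test covering the most uncovered value pairs."""
    names = list(factors)
    candidates = [dict(zip(names, values))
                  for values in product(*factors.values())]
    # Every pair of (factor, value) assignments that must co-occur in some test.
    uncovered = {((f1, t[f1]), (f2, t[f2]))
                 for t in candidates
                 for f1, f2 in combinations(names, 2)}
    tests = []
    while uncovered:
        def gain(t):
            return sum(((f1, t[f1]), (f2, t[f2])) in uncovered
                       for f1, f2 in combinations(names, 2))
        best = max(candidates, key=gain)
        tests.append(best)
        uncovered -= {((f1, best[f1]), (f2, best[f2]))
                      for f1, f2 in combinations(names, 2)}
    return tests

# Seven on/off options, as in the example above.
opts = {f"opt{i}": ["a", "b"] for i in range(1, 8)}
print(len(pairwise_tests(opts)))  # a handful of tests vs. 128 exhaustive
```

The greedy heuristic does not guarantee the minimum possible matrix, but it reliably produces something close to it, which is all install testing needs.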

Implementation

The idea we discussed in the session at UDS was to do the extra install testing after the milestone images are released. ISO tracker smoke testing will still operate as normal, but will be augmented by further install testing. For instance, testing on Beta 1 means that the tests are performed on a daily snapshot sometime between Beta 1 and Beta 2. A fake target will be created on the ISO tracker to facilitate testing the images in this way. A smaller subset of the tests will be targeted to run on the pre-FINAL image.

  • Create a list of possible install options for ARM
  • Create a matrix of the minimum tests required to cover each option and each pair
  • Document the install test scenarios on testcases.qa.ubuntu.com
  • Create virtual milestones for post-milestone testing and link them to the appropriate tests
  • Perform testing at each milestone

The actual tests can be documented by creating a wiki template and substituting the options we want to test for each scenario. The list of tests can be generated with any of the available pairwise test generation tools, such as jenny or allpairs.
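As a sketch of that substitution step, the snippet below renders option combinations into wiki-style test case text. The template fields and scenario names here are hypothetical, not the actual testcases.qa.ubuntu.com layout:

```python
from string import Template

# Hypothetical wiki template; the real one would live on the QA wiki.
TEMPLATE = Template("""== Install test: $name ==
 * Partitioning: $partitioning
 * Login: $login
 * Network: $network
""")

# Two sample scenarios taken from the option matrix in this spec.
scenarios = [
    {"partitioning": "Side-by-side", "login": "Log in automatically",
     "network": "detached"},
    {"partitioning": "guided",
     "login": "Require my password to log in and to decrypt my home folder",
     "network": "detached"},
]

for i, s in enumerate(scenarios, 1):
    print(TEMPLATE.substitute(name=f"arm-install-{i}", **s))
```

The same loop could just as easily consume the rows emitted by jenny or allpairs instead of a hand-written list.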

Install options for ARM images

Here is the current matrix of tests that exercises all of the accessible installation options on ARM. Some options, such as OEM install mode, are not straightforward to select at this time, either for testing or for end-user installation. If and when changes to the boot scripts allow more convenient selection of these options, they will be added to the testing.

Testing each option at least once:

 Partitioning | Login                                                       | Network
 -------------+-------------------------------------------------------------+---------
 Side-by-side | Log in automatically                                        | detached
 guided       | Require my password to log in and to decrypt my home folder | detached
 manual       | Require my password to log in                               | attached

Testing all pairs:

 Partitioning | Login                                                       | Network
 -------------+-------------------------------------------------------------+---------
 Side-by-side | Log in automatically                                        | detached
 guided       | Require my password to log in and to decrypt my home folder | attached
 manual       | Require my password to log in                               | attached
 guided       | Require my password to log in                               | detached
 Side-by-side | Require my password to log in                               | attached
 manual       | Log in automatically                                        | detached
 Side-by-side | Require my password to log in and to decrypt my home folder | detached
 guided       | Log in automatically                                        | attached
 manual       | Require my password to log in and to decrypt my home folder | attached
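The all-pairs matrix above can be checked mechanically. The sketch below, with the login values abbreviated to auto/password/decrypt for brevity, verifies that the nine tests cover every pair of option values:

```python
from itertools import combinations

# The nine all-pairs tests from the matrix above,
# as (partitioning, login, network) tuples.
TESTS = [
    ("Side-by-side", "auto",     "detached"),
    ("guided",       "decrypt",  "attached"),
    ("manual",       "password", "attached"),
    ("guided",       "password", "detached"),
    ("Side-by-side", "password", "attached"),
    ("manual",       "auto",     "detached"),
    ("Side-by-side", "decrypt",  "detached"),
    ("guided",       "auto",     "attached"),
    ("manual",       "decrypt",  "attached"),
]

FACTORS = [
    {"Side-by-side", "guided", "manual"},  # Partitioning
    {"auto", "password", "decrypt"},       # Login
    {"attached", "detached"},              # Network
]

def uncovered_pairs(tests, factors):
    """Return the (factor, value) pairs not covered by any test."""
    needed = {((i, a), (j, b))
              for i, j in combinations(range(len(factors)), 2)
              for a in factors[i] for b in factors[j]}
    seen = {((i, t[i]), (j, t[j]))
            for t in tests
            for i, j in combinations(range(len(t)), 2)}
    return needed - seen

print(uncovered_pairs(TESTS, FACTORS))  # set() -> every pair is covered
```

Running the same check against the first matrix would report the pairs it misses, which is exactly the trade-off between all-options and all-pairs coverage.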

BoF agenda and discussion

= Efficient testing of install images =
https://wiki.ubuntu.com/Specs/MobileInstallTesting

* not mobile specific
* a lot of install testing towards the end of cycle
 * ad-hoc
  * some architectures like arm don't have a broad testing community
  * images take quite long to install - just one board/person
  * idea: limit the number of installs to perform
 * found a lot of install issues during the final release testing that were
   not really mobile specific.

* atm most testing is done through the ISO tracker
  -> goal: determine whether an image is good enough for a milestone, but testing
     stops right after the milestone is out
  -> problem: no real QA for images after the milestone is out
  -> problem: only covers the default install, no customizations
 * new approach pairwise (aka all-pairs) testing: minimize the number of tests to perform
   -> tools exist to generate the matrix to test
   -> question: how to reduce the matrix to something coverable in practice
      -> test higher risk combinations only
      -> example: be able to reduce test scenarios from 72 to 12 or even further
 * idea is to supplement this with a new testing approach
 * problem: testing hardware combinations
  * voluntarily submit hardware info (maybe using checkbox information) 
 * spread out testing of the pairs throughout the alpha cycle using the daily
   image
 * during release time just test the options PLUS verification of all the
   combinations with the pair elements involved in bugs found during previous
   testing -> see http://burtleburtle.net/bob/math/jenny.html
 * create a) fake milestones and b) fake products to a) allow testing of final milestone
   releases rather than milestone candidates and b) don't spam/distract community
   testers
 * is it a problem that test coverage is not balanced properly (e.g. lots of users
   did test A ... but only a few do test B)? seems not to be the case
 * claim mechanism with timestamps

List of options to test (pairwise) - initially assembled from UNR install options
(might be slightly different for ARM images in lucid)

We can use pair wise testing between milestones:
 * Create fake milestones and products for those.
 * We have to avoid spamming the community members with new builds
 
Possibilities for options to test during install:
 Boot
 • livecd + install
 • install Ubuntu
 OEM install
 • on/off
 Partitioning
 • side-by-side
 • guided
 • manual
 Login
 • auto
 • password
 • password + ecryptfs home dir
 Free software only
 • on/off
 Network
 • connected/disconnected

How can we know which hardware the community has:
 * The ISO tracker profile include a text field to specify the hardware
 * Using checkbox to send the hw

ACTION: get a better list of arm install tests/pairs
ACTION: checking and setting up virtual milestones and virtual products in iso tracker
ACTION: put a testing timeline/roadmap on the wiki (two columns: a) scheduled/regular,
        b) regression testcases etc.)


CategorySpec

Specs/MobileInstallTesting (last edited 2009-12-16 22:59:26 by 75-27-138-126)