<!> WIP from http://pad.ubuntu.com/k6nvveLDHX

Automated Test Plan for Ubiquity Installer

This document outlines, at a high level, the plan for automated testing of the Ubiquity installer. The installation test cases can be found here and the state transition test cases can be found here.

Autopilot needs to take over the role of the manual tester as thoroughly and as accurately as possible, while also keeping the tests flavour- and language-compliant. This will greatly benefit the flavours that have small teams of testers, enabling them to focus their time on new features and improvements rather than on standard installation tests.

Since Ubiquity is usually the first experience for new users of *buntu flavours, the installation needs to be flawless. The process of installing should feel solid and stable and reassure the user throughout. The capability to run these tests on every new build greatly increases the chances of reaching this goal.

This document does not replace any previous plans or specifications for automated testing, but rather extends and updates previous ideas.



Author: Dan Chapman, Ubuntu Member

Approved By: < lp profile >, < position >

Related Documents

  • Installer Design - Ubuntu Installation Process
  • Test Analysis and Specification for Ubiquity
  • Automated testing for Ubiquity
  • Installation Test Cases
  • State Transition Test Cases



This document describes the scope, approach, resources and schedule of intended testing for the Ubiquity installer.


This document provides the following guidance:

  • Testing Scope
  • Entry and Exit criteria for each test
  • A description of resources and tools to be used to conduct testing
  • An overview of the test schedule for 'T' cycle
  • An overview of types of testing to be conducted

Testing Objectives

Provide a comprehensive and robust test suite for the installer. Validate and verify system integration, and that the installer's functionality aligns with the installer design specification.

Features To Be Tested

  • Integration of the Ubiquity package with all *buntu flavours that use it.
  • Installation using:
    • Default Values
    • Logical Volume Management
    • Logical Volume Management with encryption
    • Manual Partitioning using all available File-System formats
    • Automatic Re-Sizing of partitions
    • Manual Re-Sizing of partitions
    • Re-Using home partition
    • OEM Install
    • Free software only install
    • Release Upgrades - Including LTS to LTS
  • Different Locales - Running English and Non-English side-by-side
  • Connectivity - Ethernet, Wireless and No Network
  • Different configurations of HDD and Memory size
  • Logic of each step of the installer complies with design specification
  • Installing Updates and Third-Party software
  • Co-Existence with other operating systems
  • English text values throughout the install, as well as two or three other non-English languages.

Testing Approach

The purpose of the testing is to verify functionality; therefore a functional, black-box, UI-driven testing approach will be used, incorporating other testing techniques such as boundary-value analysis and fuzz testing. We want to verify, from a user's point of view, that the installer can be easily navigated and that installation completes successfully using all configurations relevant to the installer.

Test Techniques

The following testing techniques will be used for each group of tests.

Installation Tests

  • Black-box testing: without knowledge of Ubiquity's internals, provide the required user input and expect the install to complete.
  • Boundary-Value Analysis will be used to test negative and positive boundaries of hard-drive and memory-size configurations against the minimum system requirements of the *buntu flavour under test.
  • Fuzz testing will be used for:
    • testing non-English locales by selecting a language at random for the install
    • testing random partition configurations for the manual partitioning tests
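As an illustration, the random language selection could be sketched like this (a minimal sketch; the language pool and helper name are assumptions, since the real list would be read from Ubiquity's language chooser at run time):

```python
import random

# Hypothetical pool of languages offered by the installer; the real list
# would be read from Ubiquity's language chooser at run time.
LANGUAGES = ["English", "Deutsch", "Français", "Español", "Português", "Polski"]

def pick_install_language(rng=random):
    """Pick a non-English language at random for a fuzzed install run."""
    non_english = [lang for lang in LANGUAGES if lang != "English"]
    return rng.choice(non_english)
```

Logging (or seeding) the random choice is important so that a failing fuzzed run can be reproduced.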

State Transition Tests

  • Specification-based testing, using the Installer Design specification as a guide for creating test-cases for each step of the installer.
  • State Transition Diagrams, used to map the flow of the installer
  • Boundary-Value Analysis will be used to test negative and positive boundaries of user input controls
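The boundary-value cases above can be sketched as follows (a minimal sketch; the helper name and the 8192 MB minimum are assumptions, as the real minimum depends on the flavour under test):

```python
def boundary_values(minimum, step=1):
    """Return the values just below, at, and just above a minimum
    requirement, for use as boundary-value test cases."""
    return [minimum - step, minimum, minimum + step]

# Hypothetical minimum disk requirement of 8192 MB for the flavour under test.
MIN_DISK_MB = 8192

# The value below the minimum is expected to make the installer refuse or
# warn; the values at and above it are expected to install successfully.
disk_cases = boundary_values(MIN_DISK_MB, step=64)
```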


The tests will be implemented using Autopilot.

Test Environment

The tests will be run from the live session environment of the flavour under test, as we cannot run the tests in ubiquity-dm. This requires a test runner that creates the required test environment, boots into a live session of the latest daily ISO image, and installs all dependencies needed to run the tests.

To enable Autopilot to work with Ubiquity, a private root DBus session is started. Ubiquity is then launched as root inside the private DBus session, enabling Autopilot to hook onto the running Ubiquity process.
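A rough sketch of how a test runner might perform this (the helper functions are hypothetical, and the exact frontend argument passed to ubiquity may differ per flavour):

```python
import subprocess

def parse_dbus_launch(output):
    """Parse `dbus-launch` output (VAR=value lines) into a dict,
    e.g. DBUS_SESSION_BUS_ADDRESS and DBUS_SESSION_BUS_PID."""
    env = {}
    for line in output.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            env[key] = value.rstrip(";")
    return env

def launch_ubiquity():
    """Start a private root session bus and launch Ubiquity into it,
    so Autopilot can introspect the running process."""
    out = subprocess.check_output(["sudo", "dbus-launch"], text=True)
    address = parse_dbus_launch(out)["DBUS_SESSION_BUS_ADDRESS"]
    return subprocess.Popen(
        ["sudo", "env", "DBUS_SESSION_BUS_ADDRESS=" + address,
         "ubiquity", "gtk_ui"])
```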


Adding New Test Cases

When adding new test cases, care needs to be taken not to break any currently implemented tests. All test cases share common methods to reduce duplication, so a breakage in one place could result in all tests failing. It is recommended that you run the full suite and confirm all tests pass before completing a merge proposal for new test cases.

<!> Note to reviewer: Please ensure all tests complete before merging into lp:ubiquity

Once your test has been merged, you need to request a test submission to get your test running on jenkins. Follow the test submission guidelines.

Your test will be added to the Ubiquity Autopilot tests development daily run, which can be viewed here, where it will have a minimum 14-day holding period. After this, a review needs to be done and a report created from the test results collected over the holding period.

If it is determined that the tests are stable and passing or catching legitimate bugs, a request will be made to hand the new test over to the CI team.

Filing Bugs

All bugs found from either the test cases or the test environment need to be confirmed locally before any bugs are filed, and should include the tags:

  • autopilot
  • qa-daily-testing
  • iso-testing

These tags allow us to easily find the bugs which are affecting the test suite.

Test Deliverables

The Ubuntu-QA, Community QA and CI Teams will provide specific deliverables during the project. These deliverables fall into the following basic categories:

  • Documentation
  • Test Case Write-ups
  • Bug Write-ups
  • Reports

QATeam/AutomatedTesting/Ubiquity (last edited 2014-01-03 12:31:27 by dpniel)