There are many different types of testing available to people interested in QA. Before getting involved in any activity, it's recommended that you join the Ubuntu QA mailing list. The mailing list carries periodic announcements about testing and will keep you up to date on what's currently going on.

Tutorials / Classroom sessions


Ubuntu Tutorial and Classroom sessions.

These sessions are one of the most important ways for people to learn what Ubuntu is and how testing makes it better with each release.

If you can attend

If you can attend the tutorial / classroom sessions, you can ask the tutor questions directly.

If you can't attend

  • All the tutorial / classroom sessions the QA team holds are logged here. Have a read through them, and remember that the prerequisites for a classroom session still apply if you want to follow the tutorial.

If you have a question that was not asked during the session, ask it on the #ubuntu-quality channel or contact the tutor directly.

What can I do?

If you are new, contributing test results is a great way to start. Once you've contributed some results, you can use that knowledge to contribute testcases as well. Watch the mailing list for announcements about calls for testing and cadence testing weeks. Then follow the tutorials below and contribute your results. Use the mailing list and IRC channels to help you if you get stuck. Thanks!


Before you begin, it's important to learn about the QATracker, the master repository for all of our testing within Ubuntu QA. It holds our testcases, records our results, and helps coordinate our testing events. It's imperative that you learn how to use this tool before participating in the activities listed below. Learn more about it here.

Contributing Testcases

See the resources on writing testcases to help contribute manual and/or automated testcases!


We hold hackfests online from time to time to help new and experienced contributors write tests.

Manual Testcases

Manual testcases are simply sets of instructions designed to be followed, and reported against, by real people. These testcases can be more complex, or require more judgement, than automated testcases, and are generally easier to write. The quality team maintains a project repository for manual testcases.

Automated Testcases


Autopkgtest tests (DEP-8) are written and submitted as additions to an individual package. They are low-level tests that verify the package's functionality, and are run automatically against the built, installed package.
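As a rough sketch of what such a test looks like, an autopkgtest consists of a `debian/tests/control` entry plus an executable test script. The names `mypkg` and `smoke` below are made up for illustration, not taken from a real package:

```shell
# Hypothetical sketch of a minimal autopkgtest layout; "mypkg" and
# "smoke" are illustrative names only.
workdir=$(mktemp -d)
mkdir -p "$workdir/debian/tests"

# Declare the test; "Depends: @" means "the package under test itself".
cat > "$workdir/debian/tests/control" <<'EOF'
Tests: smoke
Depends: @
EOF

# The test script: any non-zero exit status marks the test as failed.
cat > "$workdir/debian/tests/smoke" <<'EOF'
#!/bin/sh
set -e
mypkg --version
EOF
chmod +x "$workdir/debian/tests/smoke"
echo "wrote $workdir/debian/tests"
```

The test script here is deliberately minimal (a "does it start at all" smoke test); real packages often ship several test stanzas with their own dependency lists.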


Autopilot tests are used for functional testing, including GUI testing and simulating end-user interaction. See the Launchpad project for Autopilot tests.

Ubuntu Touch

In addition to writing testcases for the Ubuntu desktop, you can help out the Ubuntu Touch core apps projects by writing testcases for those applications. See this page for a list of needed tests, and this walkthrough for contributing a testcase.

Contributing Results

Manual Testing

These tests and results are reported using the appropriate QATracker. Testing generally occurs as part of a cadence testing week.


Walkthroughs for each of these types of testing can be found on the QATracker page.

Image Testing

Image or ISO testing consists of downloading a copy of the latest daily ISO images, burning them to CDs/USB keys (or loading them into VMs), and testing them. This brings to light many issues that might otherwise be missed by early adopters and developers, especially in the CD builds and installers.

Advantages: ISO testing doesn't require you to dedicate your main machine or even a computer that you use productively. You can simply use a spare machine (or virtual machine) and test the installation there.

Disadvantages: ISO testing is only useful in the days just before a CD image milestone release. If you're installing to a fresh machine and want to continue with general testing of the development release, you'll need to import your data manually. If you're testing a CD-based upgrade, you'll want to back up your data first.

If you are interested in this kind of testing, see the ISO information page for instructions on getting started with ISO testing and directions for using the test tracker. The ISO Testing Walkthrough is a good place to start. Also ensure you've joined the Ubuntu QA mailing list to know when the testing weeks are occurring.
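As one way of getting the daily image, zsync can fetch it efficiently by reusing the blocks of any previously downloaded .iso in the current directory. The release codename below is a placeholder assumption; substitute the current development release:

```shell
# Sketch of fetching a daily desktop image with zsync (requires the
# "zsync" package). "noble" is a placeholder codename, not necessarily
# the current development release.
RELEASE=noble
URL="http://cdimage.ubuntu.com/daily-live/current/${RELEASE}-desktop-amd64.iso.zsync"
echo "$URL"
# zsync "$URL"   # uncomment to download; reuses any existing .iso here
```

Re-running the same command on a later day only transfers the blocks that changed, which makes daily testing much cheaper than a full download.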

ARM Testing

Additionally, if you have the hardware for it, we are actively helping push Ubuntu forward onto ARM architectures. Check out the ARM pages for more information on testing Ubuntu ARM images, including testing on a Pandaboard.

Application Testing

Application testing is the manual testing of specific things (test cases) in applications. Regression tests are specific tests for potential breakages from one release to another (they're also relevant for SRU testing, below).

Advantages: Many of the testcases are very quick to do. Uploading your results to the wiki is also relatively painless - all you need is a Launchpad account.

Disadvantages: Unlike general testing, application tests require you to follow a specific set of procedures purely for testing purposes. Since existing application tests tend to cover either general functionality or parts that broke in a previous release, you may need to help write new test cases for a program to ensure good coverage.

If you are interested in this kind of testing, head to the Packages QATracker, run through the application testcases, and report your results.

Hardware Testing

Hardware testing is the manual testing of specific things (test cases), mainly related to laptop hardware, using milestone releases of the development version (alphas, betas, and release candidates). The goal is to get Ubuntu working well on as many different makes and models of laptop as possible; doing so requires knowing which hardware works straight off the install CD, and which hardware needs configuring or is poorly supported.

Advantages: Testcases are very quick and easy to do. Uploading your results to the wiki is also relatively painless. Because the issues you find are in the development release, you'll be using the same version as the developers themselves, making it easier for your bug reports to be confirmed and fixed.

Disadvantages: You need to run the development release. You will need a Launchpad account for many tasks (such as reporting bugs), and you need to coordinate with other people who have the same laptop.

If you are interested in this kind of testing, head to the Testing/Laptop wiki page: https://wiki.ubuntu.com/Testing/Laptop

Automated Testing

Automated testing is the conversion of large numbers of test cases into simple scripts; they can often be run in bulk with a single command. Autopilot is the main way of automating testcases for Ubuntu. In addition, the Checkbox program also allows automated testcases to be run and recorded.

Advantages: Running tests can be very straightforward and serve as a simple complement to general testing. Checkbox will guide you through the testing process, automating as much as possible. If there are errors specific to your unique hardware setup, they are more likely to be fixed when detected this way. UTAH tests allow you to automate across languages and systems easily.

Disadvantages: Some of the test suites, particularly those within Checkbox, will still require some manual intervention (such as confirming whether a sound played). Submitting Checkbox reports requires a Launchpad account. Unless you're doing a regression test, you will also need the development release.

If you are interested in this kind of testing, start by installing Checkbox, then run it by selecting System->Administration->System Testing. If you wish to go further and write Autopilot test cases, visit the AutomatedTesting page for more information.


Stable Release Update (SRU) Testing

All stable release updates are first uploaded to the Proposed repository before they are released live. An important principle behind Ubuntu is that we should never make a working system worse; stable release updates need to be extensively tested to make sure that there are no regressions.

Advantages: You don't have to use the development release to do SRU testing - it's as easy as enabling the Proposed repository and downloading the updates. Then you just use them as normal, make sure they didn't break anything, and leave your feedback on the appropriate Launchpad bug. Unless they are hardware-related, SRUs for older releases can also be easily tested in a virtual machine. You don't have to be very technical to do this kind of testing either - the fanciest task is editing a tag on a Launchpad bug.
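As a sketch of what "enabling the Proposed repository" means on the command line, the -proposed pocket corresponds to one extra apt source line for your release. The fallback codename below is a placeholder, and the commented commands need root to take effect:

```shell
# Sketch: construct the apt source line for the -proposed pocket of the
# running release. "trusty" is only a placeholder fallback used when
# lsb_release is unavailable; substitute your own release's codename.
CODENAME=$(lsb_release -cs 2>/dev/null || echo trusty)
LINE="deb http://archive.ubuntu.com/ubuntu ${CODENAME}-proposed restricted main multiverse universe"
echo "$LINE"
# echo "$LINE" | sudo tee /etc/apt/sources.list.d/proposed.list
# sudo apt-get update
```

After updating, install only the specific proposed package you are verifying rather than everything from -proposed, so any regression you hit can be pinned to that one update.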

Disadvantages: Unless you're willing to follow a test case for a new application, you'll be more or less limited to testing applications that you already use and that have received a proposed SRU.

If you are interested in this kind of testing, there is a public Launchpad SRU verification team that you can join as well as a fantastic wiki page with helpful tools for finding bugs to work on.


Reporting bugs is the most important part of testing; please head over to Bugs for more information on this area.

Testing/Activities/JIC (last edited 2013-07-19 00:42:19 by phillw)