Report written up by Lars Wirzenius

Ubuntu QA Sprint, Oslo, October 1-2, 2007

Henrik Omma and Lars Wirzenius met in Oslo for two days to discuss Ubuntu quality assurance: the current situation, and short- and long-term goals. This is a summary of the discussions.

We concluded that for Gutsy, there's no time to make any big changes. However, Hardy will be an LTS release, and the Hardy cycle should thus concentrate on quality.

There are at least eight significant areas of Ubuntu QA:

A summary of the discussion of each area is below. Additionally, we discussed possible UDS BOF topics related to QA.

Manual testing

It is not realistic to test Ubuntu completely automatically, so manual testing will always be needed. If nothing else, the community as a whole has access to a much larger set of hardware than Canonical does. Manual testing is therefore needed for the various ISOs created during the release cycle.

To do:

Automatic testing: existing tools

Several tools already exist for testing the quality of .deb packages: lintian, linda, piuparts, and autopkgtest. The QA team is not using them, at least not systematically, although Ian Jackson is running autopkgtest. Running them routinely and reporting the resulting bugs automatically should be a good way to find many simple bugs to fix.
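As a rough illustration, the tools named above could be driven by a small wrapper that runs whichever of them are installed over a package and collects their output for bug filing. This is a hedged sketch: the tool names come from the text, but the idea of invoking them with no extra flags, and the function name, are assumptions; each tool's manual page documents the options a real harness would need.

```python
import shutil
import subprocess

# Package-QA tools mentioned in the text. Invoking them bare, with just the
# .deb path as argument, is an illustrative simplification.
TOOLS = {
    "lintian": ["lintian"],    # static policy checks on the package
    "piuparts": ["piuparts"],  # install/upgrade/remove testing
}

def run_checks(deb_path):
    """Run every available tool on deb_path.

    Returns a dict mapping tool name to its captured output, or to None
    when that tool is not installed on this machine.
    """
    results = {}
    for name, cmd in TOOLS.items():
        if shutil.which(cmd[0]) is None:
            results[name] = None  # tool not installed; skip gracefully
            continue
        proc = subprocess.run(cmd + [deb_path], capture_output=True, text=True)
        results[name] = proc.stdout + proc.stderr
    return results
```

A periodic job could call `run_checks()` over every package in the archive and diff the output against the previous run, reporting only new findings.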

To do:

Automatic testing: new tools

There are some tools that could be written, or finished, with a medium amount of work to allow much more automatic testing. For example, the GTK+ accessibility layer (ATK) makes it possible to record what programs do, and then re-run the recordings to make sure they still do the same thing; the Accerciser program does this. This could be used to develop a desktop testing tool and a test set to make sure all the basic operations on an installed Ubuntu desktop work as they should. Ideally, these tests can be run completely automatically, but getting there may require a lot of effort.
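The record-and-replay idea can be shown schematically. The sketch below is not the real AT-SPI or Accerciser API; a plain dictionary of callables stands in for the application under test, so only the regression-testing pattern itself is illustrated.

```python
# Schematic record-and-replay harness. Real desktop testing would capture
# events through the accessibility interfaces (as Accerciser does); here a
# dict of callables stands in for the application under test.

def record(actions):
    """Run each (name, args) action and remember the result it produced."""
    recording = []
    for name, args in actions:
        result = APP[name](*args)
        recording.append((name, args, result))
    return recording

def replay(recording):
    """Re-run a recording and check the application still behaves the same."""
    for name, args, expected in recording:
        actual = APP[name](*args)
        if actual != expected:
            return False  # regression: behaviour changed since recording
    return True

# Stand-in "application": two operations with observable results.
APP = {
    "type_text": lambda s: s.upper(),
    "add": lambda a, b: a + b,
}
```

A recording made against a known-good desktop becomes the test set; a replay failure after an update flags a behaviour change for a human to inspect.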

Another tool is vlosuts, a "live system upgrade tester", which tests that an entire Ubuntu (or Debian) system can be upgraded while running, and that it will still work after the upgrade.

The "live CD" ISO images and their grapical installer may be tested with the same framework as desktop testing. Additionally, the "alternate" CDs, and partly the graphical installer, may be tested using "pre-seeding", where the installer gets fed a prepared list of answers to each question. When run in an emulator, such as qemu or VirtualBox, it should be possible this way to test the entire installation completely automatically. The test obviously won't be complete: no emulator can emulate all the different hardware that exists. It should, however, allow us to automatically determine that an ISO works at all, in basic scenarios.

To do:

Inciting community QA

The key to getting the community to do more systematic QA work is to do everything related to QA as openly as possible, and to raise the visibility of QA. We discussed the possibility of having a "weather report" or "big board" page on qa.ubuntu.com, which would give a quick overview (good/bad colors in table cells, perhaps) of the various factors that affect the quality of Ubuntu. For example, there could be indicators showing whether the current set of ISOs passes automatic and manual tests, whether any release-blocking bugs are open, and a graph showing the number of open bugs as a function of time. Implementing the page will require some kind of centralized QA information collection location, which either stores the information itself or fetches it automatically from, say, Launchpad.
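The "big board" itself could start very simply. The sketch below renders a table of QA indicators as HTML with good/bad cell colors, as suggested above; the indicator names and the shape of the input data are invented for illustration, since a real page would pull its data from Launchpad or a central QA collector.

```python
# Sketch of the proposed "weather report" page: a table of QA indicators,
# each cell colored green (good) or red (bad). Data source is hypothetical.
GOOD, BAD = "#8f8", "#f88"  # table-cell background colors

def weather_report(indicators):
    """indicators: dict mapping indicator name to True (good) / False (bad)."""
    rows = []
    for name, ok in sorted(indicators.items()):
        color = GOOD if ok else BAD
        status = "OK" if ok else "FAIL"
        rows.append(f'<tr><td>{name}</td>'
                    f'<td style="background:{color}">{status}</td></tr>')
    return "<table>\n" + "\n".join(rows) + "\n</table>"
```

Regenerating this page from collected results every few minutes would give the community a single glanceable status for the whole distribution.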

To do:

Mobile testing

Mobile Ubuntu is, in theory, just a scaled-down variant of Ubuntu, so any QA work done on Ubuntu will benefit Mobile Ubuntu as well. There are special challenges with Mobile Ubuntu, though: the devices are rather different from normal PCs. It is also unclear to us whether Mobile Ubuntu even ships the GTK+ accessibility layer.

We didn't know enough about Mobile Ubuntu to come up with anything specific the QA team could do.

Stable release and security update QA

Updates to the stable releases should be tested with the automatic tools, if they aren't already. Security updates pose a problem: the new packages may have to be embargoed, so the QA team can't do the testing; the security team will have to do it themselves.

To do:

UDS topics

QATeam/Meetings/OsloSprint (last edited 2008-08-06 16:20:45 by localhost)