OsloSprint

Report written up by Lars Wirzenius

Ubuntu QA Sprint, Oslo, October 1-2, 2007

Henrik Omma and Lars Wirzenius met in Oslo for two days to discuss Ubuntu quality assurance: the current situation, and short- and long-term goals. This is a summary of the discussions.

We concluded that for Gutsy, there's no time to make any big changes. However, Hardy will be an LTS release, and the Hardy cycle should thus concentrate on quality.

There are at least eight significant areas of QA as far as Ubuntu QA is concerned:

  • manual testing
  • automatic testing using existing tools
  • automatic testing using new tools
  • inciting QA work within the community
  • tracking test result reports, and other measurements of quality factors
  • mobile testing
  • stable release QA
  • security update QA

A summary of the discussion of each area is below. Additionally, we discussed possible UDS BOF topics related to QA.

Manual testing

It is not realistic to test Ubuntu completely automatically, so manual testing will always be needed. If nothing else, the community as a whole has access to a much larger set of hardware than Canonical does. Manual testing is therefore needed for the various ISOs created during the release cycle.

To do:

  • Update the testing checklists continually, and review them before each alpha, beta, and release candidate. The checklists are at https://wiki.ubuntu.com/Testing/Cases in the wiki.

  • Keep better track of ISO test results, especially successes on different kinds of hardware.

Automatic testing: existing tools

Several tools exist for testing the quality of .deb packages: lintian, linda, piuparts, and autopkgtest. These are not being used systematically by the QA team, though Ian Jackson is running autopkgtest. Running them and reporting bugs automatically should be a good way to find a lot of simple bugs.
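
To make the intent concrete, here is a minimal sketch, in Python, of running lintian over a directory of .deb files and filtering its findings against white/black lists of tags before anything would be reported. The paths, the example tags, and the report_bug() stub are hypothetical, and a real setup would also have to cover linda, piuparts, and actual Launchpad bug filing.

    #!/usr/bin/env python3
    # Hypothetical sketch: run lintian over a pile of .deb files and keep only
    # findings whose tags pass a white/black list, so that automatic bug filing
    # would not flood Launchpad.  Paths, list contents, and report_bug() are
    # placeholders, not a real QA-team setup.

    import glob
    import subprocess

    # Tags we never report (blacklist) and tags we trust enough to report
    # automatically (whitelist); everything else is left for a human to review.
    BLACKLIST = {"debian-changelog-file-missing"}          # example entries only
    WHITELIST = {"unstripped-binary-or-object",
                 "package-installs-python-bytecode"}

    def lintian_findings(deb):
        """Yield (severity, package, tag, detail) tuples from lintian output."""
        out = subprocess.run(["lintian", deb], capture_output=True, text=True)
        for line in out.stdout.splitlines():
            parts = line.split(": ", 2)
            if len(parts) < 3 or parts[0] not in ("E", "W"):
                continue
            severity, package = parts[0], parts[1]
            tag = parts[2].split()[0]
            yield severity, package, tag, parts[2]

    def report_bug(package, text):
        # Placeholder: a real version would file (or update) a Launchpad bug.
        print("WOULD REPORT against %s: %s" % (package, text))

    for deb in glob.glob("/srv/qa/debs/*.deb"):      # hypothetical location
        for severity, package, tag, detail in lintian_findings(deb):
            if tag in BLACKLIST:
                continue
            if tag in WHITELIST:
                report_bug(package, detail)
            # tags in neither list are collected for manual review elsewhere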

To do:

  • Set up lintian, linda, and piuparts testing, with automatic bug reporting that uses white/black lists carefully to avoid flooding Launchpad with useless bugs.

Automatic testing: new tools

There are some tools that could be written, or finished, with a medium amount of work, and that would allow much more automatic testing to be done. For example, the GTK+ accessibility layer (ATK) makes it possible to record what programs do, and then re-run the recordings to make sure they still do the same thing; the Accerciser program does that. This can be used to develop a desktop testing tool and test set to make sure all the basic operations on an installed Ubuntu desktop work as they should. Ideally, these tests can be run completely automatically, but getting there may require a lot of effort.
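
As a very rough sketch of the raw material such a tool works with, the snippet below (assuming the pyatspi bindings for AT-SPI are installed) lists the accessible applications on the desktop and logs focus-change events. This is not a proposed design, only an illustration of what the accessibility layer exposes for recording.

    #!/usr/bin/env python3
    # Minimal sketch of poking at the GNOME accessibility layer through the
    # pyatspi bindings: list the applications the desktop exposes, then log
    # focus events.  A real desktop-testing tool would record such events and
    # replay the corresponding actions; this only shows where the data comes from.

    import pyatspi

    desktop = pyatspi.Registry.getDesktop(0)
    for i in range(desktop.childCount):
        app = desktop.getChildAtIndex(i)
        if app is not None:
            print("accessible application:", app.name)

    def on_focus(event):
        # event.source is the accessible object that just received focus
        print("focus moved to:", event.source)

    pyatspi.Registry.registerEventListener(on_focus, "object:state-changed:focused")
    pyatspi.Registry.start()   # runs the AT-SPI event loop until interrupted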

Another tool is vlosuts, a "live system upgrade tester", which tests that an entire Ubuntu (or Debian) system can be upgraded while running, and that it will still work after the upgrade.

The "live CD" ISO images and their grapical installer may be tested with the same framework as desktop testing. Additionally, the "alternate" CDs, and partly the graphical installer, may be tested using "pre-seeding", where the installer gets fed a prepared list of answers to each question. When run in an emulator, such as qemu or VirtualBox, it should be possible this way to test the entire installation completely automatically. The test obviously won't be complete: no emulator can emulate all the different hardware that exists. It should, however, allow us to automatically determine that an ISO works at all, in basic scenarios.

To do:

  • Develop a prototype desktop testing framework. It should allow even non-technical people to record test scripts, and make it possible to run the scripts in a straightforward manner ("easy" can come later).
  • Look at vlosuts and see if it is suitable for Ubuntu.
  • Set up a pre-seeding test framework for alternate CDs.

Inciting community QA

The key to getting the community to do more systematic QA work is to do everything related to it as openly as possible. The visibility of QA should also be raised. We discussed the possibility of having a "weather report" or "big board" page on qa.ubuntu.com, which would give a quick overview (good/bad colors in table cells, perhaps) of the various factors that affect the quality of Ubuntu. For example, there could be indicators for whether the current set of ISOs pass automatic and manual tests, whether there are any release-blocking bugs open, and a graph showing the number of open bugs as a function of time. Implementing the page will require some kind of centralized QA information collection point, which either stores the information or gets it automatically from, say, Launchpad.
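
As a sketch of how small the first version of such a page could be, the snippet below turns a handful of collected indicators into a colored HTML table; the indicator names and values are invented here, and the real work is in gathering them from Launchpad and the test infrastructure.

    #!/usr/bin/env python3
    # Sketch of the "weather report" page: render collected quality indicators
    # as a table with good/bad colors.  The indicators here are invented; in a
    # real setup they would be gathered from Launchpad, ISO test reports, etc.

    # Each indicator: (name, is_ok, detail)
    indicators = [
        ("Current ISOs pass automatic tests", True, "last run: all images"),
        ("Current ISOs pass manual tests", False, "2 of 6 images untested"),
        ("No release-blocking bugs open", False, "3 blockers open"),
    ]

    rows = []
    for name, ok, detail in indicators:
        color = "#ccffcc" if ok else "#ffcccc"
        status = "OK" if ok else "PROBLEM"
        rows.append('  <tr><td>%s</td><td style="background: %s">%s</td>'
                    '<td>%s</td></tr>' % (name, color, status, detail))

    page = "<table border='1'>\n%s\n</table>\n" % "\n".join(rows)

    with open("weather-report.html", "w") as f:
        f.write(page)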

To do:

  • Move QA team meetings to #ubuntu-meetings, instead of doing them over the phone. (This starts October 3.)

Mobile testing

Mobile Ubuntu is, in theory, just a scaled down variant of Ubuntu, so any QA work done on Ubuntu will benefit Mobile Ubuntu as well. There are special challenges with Mobile Ubuntu, though: the devices are rather different from normal PCs. It is also unclear to us whether Mobile Ubuntu even ships the GTK+ accessibility layer.

We didn't know enough about Mobile Ubuntu to come up with anything specific the QA team could do.

Stable release and security update QA

Updates to the stable releases should be tested with the automatic tools, if they aren't already. For security updates, there is the problem that the new packages may have to be embargoed, so the QA team can't do the testing; the security team would have to do it themselves.

To do:

  • Discuss with the stable release and security teams what the situation is and whether the QA team can help them with something.

UDS topics

  • The self-testing desktop. Can we include the automatic desktop testing framework in main, maybe even on the installation CD, to make it easier for users to participate in testing?

  • Generalizing the QA tracker and proposed updates to the tracker.

  • Bug statistics.

    • https://wiki.ubuntu.com/Bugs/Stats

    • What stats are useful (e.g., to release managers)?
    • What stats would drive us to do better quality-wise?
    • Perhaps: turnaround time for bugs: time to first answer, time to triage, time since last activity for non-forwarded and non-wishlist bugs, time to closed.
    • Open bugs / total bugs.
    • Statistics divided between different components?
    • Open bugs per package per popularity-contest vote?
    • Number of bugs per upload per package, or how long the package has been in the archive?
  • Slimming down main (spread over at least two days)

    • main is constantly growing
    • reduce it to make it easier to maintain
    • identify problem packages
      • lots of stable release updates in the past two years?
      • many security issues?
      • gut feeling? big group discussion at UDS to determine that? or just ask release managers and security team?
      • divide packages into "must have" and "removal candidate": after three months people must justify why a package should be kept in main, if it hasn't already been moved to "must have"
    • build a dependency tree and analyze leaves first (see the sketch at the end of this report)
      • if there is a problem package that a leaf depends on, both may have to be removed
  • Removing packages from the CD

    • subset of main+restricted
    • duplication of functionality (3 image viewers, for example)
    • big applications with smaller alternatives
      • sort package list by installed size
  • Automatic bug reporting

    • is it OK, as far as developers are concerned?
    • what are the criteria for acceptable automatic bug reporting?
  • Hardware related testing

    • QA tracker
    • HW database people
    • server/kernel teams
    • collecting info on what hardware works and what does not
  • The Ubuntu weather report

    • gathering and summarizing test results automatically
    • release team: what's useful?
    • platform/toolchain people
  • Automated desktop testing

    • with Accerciser and Sun accessibility people
  • Automated mobile QA

    • What can be tested on a desktop system?
    • What about the rest?
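
The following sketch illustrates the "analyze leaves first" idea from the main-slimming topic above: given a hypothetical toy map of which packages depend on which, it repeatedly peels off the packages that nothing else depends on, which are the ones that can be reviewed, and possibly removed, without breaking anything else.

    #!/usr/bin/env python3
    # Sketch of the "build dependency tree and analyze leaves first" step for
    # slimming down main.  The dependency data here is invented; a real run
    # would take it from the archive's Packages files.

    # package -> set of packages it depends on (toy data)
    depends = {
        "app-frontend": {"lib-common", "lib-extra"},
        "app-tool": {"lib-common"},
        "lib-extra": {"lib-common"},
        "lib-common": set(),
    }

    def peel_leaves(depends):
        """Yield successive layers of leaf packages (nothing depends on them)."""
        remaining = {pkg: set(deps) for pkg, deps in depends.items()}
        while remaining:
            depended_on = set()
            for deps in remaining.values():
                depended_on |= deps
            leaves = [pkg for pkg in remaining if pkg not in depended_on]
            if not leaves:
                break   # dependency cycle: nothing more can be peeled off
            yield sorted(leaves)
            for pkg in leaves:
                del remaining[pkg]
            for deps in remaining.values():
                deps.difference_update(leaves)

    for layer, leaves in enumerate(peel_leaves(depends), 1):
        print("review layer %d: %s" % (layer, ", ".join(leaves)))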
