AutomatedTesting

Introduction

Discuss ways to automatically check certain package properties which we regard as essential.

Rationale

Currently it is possible to upload packages which do not work at all, can disrupt the packaging system, are uninstallable, or have a broken build system. We want to introduce a set of universally applicable tests that reject such packages before they irritate anyone. To the extent possible, a package should also be able to check its own functionality to make regressions immediately visible.

Scope and Use Cases

  • Check validity of binary packaging (all packages)
  • Check (re)buildability of source packages (all packages)
  • Check for library and linking errors (many packages)
  • Check for functionality regressions (where applicable, i.e. for non-interactive programs)

Implementation Plan

Data Preservation and Migration

None of the tests should alter runtime behaviour or touch actual data files.

Test environment

We need a set of test systems (preferably virtualised) where arbitrary packages can be installed and removed for testing. Xen looks very promising, and we hope to get it running soon in Breezy. That would require running Breezy on the test server, but since we do not need public access to this server, we could probably live with that for a limited time.

There is a fair variety of virtualisation systems, which differ in maturity, intrusiveness into the hosting system/hardware, features, etc. We are interested in the following features:

  • Ability to set a checkpoint or make a snapshot, so that changes to the test filesystem can be undone fairly efficiently. (Must have, for virtualisation to be at all useful.)
  • Defence of the host system from the virtual environment (ie, security). (Must have for automated testing of possibly-untrusted packages, but optional in many cases for developers' use on their own systems.)
  • Ability to efficiently determine what changes were made to the virtual filesystem (as filenames and contents, not disk block changes).

Approaches or part-approaches that seem plausible include:

  • Xen
  • UML
  • Union-fs
  • CPU emulators (Qemu, Bochs, PearPC, Faumachine?)
  • chroot
  • LVM snapshots

There is a lot of activity in many of these projects, so their capabilities are changing, and different approaches make sense in different contexts (local testing, Launchpad autotest, etc.). So we should introduce an abstraction interface, for which we will provide at least one low-impact sample implementation (chroot + unionfs?).
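As a very rough illustration, such an interface might look like the following Python sketch; the class and method names are invented here, and the chroot + unionfs backend is only outlined, not a working implementation.

    # Hypothetical abstraction over a virtualised test bed; names are
    # illustrative only and do not correspond to any existing tool.
    import os, shutil, subprocess

    class TestBed:
        """One virtual test environment (chroot + unionfs, Xen, UML, ...)."""
        def snapshot(self):
            """Record a checkpoint that restore() can return to."""
            raise NotImplementedError
        def restore(self):
            """Throw away all changes made since the last snapshot()."""
            raise NotImplementedError
        def changed_files(self):
            """Return names (not disk blocks) of files modified since the snapshot."""
            raise NotImplementedError
        def run(self, argv):
            """Run a command inside the test bed and return its exit status."""
            raise NotImplementedError

    class ChrootUnionfsTestBed(TestBed):
        """Low-impact sample backend: a chroot whose writable changes all
        land in a separate unionfs layer."""
        def __init__(self, chroot_dir, rw_layer):
            self.chroot_dir, self.rw_layer = chroot_dir, rw_layer
        def snapshot(self):
            pass                      # the pristine state is the read-only layer itself
        def run(self, argv):
            return subprocess.call(['chroot', self.chroot_dir] + argv)
        def changed_files(self):
            changed = []
            for dirpath, _, filenames in os.walk(self.rw_layer):
                for name in filenames:
                    changed.append(os.path.join(dirpath, name)[len(self.rw_layer):])
            return changed
        def restore(self):
            # Emptying the writable layer undoes everything; the read-only
            # layer underneath is never touched.
            for entry in os.listdir(self.rw_layer):
                path = os.path.join(self.rw_layer, entry)
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)

Other backends (Xen, UML, LVM snapshots) would implement snapshot() and restore() with their own mechanisms behind the same interface.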

Check validity of binary packaging

Test installability:

  1. Start with a sandbox with only required packages.
  2. Install all dependencies.
  3. Create a list of all files in the sandbox.
  4. Install the package.
  5. Run functional self tests of the package (see below).
  6. Reinstall the package to check that a (degraded) upgrade works.
  7. Remove the package.
  8. Remove all dependencies of the package.
  9. Purge the package. If this fails, then the purging code depends on non-required packages, which is invalid.
  10. Create a list of all files in the sandbox and report any differences against the first list.
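A rough sketch of how these steps could be driven, assuming the sandbox already contains only required packages (step 1); the run() and run_self_tests() callables and the dependency list are placeholders supplied by the caller.

    # Sketch of the installability steps; 'run' executes a command inside the
    # sandbox (cf. the TestBed interface above) and raises on a non-zero exit
    # status; 'run_self_tests' stands in for the self-test machinery below.
    import os

    def file_list(root):
        # Steps 3 and 10: list every file currently present in the sandbox.
        files = set()
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                files.add(os.path.join(dirpath, name))
        return files

    def test_installability(run, run_self_tests, sandbox_root, pkg, deb, deps):
        run(['apt-get', '-y', 'install'] + deps)    # 2. install all dependencies
        before = file_list(sandbox_root)            # 3. first file list
        run(['dpkg', '-i', deb])                    # 4. install the package
        run_self_tests(pkg)                         # 5. functional self tests
        run(['dpkg', '-i', deb])                    # 6. reinstall: degraded upgrade
        run(['dpkg', '--remove', pkg])              # 7. remove the package
        run(['apt-get', '-y', 'remove'] + deps)     # 8. remove the dependencies
        run(['dpkg', '--purge', pkg])               # 9. purge with only required packages left
        after = file_list(sandbox_root)             # 10. report any differences
        return before.symmetric_difference(after)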

Test conflicts:

  1. Create a mapping installed file -> package from the package contents lists.
  2. Create the union of all installed files.
  3. Remove all files from that set which appear in only one package.
  4. Remove all pairs where the associated packages declare a conflict with each other.
  5. Ideally the remaining set should be empty; report all package names that are left.

(Note that apparently some Debian folks already do this, so there might be some scripts around).
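A sketch of this check in Python; the contents lists and the declared-conflicts relation are assumed to be available as plain data structures built elsewhere.

    # Find files shipped by more than one package without a declared conflict.
    # 'contents' maps package name -> set of file names; 'conflicts' is a set
    # of frozensets of package pairs that declare a mutual conflict.
    def undeclared_file_conflicts(contents, conflicts):
        owners = {}                          # file -> list of owning packages
        for pkg, files in contents.items():
            for f in files:
                owners.setdefault(f, []).append(pkg)
        problems = {}
        for f, pkgs in owners.items():
            if len(pkgs) < 2:                # file appears only once: fine
                continue
            # keep only package pairs that do not declare a mutual conflict
            bad = [(a, b) for i, a in enumerate(pkgs) for b in pkgs[i+1:]
                   if frozenset((a, b)) not in conflicts]
            if bad:
                problems[f] = bad
        return problems                      # ideally empty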

Test debconf:

  • Install the packages using the non-interactive frontend.
  • Intercept the mails sent by the non-interactive frontend to collect the questions the package would ask.
  • Ideally there should be no questions.
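A minimal sketch of this check, assuming the questions mailed by the non-interactive frontend can simply be read from the sandbox's mail spool afterwards; the spool path is only an example.

    # Install with the non-interactive debconf frontend, then look at the
    # mail it sent for any questions the package would have asked.
    import os, subprocess

    def debconf_questions(deb, mail_spool='/var/mail/root'):
        env = dict(os.environ, DEBIAN_FRONTEND='noninteractive')
        subprocess.call(['dpkg', '-i', deb], env=env)
        if not os.path.exists(mail_spool):
            return ''                        # no mail, no questions asked
        return open(mail_spool).read()       # ideally empty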

Test package contents:

  • Compare the package contents list with the latest version in the archive; notify the uploader if the number of files changed considerably (we have had such errors in the past).
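A sketch of this comparison; the 20% threshold for a "considerable" change is an arbitrary example value.

    # Warn if the number of shipped files changed a lot between the archive
    # version and the new upload (threshold is arbitrary).
    import subprocess

    def file_count(deb):
        out = subprocess.Popen(['dpkg-deb', '-c', deb],
                               stdout=subprocess.PIPE).communicate()[0]
        return len(out.splitlines())

    def contents_changed_considerably(old_deb, new_deb, threshold=0.2):
        old, new = file_count(old_deb), file_count(new_deb)
        return abs(new - old) > threshold * max(old, 1)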

Check validity of source package

Buildability is already tested on the buildds. However, many packages have broken clean rules which leave the source tree in an unbuildable state. We should fix all packages where this is the case.

  1. Unpack the source package.
  2. dpkg-buildpackage
  3. Rename the resulting diff.gz to diff.gz-first.
  4. dpkg-buildpackage; if this fails, the packaging is broken.
  5. Compare the new diff.gz to diff.gz-first; if there is any difference, report this as a potentially broken package. However, many packages update config.{guess,sub}, so these files should be excluded from the comparison.
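These steps could be scripted roughly as follows; the config.{guess,sub} filter is deliberately crude, and the diff.gz path (which dpkg-buildpackage writes to the parent directory) is passed in by the caller.

    # Sketch of the double-build check: build, keep the first diff.gz,
    # build again after the implicit clean, and compare the results.
    import gzip, os, subprocess

    def interesting(diff_bytes):
        # Crude filter: ignore the usual config.guess/config.sub churn.
        return [l for l in diff_bytes.splitlines()
                if b'config.guess' not in l and b'config.sub' not in l]

    def double_build(source_dir, diff_gz):
        os.chdir(source_dir)
        if subprocess.call(['dpkg-buildpackage', '-us', '-uc']) != 0:
            return 'broken: first build failed'
        os.rename(diff_gz, diff_gz + '-first')                 # step 3
        if subprocess.call(['dpkg-buildpackage', '-us', '-uc']) != 0:
            return 'broken: second build failed'               # step 4
        first = interesting(gzip.open(diff_gz + '-first', 'rb').read())
        second = interesting(gzip.open(diff_gz, 'rb').read())
        if first != second:                                    # step 5
            return 'potentially broken: diff.gz changed between builds'
        return 'ok'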

Package self tests

Build time self tests:

  • Many packages already come with test suites which are run at build time.
  • Add a debian/rules check target which runs those self tests and exits with 0 if all of them succeed.
  • check should not be a dependency of binary, since some test suites take a fair amount of time. Instead, dpkg-buildpackage should be modified to check for the existence of the check target and call it if it is present. dpkg-buildpackage should also get a new option --no-check to disable the invocation of the test suite.
  • If check fails, this should generally make the build fail, to prevent regressions from being published to the archive. There are exceptions like gcc, where many tests are expected to fail and it is unreasonable to modify the package to disregard them; in these cases check should exit with a zero exit status where appropriate.
  • Idea: export the results of the regression tests in a tarball and publish it somewhere, so that package maintainers do not need to rebuild the package to evaluate the reason for failures.
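A sketch of the intended behaviour, in Python rather than in the language dpkg-buildpackage is actually written in; the make -q probe is just one possible way to detect whether debian/rules provides a check target, and --no-check is the proposed new option.

    # Run the check target when debian/rules has one, unless --no-check
    # was given.  This only illustrates the logic.
    import os, subprocess

    def rules_has_target(target):
        # 'make -q' exits with 2 when there is no rule for the target;
        # this is used as a cheap existence test.
        devnull = open(os.devnull, 'w')
        rc = subprocess.call(['make', '-f', 'debian/rules', '-q', target],
                             stdout=devnull, stderr=subprocess.STDOUT)
        return rc != 2

    def maybe_run_check(argv):
        if '--no-check' in argv:
            return 0
        if rules_has_target('check'):
            return subprocess.call(['debian/rules', 'check'])
        return 0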

Run time self tests:

  • Call ldd on all ELF binaries and libraries and check for unresolved libraries.

  • dlopen() all ELF libraries and report failures.

  • Change packages to install runtime self tests into /usr/lib/selftests/packagename/; run all binaries in this directory and ensure that all of them exit with 0.
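A sketch of these three checks; the list of installed files is assumed to come from the package's contents list, and the selftests directory layout is the proposal above.

    # ldd over the package's ELF files, dlopen() of its libraries, and
    # execution of the installed self tests.  ELF detection by magic bytes.
    import ctypes, os, subprocess

    def is_elf(path):
        try:
            return open(path, 'rb').read(4) == b'\x7fELF'
        except IOError:
            return False

    def unresolved_libraries(files):
        missing = []
        for f in filter(is_elf, files):
            out = subprocess.Popen(['ldd', f],
                                   stdout=subprocess.PIPE).communicate()[0]
            if b'not found' in out:
                missing.append(f)
        return missing

    def dlopen_ok(library):
        try:
            ctypes.CDLL(library)
            return True
        except OSError:
            return False

    def run_self_tests(package):
        testdir = os.path.join('/usr/lib/selftests', package)
        failed = []
        if os.path.isdir(testdir):
            for test in sorted(os.listdir(testdir)):
                if subprocess.call([os.path.join(testdir, test)]) != 0:
                    failed.append(test)
        return failed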

We really want to be able to run a package's tests on the installed version of the package. This requires a standardised interface for a package to expose how to run its tests and where to find the results. Also, this interface should be more than just "invoke some script called check". In particular, tests can have interesting properties like:

  • modifies global data
  • needs package X Y Z (>= 4) installed
  • needs to run as root
  • needs an X display (and uses some GUI replay tool?)

which need to be declared per test, or at least per batch of tests. This needs to be extensible, so that a new test in an old environment can simply be skipped with a message like `test environment does not support "blames-canada" property of test "simpsons"'.
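A sketch of what such a per-test property interface could look like; the manifest data structure, the property names, and the SUPPORTED set are all invented here to illustrate the skipping behaviour.

    # Run tests from a manifest, skipping those whose declared properties
    # the current test environment does not support.
    import subprocess

    SUPPORTED = set(['needs-root', 'modifies-global-data'])   # what this environment offers

    def run_tests(manifest):
        """manifest: list of (test_name, command, set_of_properties)."""
        results = {}
        for name, command, props in manifest:
            unsupported = props - SUPPORTED
            if unsupported:
                results[name] = ('SKIPPED: test environment does not support %s'
                                 % ', '.join(sorted(unsupported)))
                continue
            results[name] = 'PASS' if subprocess.call(command) == 0 else 'FAIL'
        return results

    # e.g. a new "blames-canada" property on test "simpsons" is simply
    # skipped by an older environment that does not know about it.
    example = [('simpsons', ['./simpsons-test'], set(['blames-canada']))]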

Outstanding Issues

UDU BOF Agenda

Pre-Work

Tasks for the future


CategoryUdu CategorySpec
