AutomatedTesting

Revision 9 as of 2005-04-28 06:44:09


Automated Testing

Status

Introduction

Discuss ways to automatically check certain package properties which we regard as essential.

Rationale

Currently it is possible to upload packages which do not work at all, can disrupt the packaging system, are uninstallable, or have a broken build system. We want to introduce a set of universally applicable tests that reject such packages. To the extent possible, a package should also be able to check its own functionality to make regressions immediately visible.

Scope and Use Cases

  • Check validity of binary packaging (all packages)
  • Check (re)buildability of source packages (all packages)
  • Check for library and linking errors (many packages)
  • Check for functionality regressions (where applicable, i. e. for non-interactive programs)

Implementation Plan

Data Preservation and Migration

None of the tests should alter runtime behavior or touch actual data files.

Test environment

We need a set of test systems (preferably virtualized) where arbitrary packages can be installed and removed for testing. The implementation of this environment remains to be discussed.

Check validity of binary packaging

Test installability:

  1. Start with a sandbox with only required packages.
  2. Install all dependencies.
  3. Create a list of all files in the sandbox.
  4. Install the package.
  5. Reinstall the package to check that a (degraded) upgrade works.
  6. Remove the package.
  7. Remove all dependencies of the package.
  8. Purge the package. If this fails, then the purging code depends on non-required packages, which is invalid.
  9. Create a list of all files in the sandbox and report any differences against the first list.
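The file-list snapshots in steps 3 and 9 could be sketched as below; this is a minimal sketch, and the helper names are made up for illustration:

```python
import os

def snapshot(root):
    """Collect the set of all file paths under the sandbox root."""
    files = set()
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            files.add(os.path.join(dirpath, name))
    return files

def leftovers(before, after):
    """Report files that appeared since the first snapshot (cruft left
    behind by install/purge) and files that vanished from it."""
    return sorted(after - before), sorted(before - after)
```

A snapshot would be taken after step 2 and compared with one taken after step 8; ideally both returned lists are empty.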

Test conflicts:

  1. Create a mapping installed file -> package from the package contents lists.
  2. Create the union of all installed files.
  3. Remove all files from that set which appear in only one package.
  4. Remove all pairs of packages that declare a conflict with each other.
  5. Ideally the remaining set is empty; report the names of all packages that are left.
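The filtering above maps naturally onto set operations. The input shapes here (a contents mapping and a set of declared conflict pairs) are assumptions about how the package metadata would be fed in:

```python
from collections import defaultdict

def undeclared_conflicts(contents, conflicts):
    """contents: package name -> iterable of installed file paths.
    conflicts: set of frozenset({pkg_a, pkg_b}) declared Conflicts pairs.
    Returns pairs of packages that ship the same file without declaring
    a conflict with each other."""
    owners = defaultdict(set)           # file path -> packages shipping it
    for pkg, files in contents.items():
        for path in files:
            owners[path].add(pkg)
    offending = set()
    for path, pkgs in owners.items():
        if len(pkgs) < 2:               # file appears only once: fine
            continue
        for a in pkgs:
            for b in pkgs:
                if a < b and frozenset((a, b)) not in conflicts:
                    offending.add((a, b))
    return sorted(offending)
```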

Test debconf:

  • Install the package using the non-interactive frontend.
  • Intercept the mails sent by the non-interactive frontend to collect the questions the package would have asked.
  • Ideally there should be no questions.
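Collecting the questions from the intercepted mails could look like the sketch below. The "Template: <name>" line format is an assumption about the mail body, not a verified format:

```python
def collect_questions(mails):
    """Extract debconf template names from intercepted mail bodies.
    Assumes each pending question shows up on a line of the form
    'Template: <name>' (an assumed format, to be verified)."""
    questions = []
    for body in mails:
        for line in body.splitlines():
            if line.startswith("Template: "):
                questions.append(line[len("Template: "):].strip())
    return questions
```

The test would then simply fail (or warn) if the returned list is non-empty.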

Check validity of source package

Buildability is already tested on the buildds. However, many packages have a broken clean rule which leaves the source tree in an unbuildable state. We should fix all packages where this is the case.

  1. Unpack the source package.
  2. Run dpkg-buildpackage.
  3. Rename the resulting diff.gz to diff.gz-first.
  4. Run dpkg-buildpackage again; if this fails, the packaging is broken.
  5. Compare the new diff.gz to diff.gz-first; if there is any difference, report the package as potentially broken. However, many packages update config.{guess,sub} during the build, so these files should be excluded from the comparison.
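Step 5 could be approximated by comparing which files each diff.gz touches, with config.{guess,sub} filtered out. This is a simplification of a full comparison (it ignores differing changes within the same file), sketched under the assumption that touched files appear in "+++ " headers:

```python
import gzip
import re

# Exclude config.guess/config.sub, which many packages legitimately update.
EXCLUDED = re.compile(r'(^|/)config\.(guess|sub)$')

def changed_files(diff_gz_path):
    """Paths touched by a diff.gz, parsed from '+++ ' hunk headers."""
    paths = set()
    with gzip.open(diff_gz_path, "rt", errors="replace") as f:
        for line in f:
            if line.startswith("+++ "):
                paths.add(line.split()[1])
    return paths

def suspicious_changes(first, second):
    """Files touched by one build but not the other, minus the exclusions."""
    return sorted(p for p in changed_files(first) ^ changed_files(second)
                  if not EXCLUDED.search(p))
```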

Package self tests

Self-tests:

  • Many packages already come with test suites which are run at build time.
  • Add a check target to debian/rules which runs the package's test suite.
  • check should be a dependency of the binary target.
  • If check fails, this should generally make the build fail. There are some exceptions like gcc, where many tests are expected to fail and it is unreasonable to modify the package to disregard them.
  • Idea: Export the results of regression tests in a tarball and publish it somewhere, so package maintainers do not need to rebuild the package to evaluate the reason for failures.
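The export idea could be as simple as bundling the test suite's log directory after the check target runs; the directory layout and names here are illustrative, not a fixed interface:

```python
import tarfile

def export_results(result_dir, out_path):
    """Bundle a test-suite log directory into a gzipped tarball that can
    be published alongside the build, so maintainers can inspect failures
    without rebuilding the package."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(result_dir, arcname="test-results")
    return out_path
```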

Outstanding Issues

UDU BOF Agenda

Pre-Work