AutomatedTesting

Status

Introduction

Discuss ways to automatically check certain package properties which we regard as essential.

Rationale

Currently it is possible to upload packages which do not work at all, can disrupt the packaging system, are uninstallable, or have a broken build system. We want to introduce a set of universally applicable tests that reject such packages before they irritate anyone. To the extent possible, a package should also be able to check its own functionality to make regressions immediately visible.

Scope and Use Cases

  • Check validity of binary packaging (all packages)
  • Check (re)buildability of source packages (all packages)
  • Check for library and linking errors (many packages)
  • Check for functionality regressions (where applicable, i.e. for non-interactive programs)

Overall Design

Testing of installed packages

We will create the following new machinery:

  1. Tester core. This:
    • Interprets test metadata
    • Knows how to enumerate tests and determine which test(s) are possible under the current circumstances
    • Knows how to invoke tests and collect results
    • Knows how to request test virtualisation services from the virtualisation regime
    • Provides a convenient interface for both manual use and building into automation (e.g. Launchpad)
  2. Generic tests
    • Provides a set of tests (including metadata) which are supposed to be applicable to any package; these tests will typically involve building, installing, removing, etc., the package
    • Provides appropriate metadata about these tests
  3. Virtualisation regime
    • Encapsulates the invocation of tests
    • Insulates the host that is performing the tests from the effects of the tests (insofar as the regime is able to)
    • Provides a standard interface to the tester core
    • There may be (eventually, will be) several virtualisation regimes; initially we will provide one that is simple to implement
  4. Package-specific tests
    • Test scripts and test metadata are found in a specified location in the unpacked source package. These tests test the installed version of the package.

Initially, all but the package-specific tests will be in a single test package.

There are two main new interfaces here (roughly sketched in code after the list):

  • Test/metadata interface: a standard way of describing the tests available and their important properties. Provided by packages and by the generic tests; used by the tester core.
  • Virtualisation interface: provided by virtualisation regimes and used by the tester core.
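
The following is a minimal, illustrative sketch (in Python) of how these two interfaces might look from the tester core's point of view; all class and method names here are assumptions made for illustration, not part of this specification.

        # Illustrative sketch only: names are assumptions, not part of the spec.
        from abc import ABC, abstractmethod


        class TestSource(ABC):
            """Provides tests plus metadata: a package's debian/tests/
            directory, or the central collection of generic tests."""

            @abstractmethod
            def tests(self):
                """Return a list of (test_name, restrictions) pairs."""


        class VirtualisationRegime(ABC):
            """Wraps one testbed (chroot, Xen guest, ...) behind a
            uniform interface used by the tester core."""

            @abstractmethod
            def open(self):
                """Reserve the testbed; return a scratch path inside it."""

            @abstractmethod
            def execute(self, argv, stdin, stdout, stderr):
                """Run a command in the testbed; return its exit status."""

            @abstractmethod
            def close(self):
                """Revert the testbed to its pristine state and release it."""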

Virtualisation

To test a package, it must be installed (and often removed again). Its installation or operation might alter the system or its data in other ways.

For anything other than the most ad-hoc testing by a knowledgeable expert, there has to be a separate testbed for this purpose. Ideally, the testbed would be virtualised.

There is a fair variety of virtualisation systems, which differ in maturity, intrusiveness into the hosting system/hardware, features, etc. We are interested in the following features:

  • Ability to set a checkpoint or make a snapshot, so that changes to the testbed filesystem can be undone fairly efficiently. (Must have, for virtualisation to be at all useful.)
  • Defence of the host system from the virtual environment (i.e., security). (Must have for automated testing of possibly-untrusted packages, but optional in many cases for developers' use on their own systems.)
  • Ability to efficiently determine what changes were made to the testbed filesystem (as filenames and contents, not disk block changes).
  • Obviously, ability to run commands in the testbed (perhaps as root) and get the output and exit status, and copy data back and forth.

Approaches or part-approaches that seem plausible include:

  • chroot. Has the virtue of being well-known (used already by buildds, for example) and simple to implement - so we will do this first, with either unionfs or lvm snapshots.
  • Xen. Looks very promising, and we hope to get it running soon in Breezy. That would require running Breezy on the test server, but since we don't need public access to this server, we could probably live with that for a limited time.
  • UML
  • Union-fs
  • CPU emulators (Qemu, Bochs, PearPC, Faumachine?)
  • LVM snapshots
  • Separate machine

There is a lot of activity in many of these projects, so their capabilities are changing. Also, different approaches make sense in different contexts (local testing, Launchpad autotest, etc.). So we introduce an abstraction interface, of which we will provide at least one low-impact sample implementation.

Interface Design

Tests/metadata

DRAFT - this section needs discussing with debian-policy. We want to share this test infrastructure with Debian and other distros so that we do not have to bear the whole burden of writing all the tests ourselves, forever.

The source package provides a test metadata file debian/tests/control. This is a file containing zero or more RFC822-style stanzas, along these lines:

        Tests: fred bill bongo
        Restrictions: needs-root breaks-computer

This means: execute debian/tests/fred, debian/tests/bill, etc., each with no arguments, expecting exit status 0 and no output on stderr. The cwd is guaranteed to be the root of the source package, which will have been built (but note that the tests must test the installed version).

Any unknown thing in Restrictions, or any unknown field in the RFC822 stanza, causes the tester core to skip the test with a message like `test environment does not support "blames-canada" restriction of test "simpsons"'.

Additional possibilities:

        Depends: ...
        Tests: filenamepattern*
        Restrictions: modifies-global-data needs-x-display

etc. - this moves complexity from individual packages into the central tester core.

A basic test could simply run the binary and check the result status (or some variant of this). Eventually every package would be changed to include at least one test.

Even integration tests can be represented like this: if one package's tests Depend on the other's, then they are effectively integration tests. The actual tests can live in whichever package is most convenient.
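
As an illustration, here is a rough sketch (in Python) of how the tester core might read debian/tests/control and decide which tests to run or skip; the exact field handling and the set of supported restrictions shown here are assumptions, not part of the draft interface.

        # Rough sketch: parse debian/tests/control and plan a test run.
        # Which restrictions this environment supports is an assumption.
        SUPPORTED_RESTRICTIONS = {"needs-root"}


        def parse_stanzas(path):
            """Yield one {field: value} dict per RFC822-style stanza."""
            stanza = {}
            with open(path) as f:
                for line in f:
                    line = line.rstrip("\n")
                    if not line.strip():
                        if stanza:
                            yield stanza
                            stanza = {}
                    else:
                        field, _, value = line.partition(":")
                        stanza[field.strip()] = value.strip()
            if stanza:
                yield stanza


        def plan_tests(control_path):
            """Split the declared tests into (runnable, skipped) name lists."""
            runnable, skipped = [], []
            for stanza in parse_stanzas(control_path):
                tests = stanza.get("Tests", "").split()
                restrictions = set(stanza.get("Restrictions", "").split())
                unknown_fields = set(stanza) - {"Tests", "Restrictions", "Depends"}
                if restrictions - SUPPORTED_RESTRICTIONS or unknown_fields:
                    # Anything we do not understand means: skip, with a message.
                    skipped.extend(tests)
                else:
                    runnable.extend(tests)
            return runnable, skipped

Each runnable test would then be invoked as debian/tests/name from the root of the built source tree, a failure being a non-zero exit status or any output on stderr.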

Virtualisation interface

The virtualisation regime provides a single executable program which is used by the tester core to request virtualisation facilities.

This program is invoked with the argument --debian-package-testing and then proceeds to speak a protocol on its stdin/stdout. The protocol is line-based.

The server has the following states:

  • Closed: there is no particular testbed. This is the initial state.
  • Open: the testbed is running and can be communicated with (and, if applicable, is not being used by any other concurrent test run)
  • Initial response from regime server: ok

  • Command capabilities; response e.g. ok efficient-diff revert ... where the words after ok are features that not all regimes support. Valid in all states.

  • Command open; response ok testbed-scratchspace. Checks that the testbed is present and reserves it (waiting for other uses of the testbed to finish first, if necessary). State: Closed to Open. testbed-scratchspace is a pathname on the testbed which can be used freely by the test scripts.

  • Command stop local-filename; response ok. Indicates that the testbed should be stopped; replaces local-filename (on the host) with a representation of the changes to the testbed's filesystem. Then reverts the testbed. State: Open to Closed.

  • Command close; response ok. Stops and undoes filesystem changes. State: Open to Closed.

  • Command execute program,arg,arg... stdin stdout stderr; response ok exitstatus. Executes the command (arguments separated by commas, everything URL-encoded). stdin, stdout and stderr are local files (they must be files, not pipes).

  • Command copydown host-tree testbed-path or copyup testbed-tree host-path. Response ok. Like cp -dR --preserve=mode,timestamps only across the testbed boundary.

On any error, including a signal delivered to the regime server or EOF on its stdin, the testbed is unreserved and restored to its original state (i.e., Closed), and the regime server will print a message to stderr (unless it is dying from a signal).
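
To make the protocol concrete, here is a sketch (in Python, with simplified error handling) of the tester-core side of this interface; the class name and helper structure are illustrative assumptions.

        # Sketch of the tester-core side of the virtualisation protocol.
        # Command spellings follow the draft above; error handling is simplified.
        import subprocess
        import urllib.parse


        class RegimeClient:
            def __init__(self, regime_program):
                self.proc = subprocess.Popen(
                    [regime_program, "--debian-package-testing"],
                    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
                self._expect_ok()                      # initial "ok" banner

            def _command(self, line):
                self.proc.stdin.write(line + "\n")
                self.proc.stdin.flush()
                return self._expect_ok()

            def _expect_ok(self):
                reply = self.proc.stdout.readline().split()
                if not reply or reply[0] != "ok":
                    raise RuntimeError("regime server said: %r" % reply)
                return reply[1:]                       # any extra response words

            def capabilities(self):
                return self._command("capabilities")

            def open(self):
                (scratch,) = self._command("open")
                return scratch                         # scratch space on the testbed

            def execute(self, argv, stdin, stdout, stderr):
                quoted = ",".join(urllib.parse.quote(a, safe="") for a in argv)
                (status,) = self._command(
                    "execute %s %s %s %s" % (quoted, stdin, stdout, stderr))
                return int(status)

            def stop(self, local_filename):
                # Capture the filesystem changes into local_filename, then revert.
                self._command("stop " + local_filename)

            def copydown(self, host_tree, testbed_path):
                self._command("copydown %s %s" % (host_tree, testbed_path))

            def close(self):
                self._command("close")

A regime implementation (chroot, Xen, ...) only has to speak the other side of this conversation.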

Outstanding Issues

  • Is this a good overall design ? I've tried to keep the amount of new code to a minimum - and just glue, basically - while still encouraging development into a sophisticated automated testing framework.
  • Are the details of the test/metadata interface right ?
  • Are the details of the virtualisation regime interface right ?
  • What will we choose for the initial virtualisation regime to implement ?
  • Which generic tests will we implement to consider the goal complete for Dapper ?
  • Are we going to invent debian/rules check too and if so how exactly will it work ?

  • What format for filesystem diffs ?

Rationale

Q. Why put the tests in the source package ?

A. Other possibilities include: a special .deb generated by the source package (which is a bit strange - it is unclear what happens to this .deb afterwards - and would make it even harder to reuse upstream test suites); putting the tests in the .deb to be tested (definitely wrong - most people won't want the tests and they might be very large); or having them floating about separately somewhere (which prevents us from sharing and exchanging tests with other parts of the Free Software community). The source package is always available when development is taking place.

Q. Why the declarative test metadata, which has to be parsed, rather than just (say) a single test script to run ?

A. The script which was run would have to decide which tests to run, based (for example) on environment variables. It would end up replicating the machinery in the tester core, but this machinery would have to be in each package. This also makes it harder to report things like which individual tests were passed, failed or skipped. The actual interface to the tester core would end up having to be nearly as complicated anyway.

Tasks for the future

  • Support better virtualisation
  • Make lots of tests - at least a basic as-installed selftest for each package
  • Provide standard machinery for GUI tests

UBZ BoF notes

  • One alternative to as-installed testing is to change the package to test itself right after building:
    • run a "make check" after building
    • enforce its usage: fail builds if they don't pass
    • test depends would need to be listed as build-depends
    • this tests the package as-built, but not as-installed

Question: how do we tell if a package has no debian/rules check as opposed to it having it but it failed ? What if the make check (which is probably what debian/rules check will often run) has additional dependencies ?

If this needs much additional infrastructure to be complete, perhaps we should only do the installed-package tests (which will catch a much wider range of packaging bugs).

Proposed tests

This section lists suggestions for the implementer's initial set of tests. We expect the test suite to continuously expand and improve.

Check validity of binary packaging

Test installability (a sketch in code follows this list):

  1. Start with a sandbox with only required packages.
  2. Install all dependencies.
  3. Create a list of all files in the sandbox.
  4. Install the package.
  5. Run functional self tests of the package (see below).
  6. Reinstall the package to check that a (degraded) upgrade works.
  7. Remove the package.
  8. Remove all dependencies of the package.
  9. Purge the package. If this fails, then the purging code depends on non-required packages, which is invalid.
  10. Create a list of all files in the sandbox and report any differences against the first list.
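
A condensed sketch of this procedure follows; run(cmd) and capture(cmd) are assumed helpers that execute a shell command on the testbed (returning its exit status, or its output), and the .deb path is illustrative.

        # Condensed sketch of the installability test.  run() and capture()
        # are assumed helpers that execute a command on the testbed.
        def test_installability(run, capture, pkg, deps):
            assert run("apt-get -y install " + " ".join(deps)) == 0
            before = set(capture("find / -xdev").splitlines())

            assert run("dpkg -i /tmp/%s.deb" % pkg) == 0   # install
            run_self_tests(run, pkg)                       # functional self tests
            assert run("dpkg -i /tmp/%s.deb" % pkg) == 0   # reinstall (degraded upgrade)
            assert run("dpkg --remove " + pkg) == 0
            assert run("apt-get -y remove " + " ".join(deps)) == 0
            # Purge must still work with the dependencies gone; otherwise the
            # purging code depends on non-required packages, which is invalid.
            assert run("dpkg --purge " + pkg) == 0

            after = set(capture("find / -xdev").splitlines())
            return sorted(before ^ after)                  # leftover or missing files


        def run_self_tests(run, pkg):
            # Placeholder for the package's installed self tests (see below).
            pass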

Test conflicts (a sketch in code follows below):

  1. Create a mapping installed file -> package from the package contents lists.
  2. Create the union of all installed files.
  3. Remove from that set all files which appear in only one package.
  4. Remove all pairs where the packages involved declare a conflict with each other.
  5. Ideally the remaining set should be empty; report all package names that are left.

(Note that apparently some Debian folks already do this, so there might be some scripts around).
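
For illustration, a sketch of this check in Python; contents and conflicts are assumed to have been parsed already from the archive's Contents and Packages files.

        # Sketch of the file-conflict check.  `contents` maps package -> set of
        # shipped file names, `conflicts` maps package -> set of packages it
        # declares Conflicts against (both assumed already parsed).
        from collections import defaultdict


        def undeclared_conflicts(contents, conflicts):
            owners = defaultdict(set)                 # file -> packages shipping it
            for pkg, files in contents.items():
                for f in files:
                    owners[f].add(pkg)

            offenders = set()
            for f, pkgs in owners.items():
                if len(pkgs) < 2:                     # file appears only once: fine
                    continue
                for a in pkgs:
                    for b in pkgs:
                        if a < b and b not in conflicts.get(a, set()) \
                                 and a not in conflicts.get(b, set()):
                            offenders.update({a, b})  # overlap without a Conflicts
            return sorted(offenders)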

Test debconf:

  • Install the packages using the non-interactive frontend.
  • Intercept the mails sent by the non-interactive frontend to collect the questions the package would have asked.
  • Ideally there should be no questions.

Test package contents:

  • Compare package contents list with latest version in the archive; notify the uploader if the number of files changed considerably (we had such errors in the past).
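
A small sketch of such a comparison; the 20% threshold is an arbitrary assumption.

        # Sketch: flag packages whose file count changed considerably
        # between the archive version and the new upload.
        import subprocess


        def deb_file_count(deb_path):
            out = subprocess.run(["dpkg-deb", "--contents", deb_path],
                                 capture_output=True, text=True, check=True).stdout
            return sum(1 for line in out.splitlines() if line)


        def contents_changed_considerably(old_deb, new_deb, threshold=0.20):
            old, new = deb_file_count(old_deb), deb_file_count(new_deb)
            return abs(new - old) > threshold * max(old, 1)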

Check validity of source package

Buildability is already tested on the buildds. However, many packages have broken clean rules which leave the unpacked source in an unbuildable state. We should fix all packages where this is the case. (A sketch of this double-build check follows the steps below.)

  1. Unpack the source package.
  2. dpkg-buildpackage
  3. Rename the resulting diff.gz to diff.gz-first.
  4. dpkg-buildpackage again; if this fails, the packaging is broken.
  5. Compare the new diff.gz to diff.gz-first; if there is any difference, report this as a potentially broken package. However, many packages update config.{guess,sub} during the build, so these files should be excluded from the comparison.
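
A sketch of this double-build check; file names and dpkg-buildpackage options are illustrative, and a fuller implementation would strip config.{guess,sub} hunks (for example with filterdiff) before comparing.

        # Sketch of the double-build check.  Paths and options are illustrative.
        import subprocess


        def sh(cmd, cwd=None):
            return subprocess.run(cmd, shell=True, cwd=cwd).returncode


        def double_build(dsc, srcdir, diff_gz):
            assert sh("dpkg-source -x %s %s" % (dsc, srcdir)) == 0
            assert sh("dpkg-buildpackage -us -uc", cwd=srcdir) == 0
            sh("mv %s %s-first" % (diff_gz, diff_gz))

            if sh("dpkg-buildpackage -us -uc", cwd=srcdir) != 0:
                return "broken: second build after clean failed"

            # zcmp exits non-zero if the two compressed diffs differ.  A real
            # implementation would first filter out config.{guess,sub} changes.
            if sh("zcmp %s %s-first" % (diff_gz, diff_gz)) != 0:
                return "suspicious: clean does not restore the source tree"
            return "ok"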

Package self tests

Build time self tests:

  • Many packages already come with test suites which are run at build time.
  • Add debian/rules check which runs those self tests and exits with 0 if all of them are successful.

  • check should not be a dependency of binary since there are test suites which take a fair amount of time. Rather, we modify dpkg-buildpackage to check for the existence of the check target and call it if it exists (illustrated in the sketch after this list). dpkg-buildpackage should get a new option --no-check to disable the invocation of the test suite.

  • If check fails, this should generally make the build fail to prevent publishing regressions to the archive. There are some exceptions like gcc where many tests are expected to fail and it is unreasonable to modify the package to disregard them; in these cases check should exit with a zero exit status if appropriate.

  • Idea: Export the results of regression tests in a tarball and publish it somewhere so package maintainers do not need to rebuild the package to evaluate the reason for failures.
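
dpkg-buildpackage is not written in Python, so the following is only an illustration of the intended logic (probe for a check target, honour a --no-check option), not a proposed patch.

        # Illustration only of the intended dpkg-buildpackage behaviour.
        import subprocess


        def has_check_target(rules="debian/rules"):
            # GNU make exits with status 2 in -q mode when it has no rule for
            # the requested target; this is a rough way to probe for it.
            r = subprocess.run(["make", "-f", rules, "-q", "check"],
                               capture_output=True)
            return r.returncode != 2


        def maybe_run_check(no_check=False):
            if no_check or not has_check_target():
                return True                                 # nothing to run
            return subprocess.run(["debian/rules", "check"]).returncode == 0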

Run time self tests (sketched after this list):

  • Call ldd on all ELF binaries and libraries and check for unresolved libraries.

  • dlopen() all ELF libraries and report failures.

  • Change packages to install runtime self tests into /usr/lib/selftests/packagename/; run all binaries in this directory and ensure that all of them exit with 0.
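
Sketches of these three run-time checks in Python; the selftests directory layout is the one proposed above, and the helper names are assumptions.

        # Sketches of the run-time checks: unresolved libraries via ldd,
        # loadability via dlopen (through ctypes), and installed self tests.
        import ctypes
        import glob
        import os
        import subprocess


        def missing_libs(elf_path):
            out = subprocess.run(["ldd", elf_path],
                                 capture_output=True, text=True).stdout
            return [line.split()[0] for line in out.splitlines()
                    if "not found" in line]


        def dlopen_ok(library_path):
            try:
                ctypes.CDLL(library_path)
                return True
            except OSError:
                return False


        def run_installed_self_tests(pkg):
            ok = True
            for prog in sorted(glob.glob("/usr/lib/selftests/%s/*" % pkg)):
                if os.access(prog, os.X_OK):
                    ok = ok and subprocess.run([prog]).returncode == 0
            return ok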

BOF braindump test suggestions

  • Install/uninstall (http://packages.debian.org/unstable/devel/piuparts):

    • check for cruft left over on the filesystem after install/uninstall
    • check for double-installs
    • perhaps a package black-list (to avoid testing stuff which won't live nicely in the virtualization environment -- perhaps networking stuff)
  • Upgrading
  • Test functionality (where tests exist)


CategoryUdu CategorySpec