Automated Testing
Status
Created: 2005-04-23 02:32:42 UTC by MattZimmerman
Priority: Low
People: MartinPitt (lead), MichaelVogt (second)
Contributors: MattZimmerman, JamesTroup
Interested:
Status: DraftSpecification, UduBof, DistroSpecification, MartinPittQueue, MichaelVogtQueue
Packages:
Depends:
Introduction
Discuss ways to automatically check certain package properties which we regard as essential.
Rationale
Currently it is possible to upload packages which do not work at all, can disrupt the packaging system, are uninstallable, or have a broken build system. We want to introduce a set of universally applicable tests that reject such packages. To the extent possible, a package should also be able to check its own functionality to make regressions immediately visible.
Scope and Use Cases
- Check validity of binary packaging (all packages)
- Check (re)buildability of source packages (all packages)
- Check for library and linking errors (many packages)
- Check for functionality regressions (where applicable, i.e. for non-interactive programs)
Implementation Plan
Data Preservation and Migration
None of the tests should alter runtime behavior or touch actual data files.
Test environment
We need a set of test systems (preferably virtualized) where arbitrary packages can be installed and removed for testing. The implementation of this environment remains to be discussed.
Check validity of binary packaging
Test installability:
- Start with a sandbox with only required packages.
- Install all dependencies.
- Create a list of all files in the sandbox.
- Install the package.
- Reinstall the package to check that a (degraded) upgrade works.
- Remove the package.
- Remove all dependencies of the package.
- Purge the package. If this fails, then the purging code depends on non-required packages, which is invalid.
- Create a list of all files in the sandbox and report any differences against the first list.
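The cycle above could be driven by a script along these lines. This is a rough sketch assuming a chroot-based sandbox; the use of chroot/apt-get and all variable and file names are illustrative, not part of the specification:

```shell
#!/bin/sh
# Sketch of the installability test. The sandbox is assumed to be a
# throwaway chroot containing only required packages.

snapshot() {
    # List every file below the sandbox root, sorted for comparison.
    (cd "$1" && find . -xdev | sort) > "$2"
}

leftover_files() {
    # Lines present in only one of the two snapshots, i.e. files the
    # package left behind (or deleted) after purging.
    comm -3 "$1" "$2"
}

installability_test() {
    sandbox=$1; pkg=$2
    snapshot "$sandbox" before.list
    chroot "$sandbox" apt-get -y install "$pkg"              # with dependencies
    chroot "$sandbox" apt-get -y --reinstall install "$pkg"  # degraded upgrade
    chroot "$sandbox" apt-get -y remove "$pkg"
    chroot "$sandbox" apt-get -y autoremove                  # drop dependencies
    chroot "$sandbox" dpkg --purge "$pkg"  # must not need non-required packages
    snapshot "$sandbox" after.list
    leftover_files before.list after.list  # ideally empty
}
```

An empty output from `leftover_files` means the package cleaned up after itself; any remaining lines are reported.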
Test conflicts:
- Create a mapping from installed file to package from the package contents lists.
- Create the union of all installed files.
- Remove all files which appear in only one package.
- Remove all pairs where the associated packages declare a conflict with each other.
- Ideally the remaining set is empty; report all package names that are left.
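The duplicate-detection step could be expressed over a combined file list, assuming one "path package" pair per line in the spirit of the archive's Contents files. The function name and input format are illustrative; filtering out pairs with a declared mutual Conflicts is assumed to happen beforehand:

```shell
# Report files owned by more than one package, together with the owning
# package names. Only the duplicate detection is sketched here.
find_conflicts() {
    # Input file ($1): one "path package" pair per line.
    awk '{ n[$1]++; who[$1] = who[$1] " " $2 }
         END { for (f in n) if (n[f] > 1) print f ":" who[f] }' "$1" | sort
}
```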
Test debconf:
- Install the packages using the non-interactive frontend.
- Intercept the mails sent by the noninteractive frontend to collect the questions the package would ask.
- Ideally there are no questions.
Check validity of source package
Buildability is already tested on the buildds. However, many packages have a broken clean rule which leaves the tree in an unbuildable state. We should fix all packages where this is the case.
- Unpack the source package.
- Run dpkg-buildpackage.
- Rename the resulting diff.gz to diff.gz-first.
- Run dpkg-buildpackage again; if this fails, the packaging is broken.
- Compare the new diff.gz to diff.gz-first; any difference indicates a potentially broken package. However, many packages update config.{guess,sub} during the build, so these files should be excluded from the comparison.
Package self tests
Self-tests:
- Many packages already come with test suites which are run at build time.
- Add a debian/rules target check which runs the build-time test suite.
- check should be a dependency of binary.
- If check fails, this should generally make the build fail. There are some exceptions, such as gcc, where many tests are expected to fail and it is unreasonable to modify the package to disregard them.
- Idea: Export the results of regression tests in a tarball and publish it somewhere so package maintainers do not need to rebuild the package to evaluate the reason for failures.
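The proposed check target could look like this in debian/rules. This is a fragment only, not a complete rules file, and the `$(MAKE) test` invocation is an illustrative placeholder for whatever the package's test suite is:

```make
# debian/rules fragment (sketch): fail the build when the test suite fails.
check: build
	$(MAKE) test

# check is a dependency of binary, per the proposal above.
binary: check
	# ... existing binary target rules ...
```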
Outstanding Issues
UDU BOF Agenda
- Which virtualization framework to use?
- Discuss the issues at http://wiki.ubuntu.com/AutomatedTesting