Launchpad Entry: https://launchpad.net/distros/ubuntu/+spec/testing-server-hardware
Created: 2005-10-31 by MarkRamm
Packages affected: debian-installer, server-testsuite, stress, iperf
We need to do a better job of testing Ubuntu on server hardware. To do this we need to:
- get up-to-date hardware from major server hardware vendors to certify against Ubuntu Server 6.04
- set up a central, official certification facility that performs burn-in and installation testing
- create a comprehensive server test suite for hardware recognition and stress testing
- create an easy way to support and encourage community server testing for extra bug reports.
We're putting a lot of effort into making Dapper rock on servers. As an enterprise-ready release, it will be supported for five years on servers -- but none of this is much good if we can't guarantee it will run properly on modern server hardware.
- Company alpha runs all their servers on Ubuntu. They're buying a batch of new servers, and want to make sure they're certified to work with Dapper.
- Company beta is considering switching their data center to Ubuntu. They want to know how much of their hardware is certified to work with Dapper, to gauge the complexity and affordability of the switch.
Community testing use cases are addressed in the community testing spec.
We would like to certify a minimum of 25 servers in the Dapper timeframe.
The Harvard Computer Society will run the central Ubuntu certification facility in Cambridge, MA. HCS will:
- provide rackspace for the servers,
- provide staff to process inventory,
- run the testing suite (both installation and burn-in),
- develop and host an Ubuntu-branded server hardware catalogue, both for certified and community-tested hardware, before Dapper is released,
- provide VPN access to servers under certification (and their lights-out systems such as iLO and LOM) to appointed Ubuntu developers.
Following testing, the servers will be tasked to do non-critical functions for Ubuntu and HCS, such as providing an Ubuntu archive mirror, or web serving. These services can be easily shut down when Ubuntu developers need to make use of a server to troubleshoot problems.
IvanKrstic will run the certification facility.
Installation testing does not require developing any new software. Certification facility staff will plop in an Ubuntu Server CD, watch the installation through to completion, and verify after a reboot that the machine installed properly.
We eventually want to have a d-i rescue mode profile for server testing. Booting into it would ask people to answer a few questions (which hardware the system really has, vs. what the system detected automatically), and deliver the result to us. At first, knowledgeable testing facility staff can perform this by hand with a few custom-written scripts.
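The staff-run comparison could be scripted roughly as follows; this is a sketch only, and both the lspci parsing and the idea of a vendor-supplied spec sheet of device names are illustrative assumptions, not part of this spec:

```python
# Sketch: compare what the kernel detected against what the vendor says
# is in the box. The spec-sheet format (a set of device description
# strings) is an assumption for illustration.
import subprocess

def detected_devices():
    """Device descriptions the kernel actually detected, per lspci."""
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    # each lspci line looks like "00:1f.2 SATA controller: <description>"
    return {line.split(": ", 1)[1] for line in out.splitlines() if ": " in line}

def missing_hardware(spec_sheet, detected):
    """Devices the vendor lists as present but the system did not detect."""
    return sorted(d for d in spec_sheet if d not in detected)
```

The interesting output for us is the `missing_hardware` list: anything the vendor ships that the installer's detection misses is a candidate bug report.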
We will have a minimal, easily developed burn-in test suite for Dapper.
It will contain:
- stress(1), package 'stress': I/O, CPU, VM, disk
- NLANR's iperf(1), package 'iperf': network stress
- a UI to run the tests (described below)
A certification burn-in run will be structured as follows:
- Day 1: I/O burn-in with stress(1)
- Day 2: CPU burn-in with stress(1)
- Day 3: VM burn-in with stress(1)
- Day 4: disk burn-in with stress(1)
- Day 5: network burn-in with iperf(1)
- Days 6, 7: full stress on all subsystems with iperf(1) and stress(1)
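The schedule above could be driven by a small wrapper along these lines; the stress(1) worker counts, the --vm-bytes size, and the placeholder peer address are assumptions, and day_seconds drops to 3600 for a community micro-burn-in:

```python
# Sketch of a burn-in driver for the 7-phase schedule. Worker counts and
# the iperf peer address (192.0.2.10) are placeholder assumptions.
import subprocess

def burn_in_schedule(day_seconds=86400, peer="192.0.2.10"):
    """Return (phase, argv) pairs for one certification burn-in run."""
    t = str(day_seconds)
    t2 = str(2 * day_seconds)
    return [
        ("io",       ["stress", "--io", "4", "--timeout", t]),
        ("cpu",      ["stress", "--cpu", "8", "--timeout", t]),
        ("vm",       ["stress", "--vm", "4", "--vm-bytes", "256M", "--timeout", t]),
        ("disk",     ["stress", "--hdd", "2", "--timeout", t]),
        ("network",  ["iperf", "-c", peer, "-t", t]),
        # days 6-7: both tools run concurrently against all subsystems
        ("full",     ["stress", "--io", "4", "--cpu", "8", "--vm", "4",
                      "--hdd", "2", "--timeout", t2]),
        ("full-net", ["iperf", "-c", peer, "-t", t2]),
    ]

def run_phase(name, argv):
    """Run one phase; the real suite would capture verbose logs here."""
    subprocess.run(argv, check=True)
```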
Burn-in and installation test runs are collected in the HCS-developed server hardware testing catalog. Use of the catalog for community testing is explained in CommunityServerHardwareTesting.
Load testing UI
The test suite is wrapped in a shiny ncurses UI that, when started, asks the user whether she wants to perform a full burn-in (7 days) or a micro-burn-in (7 hours). A 7-hour micro-burn-in is assumed to be acceptable for community testing, and runs on the same schedule as a certification burn-in, with days scaled to hours.
The UI would run iperf(1) and stress(1) in verbose logging mode, and after a completed burn-in run, would offer to upload the results to the community server hardware testing catalogue via HTTP. The official certification facility would cancel this upload, and upload the logs to the certified server hardware catalogue manually.
Because a failed burn-in test often freezes or reboots the machine, the application needs a way to keep test checkpoints. It should write a checkpoint to disk every hour and at the completion of every test (which resets the timer). The checkpoint file is only ever appended to, and so will contain a record of any restarted runs; this checkpoint file will also be uploaded to the catalog, which will parse it to see whether any tests failed.
At every start, the application needs to read the checkpoint file, if it exists: if it determines a test was interrupted, it should offer to resume from the interrupted test instead of starting from scratch. A user-interrupted test will be specifically marked in the checkpoint file, so that it can be differentiated from unexpected test interruptions caused by machine reboots or freezes.
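The checkpoint behaviour could be sketched like this; the record format (tab-separated phase, status, timestamp lines) and the status names are assumptions for illustration:

```python
# Sketch of the append-only checkpoint file. Assumed statuses: 'start',
# hourly 'tick', 'done', and 'user-abort' for deliberate interruptions.
import time

def checkpoint(path, phase, status):
    """Append one record; the file is never truncated or rewritten."""
    with open(path, "a") as f:
        f.write("%s\t%s\t%d\n" % (phase, status, int(time.time())))

def interrupted_phase(path):
    """Return the phase to offer resuming from, or None.

    A phase with a 'start' record but no later 'done' or 'user-abort'
    record was cut short by a freeze or reboot. A 'user-abort' record
    marks a deliberate interruption, which is not offered for resume."""
    try:
        lines = open(path).read().splitlines()
    except FileNotFoundError:
        return None
    open_phases = []
    for line in lines:
        phase, status, _ = line.split("\t")
        if status == "start":
            open_phases.append(phase)
        elif status in ("done", "user-abort") and phase in open_phases:
            open_phases.remove(phase)
    return open_phases[-1] if open_phases else None
```

Because records are only appended, the catalogue can replay the whole file to distinguish clean runs, user aborts, and crash-restarted phases.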
Custom install CDs
We may want to produce install CDs tailored to specific certified hardware. Vendors would pay for the creation of these CDs, possibly as part of the certification process, which would then be available to customers for free. We would base the customized CDs on the hardware list and testing results obtained from our official server certification run.
It would be at least 6-8 weeks before hardware could start shipping from vendors to the HCS certification facility. This means the server test suite needs to be completed by the end of the year.
HCS can start receiving and processing hardware in approximately as many weeks. However, actual certification runs can't start before February 1st, 2006. The gap is a one-time setup cost, and will not exist for future releases. This leaves two months for server certification, and since a full certification run (including burn-in testing) takes one week, two months should be more than enough time.
- Some hardware configurations (e.g. some RAID controllers) may require non-distributable software to support. Malcolm will need to talk to vendors about being able to distribute those tools as packages in Ubuntu. In the cases where the tools remain undistributable (as many of them are), Malcolm will petition vendors to sponsor the creation of custom Ubuntu CD images for their hardware.
- iperf(1) understandably requires both a client and a server when running network testing. While this is no problem for the certification facility, we have to make sure we have simple instructions available on how to do this for community testing (luckily, doing it is trivial: it requires a connected machine, one apt-get, and one invocation of iperf). The suite UI needs to ask the user up front for the IP of the iperf peer for the network testing, or allow her to skip that part of the test.
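The peer-IP prompt logic amounts to very little code; a minimal sketch, where the function name and fixed test duration are illustrative assumptions (the community tester starts the server end on the second machine with "apt-get install iperf" followed by "iperf -s"):

```python
# Sketch: build the iperf client command from the UI's peer-IP prompt,
# or skip the network phase entirely when no peer is given.
def network_phase_argv(peer_ip, seconds=3600):
    """Return the iperf client argv, or None if the user skipped the
    network test by leaving the peer IP empty."""
    if not peer_ip:
        return None
    return ["iperf", "-c", peer_ip, "-t", str(seconds)]
```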