

We want to provide a simple yet powerful set of tools to allow ARM developers to easily create/manage archives (including package uploads/builds) and generate images.


Organizations need to generate their own images for testing and evaluation purposes, possibly including software that supports unreleased hardware or software with restricted redistribution rights. To generate such images they need to maintain an integrated set of software packages that can be installed onto their devices.


See the Definitions sections of ARMArchiveBranching and ARMArchiveFeatures.

User stories

  1. Ted wants to generate an image for Marvell Dove boards, which require non-free software packages that are not included in Ubuntu but are instead packaged in a Marvell PPA. Ted must be able to create a new archive (by branching the Ubuntu archive and Marvell's PPA) and generate the image using the new archive.

    # Create a new archive containing the "standard" platform of Ubuntu's main archive.
    tbd branch --platform=standard
    # Append all packages from the PPA containing the private bits to the newly created archive.
    tbd append-archive
    # Generate an image to test the Marvell Dove boards.
    tbd gen-image
  2. A partner is experimenting with a custom netbook UI but doesn't want to include it in their main archive until they've done more testing, so they use the tools to create a slim archive and make any UI changes (or add new packages) there. Once they decide the UI has had enough testing, they push the changes from the slim archive back to their main one. (The slim archive could be hosted on LP, like a [private] PPA, since soon enough we'll be able to upload packages through sftp.)
    # Create a new empty archive.
    tbd create
    # Create a workspace associated with the new archive.
    tbd make-workspace
    # Hack on an existing package or create a new one.
    tbd edit-package unity
    cd unity
    vi  # That's how you do it, right? ;)
    tbd build  # Make sure it builds correctly with your changes.
    # Push the package to the archive.
    tbd push
  3. YAP (Yet Another Partner) is working on optimizing their new (not-yet-released) chip, but for that they need a version of GCC newer than the one in the Ubuntu archive. They want to create a new (private) archive where they'll upload the new GCC version, but upgrading to that new GCC version is known to break binary compatibility, so it must be possible for them to easily rebuild all packages using the new GCC and generate images from the new binaries to ensure the resulting system works as expected.
    tbd branch https://archives.u.c/main --platform=standard 
    # Upload an already prepared gcc-4.5 package.
    tbd push gcc-4.5.dsc
    tbd rebuild  # This is going to take ages!
    tbd gen-image
  4. YAP also has a separate team working on the UI for a device which will use their new chip, so they want to have yet another archive, based on the one containing the new GCC, where they'll make their UI changes without affecting other users of the archive containing the new GCC. The tools should allow them to do that as well.
    tbd branch --platform=standard 
  5. During the development of YAP's latest device, it should be possible for them to easily see the delta between their archive and its upstream. They should also be able to review those changes and pull/push changes from/to the upstream archive.
    # Notice how we don't need to specify the parent as the archive has that information.
    tbd show-delta
  6. At the end of the development of their new device, YAP's engineers want to freeze their archive so that all package uploads have to be reviewed by their release team before they're accepted.
    tbd freeze

Some of the features described here will depend on DerivedArchiveRebuild.


  • Archives can be referred to by their URLs. (I think this is how we're going to tell the tools which archives they'll operate on.)
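To make this concrete, here is a rough sketch of how URL-based addressing might look. The command names are taken from the user stories above, but combining them with local file URLs is an illustrative assumption, not a decided interface:

    # Branch a remote archive, identified by its URL, into a new derived archive.
    tbd branch https://archives.u.c/main --platform=standard
    # The same commands should accept a local archive, e.g. via a file URL.
    tbd show-delta file:///srv/archives/experimental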


The goal here is to provide a high-level interface with sensible defaults, abstracting some of the low-level complexity of the underlying utilities/frameworks.

The tools should be transparent and allow people to access their inner workings. We should aim to ensure it is always possible to complete a task manually.

Users must be able to run any of the tools on either a desktop or a server. Also, the tools must not be tied to Launchpad, although they should probably take advantage of anything provided by Launchpad whenever desirable.

The tools should be able to operate on both local and remote archives, for both read and write operations. When operating on remote archives they'll use the HTTP RESTful API of the service hosting the archive (which can be either Launchpad or a vostok instance). To upload packages we'll use sftp.
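As a sketch of which protocol each operation would use (the `show-delta` invocation with a URL argument is an assumption; `tbd push gcc-4.5.dsc` appears in user story 3 above):

    # Read operations go over the hosting service's HTTP RESTful API.
    tbd show-delta https://archives.u.c/main
    # Package uploads go over sftp.
    tbd push gcc-4.5.dsc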

  • We'll need multiple sets of OAuth credentials (e.g. one per service we talk to), so we need to figure out which credentials to use depending on the arguments given.

The tools should not try to enforce any sort of version numbering rules, but they must provide appropriate version numbers by default so that users don't need to worry about that.
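One possible default (purely an assumption for illustration, not a decided policy) is to append an archive-specific suffix to the parent package's version, so that in dpkg version ordering the derived build sorts after the parent's current version but before the parent's next upload:

```shell
# Hypothetical default versioning for a package rebuilt in a derived archive.
# The "+<archive>1" suffix scheme is an assumption, not part of this spec.
parent_version="4.5.0-1ubuntu2"
archive_name="yap"
derived_version="${parent_version}+${archive_name}1"
echo "$derived_version"   # 4.5.0-1ubuntu2+yap1
```

In dpkg ordering, 4.5.0-1ubuntu2+yap1 sorts after 4.5.0-1ubuntu2 and before 4.5.0-1ubuntu3, so an upload of the next parent version would supersede the derived build automatically.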


  • Protocol used when talking to the server: HTTP; RESTful API, with OAuth for authentication
  • Data we might want to store locally:
    • global cache of packages
    • OAuth credentials
    • bazaar branches?
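For illustration, that local state could be laid out along these lines (every path here is hypothetical):

    ~/.cache/tbd/
        packages/      # global cache of downloaded packages, shared across archives
        credentials/   # OAuth tokens, one set per remote service
        branches/      # bazaar branches of packages being worked on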

UI Changes

All tools will probably have no UI other than their command-line arguments.

Test/Demo Plan

It's important that we are able to test new features and demonstrate them to users. Use this section to describe a short plan that anybody can follow to demonstrate that the feature is working. This can then be used during testing, and to show off after release. Please add an entry for tracking test coverage.

This need not be added or completed until the specification is nearing beta.

Unresolved issues

  • If we go with OAuth for authentication, the user will need a browser to obtain the OAuth credentials, so it will be tricky to run the tools on a server. To work around that we can either copy existing credentials to the server or ask the user for their password and do the OAuth dance ourselves (a la ground control). The latter is a really nasty trick, IMO.
  • For long-running operations, do we want to try to design some sort of progress report, or should we just rely on an email sent by vostok when it's done?
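The credential-copying workaround from the first bullet could be as simple as copying the cached token file to the server. In the runnable sketch below the credentials path is hypothetical, and a temporary directory stands in for the server so the example is self-contained; in practice the copy would be an scp:

```shell
# Hypothetical location of the locally cached OAuth credentials.
CRED_FILE="$HOME/.cache/tbd/oauth-credentials"
mkdir -p "$(dirname "$CRED_FILE")"
printf 'oauth_token=abc\noauth_token_secret=xyz\n' > "$CRED_FILE"

# On a real deployment this would be something like:
#   scp "$CRED_FILE" build-server:.cache/tbd/
# Here a temporary directory stands in for the server.
SERVER_DIR="$(mktemp -d)"
cp "$CRED_FILE" "$SERVER_DIR/"
echo "copied to $SERVER_DIR"
```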