We want to provide a simple yet powerful set of tools to allow ARM developers to easily create/manage archives (including package uploads/builds) and generate images.
Organizations need to generate their own images for testing and evaluation purposes, possibly including software that supports unreleased hardware or software with restricted redistribution rights. To generate such images they need to maintain an integrated set of software packages that can be installed onto their devices.
Ted wants to generate an image for Marvell Dove boards, which require non-free software packages that are not included in Ubuntu but are instead packaged in a Marvell PPA. Ted must be able to create a new archive (by branching the Ubuntu archive and Marvell's PPA) and generate the image using the new archive.
    # Create a new archive containing the "standard" platform of Ubuntu's main archive.
    tbd branch http://archives.ubuntu.com/main https://archives.yap.com/marvel --platform=standard
    # Append all packages from the PPA containing the private bits to the newly created archive.
    tbd append-archive https://launchpad.net/~yap/+archive/private https://archives.yap.com/marvel
    # Generate an image to test the Marvell Dove boards.
    tbd gen-image https://archives.yap.com/marvel
- A partner is experimenting with a custom netbook UI but doesn't want to include it in their main archive until they've done some more testing, so they use the tools to create a slim archive and make any UI changes (or add new packages) there. Once they decide the UI has had enough testing, they push the changes from the slim archive back to their main one. (The slim archive could be hosted on LP, like a [private] PPA, since soon enough we'll be able to upload packages through sftp.)
    # Create a new empty archive.
    tbd create https://archives.yap.com/new-netbook-ui
    # Create a workspace associated with the new archive.
    tbd make-workspace https://archives.yap.com/new-netbook-ui
    # Hack on an existing package or create a new one.
    apt-get source unity
    cd unity
    vi  # That's how you do it, right? ;)
    tbd build  # Make sure it builds correctly with your changes.
    # Push the package to the archive.
    tbd push
- YAP (Yet Another Partner) is working on optimizing their new (not-yet-released) chip, but for that they need a version of GCC newer than the one in the Ubuntu archive. They want to create a new (private) archive where they'll upload the new GCC version. Upgrading to that new version of GCC is known to break binary compatibility, so it must be possible for them to easily rebuild all packages using the new GCC and generate images out of the new binaries, to ensure the resulting system works as expected.
    tbd branch https://archives.u.c/main https://archives.yap.com/new-gcc --platform=standard
    tbd push gcc-4.5.dsc https://archives.yap.com/new-gcc
    tbd rebuild https://archives.yap.com/new-gcc  # This is going to take ages!
    tbd gen-image https://archives.yap.com/new-gcc
- YAP also has a separate team working on the UI for a device which will use their new chip, so they want to have yet another archive, based on the one containing the new GCC, where they'll make their UI changes without affecting other users of the archive containing the new GCC. The tools should allow them to do that as well.
    tbd branch https://archives.yap.com/new-gcc https://archives.yap.com/new-gcc-and-UI --platform=standard
- During the development of YAP's latest device, it should be possible for them to easily see the delta between their archive and its upstream. They should also be able to review those changes and pull/push changes from/to the upstream archive.
    tbd show-delta https://archives.yap.com/new-device https://archives.yap.com/main
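At its core, the delta between a derived archive and its upstream is a comparison of their package indexes. Here is a minimal sketch of the computation `show-delta` might perform, assuming each archive's index can be summarized as a package-name-to-version mapping (the function name and data shape are illustrative, not part of the spec):

```python
def archive_delta(derived, upstream):
    """Compute the delta between a derived archive and its upstream.

    Both arguments are dicts mapping package name -> version, a toy
    stand-in for the archives' real package indexes.  Returns the
    packages added, removed, and changed in the derived archive.
    """
    added = {name: v for name, v in derived.items() if name not in upstream}
    removed = {name: v for name, v in upstream.items() if name not in derived}
    changed = {name: (upstream[name], v)
               for name, v in derived.items()
               if name in upstream and upstream[name] != v}
    return added, removed, changed
```

The same three sets are what a review UI would present before pulling or pushing changes between the archives.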
- At the end of the development of their new device, YAP's engineers want to freeze their archive so that all package uploads have to be reviewed by their release team before they're accepted.
    tbd freeze https://archives.yap.com/new-device
Some of the features described here will depend on DerivedArchiveRebuild.
- Archives can be referred to by their URLs. (This is how we're going to tell the tools which archives to operate on.)
The goal here is to provide a high-level interface with sensible defaults, abstracting some of the low-level complexity of the underlying utilities/frameworks.
Users must be able to run any of the tools on either a Desktop or a Server. Also, the tools must not be tied to Launchpad, although they should probably take advantage of anything provided by Launchpad whenever desirable.
The tools should be able to operate on both local and remote archives, for both read and write operations. When writing, they'll probably use the archive's HTTP RESTful API, and sftp to upload packages. They should also be able to deal with archives hosted on either Launchpad.net or a vostok instance.
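One simple way to support both local and remote archives behind the same interface is to dispatch on the scheme of the archive location. A rough sketch, where the recognized schemes and return values are assumptions made for illustration:

```python
from urllib.parse import urlparse

def classify_archive(location):
    """Decide how to talk to an archive based on its location's scheme.

    Bare paths and file:// URLs are treated as local archives; http(s)
    locations go through the RESTful API; sftp is used for uploads.
    """
    scheme = urlparse(location).scheme
    if scheme in ("", "file"):
        return "local"
    if scheme in ("http", "https"):
        return "remote-http"
    if scheme == "sftp":
        return "remote-sftp"
    raise ValueError("unsupported archive location: %s" % location)
```

Whatever the real dispatch looks like, keeping it in one place means every tool gets local/remote support for free.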
- For long-running operations, do we want to try and design some sort of progress report, or should we rely just on an email sent by vostok when it's done?
- We'll need multiple OAuth credentials (e.g. for launchpad.net, archives.yap.com, etc.), so we need to figure out which credentials to use depending on the arguments given.
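Since every archive is referred to by URL, the most obvious selection rule is to key the stored credentials on the archive's host. A sketch of that idea, where the credential store is a plain dict standing in for whatever persistent storage we end up using:

```python
from urllib.parse import urlparse

# Hypothetical credential store: archive host -> OAuth token.
CREDENTIALS = {
    "launchpad.net": "token-for-launchpad",
    "archives.yap.com": "token-for-yap",
}

def credentials_for(archive_url):
    """Pick the OAuth credentials matching the archive URL's host."""
    host = urlparse(archive_url).netloc
    try:
        return CREDENTIALS[host]
    except KeyError:
        raise KeyError("no stored credentials for %s" % host)
```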
The tools should not try to enforce any sort of version-number rules, but they must provide appropriate version numbers by default so that users don't need to worry about that.
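As an illustration of what a sensible default could look like: one common Debian-style convention for derived uploads is to append a suffix such as "+yap1" to the version, bumping the trailing number on each subsequent derived upload. The suffix and scheme below are assumptions for the sketch, not something mandated by this spec:

```python
import re

def derived_version(version, suffix="yap"):
    """Suggest a default version for a derived upload.

    "4.5-1" -> "4.5-1+yap1", and an already-derived "4.5-1+yap1"
    is bumped to "4.5-1+yap2".
    """
    m = re.match(r"^(.*)\+%s(\d+)$" % re.escape(suffix), version)
    if m:
        return "%s+%s%d" % (m.group(1), suffix, int(m.group(2)) + 1)
    return "%s+%s1" % (version, suffix)
```

Users who care about version numbers can always pass their own; everyone else gets something that sorts correctly against the upstream package.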
- Protocol used when talking to the server: HTTP, with a RESTful API and OAuth for authentication
- Data we might want to store locally:
 - global cache of packages
 - OAuth credentials
 - bazaar branches?
All tools will probably have no UI other than their command-line arguments.
It's important that we are able to test new features, and demonstrate them to users. Use this section to describe a short plan that anybody can follow that demonstrates the feature is working. This can then be used during testing, and to show off after release. Please add an entry to http://testcases.qa.ubuntu.com/Coverage/NewFeatures for tracking test coverage.
This need not be added or completed until the specification is nearing beta.
- If we go with OAuth for the authentication, the user will need a browser to obtain the OAuth credentials, so it will be tricky to run the tools on a server. To work around that we can either copy existing credentials to the server or ask the user for their password and do the OAuth dance ourselves (à la Ground Control). The latter is a really nasty trick, IMO.