We want to provide a simple yet powerful set of tools to allow ARM developers to easily create/manage archives (including package uploads/builds) and generate images.
Organizations need to generate their own images for testing and evaluation purposes, possibly including software that supports unreleased hardware or software with restricted redistribution rights. To generate such images they need to maintain an integrated set of software packages that can be installed onto their devices.
Ted wants to generate an image for Marvell Doves, which require non-free software packages that are not included in Ubuntu but are instead packaged in a Marvell PPA. Ted must be able to create a new archive (by branching the Ubuntu archive and Marvell's PPA) and generate the image using the new archive.
# Create a new archive containing the "standard" platform of Ubuntu's main archive.
tbd branch http://archives.ubuntu.com/main https://archives.yap.com/marvel --platform=standard

# Append all packages from the PPA containing the private bits to the newly created archive.
tbd append-archive https://launchpad.net/~yap/+archive/private https://archives.yap.com/marvel

# Generate an image to test the Marvell Doves.
tbd gen-image https://archives.yap.com/marvel
- A partner is experimenting with a custom netbook UI but doesn't want to include it in their main archive until they've done some more testing, so they use the tools to create a slim archive and make any UI changes (or add new packages) there. Once they decide the UI has had enough testing, they push the changes from the slim archive back to their main one. (The slim archive could be hosted on LP, like a [private] PPA, since soon enough we'll be able to upload packages through sftp.)
# Create a new empty archive.
tbd create https://archives.yap.com/new-netbook-ui

# Create a workspace associated with the new archive.
tbd make-workspace https://archives.yap.com/new-netbook-ui

# Hack on an existing package or create a new one.
tbd edit-package unity
cd unity
vi  # That's how you do it, right? ;)
tbd build  # Make sure it builds correctly with your changes.

# Push the package to the archive.
tbd push
- YAP (Yet Another Partner) is working on optimizing their new (not-yet-released) chip, but for that they need a version of GCC newer than the one on the Ubuntu archive. They want to create a new (private) archive where they'll upload the new GCC version, but upgrading to that new version of GCC is known to break binary compatibility, so it must be possible for them to easily rebuild all packages using the new GCC and generate images out of the new binaries to ensure the resulting system works as expected.
tbd branch https://archives.u.c/main https://archives.yap.com/new-gcc --platform=standard

# Upload an already prepared gcc-4.5 package.
tbd push gcc-4.5.dsc https://archives.yap.com/new-gcc

tbd rebuild https://archives.yap.com/new-gcc  # This is going to take ages!

tbd gen-image https://archives.yap.com/new-gcc
- YAP also has a separate team working on the UI for a device which will use their new chip, so they want to have yet another archive, based on the one containing the new GCC, where they'll make their UI changes without affecting other users of the archive containing the new GCC. The tools should allow them to do that as well.
tbd branch https://archives.yap.com/new-gcc https://archives.yap.com/new-gcc-and-UI --platform=standard
- During the development of YAP's latest device, it should be possible for them to easily see the delta between their archive and its upstream. They should also be able to review those changes and pull/push changes from/to the upstream archive.
# Notice how we don't need to specify the parent, as the archive has that information.
tbd show-delta https://archives.yap.com/new-device
- At the end of the development of their new device, YAP's engineers want to freeze their archive so that all package uploads have to be reviewed by their release team before they're accepted.
tbd freeze https://archives.yap.com/new-device
Some of the features described here will depend on DerivedArchiveRebuild.
- Archives can be referred to by their URLs. (I think this is how we're going to tell the tools which archives they'll operate on.)
The goal here is to provide a high-level interface with sensible defaults, abstracting some of the low-level complexity of the underlying utilities/frameworks.
The tools should be transparent and allow people to access their inner workings. We should aim to ensure it is always possible to complete a task manually.
Users must be able to run any of the tools on either a Desktop or a Server. Also, the tools must not be tied to Launchpad, although they should probably take advantage of anything provided by Launchpad whenever desirable.
The tools should be able to operate on both local and remote archives, for both read and write operations. When operating on remote archives they'll use the HTTP RESTful API of the service hosting the archive (which can be either Launchpad.net or a vostok instance). To upload packages we'll use sftp.
The tools should not try to enforce any sort of version number rules, but they must provide appropriate version numbers by default so that users don't need to worry about that.
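One possible default, sketched here purely as an illustration (no versioning scheme has been decided): append a "+<archive>N" suffix to the parent's version, which sorts after the unmodified version under dpkg comparison rules while still sorting before the next upstream revision, and bump N past any existing derived uploads.

```python
import re

def default_version(upstream_version, archive_suffix, existing_versions=()):
    """Suggest a version for a package modified in a derived archive.

    Appends "+<suffix>N" to the parent's version, choosing N one higher
    than any existing derived upload. The suffix convention is an
    assumption, not a settled policy.
    """
    pattern = re.compile(
        re.escape("%s+%s" % (upstream_version, archive_suffix)) + r"(\d+)$")
    highest = 0
    for version in existing_versions:
        match = pattern.match(version)
        if match:
            highest = max(highest, int(match.group(1)))
    return "%s+%s%d" % (upstream_version, archive_suffix, highest + 1)
```

For example, a first YAP upload of gcc 4.5-1 would get "4.5-1+yap1", and a later one "4.5-1+yap2", without the user ever typing a version.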
When doing multiple operations on a given archive, users can create a workspace associated with that archive so that they don't have to specify the archive on every operation. Any directory containing a .tbd.conf file is considered a workspace, and tbd will use the archive specified there when one is not explicitly provided.
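The workspace lookup could work the way bzr and git locate their control directories: walk up from the current directory until a .tbd.conf is found. A minimal sketch (the exact lookup tbd will use is an assumption):

```python
import os

def find_workspace(start_dir):
    """Walk upwards from start_dir looking for a .tbd.conf file.

    Returns the workspace root (the directory containing .tbd.conf),
    or None when start_dir is not inside a workspace, in which case
    the archive must be given explicitly on the command line.
    """
    current = os.path.abspath(start_dir)
    while True:
        if os.path.exists(os.path.join(current, ".tbd.conf")):
            return current
        parent = os.path.dirname(current)
        if parent == current:  # reached the filesystem root
            return None
        current = parent
```

This makes every subdirectory of a workspace (e.g. a bzr working tree being hacked on) implicitly know its archive.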
Workspaces are most useful when modifying packages or building images, so they can also cache .deb packages (for image building) and bazaar working trees for the source packages in the archive.
We need to properly lay out bazaar working trees and source/binary packages (resulting from test builds) in workspaces, to keep things sane. Here's a proposal, in which we just put all the artifacts generated by a test build in a separate directory. Another option would be to keep said artifacts in a sub-directory under the bzr working tree.
.
|-- workspace root
|   |-- unity (bzr working tree)
|   |   `-- README.txt
|   |-- gcc (bzr working tree)
|   |   `-- HACKING.txt
|   `-- testbuilds
|       |-- unity
|       |   `-- unity-N.NN-x86.deb
|       `-- gcc
|           `-- gcc-N.NN-x86.deb
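Since workspaces also cache binary packages for image building, a cache-aware fetch helper might look like the sketch below, assuming a .cache/ directory under the workspace root; the `download` callable stands in for the real HTTP fetch from the archive and is purely illustrative.

```python
import os

def cached_fetch(workspace_root, package_filename, download):
    """Return the local path of a binary package, downloading it into
    the workspace's .cache/ directory only on a cache miss.

    `download` takes (filename, destination_path); in the real tool it
    would fetch the package from the archive over HTTP.
    """
    cache_dir = os.path.join(workspace_root, ".cache")
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, package_filename)
    if not os.path.exists(path):
        download(package_filename, path)
    return path
```

Repeated image builds from the same workspace would then hit the cache instead of the network for unchanged packages.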
Operations accessing remote locations through HTTP will need to be authenticated, so tbd will have a per-user registry of credentials that can be used for OAuth authentication. There will be one set of credentials for each remote location, but when such credentials don't exist tbd will guide the user, via their web browser, through the process of obtaining them.
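The credential registry could simply key credentials by the host of the remote location, so all archives on one service share a set of credentials. A hypothetical sketch (the on-disk format and the browser-based acquisition flow are not designed yet):

```python
from urllib.parse import urlparse

class CredentialRegistry:
    """Per-user store of OAuth credentials, keyed by the host part of
    the remote location's URL. Illustrative only: the real registry
    would persist these to disk under the user's home directory.
    """

    def __init__(self):
        self._by_host = {}

    def store(self, url, token, secret):
        self._by_host[urlparse(url).netloc] = (token, secret)

    def lookup(self, url):
        """Return (token, secret) for the URL's host, or None, in
        which case tbd would guide the user through obtaining
        credentials via their web browser."""
        return self._by_host.get(urlparse(url).netloc)
```

Note that two archives on the same host (e.g. two vostok archives on archives.yap.com) resolve to the same credentials, while launchpad.net needs its own set.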
Getting information about an archive
- Is the archive frozen?
- This will just make an HTTP request to the site that hosts the archive, printing to stdout whether or not the archive is frozen.
- Show delta between an archive and its parent
- We could, again, print to stdout a summary of the packages that differ, indicating in which archive each has been changed. If not that, then I think we'd need a GUI app, because if we wanted HTML we'd be duplicating what vostok can already do.
- Get the source for a given package
- We'll use bazaar to fetch the branch (together with its build recipe, I guess?) associated with the given package and place it in the current directory.
- Test build a given bazaar working tree
- Use bzr builder to assemble a Debian source package and build it, generating a binary package.
- Pushing package
- Push the given bazaar working tree to its parent branch and make an HTTP request to vostok telling it to bzr-build that updated branch, placing the resulting source/binary packages in the appropriate archive.
- Sync packages from parent archive
- We'll just make an HTTP request telling vostok to sync the given archives from the parent.
- Build an image with the given set of packages
Here we'll use the image building tool to generate an image containing the packages specified by the user.
To build an image we'll need to fetch lots of binary packages from the archive, so it makes sense to cache these binary packages locally for further image building. These will be cached in a .cache/ directory under the root of the workspace.
- Branch an archive
Make an HTTP request telling a remote vostok instance to branch a given archive into a new one.
- Append one archive to another
- Make an HTTP request telling a remote vostok instance to append the packages of the given archive to an existing archive.
- Rebuild all packages in an archive
- Make an HTTP request telling a remote vostok instance to rebuild all the packages in the given archive.
- Freeze the archive for package uploads
- Flag the given archive as frozen, forcing package uploads to be held for approval. This is also done by making an HTTP request to the remote vostok instance.
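Most of the remote operations above reduce to a single authenticated HTTP request against the service hosting the archive. A minimal sketch of that dispatch, with entirely hypothetical method/resource names (vostok's API is not yet defined):

```python
# Hypothetical mapping of tbd operations to vostok API requests; the
# real resource names and HTTP methods are still to be decided.
OPERATIONS = {
    "freeze": ("POST", "+freeze"),
    "rebuild": ("POST", "+rebuild"),
    "sync": ("POST", "+sync-from-parent"),
    "show-delta": ("GET", "+delta"),
}

def build_request(operation, archive_url):
    """Return the (method, url) pair for the HTTP request that
    implements the given operation on a remote archive."""
    method, resource = OPERATIONS[operation]
    return method, archive_url.rstrip("/") + "/" + resource
```

Keeping the mapping in one table like this would make it easy to point the same commands at either Launchpad or a vostok instance.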
All tools will probably have no UI other than their command-line arguments.
- We'll need multiple OAuth credentials (e.g. for launchpad.net, archives.yap.com, etc), so we need to figure out which credentials to use depending on the arguments given.
- If we go with OAuth for the authentication, the user will need a browser to obtain the OAuth credentials, so it will be tricky to run the tools on a server. To work around that we can either copy existing credentials to the server or ask the user for their password and do the OAuth dance ourselves (a la ground control). The latter is a really nasty trick, IMO.
- For long running operations, do we want to try and design some sort of progress report or should we rely just on an email sent by vostok when it's done?
Maybe the caching of binary packages should be the responsibility of ARMImageBuildingTool?