ARMDeveloperEnvironment

 * '''Launchpad Entry''': [[https://blueprints.launchpad.net/ubuntu/+spec/arm-m-development-tools]]
 * '''Contributors''': GuilhermeSalgado

== Summary ==

We want to provide a simple yet powerful set of tools to allow ARM developers to easily create and manage archives (including package uploads and builds) and generate images.

== Rationale ==

Organizations need to generate their own images for testing and evaluation purposes, possibly including software that supports unreleased hardware or software with restricted redistribution rights. To generate such images they need to maintain an integrated set of software packages that can be installed on their devices.

== Definitions ==

See the Definitions sections of [[Specs/M/ARMArchiveBranching|ARMArchiveBranching]] and [[Specs/M/ARMArchiveFeatures|ARMArchiveFeatures]].

== User stories ==

 1. Ted wants to generate an image for Marvell Dove boards, which require non-free software packages that are not included in Ubuntu but are instead packaged in a Marvell PPA. Ted must be able to create a new archive (by [[Specs/M/ARMArchiveBranching|branching]] the Ubuntu archive and appending Marvell's PPA) and generate the image using the new archive.
 {{{
# Create a new archive containing the "standard" platform of Ubuntu's main archive.
tbd branch http://archives.ubuntu.com/main https://archives.yap.com/marvel --platform=standard
# Append all packages from the PPA containing the private bits to the newly created archive.
tbd append-archive https://launchpad.net/~yap/+archive/private https://archives.yap.com/marvel
# Generate an image to test the Marvell Dove boards.
tbd gen-image https://archives.yap.com/marvel
 }}}

 1. A partner is experimenting with a custom netbook UI but doesn't want to include it in their main archive until they've done some more testing, so they use the tools to create a slim archive and make any UI changes (or add new packages) there. Once they decide the UI has had enough testing, they push the changes from the slim archive back to their main one. (The slim archive could be hosted on Launchpad, like a [private] PPA, since we'll soon be able to upload packages through sftp.)
 {{{
# Create a new empty archive.
tbd create https://archives.yap.com/new-netbook-ui
# Create a workspace associated with the new archive.
tbd make-workspace https://archives.yap.com/new-netbook-ui
# Hack on an existing package or create a new one.
tbd edit-package unity
cd unity
vi # That's how you do it, right? ;)
tbd build # Make sure it builds correctly with your changes.
# Push the package to the archive.
tbd push
 }}}

 1. YAP (Yet Another Partner) is working on optimizing their new (not-yet-released) chip, but for that they need a version of GCC newer than the one in the Ubuntu archive. They want to create a new (private) archive where they'll upload the new GCC version. Upgrading to that new GCC is known to break binary compatibility, so it must be possible for them to easily rebuild all packages with the new GCC and generate images from the new binaries to ensure the resulting system works as expected.
 {{{
tbd branch https://archives.u.c/main https://archives.yap.com/new-gcc --platform=standard
# Upload an already prepared gcc-4.5 package.
tbd push gcc-4.5.dsc https://archives.yap.com/new-gcc
tbd rebuild https://archives.yap.com/new-gcc # This is going to take ages!
tbd gen-image https://archives.yap.com/new-gcc
 }}}

 1. YAP also has a separate team working on the UI for a device which will use their new chip, so they want to have yet another archive, based on the one containing the new GCC, where they'll make their UI changes without affecting other users of the archive containing the new GCC. The tools should allow them to do that as well.
 {{{
tbd branch https://archives.yap.com/new-gcc https://archives.yap.com/new-gcc-and-UI --platform=standard
 }}}

 1. During the development of YAP's latest device, it should be possible for them to easily see the delta between their archive and its upstream. They should also be able to review those changes and pull/push changes from/to the upstream archive.
 {{{
# Notice how we don't need to specify the parent as the archive has that information.
tbd show-delta https://archives.yap.com/new-device
 }}}

 1. At the end of the development of their new device, YAP's engineers want to freeze their archive so that all package uploads have to be reviewed by their release team before they're accepted.
 {{{
tbd freeze https://archives.yap.com/new-device
 }}}

Some of the features described here will depend on [[Specs/M/DerivedArchiveRebuild|DerivedArchiveRebuild]].

== Assumptions ==

 * Archives can be referred to by their URLs. (I think this is how we're going to tell the tools which archives they'll operate on.)

== Design ==

The goal here is to provide a high-level interface with sensible defaults, abstracting some of the low-level complexity of the underlying utilities/frameworks.

The tools should be transparent and allow people to access their inner workings. We should aim to ensure it is always possible to complete a task manually.

Users must be able to run any of the tools on either a desktop or a server. Also, the tools must not be tied to Launchpad, although they should probably take advantage of anything provided by Launchpad whenever desirable.

The tools should be able to operate on both local and remote archives, for both read and write operations. When operating on remote archives they'll use the HTTP RESTful API of the service hosting the archive (which can be either Launchpad.net or a [[Specs/M/ARMArchiveBranching|vostok]] instance). To upload packages we'll use sftp.
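
A purely illustrative sketch of that, reusing commands from the user stories; the local-path form is an assumption of this example, not decided behaviour:

 {{{
# Operating on a remote archive: the tools talk to the hosting service's HTTP API.
tbd show-delta https://archives.yap.com/new-device
# Hypothetical: the same operation against a local archive, referred to by path.
tbd show-delta /srv/archives/new-device
# Uploads to a remote archive go over sftp behind the scenes.
tbd push gcc-4.5.dsc https://archives.yap.com/new-gcc
 }}}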

The tools should not try to enforce any sort of version number rules, but they must provide appropriate version numbers by default so that users don't need to worry about that.

== Implementation ==

When doing multiple operations on a given archive, users can create a workspace associated with that archive so that they don't have to specify the archive on every operation. Any directory containing a .tbd.conf file is considered a workspace, and {{{tbd}}} will use the archive specified there when one is not explicitly provided.
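
A minimal sketch of what a workspace's .tbd.conf could contain, assuming a simple key/value format (nothing in this spec fixes the actual format):

 {{{
cat .tbd.conf
#   -> archive = https://archives.yap.com/new-netbook-ui   (hypothetical format)
# Inside the workspace, no archive argument is needed:
tbd show-delta
 }}}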

Workspaces are most useful when modifying packages or building images, so they can also cache .deb packages (for image building) and bazaar working trees for the source packages in the archive.

We need to properly lay out bazaar working trees and source/binary packages (resulting from test builds) in workspaces, to keep things sane. Here's a proposal, in which we just put all the artifacts generated by a test build in a separate directory. Another option would be to keep said artifacts in a sub-directory under the bzr working tree.

 {{{
  .
  |-- workspace root
  |   |-- unity  (bzr working tree)
  |   |   `-- README.txt
  |   |-- gcc    (bzr working tree)
  |   |   `-- HACKING.txt
  |   `-- testbuilds
  |       |-- unity
  |       |   `-- unity-N.NN-x86.deb
  |       `-- gcc
  |           `-- gcc-N.NN-x86.deb
 }}}

Operations accessing remote locations through HTTP will need to be authenticated, so {{{tbd}}} will have a per-user registry of credentials that can be used for OAuth authentication. There will be one set of credentials for each remote location, but when such credentials don't exist {{{tbd}}} will guide the user, via their web browser, through the process of obtaining them.
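
A purely illustrative transcript of that first-use flow; the messages and the automatic retry are assumptions, not decided behaviour:

 {{{
tbd freeze https://archives.yap.com/new-device
#   -> No credentials found for archives.yap.com.
#   -> Opening your browser to authorize tbd...
#   -> Credentials saved to the per-user registry; retrying the operation.
 }}}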

=== Getting information about an archive ===

 * Is the archive frozen?
  This will just make an HTTP request to the site that hosts the archive, printing to stdout whether or not the archive is frozen.

 * Show delta between an archive and its parent
  We could, again, print to stdout a summary of the packages that differ, indicating in which archive each one has been changed (see the illustrative transcript below). If not that, I think we'd need a GUI app, because if we wanted HTML output we'd be duplicating what vostok can already do.
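
A purely illustrative transcript of the two queries above; {{{is-frozen}}} is a hypothetical command name (only {{{show-delta}}} appears elsewhere in this spec) and the output shown is made up:

 {{{
tbd is-frozen https://archives.yap.com/new-device
#   -> frozen: yes (uploads are held for review)
tbd show-delta https://archives.yap.com/new-device
#   -> unity      changed in this archive
#   -> gcc-4.5    changed in the parent archive
 }}}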

=== Modifying packages ===

 * Get the source for a given package
  We'll use bazaar to fetch the branch (together with its build recipe, I guess?) associated with the given package and place it in the current directory.

 * Test build a given bazaar working tree
  Use bzr-builder to assemble a Debian source package and build it, generating a binary package.

 * Push a package
  Push the given bazaar working tree to its parent branch and make an HTTP request to vostok telling it to bzr-build that updated branch, placing the resulting source/binary packages in the appropriate archive.

 * Sync packages from parent archive
  We'll just make an HTTP request telling vostok to sync the given packages from the parent archive (see the sketch below).
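
An illustrative sketch of the cycle described above, assuming the {{{edit-package}}}, {{{build}}} and {{{push}}} commands from the user stories map to the first three bullets; {{{sync}}} is a hypothetical name for the last one:

 {{{
# Fetch the Bazaar branch for a package into the workspace and test build it
# (bzr-builder is used underneath).
tbd edit-package unity
cd unity
tbd build
# Push the branch; vostok builds it and places the results in the archive.
tbd push
# Hypothetical: sync selected packages from the parent archive.
tbd sync unity gcc-4.5
 }}}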

=== Image building ===

 * Build an image with the given set of packages
  Here we'll use the [[Specs/M/ARMImageBuildingTool|image building tool]] to generate an image containing the packages specified by the user.

To build an image we'll need to fetch lots of binary packages from the archive, so it makes sense to cache these binary packages locally for further image building. These will be cached in a .cache/ directory under the root of the workspace.
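
A purely illustrative invocation; the {{{--packages}}} option is an assumption (the spec only says the user specifies the set of packages to include):

 {{{
# Hypothetical: build an image from the archive with an explicit package set.
tbd gen-image https://archives.yap.com/new-netbook-ui --packages unity,gcc
# Downloaded binaries are kept for the next build, e.g.:
#   .cache/unity-N.NN-x86.deb
 }}}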

=== Creating/modifying archives ===

 * Branch an archive
  Make an HTTP request telling a remote vostok instance to [[Specs/M/ARMArchiveBranching|branch]] a given archive into a new one.

 * Append one archive to another
  Make an HTTP request telling a remote vostok instance to append the given archive's packages to an existing archive.

 * Rebuild all packages in an archive
  Make an HTTP request telling a remote vostok instance to rebuild all the packages in the given archive.

 * Freeze the archive for package uploads
  Flag the given archive as frozen, forcing package uploads to be held for approval. This is also done by making an HTTP request to the remote vostok instance.

== UI Changes ==

All tools will probably have no UI other than their command-line arguments.

== Test/Demo Plan ==

It's important that we are able to test new features, and demonstrate them to users. Use this section to describe a short plan that anybody can follow that demonstrates the feature is working. This can then be used during testing, and to show off after release. Please add an entry to http://testcases.qa.ubuntu.com/Coverage/NewFeatures for tracking test coverage.

This need not be added or completed until the specification is nearing beta.

== Unresolved issues ==

 * We'll need multiple OAuth credentials (e.g. for launchpad.net, archives.yap.com, etc.), so we need to figure out which credentials to use depending on the arguments given.
 * If we go with OAuth for authentication, the user will need a browser to obtain the OAuth credentials, which makes it tricky to run the tools on a server. To work around that we can either copy existing credentials to the server or ask the user for their password and do the OAuth dance ourselves (a la Ground Control). The latter is a really nasty trick, IMO.
 * For long-running operations, do we want to try to design some sort of progress report, or should we rely just on an email sent by vostok when it's done?
 * Maybe the caching of binary packages should be the responsibility of [[Specs/M/ARMImageBuildingTool|ARMImageBuildingTool]]?

CategorySpec
