ARMDeveloperEnvironment

Differences between revisions 2 and 38 (spanning 36 versions)
Revision 2 as of 2010-05-18 13:00:11
Size: 3636
Editor: 5ac884b8
Comment: Add notes from UDS-M session
Revision 38 as of 2010-05-28 19:20:30
Size: 15990
Editor: 74
Comment:
Deletions are marked like this. Additions are marked like this.
Line 3: Line 3:
 * '''Launchpad Entry''': [[https://blueprints.launchpad.net/ubuntu-arm/+spec/arm-m-dev-env]]
 * '''Launchpad Entry''': [[https://blueprints.launchpad.net/ubuntu/+spec/arm-m-development-tools]]
Line 5: Line 5:
 * '''Contributors''':
 * '''Contributors''': GuilhermeSalgado, JamesWestby
Line 10: Line 10:
We want to provide a simple yet powerful set of tools to allow ARM developers to easily create/manage archives (including package uploads/builds) and generate images.
Line 20: Line 14:
Organizations need to generate their own images for testing and evaluation purposes, possibly including software that supports unreleased hardware or software with restricted redistribution rights. To generate such images they need to maintain an integrated set of software packages that can be installed onto their devices.

== Definitions ==

See the Definitions section of [[Specs/M/ARMArchiveBranching|ARMArchiveBranching]] and [[Specs/M/ARMArchiveFeatures|ARMArchiveFeatures]]
Line 24: Line 23:
 1. Ted wants to generate an image for Marvell Dove boards, which require non-free software packages that are not included in Ubuntu but are instead packaged in a Marvell PPA. Ted must be able to create a new archive (by [[Specs/M/ARMArchiveBranching|branching]] the Ubuntu archive and Marvell's PPA) and generate the image using the new archive.
 {{{
# Create a new archive containing the "standard" platform of Ubuntu's main archive.
tbd branch http://archives.ubuntu.com/main https://archives.yap.com/marvel --platform=standard
# Append all packages from the PPA containing the private bits to the newly created archive.
tbd append-archive https://launchpad.net/~yap/+archive/private https://archives.yap.com/marvel
# Generate an image to test the Marvell Dove boards.
tbd gen-image https://archives.yap.com/marvel
 }}}

 1. A partner is experimenting with a custom netbook UI but doesn't want to include it in their main archive until they've done some more testing, so they use the tools to create a slim archive and do any UI changes (or add new packages) there. Once they decide the UI has had enough testing, they push the changes from the slim archive back to their main one. (The slim archive could be hosted on Launchpad, like a [private] PPA, since soon enough we'll be able to upload packages through sftp.)
 {{{
# Create a new empty archive.
tbd create https://archives.yap.com/new-netbook-ui
# Create a workspace associated with the new archive.
tbd make-workspace https://archives.yap.com/new-netbook-ui
# Hack on an existing package or create a new one.
tbd edit-package unity
cd unity
vi # That's how you do it, right? ;)
tbd build # Make sure it builds correctly with your changes.
# Push the package to the archive.
tbd push
 }}}

 1. YAP (Yet Another Partner) is working on optimizing their new (not-yet-released) chip, but for that they need a version of GCC newer than the one on the Ubuntu archive. They want to create a new (private) archive where they'll upload the new GCC version, but upgrading to that new version of GCC is known to break binary compatibility, so it must be possible for them to easily rebuild all packages using the new GCC and generate images out of the new binaries to ensure the resulting system works as expected.
 {{{
tbd branch https://archives.u.c/main https://archives.yap.com/new-gcc --platform=standard
# Upload an already prepared gcc-4.5 package.
tbd push gcc-4.5.dsc https://archives.yap.com/new-gcc
tbd rebuild https://archives.yap.com/new-gcc # This is going to take ages!
tbd gen-image https://archives.yap.com/new-gcc
 }}}

 1. YAP also has a separate team working on the UI for a device which will use their new chip, so they want to have yet another archive, based on the one containing the new GCC, where they'll make their UI changes without affecting other users of the archive containing the new GCC. The tools should allow them to do that as well.
 {{{
tbd branch https://archives.yap.com/new-gcc https://archives.yap.com/new-gcc-and-UI --platform=standard
 }}}

 1. During the development of YAP's latest device, it should be possible for them to easily see the delta between their archive and its upstream. They should also be able to review those changes and pull/push changes from/to the upstream archive.
 {{{
# Notice how we don't need to specify the parent as the archive has that information.
tbd show-delta https://archives.yap.com/new-device
 }}}

 1. At the end of the development of their new device, YAP's engineers want to freeze their archive so that all package uploads have to be reviewed by their release team before they're accepted.
 {{{
tbd freeze https://archives.yap.com/new-device
 }}}

Some of the features described here will depend on [[Specs/M/DerivedArchiveRebuild|DerivedArchiveRebuild]]
Line 26: Line 77:
 * Archives can be referred to by their URLs. (I think this is how we're going to tell the tools which archives they'll operate on.)
Line 28: Line 81:
The goal here is to provide a high-level interface with sensible defaults, abstracting some of the low-level complexity of the underlying utilities/frameworks.

The tools should be transparent and allow people to access their inner workings. We should aim to ensure it is always possible to complete a task manually.

Users must be able to run any of the tools on either a desktop or a server. Also, the tools must not be tied to Launchpad, although they should probably take advantage of anything provided by Launchpad whenever desirable.

The tools should be able to operate on both local and remote archives, for both read and write operations. When operating on remote archives they'll use the HTTP RESTful API of the service hosting the archive (which can be either Launchpad.net or a [[Specs/M/ARMArchiveBranching|vostok]] instance). To upload packages we'll use sftp.
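
As a rough illustration only — the API path, upload account and upload directory below are assumptions, not a defined interface — a remote query and a package upload might look something like this:

{{{
# Hypothetical: query the current version of a package over the archive's HTTP API.
curl https://archives.yap.com/marvel/api/packages/unity

# Hypothetical: upload the files of a prepared source package over sftp.
sftp -b - uploader@archives.yap.com <<EOF
put unity_3.0-0yap1.dsc incoming/
put unity_3.0-0yap1.debian.tar.gz incoming/
put unity_3.0.orig.tar.gz incoming/
put unity_3.0-0yap1_source.changes incoming/
EOF
}}}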

The tools should not try to enforce any sort of version number rules, but they must provide appropriate version numbers by default so that users don't need to worry about that.

The following are some of the main tasks that the tool must facilitate.

=== Interacting with an archive ===

This category covers operations on remote archives, such as finding current versions of packages, removing packages, requesting rebuilds, freezing an archive, and indeed branching an archive.

This should make use of an HTTP API exposed by the archive management software that allows querying and modifications.

For each logical operation that a user may wish to do there should be a command or subcommand provided by {{{tbd}}}, which it will then map to the necessary API calls. Therefore most of the logic about how the operations work will be in the archive management software, and {{{tbd}}} just needs to handle making the correct API call, presenting the information to the user, and handling error conditions.
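
For example (purely illustrative — the subcommand name and API route below are assumptions, not part of any defined interface), removing a package might map to a single API call:

{{{
# Hypothetical mapping of a tbd subcommand to an archive API call.
tbd remove-package unity https://archives.yap.com/marvel
# ...could translate into something like:
#   DELETE https://archives.yap.com/marvel/api/packages/unity
# with tbd handling authentication, output formatting and error reporting.
}}}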

We should aim to get good coverage of typical operations in the {{{tbd}}} tool, but asking users to go to the web UI of the archive management service is acceptable, and will be necessary if new features are added that old versions of {{{tbd}}} do not support.

=== Workspaces ===

A workspace is an area on disk that the tool can create which encapsulates a specific configuration. This allows the tool to infer lots of information when it is in a workspace and save the developer time in remembering and typing some of those details.

A workspace will be tied to a certain archive, and so will act on that archive by default.

The developer can maintain as many workspaces as they like locally, and {{{cd}}} between them in order to work on different archives.

In addition to this the workspace can contain a local archive that can be used in addition to the remote one. This is used to allow the developer to do things like build an image containing some test changes, or to build two packages locally where the second depends on some new API just added to the first, all without having to upload experimental changes to the archive for others to see. We may also want to allow developers to push packages from their development archive to a PPA for sharing with others, or even use a PPA for this if they like.
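
As a sketch of how that might be used (the package names and the flag for including the local archive are made up for illustration):

{{{
# Hypothetical: build two packages locally, where the second build-depends on the
# first, then generate a test image that also pulls in the local test builds.
tbd edit-package libfoo
(cd libfoo && tbd build)
tbd edit-package foo-app
(cd foo-app && tbd build)      # picks up the locally built libfoo
tbd gen-image --with-local-archive https://archives.yap.com/new-netbook-ui
}}}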

=== Modifying packages ===

One of the most common operations will be modifying a package. The tool will provide commands that make it easy to get a copy of the current version of the package, make changes to it, build it locally for testing, and then commit the change.

Ideally we should support both pushing the change directly to an archive and submitting it for review, for those who either don't have upload rights to the archive or would like peer review before making the change.

While editing files and the like won't be abstracted, the tool can provide wrapper commands for test-building a package, adding a changelog entry and uploading, merging a new upstream version, and merging from a parent archive, amongst others.
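
As a sketch (the wrapper command names below are placeholders, not a settled interface), the edit/build/upload cycle might look like:

{{{
# Hypothetical wrapper commands around the common packaging operations.
tbd edit-package unity              # fetch the packaging branch
cd unity
tbd new-changelog-entry "Fix touchscreen calibration."
tbd build                           # local test build
tbd merge-upstream                  # wrapper for merging a new upstream version
tbd push                            # commit/upload the change to the archive
}}}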

There should also be a way to submit the change to a parent archive for review very easily, so that the change can be made in the archive that the developer is targeting, but also start making its way into parent archives at the same time.

We may also want to provide environments other than the host system in order to build packages. Chroots or virtual machines become important as soon as you are building for a target that differs from the host in terms of package versions and the like. Clearly this is needed when building for a different architecture, in which case we should interface with UbuntuSpec:arm-m-xdeb-cross-compilation-environment.
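
One possible underlying mechanism (an assumption, not a decision) is to reuse the existing chroot builders, e.g. pbuilder, for test builds that should not run directly on the host:

{{{
# Possible mechanism: test-build a generated source package in a clean chroot.
sudo pbuilder create --distribution maverick   # one-off chroot setup
sudo pbuilder build unity_3.0-0yap1.dsc        # build the source package inside the chroot
}}}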

=== Image building ===

The tool should tie in to the results of UbuntuSpec:arm-m-image-building-tool such that a developer can easily build themselves a test image, including results of their test builds.

In addition, the tool should tie in to UbuntuSpec:arm-m-image-building-console such that they can also request image builds from a service. Here the packages would need to be hosted remotely so that the image building service could make use of them.

Furthermore, for the developer's workflow it would be ideal if they could submit a package to build in the archive and at the same time queue an image build request that would start if and when the package built successfully. That would save them having to switch context too often.
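
A sketch of that workflow, assuming a hypothetical option for chaining an image build onto a package build:

{{{
# Hypothetical: upload a package and queue an image build that starts only if
# and when the package builds successfully.
tbd push --then-gen-image https://archives.yap.com/new-device
}}}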

=== Updating a derived archive ===

The tool should also allow the developer to make use of the features of the archives described in UbuntuSpec:arm-m-archive-branching. It should allow them to visualise the difference between an archive and its parent, and then act on the result as well.

Crucially it should allow them to do three things:

  * Request a sync of a package.
  * Submit a change to the parent archive in the appropriate manner.
  * Merge a package from the parent where the package was modified in both archives.

As the last two operations can't be done through the web UI described in the other spec it is important that the tool make that part easy for developers to do.
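
Sketched as hypothetical subcommands (the names are placeholders mirroring the three operations above):

{{{
# Request that the parent's newer version of a package be synced into this archive.
tbd sync-package gcc-4.5 https://archives.yap.com/new-device
# Submit a local change to the parent archive for review.
tbd submit-to-parent unity https://archives.yap.com/new-device
# Merge a package that was modified in both this archive and its parent.
tbd merge-package unity https://archives.yap.com/new-device
}}}
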
Line 32: Line 147:
=== Interacting with an archive ===

Operations accessing remote locations through HTTP will need to be authenticated, so {{{tbd}}} will have a per-user registry of credentials that can be used for OAuth authentication. There will be one set of credentials for each remote location, but when such credentials don't exist {{{tbd}}} will guide the user, via their web browser, through the process of obtaining them.
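
For illustration only — the location and format below are assumptions, not a decided design — the per-user registry could be a simple file mapping each remote location to its OAuth credentials:

{{{
# Hypothetical layout of ~/.config/tbd/credentials.conf
[launchpad.net]
oauth_token = ...
oauth_token_secret = ...

[archives.yap.com]
oauth_token = ...
oauth_token_secret = ...
}}}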

Once it has the credentials it can make the necessary authenticated API calls to perform the requested operation.

It must understand enough of the response that it will receive from the server to present the results meaningfully to the developer, and also provide useful error messages where possible.

=== Workspaces ===

Any directory containing a .tbd.conf file is considered a workspace and {{{tbd}}} will use the archive specified there when one is not explicitly provided. The file will also be able to store other configuration defaults for that workspace, such as extra archives (e.g. PPAs) that should be included in images built from that workspace.
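
For example (the keys shown are assumptions about what such a file might contain, not a defined format, and the PPA URL is made up):

{{{
# Hypothetical .tbd.conf at the root of a workspace.
archive = https://archives.yap.com/new-netbook-ui
extra_archives = https://launchpad.net/~yap/+archive/experimental-ui
image_flavour = netbook
}}}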

Workspaces are most useful when modifying packages or building images, so they can also cache .deb packages (for image building) and bazaar working trees for the source packages in the archive.

We need to properly lay out bazaar working trees and source/binary packages (resulting from test builds) in workspaces, to keep things sane. Here is an example of how that could look.

 {{{
  .
  |-- workspace root
  |   |-- unity  (bzr working tree)
  |   |   `-- README.txt
  |   |-- gcc    (bzr working tree)
  |   |   `-- HACKING.txt
  |   `-- testbuilds
  |       |-- Packages  (makes it an archive that image builders can use)
  |       |-- unity-N.NN-x86.deb
  |       `-- gcc-N.NN-x86.deb
 }}}

=== Modifying packages ===

 * Get the source for a given package
  We'll use bazaar to fetch the branch associated with the given package and place it in the current directory.
  - How do we deal with branches of work?

 * Test build a given bazaar working tree
  Use bzr-builddeb to build a source package from the tree. This can then be built into a binary package (see the sketch after this list).
  - We need to take into account building in a chroot/with x-deb here.

 * Pushing a package
  Push the given bazaar working tree to its parent branch. How the package gets into the archive depends on the facilities of the target:
  * If it has full building facilities then we request a build of the branch to a source/binary package.
  * If it can only build binaries then we build a source package locally and then upload that, which will trigger a binary build.
  * Otherwise we also upload source and binary packages.
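
Under the hood this could map onto the existing bzr, bzr-builddeb and devscripts tooling, roughly as follows (a sketch under those assumptions, with an illustrative branch location; it is not a committed design):

{{{
# Get the source: fetch the packaging branch with bazaar.
bzr branch lp:ubuntu/unity unity
cd unity
# Make a change, record it in the changelog, then commit.
dch -i "Test build against the new toolchain."
bzr commit -m "Add changelog entry for test build."
# Test build: build a source package from the tree with bzr-builddeb.
bzr builddeb -S
# Push the working tree back to its parent branch.
bzr push :parent
}}}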

=== Image building ===

 * Build an image with the given set of packages
  Here we'll use the [[Specs/M/ARMImageBuildingTool|image building tool]] to generate an image containing the packages specified by the user.

To build an image we'll need to fetch lots of binary packages from the archive, so it makes sense to cache these binary packages locally for further image building. These will be cached in the workspace.

=== Updating a derived archive ===

 * Show delta between an archive and its parent
   - The tool can show the list of modified packages in each category, and then allow the user to choose one to work on (illustrative example below).
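
Purely as an illustration of the kind of output this could produce (the format is not designed yet):

{{{
# Hypothetical output of "tbd show-delta https://archives.yap.com/new-device"
Only in this archive:    yap-ui-tweaks  0.3-0yap1
Newer in this archive:   gcc-4.5        4.5.0-1yap2   (parent has 4.5.0-1)
Newer in parent:         linux          2.6.35-5.6    (this archive has 2.6.35-4.5)
Modified in both:        unity          (needs merge)
}}}
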
Line 36: Line 206:
Mostly the tool will just have command-line arguments and status output.

There will be times when it may need to present the user with a list of options, so we should design how that would look.
Line 51: Line 212:
There is clearly a lot of testing that will be required. We will make a lot of use of unit testing, and also dogfooding by developers.

Each feature should be explicitly tested as it is included though, and integration tests included where possible.
Line 57: Line 218:

== BoF agenda and discussion ==
=== Goals ===
 * Goal: branch subset of ubuntu packages and manage changes
  * experiments - short/adhoc / long running
  * multiple parents: e.g. integrate goodies from multiple PPAs
  * hierarchy: common archive -> project archive -> project variant archive etc.
   * automatic superseding and merging
=== Releasing ===
 * Release/Freezes/ACLs
   * have an automatically merged release branch owned by the release team during the development
     period that gets set to manual mode during freezes.
   * at release another branch is auto created that is not changeable??
=== Getting Started ===
 * getting started: command line tool to branch some archive; by default it starts
   by copying the binaries;
 * managing changes: webtool that visualizes relationship to parent archive:
   * changes in downstream archive
   * changes in upstream archive (merge-o-matic-ish)
 * some changes, like gcc, would require moving a binary-copy archive to a source/rebuild-everything
   mode? is that true? gcc might just have changed for a bug fix/crash etc.
   * tracking build dependencies may be relevant here
 * a way of enforcing version number rules in a particular archive would be good
 * We'll need multiple OAuth credentials (e.g. for launchpad.net, archives.yap.com, etc), so we need to figure out which credentials to use depending on the arguments given.

 * If we go with OAuth for the authentication, it means the user will need a browser to obtain the OAuth credentials, so it will be tricky to run the tools on a server. To work around that we can either copy existing credentials to the server or ask the user for their password and do the OAuth dance ourselves (a la ground control). The latter is a really nasty trick, IMO.

 * For long-running operations, do we want to try to design some sort of progress report or should we rely just on an email sent by vostok when it's done?

 * Maybe the caching of binary packages should be the responsibility of [[Specs/M/ARMImageBuildingTool|ARMImageBuildingTool]]?

 * Is supporting the local archive in a workspace in all the tools going to be a lot of work? Should there be a way for it to be transparently included instead?
