This page is to discuss the merits of different possible mechanisms for the client tools to use.

There are two main questions, possibly without clear answers:

  • New tools, or modifications to existing ones?
  • Structured workflow or freeform?

Let's consider the questions in that order.

New tools?

Why any specific tools are needed

  • There will be a pure bzr interface to do everything, for instance "bzr branch lp:ubuntu/gcc/hardy/updates", but we probably want to provide wrappers to this as well, for a couple of reasons:
    • Provide simple tools that you can learn ignoring all the power of bzr.
    • You could hide bzr, so that you don't have to know you are using it.
    • It may be possible to make the wrappers more "helpful"
      • I'm not sure I really mean that one: I can't come up with an example that isn't just a command-line difference, though that may be enough, e.g. "get-source gcc hardy-updates" for the above.
  • We already have tools that do some of the steps, "debcheckout", "debcommit", "debrelease". It would be possible to make them do whatever is necessary to support the new workflow.
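To make the first point concrete, here is a minimal sketch of what a "get-source" wrapper could look like. The command name and the lp: URL layout are taken from the examples above; everything else (the function names, the pocket-splitting logic) is made up for illustration, and the bzr command is echoed rather than run:

```shell
#!/bin/sh
# Hypothetical "get-source" wrapper around bzr.  A real tool would
# exec the bzr command instead of echoing it.
build_url() {
    pkg="$1"
    pocket="$2"              # e.g. "hardy-updates"
    series="${pocket%%-*}"   # -> "hardy"
    component="${pocket#*-}" # -> "updates"
    echo "lp:ubuntu/$pkg/$series/$component"
}

get_source() {
    echo "bzr branch $(build_url "$1" "$2")"
}

get_source gcc hardy-updates   # prints: bzr branch lp:ubuntu/gcc/hardy/updates
```

This is exactly the kind of wrapper that only papers over a command-line difference, which is the point of the bullet above: the value is in the simpler interface, not in new capability.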

Advantages of using the existing tools.

  • Familiar to developers. They don't have to work against muscle memory.
  • It feels like we are changing things less.
  • Existing documentation may still be relevant.
  • We may end up with less of a process delta from Debian. While forcing people to learn two workflows and two sets of tools would be bad, having the tools behave differently may be almost as disruptive, just more subtly.

Disadvantages of using the existing tools.

  • Any behaviour changes can cause problems.
    • e.g. this would change the branch (and possibly the VCS) that you get back from debcheckout, possibly leading to extra work. You would need to remember to use the compatibility mode, and think each time about whether that was what you wanted.
  • Options to get the old behaviour would mean more code paths.
  • We would probably carry a permanent diff to Debian.
  • Some people running Ubuntu contribute to Debian as well; hampering them in that work would be bad.
  • Any behavioural changes may lead to existing documentation being wrong, with no easy way for the user to detect this.
    • A simple case would be documentation telling the user to use debcheckout when they are on hardy and don't have the version that understands the new layout. While we can provide backports/PPAs, the user must enable them, so it fails open. If there is a new command then it either works, or it doesn't and they need to find the new package; this fails closed.
  • We must maintain this over time. We don't know how the existing tools may evolve, so while the interface may fit now, it may not always.

Problems with either approach.

  • Users on old releases will need to use backports/PPAs. While they are probably more than capable of this if they are after the source, it's still a pain.

The third way

  • As debcheckout gets its information from the Packages files, we could ensure that every Packages file contains the right information, so that debcheckout works seamlessly. However, there are some things that a new tool could do, such as getting you the package from dapper-proposed without having it in your sources.list, that debcheckout can't. It would also increase the size of the Packages files quite a bit (and we would want to keep the existing entries prefixed with Original- or similar to provide the compatibility mode).
  • This approach is even harder for the other tools, as there are a couple of very specific things that are done for the new scheme (such as setting a particular tag when uploading) that mean code changes would be needed.
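For illustration, a stanza in the Packages file might then carry both the branch for the new scheme and the original Debian field preserved under an Original- prefix for the compatibility mode. The Vcs-* field names are the ones debcheckout already reads; this particular package entry and its URLs are made up:

```
Package: gcc
Vcs-Bzr: lp:ubuntu/hardy/gcc
Original-Vcs-Svn: svn://svn.debian.org/gcc/trunk
```

Every such pair of fields would be repeated for every package, which is where the size increase mentioned above comes from.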

I therefore don't think this scheme is workable.

Conclusion

  • The above discussion leads me to think that modifying the existing tools isn't the way to go, so new wrappers should be created to do what we want.
  • We should ensure that any new tools are available for all supported releases in a timely manner, and that the documentation (both integrated help and on the wiki etc.) is very high quality, as new tools mean learning new things.

--RobertCollins: I'm convinced. I will note that your fail-closed argument is flawed unless the new tool is perfect first time; subsequent releases of the distro will always add the opportunity for skew. I think what is really needed is a 'this tool can operate on the development trunk safely' flag/check/something if you want to cause a fail-closed situation consistently and in an ongoing manner. But I don't actually think that's important: it's well known that e.g. debootstrap and other development tools always need to be from the trunk's toolchain.

Structured workflow?

Structured is probably not a very good term here, but this basically means how many decisions we make for the user in the tools.

If we make some decisions then we can ensure that the user is using the tool in an efficient way, and make it easier to give support online.

It is probably best to give an example of what this may look like:

Example workflow

  • User requests the intrepid gcc package.
  • The tool sets up a shared repo in ~/ubuntu-dev/gcc/ if it's not already there.
  • The tool branches the desired package to ~/ubuntu-dev/gcc/intrepid/

This ensures that the user has a shared repository set up, and so they will download a minimal amount of data. It could also be extended to take a branch name, and set up a personal branch on launchpad for the change, so that all modifications to a package are available as branches on launchpad.
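A minimal sketch of what the tool could run for this workflow. The directory layout is taken from the example above, the lp: URL form is an assumption, and the bzr commands are echoed so the sketch can be read (and tried) without bzr installed:

```shell
#!/bin/sh
# Sketch of the example workflow: create the shared repository once,
# then branch the requested series into it.
checkout_package() {
    pkg="$1"; series="$2"
    repo="$HOME/ubuntu-dev/$pkg"
    # Shared repository: branches under it share storage, so later
    # branches of the same package download much less data.
    [ -d "$repo" ] || echo "bzr init-repo $repo"
    echo "bzr branch lp:ubuntu/$series/$pkg $repo/$series"
}

checkout_package gcc intrepid
```

The shared-repository step is the part the user gets for free: they never have to learn what a shared repository is to benefit from it.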

It also means that it is very easy to help someone on IRC, as you know exactly where things will end up for them.

It also means that commands that work with multiple branches can be written, or at least given a cleaner UI, as they can assume the locations of specific branches that they need to use. You can have something like an "Update all branches" command that "just works," as it stands a chance of knowing where all of the branches are.
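Because the layout is predictable, an "update all branches" command reduces to a glob. This sketch assumes the <root>/<package>/<series>/ layout from the example above and echoes the bzr commands it would run:

```shell
#!/bin/sh
# Sketch of "update all branches": every branch lives at
# <root>/<package>/<series>/, so a glob finds them all.
update_all() {
    root="${1:-$HOME/ubuntu-dev}"
    for branch in "$root"/*/*/; do
        [ -d "$branch" ] || continue   # glob may not have matched anything
        echo "bzr pull -d $branch"
    done
}
```

Without an agreed layout this command cannot exist at all; with one, it is a few lines.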

Disadvantages of a structured workflow

  • It may cause developers to feel they are constrained.
  • It has to support all use cases, while still remaining structured.

Counterpoints

  • While developers may feel constrained, many will set up a similar arrangement for their work anyway, and if they can all use the same one then you get all of the benefits with little cost.
  • The bzr command line would still be available for those who want full control. Are there people who want full control without using the bzr client directly? It's possible that the wrappers could provide a higher-level command set.
  • It's easier for a new user to get started: they don't have to learn all of the concepts, and can use higher-level commands and trust that everything is working efficiently for them.

Providing some flexibility

  • Instead of making everything fixed, you could start to make things configurable in a way that means new users get most decisions made for them, and as they learn more they can tweak things as they like.
  • One of the simplest things would be making the root directory configurable, so that a user can move it outside of /home, for instance. It's a small thing, but it means the tools can support users who use /devel or similar because they have a small /home. It does mean that you can't assume where a package appears, but you can have a guess, and a user should know how to translate if they moved it.
  • It would be possible to have a second set of commands, or a second tool, that gives more flexibility, but allows you to do the same operations, specifying extra paths to branches when needed. However, this is more code to maintain, more to document, and may provide little over using the bzr commands directly.
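The configurable root mentioned above could be as small as an environment variable with a default; the variable name UBUNTU_DEV_ROOT is made up for this sketch:

```shell
#!/bin/sh
# Default root with an opt-out: new users never set anything, while a
# user with a small /home can point the tools at /devel or similar.
dev_root() {
    echo "${UBUNTU_DEV_ROOT:-$HOME/ubuntu-dev}"
}
```

Tools that want to guess where a package lives can call dev_root instead of hard-coding the path, which keeps the "you can have a guess" property while allowing the move.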

Conclusion

I don't have one yet...

Meeting Discussion

There was some discussion of these issues at a Foundations team meeting. You can find the log of the discussion at


DistributedDevelopment/ClientToolsDiscussion (last edited 2009-04-02 09:33:06 by i59F72099)