ArchiveAdministration

Differences between revisions 2 and 200 (spanning 198 versions)
Revision 2 as of 2006-05-11 08:37:25
Size: 14149
Editor: quest
Comment: add some contents
Revision 200 as of 2011-12-02 14:04:48
Size: 53066
Editor: jdstrand
Comment: add some text on native syncs and their impact on NEW
In each hunk below, lines from revision 2 (deletions) are shown first, followed by the corresponding lines from revision 200 (additions).
Line 2: Line 2:
[[TableOfContents]]

= Archive Administration =


This page details the processes for the [https://launchpad.net/people/ubuntu-archive Ubuntu Package Archive Administrators] team, and hopefully provides a decent guide for new members of the team.
<<TableOfContents>>

This page details the processes for the [[https://launchpad.net/~ubuntu-archive|Ubuntu Package Archive Administrators]] team, and hopefully provides a decent guide for new members of the team.
Line 10: Line 8:
The requests can be found at [https://launchpad.net/people/ubuntu-archive/+subscribedbugs].
The requests can be found at [[https://launchpad.net/~ubuntu-archive/+subscribedbugs]].
Line 14: Line 12:
== Logging In ==

All administration is performed on `drescher.ubuntu.com`, accounts are provided to members of the team;. Changes may be made only as the `lp_archive` user, to which you'll have `sudo` access.
= Logging In =

All administration is performed on `cocoplum.canonical.com`; accounts are provided to members of the team. Changes can only be made as the `lp_archive` user, to which you'll have `sudo` access.
Line 20: Line 18:
$ ssh drescher
$ ssh cocoplum
Line 26: Line 24:
== NEW Processing ==
'''IMPORTANT:''' This document uses `$SUDO_USER` in several places. If your `cocoplum.canonical.com` uid is not the same as your Launchpad id, be sure to use your Launchpad id when running Launchpad related scripts.

= Client-side tools =

We are gradually transitioning towards client-side administration as the necessary facilities become available via the Launchpad API. To get hold of these tools:

 {{{
$ bzr get lp:ubuntu-archive-tools
}}}

Some of these tools still rely on `ssh` access to `cocoplum` for some operations, so the presence of a client-side tool unfortunately does not yet mean that community archive administrators can use it. It's a start.

At the moment, this transition tends to result in having two terminal windows open, one with a shell on `cocoplum` and one on your local machine. Sorry.

If your username on your local machine does not match your username on `cocoplum`, remember to edit `remote_host` in `synclib.py`.

= NEW Processing =
Line 32: Line 46:
$ queue info "*"
}}}

This is the `NEW` queue for `ubuntu/dapper` by default; you can change the queue with `-Q`, the distro with `-D` and the release using `-R`. To list the `UNAPPROVED` queue for `ubuntu/breezy`, for example:
 {{{
$ queue -R breezy -Q unapproved info "*"
$ queue info
}}}

This is the `NEW` queue for `ubuntu/precise` by default; you can change the queue with `-Q`, the distro with `-D` and the release using `-s`. To list the `UNAPPROVED` queue for `ubuntu/oneiric`, for example:
 {{{
$ queue -s oneiric -Q unapproved info
Line 42: Line 56:
You can give a string argument after info which is interpreted as a substring match filter.
Line 49: Line 65:
$ queue info "*"
$ queue info
Line 67: Line 83:
New sources need to be checked to make sure they're well packaged, the licence details are correct and permissible for us to redistribute, etc. You can fetch a package from the queue for manual checking, be sure to do this in a directory of your own:
 {{{
$ mkdir /tmp/$USER
$ cd /tmp/$USER
New sources need to be checked to make sure they're well packaged, the licence details are correct and permissible for us to redistribute, etc. See [[PackagingGuide/Basic#NewPackages]], [[PackagingGuide/Basic#Copyright]] and [[http://ftp-master.debian.org/REJECT-FAQ.html|Debian's Reject FAQ]]. You can fetch a package from the queue for manual checking, be sure to do this in a directory of your own:
 {{{
$ mkdir /tmp/$SUDO_USER
$ cd /tmp/$SUDO_USER
Line 75: Line 91:
The source is now in the current directory and ready for checking. Any problems should result in the rejection of the package:
The source is now in the current directory and ready for checking. Any problems should result in the rejection of the package (also send a mail to the uploader explaining the reason and Cc ubuntu-archive@lists.ubuntu.com):
Line 84: Line 100:
$ queue override guasssum source universe/
}}}

Where the last argument is `COMPONENT/SECTION`, leaving any part blank to leave it unchanged.
$ queue override -c universe source ubuntustudio-menu
}}}

Where the override can be -c <component> and/or -x <section>
Line 91: Line 107:
$ queue override language-pack-kde-co binary universe//
}}}

Where the last argument is `COMPONENT/SECTION/PRIORITY`.
$ queue override -c universe binary ubuntustudio-menu
}}}

Where the override can be -c <component>, -x <section> and/or -p <priority>
Line 98: Line 114:
Currently a special case of this are the kernel packages, which change package names with each ABI update and build many distinct binary packages in different sections. A helper tool has been written to apply overrides to the queue based on the existing packages in hardy:
 {{{
$ kernel-overrides [-s <sourcepackage>] <oldabi> <newabi>
}}}

Binary packages are not often rejected (they go into a black hole with no automatic notifications), so do check that the .deb contains files, run lintian on it, and file bugs when things are broken. The binaries also need to be put into universe etc. as appropriate even if the source is already there.
Line 103: Line 126:
In the case of language packs, add `-M` to not spam the changes lists with the new packages.

== Anastacia and Changing Overrides ==

Sadly packages just don't stay where they're put. SeedManagement details how packages get chosen for the `main` component, the various meta packages and prescence on the CD. What it doesn't point out is that packages which fall out of the seeding process are destined for the `universe` component.

Every hour or so, the difference between what the seeds expect to be true and what the archive actually believes is evaluated by the `anastacia` tool, and the output placed at:

 http://people.ubuntu.com/~cjwatson/anastacia.txt
In the case of language packs, add `-M` to not spam the changes lists with the new packages.  You can also use ''queue accept binary-name'' which will accept it for all architectures.
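
For example (package name illustrative; the placement of `-M` is assumed to match the other `queue` options shown above):
 {{{
$ queue accept -M language-pack-kde-co
}}}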

= Component Mismatches and Changing Overrides =

Sadly packages just don't stay where they're put. SeedManagement details how packages get chosen for the `main` component, the various meta packages and presence on the CD. What it doesn't point out is that packages which fall out of the seeding process are destined for the `universe` component.

Every hour or so, the difference between what the seeds expect to be true and what the archive actually believes is evaluated by the `component-mismatches` tool, and the output placed at:

 http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt

 http://people.ubuntu.com/~ubuntu-archive/component-mismatches.svg ([[http://people.ubuntu.com/~ubuntu-archive/component-mismatches.dot|dot source]])
Line 127: Line 152:
 Binary packages in `main` that are no longer seeded or dependend on, but the source is still to remain in `main` -- usually because another binary saves it. Often these tend to be `-dev` or `-dbg` packages and need to be seeded, rather than demoted; but not always.
 Binary packages in `main` that are no longer seeded or depended on, but the source is still to remain in `main` -- usually because another binary saves it. Often these tend to be `-dev` or `-dbg` packages and need to be seeded, rather than demoted; but not always.
Line 153: Line 178:
== Removals ==
= Removals =

== Manual ==
Line 157: Line 184:
$ remove-package.py -m "($USER) reason for removal" konserve
$ lp-remove-package.py -u $SUDO_USER -m "reason for removal" konserve
Line 162: Line 189:
$ remove-package.py -m "($USER) NBS" -b nm-applet
$ lp-remove-package.py -u $SUDO_USER -m "NBS" -b konserve
Line 171: Line 198:
== Syncs ==

Syncing packages with Debian is a reasonably common request, and currently annoyingly complicated to do. The tools help you prepare an upload, which you'll still need to check and put into incoming. The following recipe takes away some of the pain:

First change into the `~/syncs` directory and make sure the Debian sources lists are up to date:
== Blacklisting ==

If you remove source packages which are in Debian, and they are not meant to ever come back, add them to the blacklist at `/srv/launchpad.net/dak/sync-blacklist.txt`, document the reason, and `bzr commit` the change with an appropriate commit message. This will avoid getting the packages back into source NEW in the next round of autosyncs from Debian.
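
A minimal sketch of that edit (run on cocoplum; package name and commit message are illustrative):
 {{{
$ vi /srv/launchpad.net/dak/sync-blacklist.txt    # add the package name plus a comment explaining the removal
$ bzr commit -m "blacklist foo: removed from Ubuntu, superseded by bar" /srv/launchpad.net/dak/sync-blacklist.txt
}}}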

== Removals in Debian ==

From time to time we should remove packages which were removed in Debian, to avoid accumulating cruft and unmaintained packages. This client-side tool (from `ubuntu-archive-tools`) will interactively go through the removals and ask for confirmation:

 {{{
$ ./process-removals.py
}}}

Please note that we do need to keep some packages which were removed
in Debian (e. g. "firefox", since we did not do the "firefox" →
"iceweasel" renaming).

= Syncs =

Syncing packages with Debian is a reasonably common request. The tools help you prepare an upload, which you'll still need to check and put into incoming. The `sync-helper.py` client-side tool in `ubuntu-archive-tools` deals with most of the work.

First, change into the `~/syncs` directory and make sure the Debian sources lists are up to date:
Line 178: Line 221:
lp_archive@...$ wget -O- http://ftp.uk.debian.org/debian/dists/unstable/main/source/Sources.gz | gunzip > Debian_unstable_main_Sources
lp_archive@...$ wget -O- http://ftp.uk.debian.org/debian/dists/experimental/main/source/Sources.gz | gunzip > Debian_experimental_main_Sources
}}}

Now prepare the source packages to be uploaded; elmo's tool to do this is almost always newer than the Launchpad one, and tends to actually work:
 {{{
lp_archive@...$ ~james/launchpad/scripts/ftpmaster-tools/sync-source.py -b LPUID srcpkg
}}}

Replace `LPUID` with the Launchpad username of whoever requested the sync, obtained from the bug, and `srcpkg` with the names of the sources they asked for.

This will fail if there are any Ubuntu changes, make sure they've asked to override them, and use `-f` to override them, e.g.
 {{{
lp_archive@...$ ~james/launchpad/scripts/ftpmaster-tools/sync-source.py -b keybuk -f dpkg
}}}

You'll now have a bunch of source packages in the `~/syncs` directory of the `lp_archive` user which need uploading. To do that, you have to switch to the `lp_queue` user; the `lp_archive` user has `sudo` permission to do this:
 {{{
lp_archive@...$ sudo -u lp_queue -i
}}}

Make a unique directory name under `~/sync-queue/incoming` and copy your sources into that:
 {{{
lp_queue@...$ mkdir ~/sync-queue/incoming/$USER-`date +%Y%m%d`
lp_queue@...$ cp ~lp_archive/syncs/... !$
}}}

And then process them; this will move the directory into `~/sync-queue/accepted`, or if there's a problem, `~/sync-queue/failed`:
 {{{
lp_queue@...$ ~/sync-queue/process-incoming.sh
lp_queue@...$ exit
lp_archive@...$
}}}

== Useful tools ==
lp_archive@...$ update-sources
}}}

Then, run `sync-helper.py` with some arbitrary filename as a parameter, such as this:{{{
$ ./sync-helper x
}}}

Review the bugs in turn, making sure that the sync request is ACK'd (or requested by) someone who can upload the package in question; these people are marked with a `(*)` in `sync-helper.py`'s display. If past FeatureFreeze, check the changelog to make sure the new version has only bug fixes and not new features. If they've asked to discard Ubuntu changes, use the `ubuntu-changes` script on cocoplum to show the Ubuntu changelog entries since the last branchpoint from Debian, and confirm that what they've described in the sync bug matches the outstanding Ubuntu changes.

Now, run `mass-sync.py` (this requires the ability to ssh to cocoplum), redirecting its standard input from the filename you gave to `sync-helper.py`:{{{
$ ./mass-sync.py <x
}}}

(The first time you run `mass-sync.py`, it will need to authenticate to Launchpad, and will fail if its standard input is redirected, so run it without redirection the first time round and then Ctrl-C it.)

You'll now have a bunch of source packages in the `~/syncs` directory of the `lp_archive` user which need uploading. To do that, just run:{{{
$ ./mass-sync.py --flush-syncs
}}}

To sync all the updates available in Debian:

 {{{
sync-source.py -a
NOMAILS=-M flush-syncs
}}}

This does not import new packages from Debian that were not previously present in Ubuntu. To get a list of new packages available for sync, use the command
 {{{
new-source [contrib|non-free]
 }}}

which gives a list of packages that can be fed into `sync-source.py` on the commandline after review.
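
For example (package names illustrative; the requester id passed with `-b` is assumed to be your own Launchpad id), the review-then-sync round trip might look like:
 {{{
lp_archive@...$ new-source > /tmp/$SUDO_USER/new-from-debian
lp_archive@...$ vi /tmp/$SUDO_USER/new-from-debian     # review the candidate list
lp_archive@...$ sync-source.py -b $SUDO_USER foo bar
}}}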

To sync from Debian incoming, wget the sources:
 {{{
apt-ftparchive sources ./ > Debian_incoming_main_Sources
sync-source.py -S incoming <package>
}}}

Backports work much the same way; there is a client-side tool in `ubuntu-archive-tools` called `backport-helper.py`, which you can use the same way as `sync-helper.py`. `./mass-sync.py --flush-backports` works the same way as `./mass-sync.py --flush-syncs`. Backports do not require any Sources files. Note that backporting packages which did not exist in the previous version will end up in NEW which defaults to main, so universe packages need to have that override set.

Backports should reference the Launchpad username of the backporter who approved the backport, not the user requesting the backport.
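
A sketch of the backport run, mirroring the sync workflow above (the filename `x` is arbitrary):
 {{{
$ ./backport-helper.py x
$ ./mass-sync.py <x
$ ./mass-sync.py --flush-backports
}}}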

= NBS =

Sometimes binary packages are not built by any source (NBS) any more. This usually happens with library SONAME changes, package renamings, etc. Those need to be removed from the archive from time to time, and right before a release, to ensure that the entire archive can be rebuilt by current sources.

Such packages are detected by `archive-cruft-check.py /srv/launchpad.net/ubuntu-archive/`. Apart from NBS packages it also prints out 'ASBA' ("Arch: all" superseded by "Arch: any"), but they are irrelevant for day-to-day archive administration. This tool does not check for reverse dependencies, though, so you should use `checkrdepends -b` for checking if it is safe to actually remove NBS packages from the archive.
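
For example (package name illustrative; only remove once `checkrdepends` shows no remaining reverse dependencies), a cruft-hunting session might look like:
 {{{
$ archive-cruft-check.py /srv/launchpad.net/ubuntu-archive/
$ checkrdepends -s precise -b nm-applet
$ lp-remove-package.py -u $SUDO_USER -m "NBS" -b nm-applet
}}}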

Look at the [[http://people.canonical.com/~ubuntu-archive/nbs.html|hourly generated NBS report]] which shows all NBS packages, their reverse dependencies, and a copy-and-paste-able command to clean up the "safe" ones.

The rest needs to be taken care of by developers, by doing transition uploads for library SONAME changes, updating build dependencies, etc. The remaining files will list all the packages which still need the package in question.

Please refrain from removing NBS kernel packages for old ABIs until debian-installer and the seeds have been updated, otherwise daily builds of alternate and server CDs will be made uninstallable.

= Adjusting Launchpad ACLs =

'''NOTE''': due to [[https://bugs.launchpad.net/soyuz/+bug/562451|bug #562451]], archive administrators cannot currently adjust Launchpad ACLs.

The new ArchiveReorganisation brings finer grained access controls than what components can provide. Launchpad ACLs allow individuals and teams to have upload or admin rights on certain packages, referred to as sets. In general, an archive administrator can process requests to create and delete package sets, as well as add or remove packages from package sets. Archive administrators should not add individuals or teams to package sets without explicit TechnicalBoard approval.

== Package sets ==
Packages can be added to or removed from package sets using the ```edit_acl.py``` tool from the ubuntu-archive-tools bzr branch.

To list the packages currently in the package set ```mozilla```:{{{
$ ./edit_acl.py query -P mozilla -S maverick
adblock-plus
all-in-one-sidebar
bindwood
...
}}}

To add a package to the ```mozilla``` package set:{{{
$ ./edit_acl.py -P mozilla -S precise -s foo -s bar -s baz add
}}}

To remove a package from the ```mozilla``` package set:{{{
$ ./edit_acl.py -P mozilla -S precise -s foo delete
}}}

For more information, please see ```edit_acl.py --help```.

= Useful tools =
Line 216: Line 307:
`madison-lite` examines the current state of the archive for a given binary/source package:
== Archive state checks ==

`madison-lite` (aliased to `m`) examines the current state of the archive for a given binary/source package:
Line 270: Line 363:
$ checkrdepends -b nm-applet dapper
$ checkrdepends -s precise -b nm-applet
Line 275: Line 368:
$ checkrdepends network-manager dapper
}}}
$ checkrdepends -s precise network-manager
}}}

== NEW handling ==

A lot of churn in NEW comes from Debian imports. Since they already went through NEW in Debian, we should not waste too much time on it, so there are some tools.

 * The first thing you need to handle are native syncs. These are syncs performed via URLs like https://launchpad.net/ubuntu/precise/+localpackagediffs or via the LP API. You can recognize these in the LP queue pages because they have '(sync)' in the name. On cocoplum, they show up as 'X-' (as opposed to 'S-' like normal source uploads). There are no changes files for these, so they cannot be fetched via `q fetch` (though old versions of the tools used to fake up a changes file so it would work). As such, you must clear out any native syncs before running the below commands which rely on `q fetch`. To verify a native sync:
  0. Download the source package from Debian (eg, via `dget` or `apt-get source <pkg>=<version>`)
  0. Download the imported dsc file from the Debian project in LP (eg https://launchpad.net/debian/sid/+source/pxe-kexec)
  0. Compare the dsc file from Debian and from LP. Since both should be signed, if they are identical, then you know the package hasn't been tampered with. You can also compare the full source package from Debian and LP if desired.
 Once verified, accept it normally via LP or `q accept <srcpkg>`
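 A minimal sketch of the download-and-compare steps (package name, version and pool path are illustrative; in practice take the URLs from the LP pages above):
 {{{
 dget http://ftp.debian.org/debian/pool/main/p/pxe-kexec/pxe-kexec_0.2.4-2.dsc
 diff pxe-kexec_0.2.4-2.dsc pxe-kexec_0.2.4-2.lp.dsc    # second file being the dsc saved from Launchpad
 }}}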

 * There are often duplicate source NEWs in the queue if the auto-syncer runs twice in a row without clearing the imported sources from NEW. These can be weeded out with:

 {{{
 new-remove-duplicates > /tmp/$SUDO_USER/cmds
sh /tmp/$SUDO_USER/cmds }}}

 (Please eyeball `cmds` before feeding it to the queue).

 * `new-binary-debian-universe` creates queue commands for overriding and accepting all binary NEW packages whose source was imported from Debian and is in universe. While it runs, it `lintian`s all the imported .debs. Watch the output and note all particularly worrisome issues. Check the `cmds` file for obvious errors, and when you are happy, execute it with `sh cmds`.

 Warning: This command will fail when there are duplicates in the queue. Clean them up with `new-remove-duplicates` first.
 {{{
 new-binary-debian-universe > /tmp/$SUDO_USER/cmds
vi /tmp/$SUDO_USER/cmds
sh /tmp/$SUDO_USER/cmds }}}

 * For bulk processing of source NEW imported from Debian you should do something like:

 {{{
 cd /tmp/$SUDO_USER/
q fetch
for i in `ls *_source.changes| grep -v ubuntu`; do grep -q 'Changed-By: Ubuntu Archive Auto-Sync' $i || continue; egrep -q ' contrib|non-free' $i && continue ; echo "override source -c universe ${i%%_*}"; echo "accept ${i%%_*}"; done > cmds }}}

 Then go over the cmds list, verify on http://packages.qa.debian.org that all the packages mentioned are indeed in Debian main (and not in non-free, for example), and again feed it to the queue with `q -e -f cmds`.

 * When unpacking a source package for source NEW checks, you should run `suspicious-source`. This is basically a `find -type f` which ignores all files with a known-safe name (such as `*.c`, `configure`, `*.glade`). Every file that it outputs should be checked for being the preferred form of modification, as required by the GPL. This makes it easier to spot PDFs and other binary-only files that are not accompanied by a source. The `licensecheck` command is also useful for verifying the license status of source packages.
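 For instance, from the unpacked source tree (directory name illustrative):
 {{{
 cd pxe-kexec-0.2.4/
 suspicious-source
 licensecheck --recursive .
 }}}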

== Moving Packages to Updates ==

=== Standard case ===
Packages in -proposed can be moved to -updates once they are approved by someone from sru-verification, and have passed the minimum aging period of '''7 days'''.

 {{{
copy-package.py -vbs maverick-proposed --to-suite=maverick-updates kdebase
}}}

=== Special case: DDTP updates ===

 1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/`.)
 1. Copy `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-proposed/`''component''`/i18n/*` to the corresponding -updates directory, for all relevant components. This needs to happen as user `lp_publish` (see the sketch after this list).
 1. Reenable publisher cron job.
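
A sketch of the copy in step 2, for a hypothetical precise/main language pack update (the `sudo` invocation is an assumption, mirroring the `lp_archive` access pattern above):
 {{{
$ sudo -u lp_publish -i
lp_publish@...$ cp -a /srv/launchpad.net/ubuntu-archive/ubuntu/dists/precise-proposed/main/i18n/* \
      /srv/launchpad.net/ubuntu-archive/ubuntu/dists/precise-updates/main/i18n/
}}}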

=== Special case: debian-installer updates ===

 1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/`.)
 1. As user `lp_publish`, copy `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-proposed/main/installer-`''architecture''`/`''version'' to the corresponding -updates directory, for all architectures and for the version of `debian-installer` being copied (see the sketch after this list).
 1. As user `lp_publish`, update `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-updates/main/installer-`''architecture''`/current` to point to the version of `debian-installer` being copied, for all architectures.
 1. As user `lp_publish`, make sure that at most three versions of the installer remain in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-updates/main/installer-`''architecture'', for all architectures.
 1. Reenable publisher cron job.
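
A sketch of steps 2 and 3, for a hypothetical precise/amd64 installer version 20101020ubuntu100 (version is illustrative, and `current` is assumed to be a symlink):
 {{{
lp_publish@...$ cd /srv/launchpad.net/ubuntu-archive/ubuntu/dists
lp_publish@...$ cp -a precise-proposed/main/installer-amd64/20101020ubuntu100 precise-updates/main/installer-amd64/
lp_publish@...$ ln -sfn 20101020ubuntu100 precise-updates/main/installer-amd64/current
}}}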

=== Special case: update-manager updates ===

 1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/`.)
 1. As user `lp_publish`, copy `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-proposed/main/dist-upgrader-all/`''version'' to the corresponding -updates directory, for the version of `update-manager` being copied.
 1. As user `lp_publish`, update `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-updates/main/dist-upgrader-all/current` to point to the version of `update-manager` being copied.
 1. As user `lp_publish`, make sure that at most three versions of the upgrader remain in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-updates/main/dist-upgrader-all`.
 1. Reenable publisher cron job.

=== Resources ===
 * [[http://people.ubuntu.com/~ubuntu-archive/pending-sru.html|Currently pending SRUs]]
 * Verified bugs for [[https://bugs.launchpad.net/ubuntu/intrepid/+bugs?field.tag=verification-done|intrepid]], [[https://bugs.launchpad.net/ubuntu/hardy/+bugs?field.tag=verification-done|hardy]], [[https://bugs.launchpad.net/ubuntu/gutsy/+bugs?field.tag=verification-done|gutsy]], [[https://bugs.launchpad.net/ubuntu/dapper/+bugs?field.tag=verification-done|dapper]]

== Publishing security uploads from the ubuntu-security private PPA ==

Security uploads in Soyuz are first built, published, and tested in the Security Team's private PPA. To unembargo them, we use a tool that re-publishes them to the primary archive. Note that this should never be done without an explicit request from a member of the Security Team.

To publish `nasm` from the `ubuntu-security` PPA to the `-security` pocket of `ubuntu`'s `hardy` release, you would do the following:

 {{{
LPCONFIG=production /srv/launchpad.net/codelines/soyuz-production/scripts/ftpmaster-tools/unembargo-package.py -p ubuntu-security -d ubuntu -s hardy-security nasm
}}}

== Publishing packages from the ubuntu-mozilla-security public PPA ==
Mozilla (ie, firefox and thunderbird) uploads in Soyuz are first built, published, and tested in the Mozilla Security Team's public PPA. To publish them into the main archive, use copy-package.py. Note that pocket copies to the security pocket should never be done without an explicit request from a member of the Ubuntu Security Team (Mozilla Security Team is not enough), and copies to the proposed pocket should not be done without an explicit request from a member of the SRU Team. Keep in mind that `firefox` 2.0 and later (ie `hardy` and later) will always have a corresponding `xulrunner` package that needs copying.

To publish `firefox-3.0` version `3.0.7+nobinonly-0ubuntu0.8.04.1` and `xulrunner-1.9` version `1.9.0.7+nobinonly-0ubuntu0.8.04.1` from the `ubuntu-mozilla-security` PPA to the `-security` pocket of `ubuntu`'s `hardy` release, you would do the following:
 {{{
$ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 3.0.7+nobinonly-0ubuntu0.8.04.1 firefox-3.0
$ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 1.9.0.7+nobinonly-0ubuntu0.8.04.1 xulrunner-1.9
$ firefox-overrides -S hardy-security
}}}

'''IMPORTANT:''' Due to current limitations of Launchpad, all packages copied from a PPA into the archive go to 'main'. For source packages with binaries wholly in universe (eg, `firefox` and `xulrunner` in 8.04 LTS and later, `seamonkey` everywhere, or `firefox-3.5` and `xulrunner-1.9.1` in 9.04), you can use `change-override.py` like normal to move them to universe (eg ```change-override.py -c universe -s lucid-security -S seamonkey```). For packages with some binaries in main and some in universe, you can use the `firefox-overrides` script on ```cocoplum```. The script currently knows how to fix `firefox` in `dapper`, `firefox-3.0` in `hardy` through `jaunty`, `xulrunner-1.9` in `hardy` through `natty`, `firefox-3.5` in `karmic` through `natty`, `xulrunner-1.9.1` in `karmic` through `natty`, and `xulrunner-1.9.2` in `hardy`, `lucid` through `natty`. To get a list of binaries that should be demoted, you can use the ```find-bin-overrides``` script from lp:ubuntu-qa-tools with ```find-bin-overrides <examined-pocket> <target-pocket> <releases> <srcpkg 1> <srcpkg 2>```. Eg:
 {{{
<user@your local machine>$ $UQT/security-tools/find-bin-overrides release security maverick cups
# maverick/cups
change-override.py -c universe -s maverick-security cups-ppdc cupsddk
lp_archive@cocoplum:~$ change-override.py -c universe -s maverick-security cups-ppdc cupsddk
}}}

== Copying PPA kernels to proposed ==
With the new [[Kernel/StableReleaseCadence|StableReleaseCadence]], kernels are built in the [[https://launchpad.net/~canonical-kernel-team/+archive/ppa|kernel team PPA]] and then pocket copied to the proposed pocket in the main archive once they are ACKd (process TBD).

The [[http://people.canonical.com/~ubuntu-archive/pending-sru.html#kernelppa|Pending SRU report]] has a section for the kernel PPA which shows all newer kernels in the PPA, provides clickable links to open all bugs (with separate CVE bugs), and copy&pasteable `copy-proposed-kernel.py` and `sru-accept.py` commands (both in [[https://code.launchpad.net/~ubuntu-archive/ubuntu-archive-tools/trunk|lp:ubuntu-archive-tools]]).
To publish `linux` version `2.6.32-27.49` from the `canonical-kernel-team` PPA to the `-proposed` pocket of `ubuntu`'s `lucid` release, you would do the following:

 * Click on the "open bugs" button and check that the bugs are in a reasonable state, i. e. they target the right source package (`linux` vs. `linux-ec2` etc.), are fixed in the development release (or at least upstream), and that the changes are limited to bug fixes, and are in general within the boundaries of the StableReleaseUpdates policy. They should ideally have a task for the stable release the SRU targets. Note that this button does not open CVE bugs, as they don't get verification or other tracking (there is a separate button for opening them, if desired).

 * Find the tracking bug by either traversing through the list of bugs, or opening the .changes file and looking at the top (the release bug is pointed out there). Ensure that there is a proper stable task, and that the main (development release) task is invalid.

 * Run the `copy-proposed-kernel.py` command to copy it to -proposed.

 However, with some flavours like -ec2 or armel kernels, which are mostly just a merge with the main `linux` kernel, it is too much overhead to add -ec2 tasks to all the bugs.

 * Due to current limitations of Launchpad, packages copied from a PPA into the archive sometimes go to 'universe'. As a result, please '''verify the overrides for all packages''' copied to -proposed, otherwise these packages might become uninstallable when they are ultimately copied to -updates/-security. ```find-bin-overrides``` from lp:ubuntu-qa-tools can help with this. You use it like so: `find-bin-overrides <pocket to compare to> <target pocket> <ubuntu release> <source package>=<version in pocket to compare to>,<old abi>,<new abi>`. Eg, suppose there is a new kernel in hardy-proposed with a new ABI of 2.6.24-30 and you want to get a list of overrides for the new kernel based on the old ABI of 2.6.24-16 in the 2.6.24.12-16.34 version from the release pocket of hardy. For the `linux-restricted-modules-2.6.24 source package`, you might use: {{{
$ find-bin-overrides release proposed hardy \
linux-restricted-modules-2.6.24=2.6.24.12-16.34,2.6.24-16,2.6.24-30
# hardy/linux-restricted-modules-2.6.24
change-override.py -c multiverse -s hardy-proposed fglrx-kernel-source ...
change-override.py -c restricted -s hardy-proposed avm-fritz-firmware-2.6.24-30 ...
 }}}
 For the `linux` source package, you might use: {{{
find-bin-overrides release proposed hardy linux=2.6.24-16.30,2.6.24-16,2.6.24-30
# hardy/linux
...
}}}
 You may also specify `--show-main` to also show the change-override.py command to move things to main. This can be useful if you know the overrides are very wrong. See `find-bin-overrides -h` for details.

'''TODO:''' A process/script similar to `kernel-overrides` should be developed to make sure overrides are properly handled for binaries not in main. Is ```find-bin-overrides``` from lp:ubuntu-qa-tools good enough?

== Copying security uploads to updates ==

Security uploads are distributed from a single system, `security.ubuntu.com` (consisting of one or a small number of machines in the Canonical datacentre). While this ensures much quicker distribution of security updates than is possible from a mirror network, it places a very high load on the machines serving `security.ubuntu.com`, as well as inflating Canonical's bandwidth expenses very substantially. Every new installation of a stable release of Ubuntu is likely to be shortly followed by downloading all security updates to date, which is a significant ongoing cost.

To mitigate this, we periodically copy security uploads to the -updates pocket, which is distributed via the regular mirror network. (In fact, the pooled packages associated with -security are mirrored too, but mirrored -security entries are not in the default `/etc/apt/sources.list` to avoid causing even more HTTP requests on every `apt-get update`.) This is a cheap operation, and has no effect on the timely distribution of security updates, other than to reduce the load on central systems.

The `copy-report` tool lists all security uploads that need to be copied to -updates. If the package in question is not already in -updates, it can be copied without further checks. Otherwise, `copy-report` will extract the changelogs (which may take a little while) and confirm that the package in -security is a descendant of the package in -updates. If that is not the case, it will report that the package needs to be merged by hand.

The output of the tool looks like this:

{{{
$ copy-report
The following packages can be copied safely:
--------------------------------------------

copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.3.5-6ubuntu2.1 tk8.3
copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.4.14-0ubuntu2.1 tk8.4
}}}

The block of output under "The following packages can be copied safely:" may be copied and pasted in its entirety. If there is a block headed "The following packages need to be merged by hand:", then make sure that the security team is aware of those cases.

== Syncs with mass-sync.py ==

=== Purpose ===
If you process a long list of sync requests from Launchpad bugs, using
`sync-source.py` manually is tedious. To automate this, there is a
client-side tool `mass-sync.py` which does the following:

 * Take a list of sync request bug # and additional sync options as input.
 * For each sync request bug:
  * get the source package name and requestor from Launchpad
  * Call `sync-source.py` with the requestor and source package name and all additional sync options from the input file
  * On success, close the bug with the output of `sync-source.py`.

=== Steps ===

 * Open the [[https://launchpad.net/~ubuntu-archive/+subscribedbugs?field.searchtext=sync&orderby=targetname|list of current sync requests]] in browser.
 * Starting from the first bug which is associated to a package (see limitation above), use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it `syncs.txt`.
 * `syncs.txt` is the input to `mass-sync.py` and must contain one line per sync request bug, with the word "sync" being leftmost, followed by the bug number. If you place a package name after the bug number, that will be used for bugs not assigned to a package. Everything after the bug number (or package name, if given) are extra options to `sync-source.py` which get passed to it unmodified.
 * Now open all the sync requests (in browser tabs) and walk through them:
  * Delete from `syncs.txt` the bug numbers of any requests which are not approved or are invalid. Set those to "Incomplete" in Launchpad, and provide necessary followup.
  * Use `rmadison -u debian` to verify the component to sync from (often, requestors get it wrong, or `unstable` got a newer version than `experimental` since the sync request was made)
  * Add appropriate sync options, e. g. if package has Ubuntu changes or needs to be synced from experimental (see sync-source.py --help for options). Eg: {{{
  sync 123456 -S experimental
  sync 123457 -f -S testing
  sync 123458 -C contrib
  sync 123459 <new source package>
 }}}
 * Update Sources files on `cocoplum`:
 {{{
  cd ~/syncs
  update-sources
 }}}
 * Run the mass sync, on your client:
 {{{
  ./mass-sync.py < /tmp/syncs.txt
  ./mass-sync.py --flush-syncs
 }}}

 If you are not an archive admin with shell access to `cocoplum`, hand the file to someone who has.

=== sync-source.py options ===

The most common options are:

 || '''Option''' || '''Description''' || '''Default''' ||
 || `-a` || sync all the updates available in Debian || ||
 || `-f`, `--force` || Overwrite Ubuntu changes || abort if Ubuntu package has modifications ||
 || `-S` ''suite'' || Sync from particular suite (distrorelease), e. g. `experimental` || `unstable` ||
 || `-C` ''component'' || Sync from particular component, e. g. `non-free` || `main` ||

=== dholbach syncs ===

Many syncs requested by people who are not yet ubuntu-dev are ACKed by dholbach; his script creates a file with the sync numbers which can be downloaded and fed into mass-sync.py:

http://people.canonical.com/~dholbach/tmp/sponsoring-list
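
A sketch of feeding that list to `mass-sync.py` (local filename is arbitrary):
 {{{
$ wget -O /tmp/dholbach-syncs.txt http://people.canonical.com/~dholbach/tmp/sponsoring-list
$ ./mass-sync.py < /tmp/dholbach-syncs.txt
$ ./mass-sync.py --flush-syncs
}}}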

== Backports with mass-sync.py ==

Since backports are very similar to syncs, `mass-sync.py` can also be used to do those. In this case, the source package name is mandatory, since backport requests are not filed against source packages but against ''release''`-backports` products.

=== Steps ===

 * Open the [[https://launchpad.net/hardy-backports/+bugs?field.status%3Alist=In+Progress|list of current backport requests]] for a particular release (this URL is for hardy) in browser. Note that this URL only lists bugs being "in progress", since that's what the backporters team will set when approving backports.
 * Use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it `backports-hardy.txt`.
 * `backports-hardy.txt` is the input to `mass-sync.py` and must contain one line per backport request bug, with the word "backport" being leftmost, followed by the bug number, followed by the source package name. Everything after the package name are extra options to `backport-source-backend` which get passed to it unmodified.
 * Now open all the backport requests (in browser tabs) and walk through them:
  * Delete from `backports-hardy.txt` the bug numbers of any requests which are invalid. Set those to "Incomplete" in Launchpad, and provide necessary followup.
  * Check with `rmadison` if the current version is still the same that was approved and tested. If there is a newer one, set back to "Incomplete" and mention the newer version.
  * If a backport requires an actual upload due to source changes, these need to be approved differently. Remove the bug from `backports-hardy.txt`, but do not change the bug report.
  * Add appropriate backport options to `backports-hardy.txt`, e. g. if package should not be backported from the current development release.

Example `backports.txt`:
 {{{
backport 586879 koffice-l10n
backport 587278 virtualbox-ose
backport 587278 virtualbox-guest-additions
backport 550880 simutrans-pak64 -S lucid -s karmic
 }}}
The final line backports from lucid to karmic.

 * Run the mass backport, on your client:

 {{{
  ./mass-sync.py < /tmp/backports-hardy.txt
  ./mass-sync.py --flush-backports
 }}}

 If you are not an archive admin with shell access to `cocoplum`, hand the file to someone who has.

=== backport-source-backend options ===

The most common options are:

 || '''Option''' || '''Description''' || '''Default''' ||
 || `-S` ''suite'' || Backport from particular suite (distrorelease), e. g. `intrepid` || current development release ||
 || `-s` ''suite'' || Backport to a particular suite (distrorelease), e. g. `hardy` || ||

=== Example input file ===
{{{
backport 12345 lintian
backport 23456 frozen-bubble -S intrepid
}}}

== Diffs for unapproved uploads ==

The "unapproved" queue holds packages while a release is frozen, i. e. while a
milestone or final freeze is in progress, or for post-release updates (like
hardy-proposed). Packages in these queues need to be scrutinized before they
get accepted.

This can be done with the
[[http://bazaar.launchpad.net/%7Eubuntu-archive/ubuntu-archive-tools/trunk/annotate/head%3A/queuediff|queuediff]]
tool in
[[https://code.launchpad.net/~ubuntu-archive/ubuntu-archive-tools/trunk/|lp:~ubuntu-archive/ubuntu-archive-tools/trunk]],
which generates a debdiff between the current version in the archive, and the
package sitting in the unapproved queue:

{{{
$ queue-diff -s hardy-updates hal
$ queue-diff -b human-icon-theme | view -
}}}

`-s` specifies the release pocket to compare against and defaults to the
current development release. Please note that the pocket of the unapproved
queue is not checked or regarded; i. e. if there is a `hal` package waiting in
hardy-proposed/unapproved, but the previous version already migrated to
`hardy-updates`, then you need to compare against hardy-updates, not -proposed.

Check `--help`, the tool has more options, such as specifying a different
mirror, or `-b` to open the referred Launchpad bugs in the webbrowser.

If the new package does not change the orig.tar.gz, this tool works very fast,
since it only downloads the diff.gz. For native packages or new upstream
versions it needs to download both tarballs and run debdiff on them. Thus for
large packages you might want to do this manually in the data center.

= Useful runes =

This section contains some copy&paste shell bits which ease recurring jobs.

== partner archive ==

The Canonical partner archive used to be known as ubuntu-partner, but now it is simply another component of Ubuntu. As such, use the same procedures when processing partner packages. Eg (notice 'Component: partner'):

{{{
$ queue -s hardy info
Initialising connection to queue new
Running: "info"
Listing ubuntu/hardy (NEW) 2/2
---------|----|----------------------|----------------------|---------------
 1370980 | S- | arkeia | 8.0.9-3 | 19 hours
  | * arkeia/8.0.9-3 Component: partner Section: utils
 1370964 | S- | arkeia-amd64 | 8.0.9-3 | 19 hours
  | * arkeia-amd64/8.0.9-3 Component: partner Section: utils
---------|----|----------------------|----------------------|---------------
                                                               2/2 total
}}}
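
Overrides and accepts then work just as for any other queue item, e.g. (package taken from the listing above):
{{{
$ queue -s hardy accept arkeia
}}}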

Use -j to remove a package:
{{{
lp-remove-package.py -u jr -m "request Brian Thomason" -s oneiric adobe-flashplugin -j
}}}

New, server-related packages are to be reviewed by Dustin Kirkland before entering the partner archive, whereas desktop-related packages are to be reviewed by Jonathan Riddell.

= reprocess-failed-to-move =

In some cases, binary packages fail to move from the incoming queue to the accepted queue. To fix this, run {{{~lp_buildd/reprocess-failed-to-move}}} as lp_buildd.
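
For example (assuming the same `sudo` access pattern as for `lp_archive`):
{{{
$ sudo -u lp_buildd -i
lp_buildd@cocoplum:~$ ~lp_buildd/reprocess-failed-to-move
}}}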

<<Anchor(SRU)>>
= Stable release updates =

Please see https://wiki.ubuntu.com/StableReleaseUpdates#Reviewing_procedure_and_tools

== langpack SRUs ==
 * Language packs are a special case; these packages are normally uploaded as a batch and will not normally reference specific bugs. The [[http://people.ubuntu.com/~ubuntu-archive/pending-sru.html|status page]] will only show {{{language-pack-en}}}. To find the full list of packages to be copied, use the {{{copy-packages}}} script from the {{{langpack-o-matic}}} bzr branch.
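 A sketch of getting hold of that script (the branch location and script location are assumptions based on the name above):
 {{{
 $ bzr get lp:langpack-o-matic
 $ cd langpack-o-matic      # the copy-packages script lives in the checkout
 }}}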

= Other archives =

[[http://extras.ubuntu.com/|extras.ubuntu.com]] is not managed by the Ubuntu archive administration team, but is a PPA owned by the [[https://launchpad.net/~app-review-board|Application Review Board]].

= Useful web pages =

Equally useful to the tools are the various auto-generated web pages in ubuntu-archive's `public_html` that can give you a feel for the state of the archive.

[[http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt]]

  As described above, this lists the differences between the archive and the output of the germinate script. Shows up packages that are in the wrong place, or need seeding.

[[http://people.ubuntu.com/~ubuntu-archive/germinate-output/]]

  This is the output of the germinate script, split up into each release of each flavour of ubuntu.

[[http://people.ubuntu.com/~ubuntu-archive/priority-mismatches.txt]]

  Shows discrepancies between priorities of packages and where they probably should go according to the seeds.

[[http://people.ubuntu.com/~ubuntu-archive/architecture-mismatches.txt]]

  Shows override discrepancies between architectures, which are generally bugs.

[[http://people.ubuntu.com/~ubuntu-archive/testing/precise_probs.html]]

  Generated by the hourly run of `britney` and indicates packages that are uninstallable on precise, usually due to missing dependencies or problematic conflicts.

[[http://people.ubuntu.com/~ubuntu-archive/testing/precise_outdate.html]]

  Lists differences between binary and source versions in the archive. This often shows up build failures (where binaries are out of date for particular architectures) as well as cases where a binary is no longer built from the source.

[[http://people.ubuntu.com/~ubuntu-archive/NBS/]]
[[http://people.ubuntu.com/~ubuntu-archive/nbs.html]]

  This contains a list of binary packages which are not built from source (NBS) any more. The files contain the list of reverse dependencies of those packages (output of `checkrdepends -b`). These packages need to be removed eventually, thus all reverse dependencies need to be fixed. This is updated hourly.

<<Anchor(Chroot management)>>
= Chroot management =

/!\ Please note that chroot management is something generally handled by Canonical IS (and specifically by Adam Conrad). The following section documents the procedures required should one have to, for instance, remove all the chroots for a certain suite to stop the build queue in its tracks while a breakage is hunted down and fixed, but please don't take this as an open invitation to mess with the buildd chroots willy-nilly.

Soyuz stores one chroot per (suite, architecture).

`manage-chroot.py`, which runs only as 'lp_buildd' in cocoplum or cesium, allows the following actions upon a specified chroot:

{{{
$ sudo -u lp_buildd -i
lp_buildd@cocoplum:~$ LPCONFIG=ftpmaster /srv/launchpad.net/codelines/current/scripts/ftpmaster-tools/manage-chroot.py
ERROR manage-chroot.py <add|update|remove|get>
}}}

Downloading (get) an existing chroot:

{{{
$ manage-chroot.py [-s SUITE] <-a ARCH> get
}}}

The chroot will be downloaded and stored on the local disk as 'chroot-<DISTRIBUTION>-<SERIES>-<ARCHTAG>.tar.bz2'.

Uploading (add/update) a new chroot:

{{{
$ manage-chroot.py [-s SUITE] <-a ARCH> add -f <CHROOT_FILE>
}}}

'add' and 'update' actions are equivalent. The new chroot will be immediately used for the next build job in the corresponding architecture.
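
For example, to refresh the i386 chroot for a hypothetical suite `precise`:
{{{
$ manage-chroot.py -s precise -a i386 get
$ manage-chroot.py -s precise -a i386 add -f chroot-ubuntu-precise-i386.tar.bz2
}}}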

Disabling (remove) an existing chroot:

/!\ Unless you have plans for creating new chroots from scratch, it's better to download them to disk before the removal (recovery is possible, but involves direct DB access).

{{{
$ manage-chroot.py [-s SUITE] <-a ARCH> remove
}}}

No builds will be dispatched for architectures with no chroot; the build farm will continue to function for the rest of the system.

= Archive days =

This is currently being re-assessed in favour of a more task-oriented approach, rather than regular admin days.

Current members with regular admin days are:
 * Monday: SteveLangasek (?)
 * Tuesday: JonathanRiddell
 * Wednesday: ColinWatson
 * Thursday: SteveKowalik (?)
 * Friday: JamieStrandboge

Available for adhoc requests:
 * DustinKirkland (syncs, bug processing)
 * ?

On an archive day, the following things should be done:
 * If we are not yet in the DebianImportFreeze, run `sync-source.py -a` to sync unmodified packages from Debian (see [[ArchiveAdministration#Syncs|Syncs]]).
 * Process all [[https://launchpad.net/~ubuntu-archive/+subscribedbugs|pending archive bugs]]. Most of those are syncs, removals, component fixes, but there might be other, less common, requests.
 * Process the NEW queues of the current development release and `*-backports` of all supported stable releases.
 * If we are not yet in the DebianImportFreeze, run `process-removals.py` to review/remove packages which were removed in Debian.
 * Clean up component-mismatches, and poke people to fix dependencies/write MIRs.
 * Look at [[http://people.canonical.com/~ubuntu-archive/testing/precise_probs.html]], fix archive-admin related issues (component mismatches, etc.), and prod maintainers to fix package related problems.
 * Remove NBS packages without reverse dependencies, and prod maintainers to rebuild/fix packages to eliminate reverse dependencies to NBS packages.

== Archive Administration and Freezes ==

Archive admins should be familiar with the FreezeExceptionProcess, however it is the bug submitter's and sponsor's responsibility to make sure that the process is being followed. Some things to keep in mind for common tasks:
 * When the archive is frozen (ie the week before a Milestone, or from one week before RC until the final release), you need an ACK from ubuntu-release for all main/restricted uploads
 * During the week before final release, you need an ACK from `ubuntu-release` for any uploads to universe/multiverse
 * When the archive is not frozen, bugfix-only sync requests can be processed if filed by a `core-dev`, `ubuntu-dev` or `motu` (universe/multiverse only) or have an ACK by a sponsor or someone from ubuntu-sponsors.
 * After FeatureFreeze, all (new or otherwise) packages in the archive (ie main, restricted, universe and multiverse) require an ACK from ubuntu-release for any !FreezeException (eg FeatureFreeze, UserInterfaceFreeze, and [[MilestoneProcess|Milestone]]). Packages that do not require a !FreezeException can be processed normally.

See FreezeExceptionProcess for complete details.

This page details the processes for the Ubuntu Package Archive Administrators team, and hopefully provides a decent guide for new members of the team.

Bugs should be filed against the appropriate packages, and the team subscribed (not assigned) to the bug.

The requests can be found at https://launchpad.net/~ubuntu-archive/+subscribedbugs.

Team members may assign bugs to themselves and mark them In Progress if they're working on them, or discussing them; to act as a lock on that request.

1. Logging In

All administration is performed on cocoplum.canonical.com; accounts are provided to members of the team. Changes can only be made as the lp_archive user, to which you'll have sudo access.

So to begin:

  • $ ssh cocoplum
    $ sudo -u lp_archive -i

The -i is important as lp_archive's .bashrc sets the right environment variables and makes sure the directory with all of the tools is placed in the PATH.

IMPORTANT: This document uses $SUDO_USER in several places. If your cocoplum.canonical.com uid is not the same as your Launchpad id, be sure to use your Launchpad id when running Launchpad related scripts.

2. Client-side tools

We are gradually transitioning towards client-side administration as the necessary facilities become available via the Launchpad API. To get hold of these tools:

  • $ bzr get lp:ubuntu-archive-tools

Some of these tools still rely on ssh access to cocoplum for some operations, so the presence of a client-side tool unfortunately does not yet mean that community archive administrators can use it. It's a start.

At the moment, this transition tends to result in having two terminal windows open, one with a shell on cocoplum and one on your local machine. Sorry.

If your username on your local machine does not match your username on cocoplum remember to edit remote_host in synclib.py

3. NEW Processing

Both source packages and new binaries which have not yet been approved are not automatically accepted into the archive, but are instead held for checking and manual acceptance. Once accepted they'll be automatically approved from then on.

The current queue can be obtained with:

  • $ queue info

This is the NEW queue for ubuntu/precise by default; you can change the queue with -Q, the distro with -D and the release using -s. To list the UNAPPROVED queue for ubuntu/oneiric, for example:

  • $ queue -s oneiric -Q unapproved info

Packages are placed in the UNAPPROVED queue if they're uploaded to a closed distribution, and are usually security updates or similar; this should be checked with the uploader.

You can give a string argument after info which is interpreted as a substring match filter.

To obtain a report of the size of all the different queues for a particular release:

  • $ queue report

Back to the NEW queue for now, however. You'll see output that looks somewhat like this:

  • $ queue info
     Listing ubuntu/dapper (NEW) 4/4
    ---------|----|----------------------|----------------------|---------------
       25324 | S- | diveintopython-zh    | 5.4-0ubuntu1         | three minutes
             | * diveintopython-zh/5.4-0ubuntu1 Component: main Section: doc
       25276 | -B | language-pack-kde-co | 1:6.06+20060427      | 2 hours 20 minutes
             | * language-pack-kde-co-base/1:6.06+20060427/i386 Component: main Section: translations Priority: OPTIONAL
       23635 | -B | upbackup (i386)      | 0.0.1                | two days
             | * upbackup/0.0.1/i386 Component: main Section: admin Priority: OPTIONAL
             | * upbackup_0.0.1_i386_translations.tar.gz Format: ROSETTA_TRANSLATIONS
       23712 | S- | gausssum             | 1.0.3-2ubuntu1       | 45 hours
             | * gausssum/1.0.3-2ubuntu1 Component: main Section: science
    ---------|----|----------------------|----------------------|---------------
                                                                   4/4 total

The number at the start can be used with other commands instead of referring to a package by name. The next field shows you what is actually in the queue, "S-" means it's a new source and "-B" means it's a new binary. You then have the package name, the version and how long it's been in the queue.

New sources need to be checked to make sure they're well packaged, the licence details are correct and permissible for us to redistribute, etc. See PackagingGuide/Basic#NewPackages, PackagingGuide/Basic#Copyright and Debian's Reject FAQ. You can fetch a package from the queue for manual checking, be sure to do this in a directory of your own:

  • $ mkdir /tmp/$SUDO_USER
    $ cd /tmp/$SUDO_USER
    
    $ queue fetch 25324

The source is now in the current directory and ready for checking. Any problems should result in the rejection of the package (also send a mail to the uploader explaining the reason and Cc ubuntu-archive@lists.ubuntu.com):

  • $ queue reject 25324

If the package is fine, you should next check that it's going to end up in the right part of the archive. On the next line of the info output, you have details about the different parts of the package, including which component, section, etc. it is expected to head into. One of the important jobs is making sure that this information is actually correct through the application of overrides.

To alter the overrides for a source package, use:

  • $ queue override -c universe source ubuntustudio-menu

Where the override can be -c <component> and/or -x <section>

To alter the overrides for a binary package, use:

  • $ queue override -c universe binary ubuntustudio-menu

Where the override can be -c <component>, -x <section> and/or -p <priority>

Often a binary will be in the NEW queue because it is a shared library that has changed SONAME. In this case you'll probably want to check the existing overrides to make sure anything new matches. These can be found in `~/ubuntu/indices'.

Currently a special case of this are the kernel packages, which change package names with each ABI update and build many distinct binary packages in different sections. A helper tool has been written to apply overrides to the queue based on the existing packages in hardy:

  • $ kernel-overrides [-s <sourcepackage>] <oldabi> <newabi>

Binary packages are not often rejected (they go into a black hole with no automatic notifications), so do check that the .deb contains files, run lintian on it, and file bugs when things are broken. The binaries also need to be put into universe etc. as appropriate even if the source is already there.

If you're happy with a package, and the overrides are correct, accept it with:

  • $ queue accept 23712

In the case of language packs, add -M to not spam the changes lists with the new packages. You can also use queue accept binary-name which will accept it for all architectures.

4. Component Mismatches and Changing Overrides

Sadly packages just don't stay where they're put. SeedManagement details how packages get chosen for the main component, the various meta packages and presence on the CD. What it doesn't point out is that packages which fall out of the seeding process are destined for the universe component.

Every hour or so, the difference between what the seeds expect to be true and what the archive actually believes is evaluated by the component-mismatches tool, and the output placed at http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt (with a graph at http://people.ubuntu.com/~ubuntu-archive/component-mismatches.svg).

This is split into four sections:

Source and binary promotions to main

  • These are source packages currently in universe that appear to need promoting to main. The usual reasons are that they are seeded, or that a package they build has become a dependency or build-dependency of a package in main. New sources need to be processed through the UbuntuMainInclusionQueue, and must have been approved before they are promoted. Also ensure that all of their dependencies (which will be in this list) are approved as well.

Binary only promotions to main

  • These are binary packages currently in universe that appear to need promoting to main, as above; except that their source package is already in main. An inclusion report isn't generally needed, though the package should be sanity checked. Especially check that all of the package's dependencies are already in main, or have been approved.

Source and binary demotions to universe

  • Sources and their binaries that are currently in main but are no longer seeded or depended on by another package. These either need to be seeded explicitly, or demoted.

Binary only demotions to universe

  • Binary packages in main that are no longer seeded or depended on, but the source is still to remain in main -- usually because another binary saves it. Often these tend to be -dev or -dbg packages and need to be seeded, rather than demoted; but not always.

Once you've determined what overrides need to be changed, use the change-override.py tool to do it.

To promote a binary package to main:

  • $ change-override.py -c main git-email

To demote a source package and all of its binaries to universe:

  • $ change-override.py -c universe -S tspc

Less-used are the options to move just a source and leave its binaries where they are (usually to repair a forgotten -S):

  • $ change-override.py -c universe tspc
    ...oops, forgot the source...
    $ change-override.py -c universe -t tspc

and the option to move a binary and its source, but leave any other binaries where they are:

  • $ change-override.py -c universe -B flite

5. Removals

5.1. Manual

Sometimes packages just need removing entirely, because they are no longer required. This can be done with:

  • $ lp-remove-package.py -u $SUDO_USER -m "reason for removal" konserve

By default this removes the named source and binaries, to remove just a binary use -b:

  •   $ lp-remove-package.py -u $SUDO_USER -m "NBS" -b konserve

"NBS" is a common short-hand meaning that the binary is No-longer Built by the Source.

To remove just a source, use -S.
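For example, a source-only removal might look like this (same pattern as above; the removal reason is hypothetical):

$ lp-remove-package.py -u $SUDO_USER -m "obsolete source, binaries now built elsewhere" -S konserve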

The tool tells you what it's going to do, and asks for confirmation before doing it, so it's reasonably safe to get the wrong options and say N.

5.2. Blacklisting

If you remove source packages which are in Debian and they are not meant to ever come back, add them to the blacklist at /srv/launchpad.net/dak/sync-blacklist.txt, document the reason, and bzr commit it with an appropriate commit message. This prevents the package from coming back into source NEW in the next round of autosyncs from Debian.
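A rough sketch of that, assuming the blacklist takes one source package name per line with a free-form comment (check the existing entries for the exact format):

lp_archive@cocoplum:~$ cd /srv/launchpad.net/dak
lp_archive@cocoplum:~$ echo 'konserve   # removed from Ubuntu, not wanted back' >> sync-blacklist.txt
lp_archive@cocoplum:~$ bzr commit -m 'blacklist konserve (removed, not wanted back)' sync-blacklist.txt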

5.3. Removals in Debian

From time to time we should remove packages which were removed in Debian, to avoid accumulating cruft and unmaintained packages. This client-side tool (from ubuntu-archive-tools) will interactively go through the removals and ask for confirmation:

  • $ ./process-removals.py

Please note that we do need to keep some packages which were removed in Debian (e. g. "firefox", since we did not do the "firefox" → "iceweasel" renaming).

6. Syncs

Syncing packages with Debian is a reasonably common request. The tools help you prepare an upload, which you'll still need to check and put into incoming. The sync-helper.py client-side tool in ubuntu-archive-tools deals with most of the work.

First, change into the ~/syncs directory and make sure the Debian sources lists are up to date:

  • lp_archive@...$ cd ~/syncs
    lp_archive@...$ update-sources

Then, run sync-helper.py with some arbitrary filename as a parameter, such as this:

$ ./sync-helper.py x

Review the bugs in turn, making sure that the sync request is ACK'd by (or was requested by) someone who can upload the package in question; these people are marked with a (*) in sync-helper.py's display. If past FeatureFreeze, check the changelog to make sure the new version contains only bug fixes and not new features. If the requester has asked to discard Ubuntu changes, use the ubuntu-changes script on cocoplum to show the Ubuntu changelog entries since the last branchpoint from Debian, and confirm that what they've described in the sync bug matches the outstanding Ubuntu changes.

Now, run mass-sync.py (this requires the ability to ssh to cocoplum), redirecting its standard input from the filename you gave to sync-helper.py:

$ ./mass-sync.py <x

(The first time you run mass-sync.py, it will need to authenticate to Launchpad, and will fail if its standard input is redirected, so run it without redirection the first time round and then Ctrl-C it.)

You'll now have a bunch of source packages in the ~/syncs directory of the lp_archive user which need uploading. To do that, just run:

$ ./mass-sync.py --flush-syncs

To sync all the updates available in Debian:

  • sync-source.py -a
    NOMAILS=-M flush-syncs

This does not import new packages from Debian that were not previously present in Ubuntu. To get a list of new packages available for sync, use the command

  • new-source [contrib|non-free]

which gives a list of packages that can be fed to sync-source.py on the command line after review.
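A sketch of that workflow, assuming you review the list by hand before passing it on:

$ new-source > /tmp/$SUDO_USER/new-list
$ vi /tmp/$SUDO_USER/new-list          # drop anything that should not be imported
$ sync-source.py $(cat /tmp/$SUDO_USER/new-list)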

To sync from Debian incoming, wget the sources and then run:

  • apt-ftparchive sources ./ > Debian_incoming_main_Sources
    sync-source.py -S incoming <package>

Backports work much the same way; there is a client-side tool in ubuntu-archive-tools called backport-helper.py, which you can use the same way as sync-helper.py. ./mass-sync.py --flush-backports works the same way as ./mass-sync.py --flush-syncs. Backports do not require any Sources files. Note that backporting packages which did not exist in the previous version will end up in NEW which defaults to main, so universe packages need to have that override set.
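A sketch of the equivalent backport flow, following the same arbitrary-filename convention used for syncs above:

$ ./backport-helper.py x
$ ./mass-sync.py <x
$ ./mass-sync.py --flush-backports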

Backports should reference the Launchpad username of the backporter who approved the backport, not the user requesting the backport.

7. NBS

Sometimes binary packages are not built by any source (NBS) any more. This usually happens with library SONAME changes, package renamings, etc. Those need to be removed from the archive from time to time, and right before a release, to ensure that the entire archive can be rebuilt by current sources.

Such packages are detected by archive-cruft-check.py /srv/launchpad.net/ubuntu-archive/. Apart from NBS packages it also prints out 'ASBA' ("Arch: all" superseded by "Arch: any") packages, but these are irrelevant for day-to-day archive administration. This tool does not check for reverse dependencies, though, so you should use checkrdepends -b to check whether it is safe to actually remove NBS packages from the archive.
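For example, before removing a hypothetical NBS library binary from precise you might check its reverse dependencies with:

$ checkrdepends -s precise -b libfoo1

If nothing depends on it any more, it can be removed with lp-remove-package.py -b as described in the Removals section.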

Look at the hourly generated NBS report, which shows all NBS packages, their reverse dependencies, and a copy-and-pasteable command to clean up the "safe" ones.

The rest needs to be taken care of by developers, by doing transition uploads for library SONAME changes, updating build dependencies, etc. The remaining files will list all the packages which still need the package in question.

Please refrain from removing NBS kernel packages for old ABIs until debian-installer and the seeds have been updated, otherwise daily builds of alternate and server CDs will be made uninstallable.

8. Adjusting Launchpad ACLs

NOTE: due to bug #562451, archive administrators cannot currently adjust Launchpad ACLs.

The new ArchiveReorganisation brings finer-grained access controls than components alone can provide. Launchpad ACLs allow individuals and teams to have upload or admin rights on certain groups of packages, referred to as package sets. In general, an archive administrator can process requests to create and delete package sets, as well as add or remove packages from package sets. Archive administrators should not add individuals or teams to package sets without explicit TechnicalBoard approval.

8.1. Package sets

Packages can be added to or removed from package sets using the edit_acl.py tool from the ubuntu-archive-tools bzr branch.

To list the packages currently in the package set mozilla:

$ ./edit_acl.py query -P mozilla -S maverick
adblock-plus
all-in-one-sidebar
bindwood
...

To add a package to the mozilla package set:

$ ./edit_acl.py -P mozilla -S precise -s foo -s bar -s baz add

To remove a package from the mozilla package set:

$ ./edit_acl.py -P mozilla -S precise -s foo delete

For more information, please see edit_acl.py --help.

9. Useful tools

There are a number of other tools in your PATH which are invaluable.

9.1. Archive state checks

madison-lite (aliased to m) examines the current state of the archive for a given binary/source package:

  • $ madison-lite dpkg
          dpkg | 1.10.22ubuntu2 |         warty | source, amd64, i386, powerpc
          dpkg | 1.10.22ubuntu2.1 | warty-security | source, amd64, i386, powerpc
          dpkg | 1.10.27ubuntu1 |         hoary | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu1.1 | hoary-security | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu2 | hoary-updates | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.13.10ubuntu4 |        breezy | source, amd64, hppa, i386, ia64, powerpc, sparc
          dpkg | 1.13.11ubuntu5 |        dapper | source, amd64, hppa, i386, ia64, powerpc, sparc
    
    $ madison-lite dselect
       dselect | 1.10.22ubuntu2 |         warty | amd64, i386, powerpc
       dselect | 1.10.22ubuntu2.1 | warty-security | amd64, i386, powerpc
       dselect | 1.10.27ubuntu1 |         hoary | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu1.1 | hoary-security | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu2 | hoary-updates | amd64, i386, ia64, powerpc, sparc
       dselect | 1.13.10ubuntu4 |        breezy | amd64, hppa, i386, ia64, powerpc, sparc
       dselect | 1.13.11ubuntu5 |        dapper | amd64, hppa, i386, ia64, powerpc, sparc

Or when used with -S and a source package, the source and every binary built by it:

  • $ madison-lite -S dpkg
          dpkg | 1.10.22ubuntu2 |         warty | source, amd64, i386, powerpc
          dpkg | 1.10.22ubuntu2.1 | warty-security | source, amd64, i386, powerpc
          dpkg | 1.10.27ubuntu1 |         hoary | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu1.1 | hoary-security | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu2 | hoary-updates | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.13.10ubuntu4 |        breezy | source, amd64, hppa, i386, ia64, powerpc, sparc
          dpkg | 1.13.11ubuntu5 |        dapper | source, amd64, hppa, i386, ia64, powerpc, sparc
      dpkg-dev | 1.10.22ubuntu2 |         warty | all
      dpkg-dev | 1.10.22ubuntu2.1 | warty-security | all
      dpkg-dev | 1.10.27ubuntu1 |         hoary | all
      dpkg-dev | 1.10.27ubuntu1.1 | hoary-security | all
      dpkg-dev | 1.10.27ubuntu2 | hoary-updates | all
      dpkg-dev | 1.13.10ubuntu4 |        breezy | all
      dpkg-dev | 1.13.11ubuntu5 |        dapper | all
      dpkg-doc | 1.10.22ubuntu2 |         warty | all
      dpkg-doc | 1.10.22ubuntu2.1 | warty-security | all
      dpkg-doc | 1.10.27ubuntu1 |         hoary | all
      dpkg-doc | 1.10.27ubuntu1.1 | hoary-security | all
      dpkg-doc | 1.10.27ubuntu2 | hoary-updates | all
       dselect | 1.10.22ubuntu2 |         warty | amd64, i386, powerpc
       dselect | 1.10.22ubuntu2.1 | warty-security | amd64, i386, powerpc
       dselect | 1.10.27ubuntu1 |         hoary | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu1.1 | hoary-security | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu2 | hoary-updates | amd64, i386, ia64, powerpc, sparc
       dselect | 1.13.10ubuntu4 |        breezy | amd64, hppa, i386, ia64, powerpc, sparc
       dselect | 1.13.11ubuntu5 |        dapper | amd64, hppa, i386, ia64, powerpc, sparc

checkrdepends lists the reverse dependencies of a given binary:

  • $ checkrdepends -s precise -b nm-applet

or source package:

  • $ checkrdepends -s precise network-manager

9.2. NEW handling

A lot of churn in NEW comes from Debian imports. Since these packages already went through NEW in Debian, we should not spend too much time on them; there are some tools to help.

  • The first thing you need to handle are native syncs. These are syncs performed via URLs like https://launchpad.net/ubuntu/precise/+localpackagediffs or via the LP API. You can recognize these in the LP queue pages because they have '(sync)' in the name. On cocoplum, they show up as 'X-' (as opposed to 'S-' like normal source uploads). There are no changes files for these, so they cannot be fetched via q fetch (though old versions of the tools used to fake up a changes file so it would work). As such, you must clear out any native syncs before running the below commands which rely on q fetch. To verify a native sync:

    1. Download the source package from Debian (eg, via dget or apt-get source <pkg>=<version>)

    2. Download the imported dsc file from the Debian project in LP (eg https://launchpad.net/debian/sid/+source/pxe-kexec)

    3. Compare the dsc files from Debian and from LP. Since both should be signed, if they are identical then you know the package hasn't been tampered with. You can also compare the full source packages from Debian and LP if desired.

    Once verified, accept it normally via LP or q accept <srcpkg>

  • There are often duplicate source NEW entries in the queue if the auto-syncer runs twice in a row without the imported sources having been cleared from NEW. These can be weeded out with:
     new-remove-duplicates > /tmp/$SUDO_USER/cmds
    sh /tmp/$SUDO_USER/cmds 

    (Please eyeball cmds before feeding it to the queue).

  • new-binary-debian-universe creates queue commands for overriding and accepting all binary NEW packages whose source was imported from Debian and is in universe. While it runs, it lintians all the imported .debs. Watch the output and note all particularly worrisome issues. Check the cmds file for obvious errors, and when you are happy, execute it with sh cmds.

    Warning: This command will fail when there are duplicates in the queue. Clean them up with new-remove-duplicates first.

     new-binary-debian-universe > /tmp/$SUDO_USER/cmds
    vi /tmp/$SUDO_USER/cmds
    sh /tmp/$SUDO_USER/cmds 
  • For bulk processing of source NEW imported from Debian you should do something like:
     cd /tmp/$SUDO_USER/
    q fetch
    for i in `ls *_source.changes| grep -v ubuntu`; do grep -q 'Changed-By: Ubuntu Archive Auto-Sync' $i || continue; egrep -q ' contrib|non-free' $i && continue ; echo "override source -c universe ${i%%_*}"; echo "accept ${i%%_*}"; done > cmds 

    Then go over the cmds list, verify on http://packages.qa.debian.org that all the packages mentioned are indeed in Debian main (and not in non-free, for example), and again feed it to the queue with q -e -f cmds.

  • When unpacking a source package for source NEW checks, you should run suspicious-source. This is basically a find -type f which ignores all files with a known-safe name (such as *.c, configure, *.glade). Every file that it outputs should be checked for being the preferred form of modification, as required by the GPL. This makes it easier to spot PDFs and other binary-only files that are not accompanied by a source. The licensecheck command is also useful for verifying the license status of source packages.
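A rough sketch of those source NEW checks (the package name and version are placeholders):

$ cd /tmp/$SUDO_USER
$ dpkg-source -x foo_1.0-1.dsc
$ cd foo-1.0
$ suspicious-source              # every file listed needs to be checked by hand
$ licensecheck -r . | less       # compare against debian/copyright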

9.3. Moving Packages to Updates

9.3.1. Standard case

Packages in -proposed can be moved to -updates once they are approved by someone from sru-verification, and have passed the minimum aging period of 7 days.

  • copy-package.py -vbs maverick-proposed --to-suite=maverick-updates kdebase

9.3.2. Special case: DDTP updates

  1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in /srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/.)

  2. As user lp_publish, copy /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-proposed/component/i18n/* to the corresponding -updates directory, for all relevant components (see the sketch after this list).

  3. Reenable publisher cron job.
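A minimal sketch of step 2, assuming a hypothetical oneiric DDTP update, the main component only, and that you can become lp_publish via sudo in the same way as the other role users:

$ sudo -u lp_publish -i
lp_publish@cocoplum:~$ cd /srv/launchpad.net/ubuntu-archive/ubuntu/dists
lp_publish@cocoplum:~$ cp -a oneiric-proposed/main/i18n/* oneiric-updates/main/i18n/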

9.3.3. Special case: debian-installer updates

  1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in /srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/.)

  2. As user lp_publish, copy /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-proposed/main/installer-architecture/version to the corresponding -updates directory, for all architectures and for the version of debian-installer being copied.

  3. As user lp_publish, update /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-updates/main/installer-architecture/current to point to the version of debian-installer being copied, for all architectures.

  4. As user lp_publish, make sure that at most three versions of the installer remain in /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-updates/main/installer-architecture, for all architectures.

  5. Reenable publisher cron job.

9.3.4. Special case: update-manager updates

  1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in /srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/.)

  2. As user lp_publish, copy /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-proposed/main/dist-upgrader-all/version to the corresponding -updates directory, for the version of update-manager being copied.

  3. As user lp_publish, update /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-updates/main/dist-upgrader-all/current to point to the version of update-manager being copied.

  4. As user lp_publish, make sure that at most three versions of the upgrader remain in /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-updates/main/dist-upgrader-all.

  5. Reenable publisher cron job.

9.3.5. Resources

9.4. Publishing security uploads from the ubuntu-security private PPA

Security uploads in Soyuz are first built, published, and tested in the Security Team's private PPA. To unembargo them, we use a tool that re-publishes them to the primary archive. Note that this should never be done without an explicit request from a member of the Security Team.

To publish nasm from the ubuntu-security PPA to the -security pocket of ubuntu's hardy release, you would do the following:

  • LPCONFIG=production /srv/launchpad.net/codelines/soyuz-production/scripts/ftpmaster-tools/unembargo-package.py -p ubuntu-security -d ubuntu -s hardy-security nasm

9.5. Publishing packages from the ubuntu-mozilla-security public PPA

Mozilla (ie, firefox and thunderbird) uploads in Soyuz are first built, published, and tested in the Mozilla Security Team's public PPA. To publish them into the main archive, use copy-package.py. Note that pocket copies to the security pocket should never be done without an explicit request from a member of the Ubuntu Security Team (Mozilla Security Team is not enough), and copies to the proposed pocket should not be done without an explicit request from a member of the SRU Team. Keep in mind that firefox 2.0 and later (ie hardy and later) will always have a corresponding xulrunner package that needs copying.

To publish firefox-3.0 version 3.0.7+nobinonly-0ubuntu0.8.04.1 and xulrunner-1.9 version 1.9.0.7+nobinonly-0ubuntu0.8.04.1 from the ubuntu-mozilla-security PPA to the -security pocket of ubuntu's hardy release, you would do the following:

  • $ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 3.0.7+nobinonly-0ubuntu0.8.04.1 firefox-3.0
    $ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 1.9.0.7+nobinonly-0ubuntu0.8.04.1 xulrunner-1.9
    $ firefox-overrides -S hardy-security

IMPORTANT: Due to current limitations of Launchpad, all packages copied from a PPA into the archive go to 'main'. For source packages with binaries wholly in universe (eg, firefox and xulrunner in 8.04 LTS and later, seamonkey everywhere, or firefox-3.5 and xulrunner-1.9.1 in 9.04), you can use change-override.py like normal to move them to universe (eg change-override.py -c universe -s lucid-security -S seamonkey). For packages with some binaries in main and some in universe, you can use the firefox-overrides script on cocoplum. The script currently knows how to fix firefox in dapper, firefox-3.0 in hardy through jaunty, xulrunner-1.9 in hardy through natty, firefox-3.5 in karmic through natty, xulrunner-1.9.1 in karmic through natty, and xulrunner-1.9.2 in hardy, lucid through natty. To get a list of binaries that should be demoted, you can use the find-bin-overrides script from lp:ubuntu-qa-tools with find-bin-overrides <examined-pocket> <target-pocket> <releases> <srcpkg 1> <srcpkg 2>. Eg:

  • <user@your local machine>$ $UQT/security-tools/find-bin-overrides release security maverick cups
    # maverick/cups
    change-override.py -c universe -s maverick-security cups-ppdc cupsddk
    lp_archive@cocoplum:~$ change-override.py -c universe -s maverick-security cups-ppdc cupsddk

9.6. Copying PPA kernels to proposed

With the new StableReleaseCadence, kernels are built in the kernel team PPA and then pocket copied to the proposed pocket in the main archive once they are ACKd (process TBD).

The Pending SRU report has a section for the kernel PPA which shows all newer kernels in the PPA, provides clickable links to open all bugs (with separate CVE bugs), and copy&pasteable copy-proposed-kernel.py and sru-accept.py commands (both in lp:ubuntu-archive-tools). To publish linux version 2.6.32-27.49 from the canonical-kernel-team PPA to the -proposed pocket of ubuntu's lucid release, you would do the following:

  • Click on the "open bugs" button and check that the bugs are in a reasonable state, i. e. they target the right source package (linux vs. linux-ec2 etc.), are fixed in the development release (or at least upstream), and that the changes are limited to bug fixes, and are in general within the boundaries of the StableReleaseUpdates policy. They should ideally have a task for the stable release the SRU targets. Note that this button does not open CVE bugs, as they don't get verification or other tracking (there is a separate button for opening them, if desired).

  • Find the tracking bug by either traversing through the list of bugs, or opening the .changes file and looking at the top (the release bug is pointed out there). Ensure that there is a proper stable task, and that the main (development release) task is invalid.
  • Run the copy-proposed-kernel.py command to copy it to -proposed.

    However, with some flavours like -ec2 or armel kernels, which are mostly just a merge with the main linux kernel, it is too much overhead to add -ec2 tasks to all the bugs.

  • Due to current limitations of Launchpad, packages copied from a PPA into the archive sometimes go to 'universe'. As a result, please verify the overrides for all packages copied to -proposed, otherwise these packages might become uninstallable when they are ultimately copied to -updates/-security. find-bin-overrides from lp:ubuntu-qa-tools can help with this. You use it like so: find-bin-overrides <pocket to compare to> <target pocket> <ubuntu release> <source package>=<version in pocket to compare to>,<old abi>,<new abi>. Eg, suppose there is a new kernel in hardy-proposed with a new ABI of 2.6.24-30 and you want to get a list of overrides for the new kernel based on the old ABI of 2.6.24-16 in the 2.6.24.12-16.34 version from the release pocket of hardy. For the linux-restricted-modules-2.6.24 source package, you might use:

    $ find-bin-overrides release proposed hardy \
    linux-restricted-modules-2.6.24=2.6.24.12-16.34,2.6.24-16,2.6.24-30
    # hardy/linux-restricted-modules-2.6.24
    change-override.py -c multiverse -s hardy-proposed fglrx-kernel-source ...
    change-override.py -c restricted -s hardy-proposed avm-fritz-firmware-2.6.24-30 ...

    For the linux source package, you might use:

    find-bin-overrides release proposed hardy linux=2.6.24-16.30,2.6.24-16,2.6.24-30
    # hardy/linux
    ...

    You may also specify --show-main to also show the change-override.py command to move things to main. This can be useful if you know the overrides are very wrong. See find-bin-overrides -h for details.

TODO: A process/script similar to kernel-overrides should be developed to make sure overrides are properly handled for binaries not in main. Is find-bin-overrides from lp:ubuntu-qa-tools good enough?

9.7. Copying security uploads to updates

Security uploads are distributed from a single system, security.ubuntu.com (consisting of one or a small number of machines in the Canonical datacentre). While this ensures much quicker distribution of security updates than is possible from a mirror network, it places a very high load on the machines serving security.ubuntu.com, as well as inflating Canonical's bandwidth expenses very substantially. Every new installation of a stable release of Ubuntu is likely to be shortly followed by downloading all security updates to date, which is a significant ongoing cost.

To mitigate this, we periodically copy security uploads to the -updates pocket, which is distributed via the regular mirror network. (In fact, the pooled packages associated with -security are mirrored too, but mirrored -security entries are not in the default /etc/apt/sources.list to avoid causing even more HTTP requests on every apt-get update.) This is a cheap operation, and has no effect on the timely distribution of security updates, other than to reduce the load on central systems.

The copy-report tool lists all security uploads that need to be copied to -updates. If the package in question is not already in -updates, it can be copied without further checks. Otherwise, copy-report will extract the changelogs (which may take a little while) and confirm that the package in -security is a descendant of the package in -updates. If that is not the case, it will report that the package needs to be merged by hand.

The output of the tool looks like this:

$ copy-report
The following packages can be copied safely:
--------------------------------------------

copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.3.5-6ubuntu2.1 tk8.3
copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.4.14-0ubuntu2.1 tk8.4

The block of output under "The following packages can be copied safely:" may be copied and pasted in its entirety. If there is a block headed "The following packages need to be merged by hand:", then make sure that the security team is aware of those cases.

9.8. Syncs with mass-sync.py

9.8.1. Purpose

If you process a long list of sync requests from Launchpad bugs, using sync-source.py manually is tedious. To automate this, there is a client-side tool mass-sync.py which does the following:

  • Take a list of sync request bug # and additional sync options as input.
  • For each sync request bug:
    • get the source package name and requestor from Launchpad
    • Call sync-source.py with the requestor and source package name and all additional sync options from the input file

    • On success, close the bug with the output of sync-source.py.

9.8.2. Steps

  • Open the list of current sync requests in browser.

  • Starting from the first bug which is associated with a package (see the limitation above), use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it syncs.txt.

  • syncs.txt is the input to mass-sync.py and must contain one line per sync request bug, with the word "sync" leftmost, followed by the bug number. If you place a package name after the bug number, it will be used for bugs not assigned to a package. Everything after the bug number (or package name, if given) is treated as extra options to sync-source.py and passed to it unmodified.

  • Now open all the sync requests (in browser tabs) and walk through them:
    • Delete bug numbers from syncs.txt which are not approved or are invalid. Set those to "Incomplete" in Launchpad, and provide any necessary followup.

    • Use rmadison -u debian to verify the component to sync from (often, requestors get it wrong, or unstable got a newer version than experimental since the sync request was made)

    • Add appropriate sync options, e. g. if package has Ubuntu changes or needs to be synced from experimental (see sync-source.py --help for options). Eg:

        sync 123456 -S experimental
        sync 123457 -f -S testing
        sync 123458 -C contrib
        sync 123459 <new source package>
  • Update Sources files on cocoplum:

      cd ~/syncs
      update-sources
  • Run the mass sync, on your client:
      ./mass-sync.py < /tmp/syncs.txt
      ./mass-sync.py --flush-syncs

    If you are not an archive admin with shell access to cocoplum, hand the file to someone who has.

9.8.3. sync-source.py options

The most common options are:

  Option       | Description                                                     | Default
  -a           | Sync all the updates available in Debian                        |
  -f, --force  | Overwrite Ubuntu changes                                        | abort if Ubuntu package has modifications
  -S suite     | Sync from particular suite (distrorelease), e. g. experimental  | unstable
  -C component | Sync from particular component, e. g. non-free                  | main

9.8.4. dholbach syncs

Many syncs requested by people who are not yet ubuntu-dev are ACKed by dholbach; his script creates a file with the sync bug numbers which can be downloaded and fed into mass-sync.py:

http://people.canonical.com/~dholbach/tmp/sponsoring-list
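A sketch of using that list, assuming the downloaded file is already in mass-sync.py's input format described above and that you still review it before syncing:

$ wget -q http://people.canonical.com/~dholbach/tmp/sponsoring-list -O /tmp/dholbach-syncs.txt
$ vi /tmp/dholbach-syncs.txt           # review before syncing
$ ./mass-sync.py < /tmp/dholbach-syncs.txt
$ ./mass-sync.py --flush-syncs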

9.9. Backports with mass-sync.py

Since backports are very similar to syncs, mass-sync.py can also be used to do those. In this case, the source package name is mandatory, since backport requests are not filed against source packages but against release-backports products.

9.9.1. Steps

  • Open the list of current backport requests for a particular release (this URL is for hardy) in a browser. Note that this URL only lists bugs that are "In Progress", since that is what the backporters team sets when approving backports.

  • Use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it backports-hardy.txt.

  • backports-hardy.txt is the input to mass-sync.py and must contain one line per backport request bug, with the word "backport" leftmost, followed by the bug number, followed by the source package name. Everything after the package name is treated as extra options to backport-source-backend and passed to it unmodified.

  • Now open all the backport requests (in browser tabs) and walk through them:
    • Delete bug numbers from backports-hardy.txt which are invalid. Set those to "Incomplete" in Launchpad, and provide any necessary followup.

    • Check with rmadison if the current version is still the same that was approved and tested. If there is a newer one, set back to "Incomplete" and mention the newer version.

    • If a backport requires an actual upload due to source changes, these need to be approved differently. Remove the bug from backports-hardy.txt, but do not change the bug report.

    • Add appropriate backport options to backports-hardy.txt, e. g. if package should not be backported from the current development release.

Example backports-hardy.txt:

  • backport 586879 koffice-l10n
    backport 587278 virtualbox-ose
    backport 587278 virtualbox-guest-additions
    backport 550880 simutrans-pak64 -S lucid -s karmic

The final line backports from lucid to karmic.

  • Run the mass backport, on your client:
      ./mass-sync.py < /tmp/backports-hardy.txt
      ./mass-sync.py --flush-backports

    If you are not an archive admin with shell access to cocoplum, hand the file to someone who has.

9.9.2. backport-source-backend options

The most common options are:

  Option   | Description                                                     | Default
  -S suite | Backport from particular suite (distrorelease), e. g. intrepid  | current development release
  -s suite | Backport to a particular suite (distrorelease), e. g. hardy     |

9.9.3. Example input file

backport 12345 lintian
backport 23456 frozen-bubble -S intrepid

9.10. Diffs for unapproved uploads

The "unapproved" queue holds packages while a release is frozen, i. e. while a milestone or final freeze is in progress, or for post-release updates (like hardy-proposed). Packages in these queues need to be scrutinized before they get accepted.

This can be done with the queue-diff tool in lp:~ubuntu-archive/ubuntu-archive-tools/trunk, which generates a debdiff between the current version in the archive and the package sitting in the unapproved queue:

$ queue-diff -s hardy-updates hal
$ queue-diff -b human-icon-theme | view -

-s specifies the release pocket to compare against and defaults to the current development release. Please note that the pocket of the unapproved queue is not checked or taken into account; i. e. if there is a hal package waiting in hardy-proposed/unapproved, but the previous version already migrated to hardy-updates, then you need to compare against hardy-updates, not -proposed.

Check --help; the tool has more options, such as specifying a different mirror, or -b to open the referenced Launchpad bugs in the web browser.

This tool is very fast if the new package does not change the orig.tar.gz, since then it only downloads the diff.gz. For native packages or new upstream versions it needs to download both tarballs and run debdiff on them, so for large packages you might want to do this manually in the data center.
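A rough sketch of the manual route on cocoplum, assuming the queue tool accepts the pocket as its -s argument and that you fetch the previous .dsc and tarballs from the archive pool yourself (package name and versions are placeholders):

lp_archive@cocoplum:~$ mkdir -p /tmp/$SUDO_USER && cd /tmp/$SUDO_USER
lp_archive@cocoplum:~$ queue -s hardy-proposed -Q unapproved fetch hal
lp_archive@cocoplum:~$ # copy the current hal .dsc, .diff.gz and .orig.tar.gz here from the archive pool, then:
lp_archive@cocoplum:~$ debdiff hal_0.5.11-1ubuntu1.dsc hal_0.5.11-1ubuntu2.dsc | less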

10. Useful runes

This section contains some copy&paste shell bits which ease recurring jobs.

10.1. partner archive

The Canonical partner archive used to be known as ubuntu-partner, but now it is simply another component of Ubuntu. As such, use the same procedures when processing partner packages. Eg (notice 'Component: partner'):

$ queue -s hardy info
Initialising connection to queue new
Running: "info"
Listing ubuntu/hardy (NEW) 2/2
---------|----|----------------------|----------------------|---------------
 1370980 | S- | arkeia               | 8.0.9-3              | 19 hours
         | * arkeia/8.0.9-3 Component: partner Section: utils
 1370964 | S- | arkeia-amd64         | 8.0.9-3              | 19 hours
         | * arkeia-amd64/8.0.9-3 Component: partner Section: utils
---------|----|----------------------|----------------------|---------------
                                                               2/2 total
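Processing these works like any other NEW entry; for example, to fix up the component if needed and accept one of the packages above (a sketch reusing the queue commands shown earlier):

$ queue -s hardy override -c partner source arkeia    # only if the component is not already partner
$ queue -s hardy accept arkeia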

Use -j to remove a package from the partner archive:

lp-remove-package.py -u jr -m "request Brian Thomason" -s oneiric adobe-flashplugin -j

New, server-related packages are to be reviewed by Dustin Kirkland before entering the partner archive, whereas desktop-related packages are to be reviewed by Jonathan Riddell.

11. reprocess-failed-to-move

In some cases, binary packages fail to move from the incoming queue to the accepted queue. To fix this, run ~lp_buildd/reprocess-failed-to-move as the lp_buildd user.
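A minimal sketch of that, using the same sudo pattern as for chroot management below:

$ sudo -u lp_buildd -i
lp_buildd@cocoplum:~$ ~/reprocess-failed-to-move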

12. Stable release updates

Please see https://wiki.ubuntu.com/StableReleaseUpdates#Reviewing_procedure_and_tools

12.1. langpack SRUs

  • Language packs are a special case; these packages are normally uploaded as a batch and will not normally reference specific bugs. The status page will only show language-pack-en. To find the full list of packages to be copied, use the copy-packages script from the langpack-o-matic bzr branch.

13. Other archives

extras.ubuntu.com is not managed by the Ubuntu archive administration team, but is a PPA owned by the Application Review Board.

14. Useful web pages

Equally useful to the tools are the various auto-generated web pages in ubuntu-archive's public_html that can give you a feel for the state of the archive.

http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt

  • As described above, this lists the differences between the archive and the output of the germinate script. Shows up packages that are in the wrong place, or need seeding.

http://people.ubuntu.com/~ubuntu-archive/germinate-output/

  • This is the output of the germinate script, split up into each release of each flavour of ubuntu.

http://people.ubuntu.com/~ubuntu-archive/priority-mismatches.txt

  • Shows discrepancies between priorities of packages and where they probably should go according to the seeds.

http://people.ubuntu.com/~ubuntu-archive/architecture-mismatches.txt

  • Shows override discrepancies between architectures, which are generally bugs.

http://people.ubuntu.com/~ubuntu-archive/testing/precise_probs.html

  • Generated by the hourly run of britney and indicates packages that are uninstallable on precise, usually due to missing dependencies or problematic conflicts.

http://people.ubuntu.com/~ubuntu-archive/testing/precise_outdate.html

  • Lists differences between binary and source versions in the archive. This shows up both build failures (where binaries are out of date on particular architectures) and cases where a binary is no longer built from the source.

http://people.ubuntu.com/~ubuntu-archive/NBS/ http://people.ubuntu.com/~ubuntu-archive/nbs.html

  • This contains a list of binary packages which are not built from source (NBS) any more. The files contain the list of reverse dependencies of those packages (output of checkrdepends -b). These packages need to be removed eventually, thus all reverse dependencies need to be fixed. This is updated hourly.

15. Chroot management

Warning /!\ Please note that chroot management is something generally handled by Canonical IS (and specifically by Adam Conrad). The following section documents the procedures required should one have to, for instance, remove all the chroots for a certain suite to stop the build queue in its tracks while a breakage is hunted down and fixed, but please don't take this as an open invitation to mess with the buildd chroots willy-nilly.

Soyuz stores one chroot per (suite, architecture) combination.

manage-chroot.py, which runs only as 'lp_buildd' on cocoplum or cesium, allows the following actions on a specified chroot:

$ sudo -u lp_buildd -i
lp_buildd@cocoplum:~$ LPCONFIG=ftpmaster /srv/launchpad.net/codelines/current/scripts/ftpmaster-tools/manage-chroot.py
ERROR   manage-chroot.py <add|update|remove|get>

Downloading (get) an existing chroot:

$ manage-chroot.py [-s SUITE] <-a ARCH> get

The chroot will be downloaded and stored on the local disk as 'chroot-<DISTRIBUTION>-<SERIES>-<ARCHTAG>.tar.bz2'.

Uploading (add/update) a new chroot:

$ manage-chroot.py [-s SUITE] <-a ARCH> add -f <CHROOT_FILE>

The 'add' and 'update' actions are equivalent. The new chroot will immediately be used for the next build job on the corresponding architecture.
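For example, to refresh the i386 chroot for precise by downloading it, making changes, and uploading it again (a sketch; the tarball name follows the pattern mentioned above):

lp_buildd@cocoplum:~$ manage-chroot.py -s precise -a i386 get
lp_buildd@cocoplum:~$ # ...unpack, modify and repack the chroot as needed...
lp_buildd@cocoplum:~$ manage-chroot.py -s precise -a i386 add -f chroot-ubuntu-precise-i386.tar.bz2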

Disabling (remove) an existing chroot:

Warning /!\ Unless you plan to create new chroots from scratch, it's better to download them to disk before removal (recovery is possible, but involves direct DB access).

$ manage-chroot.py [-s SUITE] <-a ARCH> remove

No builds will be dispatched for architectures with no chroot; the build farm will remain functional for the rest of the system.

16. Archive days

This is currently being re-assessed in favour of a more task-oriented approach, rather than regular admin days.

Current members with regular admin days are:

Available for adhoc requests:

On an archive day, the following things should be done:

  • If we are not yet in the DebianImportFreeze, run sync-source.py -a to sync unmodified packages from Debian (see Syncs).

  • Process all pending archive bugs. Most of those are syncs, removals, component fixes, but there might be other, less common, requests.

  • Process the NEW queues of the current development release and *-backports of all supported stable releases.

  • If we are not yet in the DebianImportFreeze, run process-removals.py to review/remove packages which were removed in Debian.

  • Clean up component-mismatches, and poke people to fix dependencies/write MIRs.
  • Look at http://people.canonical.com/~ubuntu-archive/testing/precise_probs.html, fix archive-admin related issues (component mismatches, etc.), and prod maintainers to fix package related problems.

  • Remove NBS packages without reverse dependencies, and prod maintainers to rebuild/fix packages to eliminate reverse dependencies to NBS packages.

16.1. Archive Administration and Freezes

Archive admins should be familiar with the FreezeExceptionProcess, however it is the bug submitter's and sponsor's responsibility to make sure that the process is being followed. Some things to keep in mind for common tasks:

  • When the archive is frozen (ie the week before a Milestone, or from one week before RC until the final release), you need an ACK from ubuntu-release for all main/restricted uploads
  • During the week before final release, you need an ACK from ubuntu-release for any uploads to universe/multiverse

  • When the archive is not frozen, bugfix-only sync requests can be processed if filed by a core-dev, ubuntu-dev or motu (universe/multiverse only), or if they have an ACK from a sponsor or someone from ubuntu-sponsors.

  • After FeatureFreeze, all (new or otherwise) packages in the archive (ie main, restricted, universe and multiverse) require an ACK from ubuntu-release for any FreezeException (eg FeatureFreeze, UserInterfaceFreeze, and Milestone). Packages that do not require a FreezeException can be processed normally.

See FreezeExceptionProcess for complete details.
