ArchiveAdministration

Differences between revisions 5 and 139 (spanning 134 versions)
Revision 5 as of 2006-05-11 09:06:56
Size: 15437
Editor: quest
Comment: fix jessica
Revision 139 as of 2009-09-11 12:12:36
Size: 44405
Editor: pool-71-114-226-175
Comment: generalize mozilla pocket copies
<<TableOfContents>>

= Archive Administration =

This page details the processes for the [[https://launchpad.net/~ubuntu-archive|Ubuntu Package Archive Administrators]] team, and hopefully provides a decent guide for new members of the team.
The requests can be found at [[https://launchpad.net/~ubuntu-archive/+subscribedbugs]].
= Logging In =

All administration is performed on `cocoplum.canonical.com`; accounts are provided to members of the team. Changes can only be made as the `lp_archive` user, to which you'll have `sudo` access.
 {{{
$ ssh cocoplum
}}}
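The short hostname works if your environment already resolves it; otherwise a hypothetical `~/.ssh/config` stanza (the alias and user name below are placeholders to adapt, not part of this document) achieves the same:

```
Host cocoplum
    HostName cocoplum.canonical.com
    User your-launchpad-id
```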
'''IMPORTANT:''' This document uses `$SUDO_USER` in several places. If your `cocoplum.canonical.com` uid is not the same as your Launchpad id, be sure to use your Launchpad id when running Launchpad-related scripts.

= Client-side tools =

We are gradually transitioning towards client-side administration as the necessary facilities become available via the Launchpad API. To get hold of these tools:

 {{{
$ bzr get lp:ubuntu-archive-tools
}}}

Some of these tools still rely on `ssh` access to `cocoplum` for some operations, so the presence of a client-side tool unfortunately does not yet mean that community archive administrators can use it. It's a start.

At the moment, this transition tends to result in having two terminal windows open, one with a shell on `cocoplum` and one on your local machine. Sorry.

= NEW Processing =
 {{{
$ queue info
}}}

This is the `NEW` queue for `ubuntu/feisty` by default; you can change the queue with `-Q`, the distro with `-D` and the release using `-s`. To list the `UNAPPROVED` queue for `ubuntu/edgy`, for example:
 {{{
$ queue -s edgy -Q unapproved info
}}}
You can give a string argument after info which is interpreted as a substring match filter.
$ queue info
New sources need to be checked to make sure they're well packaged, the licence details are correct and permissible for us to redistribute, etc. See [[PackagingGuide/Basic#NewPackages]], [[PackagingGuide/Basic#Copyright]] and [[http://ftp-master.debian.org/REJECT-FAQ.html|Debian's Reject FAQ]]. You can fetch a package from the queue for manual checking; be sure to do this in a directory of your own:
 {{{
$ mkdir /tmp/$SUDO_USER
$ cd /tmp/$SUDO_USER
The source is now in the current directory and ready for checking. Any problems should result in the rejection of the package (also send a mail to the uploader explaining the reason and Cc ubuntu-archive@lists.ubuntu.com):
$ queue override -c universe source ubuntustudio-menu
}}}

Where the override can be `-c <component>` and/or `-x <section>`.
$ queue override -c universe binary ubuntustudio-menu
}}}

Where the override can be `-c <component>`, `-x <section>` and/or `-p <priority>`.
Currently a special case of this are the kernel packages, which change package names with each ABI update and build many distinct binary packages in different sections. A helper tool has been written to apply overrides to the queue based on the existing packages in hardy:
 {{{
$ kernel-overrides [-s <sourcepackage>] <oldabi> <newabi>
}}}

Binary packages are not often rejected (they go into a black hole with no automatic notifications), so do check that the .deb contains files, run lintian on it, and file bugs when things are broken. The binaries also need to be put into universe etc. as appropriate, even if the source is already there.
In the case of language packs, add `-M` to not spam the changes lists with the new packages. You can also use ''queue accept binary-name'', which will accept it for all architectures.

= Component Mismatches and Changing Overrides =

Sadly packages just don't stay where they're put. SeedManagement details how packages get chosen for the `main` component, the various meta packages and presence on the CD. What it doesn't point out is that packages which fall out of the seeding process are destined for the `universe` component.

Every hour or so, the difference between what the seeds expect to be true and what the archive actually believes is evaluated by the `component-mismatches` tool, and the output placed at:

 http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt
 Binary packages in `main` that are no longer seeded or depended on, but the source is still to remain in `main` -- usually because another binary saves it. Often these tend to be `-dev` or `-dbg` packages and need to be seeded, rather than demoted; but not always.
= Removals =

== Manual ==
$ lp-remove-package.py -u $SUDO_USER -m "reason for removal" konserve
$ lp-remove-package.py -u $SUDO_USER -m "NBS" -b konserve
== Blacklisting ==

If you remove source packages which are in Debian, and they are not meant to ever come back, add them to the blacklist at `/srv/launchpad.net/dak/sync-blacklist.txt`, document the reason, and `bzr commit` it with an appropriate changelog message. This will avoid getting the packages back into source NEW in the next round of autosyncs from Debian.
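The exact format of `sync-blacklist.txt` is not described here; assuming the common one-package-per-line layout with `#` comments, a hypothetical entry (package name and reason invented for illustration) might look like:

```
# foo: removed from Ubuntu, superseded by foo-ng; do not resync from Debian
foo
```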

== Removals in Debian ==

From time to time we should remove packages which were removed in Debian, to avoid accumulating cruft and unmaintained packages. This client-side tool (from `ubuntu-archive-tools`) will interactively go through the removals and ask for confirmation:

 {{{
$ ./process-removals.py
}}}

Please note that we do need to keep some packages which were removed
in Debian (e. g. "firefox", since we did not do the "firefox" →
"iceweasel" renaming).

= Syncs =
First go to LP to see the [[https://launchpad.net/~ubuntu-archive/+subscribedbugs?field.searchtext=sync&orderby=targetname|list of current sync requests]].

Review the bugs, and make sure that the sync request is ACK'd by (or was requested by) someone with MOTU or core-dev privileges. If past FeatureFreeze, check the changelog to make sure the new version contains only bug fixes and no new features.

If there are pending sync requests, change into the `~/syncs` directory and make sure the Debian sources lists are up to date:
 {{{
lp_archive@...$ update-sources
}}}

Now prepare the source packages to be uploaded:
 {{{
lp_archive@...$ sync-source.py -b LPUID srcpkg
}}}

Replace `LPUID` with the Launchpad username of the sync requester, or the acknowledger if the requester is not an active developer, and `srcpkg` with the names of the sources they asked for.

If Ubuntu changes need to be overwritten, use `-f`:
 {{{
lp_archive@...$ sync-source.py -b LPUID -f dpkg
}}}

If the source comes from a non-standard component, such as 'contrib', you might need:
 {{{
lp_archive@...$ sync-source.py -b LPUID -C contrib srcpkg
}}}

You'll now have a bunch of source packages in the `~/syncs` directory of the `lp_archive` user which need uploading. To do that, just run

 {{{
flush-syncs
}}}

To sync all the updates available in Debian:

 {{{
sync-source.py -a
NOMAILS=-M flush-syncs
}}}

This does not import new packages from Debian that were not previously present in Ubuntu. To get a list of new packages available for sync, use the command
 {{{
new-source [contrib|non-free]
 }}}

which gives a list of packages that can be fed into `sync-source.py` on the command line after review.

To sync from Debian incoming, wget the sources, then:
 {{{
apt-ftparchive sources ./ > Debian_incoming_main_Sources
sync-source.py -S incoming <package>
}}}

Backports work much the same way; there is a client-side tool in `ubuntu-archive-tools` called `backport.py`. There's also a `flush-backports` tool that works the same way as `flush-syncs` above. Backports do not require any Sources files. Note that backporting packages which did not exist in the previous version will end up in NEW which defaults to main, so universe packages need to have that override set.

Backports should reference the Launchpad username of the backporter who approved the backport, not the user requesting the backport.

= Useful tools =
== Archive state checks ==

`madison-lite` (aliased to `m`) examines the current state of the archive for a given binary/source package:
== NEW handling ==

A lot of churn in NEW comes from Debian imports. Since they already went through NEW in Debian, we should not waste too much time on it, so there are some tools.

 * There are often duplicate source NEWs in the queue if the auto-syncer runs twice in a row without clearing the imported sources from NEW. These can be weeded out with:

 {{{
 new-remove-duplicates > /tmp/$SUDO_USER/cmds
sh /tmp/$SUDO_USER/cmds }}}

 (Please eyeball `cmds` before feeding it to the queue).

 * `new-binary-debian-universe` creates queue commands for overriding and accepting all binary NEW packages whose source was imported from Debian and is in universe. While it runs, it `lintian`s all the imported .debs. Watch the output and note all particularly worrisome issues. Check the `cmds` file for obvious errors, and when you are happy, execute it with `sh cmds`.

 Warning: This command will fail when there are duplicates in the queue. Clean them up with `new-remove-duplicates` first.
 {{{
 new-binary-debian-universe > /tmp/$SUDO_USER/cmds
vi /tmp/$SUDO_USER/cmds
sh /tmp/$SUDO_USER/cmds }}}

 * For bulk processing of source NEW imported from Debian you should do something like:

 {{{
 cd /tmp/$SUDO_USER/
q fetch
for i in `ls *_source.changes| grep -v ubuntu`; do grep -q 'Changed-By: Ubuntu Archive Auto-Sync' $i || continue; egrep -q ' contrib|non-free' $i && continue ; echo "override source -c universe ${i%%_*}"; echo "accept ${i%%_*}"; done > cmds }}}

 Then go over the cmds list, verify on http://packages.qa.debian.org that all the packages mentioned are indeed in Debian main (and not in non-free, for example), and again feed it to the queue with `q -e -f cmds`.

 * When unpacking a source package for source NEW checks, you should run `suspicious-source`. This is basically a `find -type f` which ignores all files with a known-safe name (such as `*.c`, `configure`, `*.glade`). Every file that it outputs should be checked for being the preferred form of modification, as required by the GPL. This makes it easier to spot PDFs and other binary-only files that are not accompanied by a source. The `licensecheck` command is also useful for verifying the license status of source packages.
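The bulk-processing loop above derives the source package name from a `.changes` filename with shell parameter expansion; a minimal stand-alone sketch of the `${i%%_*}` step, using hypothetical filenames:

```shell
# ${i%%_*} strips the longest suffix starting at the first underscore,
# turning "<source>_<version>_source.changes" into "<source>"
for i in hello_2.2-1_source.changes dash_0.5.4-9ubuntu1_source.changes; do
  echo "override source -c universe ${i%%_*}"
  echo "accept ${i%%_*}"
done
```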

== Moving Packages to Updates ==

=== Standard case ===
Packages in -proposed can be moved to -updates once they are approved by someone from sru-verification, and have passed the minimum aging period of '''7 days'''.

 {{{
copy-package.py -vbs feisty-proposed --to-suite=feisty-updates kdebase
}}}

=== Special case: DDTP updates ===

 1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/`.)
 1. Copy
 `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-proposed/`''component''`/i18n/*` to the corresponding -updates directory, for all relevant components. This needs to happen as user `lp_publish`.
 1. Reenable publisher cron job.

=== Special case: debian-installer updates ===

 1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/`.)
 1. As user `lp_publish`, copy `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-proposed/main/installer-`''architecture''`/`''version'' to the corresponding -updates directory, for all architectures and for the version of `debian-installer` being copied.
 1. As user `lp_publish`, update `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-updates/main/installer-`''architecture''`/current` to point to the version of `debian-installer` being copied, for all architectures.
 1. As user `lp_publish`, make sure that at most three versions of the installer remain in `/srv/launchpad.net/ubuntu-archive/ubuntu/dists/`''release''`-updates/main/installer-`''architecture'', for all architectures.
 1. Reenable publisher cron job.

=== Resources ===
 * [[http://people.ubuntu.com/~ubuntu-archive/pending-sru.html|Currently pending SRUs]]
 * Verified bugs for [[https://bugs.launchpad.net/ubuntu/intrepid/+bugs?field.tag=verification-done|intrepid]], [[https://bugs.launchpad.net/ubuntu/hardy/+bugs?field.tag=verification-done|hardy]], [[https://bugs.launchpad.net/ubuntu/gutsy/+bugs?field.tag=verification-done|gutsy]], [[https://bugs.launchpad.net/ubuntu/dapper/+bugs?field.tag=verification-done|dapper]]

== Publishing security uploads from the ubuntu-security private PPA ==

/!\ Note that this action, unlike most archive actions, requires you to be logged in as the `lp_publish` user (and currently only on germanium).

Security uploads in Soyuz are first built, published, and tested in the Security Team's private PPA. To unembargo them, we use a tool that re-publishes them to the primary archive. Note that this should never be done without an explicit request from a member of the Security Team.

To publish `nasm` from the `ubuntu-security` PPA to the `-security` pocket of `ubuntu`'s `hardy` release, you would do the following:

 {{{
LPCONFIG=production /srv/launchpad.net/codelines/ppa/scripts/ftpmaster-tools/unembargo-package.py -p ubuntu-security -d ubuntu -s hardy-security nasm
}}}

== Publishing packages from the ubuntu-mozilla-security public PPA ==
Mozilla (ie, firefox and thunderbird) uploads in Soyuz are first built, published, and tested in the Mozilla Security Team's public PPA. To publish them into the main archive, use copy-package.py. Note that pocket copies to the security pocket should never be done without an explicit request from a member of the Ubuntu Security Team (Mozilla Security Team is not enough), and copies to the proposed pocket should not be done without an explicit request from a member of the SRU Team. Keep in mind that `firefox` 2.0 and later (ie `hardy` and later) will always have a corresponding `xulrunner` package that needs copying.

To publish `firefox-3.0` version `3.0.7+nobinonly-0ubuntu0.8.04.1` and `xulrunner-1.9` version `1.9.0.7+nobinonly-0ubuntu0.8.04.1` from the `ubuntu-mozilla-security` PPA to the `-security` pocket of `ubuntu`'s `hardy` release, you would do the following:
 {{{
$ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 3.0.7+nobinonly-0ubuntu0.8.04.1 firefox-3.0
$ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 1.9.0.7+nobinonly-0ubuntu0.8.04.1 xulrunner-1.9
$ security-firefox-overrides -S hardy-security
}}}

'''IMPORTANT:''' Due to current limitations of Launchpad, all packages copied from a PPA into the archive go to 'main'. For source packages with binaries wholly in universe (eg, `firefox` and `xulrunner` in 8.04 LTS and later, `seamonkey` everywhere, or `firefox-3.5` and `xulrunner-1.9.1` in 9.04), you can use `change-override.py` like normal to move them to universe. For packages with some binaries in main and some in universe, you can use the `security-firefox-overrides` script. The script currently knows how to fix `firefox` in `dapper`, `firefox-3.0` in `hardy` through `karmic` and `xulrunner-1.9` in `hardy` through `karmic`.

== Copying security uploads to updates ==

Security uploads are distributed from a single system, `security.ubuntu.com` (consisting of one or a small number of machines in the Canonical datacentre). While this ensures much quicker distribution of security updates than is possible from a mirror network, it places a very high load on the machines serving `security.ubuntu.com`, as well as inflating Canonical's bandwidth expenses very substantially. Every new installation of a stable release of Ubuntu is likely to be shortly followed by downloading all security updates to date, which is a significant ongoing cost.

To mitigate this, we periodically copy security uploads to the -updates pocket, which is distributed via the regular mirror network. (In fact, the pooled packages associated with -security are mirrored too, but mirrored -security entries are not in the default `/etc/apt/sources.list` to avoid causing even more HTTP requests on every `apt-get update`.) This is a cheap operation, and has no effect on the timely distribution of security updates, other than to reduce the load on central systems.

The `copy-report` tool lists all security uploads that need to be copied to -updates. If the package in question is not already in -updates, it can be copied without further checks. Otherwise, `copy-report` will extract the changelogs (which may take a little while) and confirm that the package in -security is a descendant of the package in -updates. If that is not the case, it will report that the package needs to be merged by hand.

The output of the tool looks like this:

{{{
$ copy-report
The following packages can be copied safely:
--------------------------------------------

copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.3.5-6ubuntu2.1 tk8.3
copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.4.14-0ubuntu2.1 tk8.4
}}}

The block of output under "The following packages can be copied safely:" may be copied and pasted in its entirety. If there is a block headed "The following packages need to be merged by hand:", then make sure that the security team is aware of those cases.
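Since the safe block can be pasted in its entirety, it can also be extracted mechanically from a saved report; a sketch relying only on the headings shown above (the saved filename is hypothetical):

```shell
# Build a sample report shaped like copy-report's output
cat > /tmp/copy-report.txt <<'EOF'
The following packages can be copied safely:
--------------------------------------------

copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.3.5-6ubuntu2.1 tk8.3
copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.4.14-0ubuntu2.1 tk8.4
EOF

# Print only the command lines after the "copied safely" heading,
# stopping if a "merged by hand" section begins
awk '/need to be merged by hand/ {exit}
     ok && /^copy-package\.py/ {print}
     /can be copied safely/ {ok=1}' /tmp/copy-report.txt
```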

== Syncs with mass-sync.py ==

=== Purpose ===
If you process a long list of sync requests from Launchpad bugs, using
`sync-source.py` manually is tedious. To automate this, there is a
client-side tool `mass-sync.py` which does the following:

 * Take a list of sync request bug # and additional sync options as input.
 * For each sync request bug:
  * get the source package name and requestor from Launchpad
  * Call `sync-source.py` with the requestor and source package name and all additional sync options from the input file
  * On success, close the bug with the output of `sync-source.py`.

=== Steps ===

 * Open the [[https://launchpad.net/~ubuntu-archive/+subscribedbugs?field.searchtext=sync&orderby=targetname|list of current sync requests]] in browser.
 * Starting from the first bug which is associated with a package (see limitation above), use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it `syncs.txt`.
 * `syncs.txt` is the input to `mass-sync.py` and must contain one line per sync request bug, with the word "sync" being leftmost, followed by the bug number. If you place a package name after the bug number, that will be used for bugs not assigned to a package. Everything after the bug number (or package name, if given) are extra options to `sync-source.py` which get passed to it unmodified.
 * Now open all the sync requests (in browser tabs) and walk through them:
  * Delete bug # from `syncs.txt` which are not approved or invalid. Set those to "Incomplete" in Launchpad, and provide necessary followup.
  * Use `rmadison -u debian` to verify the component to sync from (often, requestors get it wrong, or `unstable` got a newer version than `experimental` since the sync request was made)
  * Add appropriate sync options, e. g. if the package has Ubuntu changes or needs to be synced from experimental (see sync-source.py --help for options). Eg: {{{
  sync 123456 -S experimental
  sync 123457 -f -S testing
  sync 123458 -C contrib
  sync 123459 <new source package>
 }}}
 * Update Sources files on `cocoplum`:
 {{{
  cd ~/syncs
  update-sources
 }}}
 * Run the mass sync, on your client:
 {{{
  ./mass-sync.py < /tmp/syncs.txt
  ./mass-sync.py --flush-syncs
 }}}

 If you are not an archive admin with shell access to `cocoplum`, hand the file to someone who has.

=== sync-source.py options ===

The most common options are:

 || '''Option''' || '''Description''' || '''Default''' ||
 || `-f`, `--force` || Overwrite Ubuntu changes || abort if Ubuntu package has modifications ||
 || `-S` ''suite'' || Sync from particular suite (distrorelease), e. g. `experimental` || `unstable` ||
 || `-C` ''component'' || Sync from particular component, e. g. `non-free` || `main` ||

== Backports with mass-sync.py ==

Since backports are very similar to syncs, `mass-sync.py` can also be used to do those. In this case, the source package name is mandatory, since backport requests are not filed against source packages but against ''release''`-backports` products.

=== Steps ===

 * Open the [[https://launchpad.net/hardy-backports/+bugs?field.status%3Alist=In+Progress|list of current backport requests]] for a particular release (this URL is for hardy) in browser. Note that this URL only lists bugs being "in progress", since that's what the backporters team will set when approving backports.
 * Use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it `backports-hardy.txt`.
 * `backports-hardy.txt` is the input to `mass-sync.py` and must contain one line per backport request bug, with the word "backport" being leftmost, followed by the bug number, followed by the source package name. Everything after the package name are extra options to `backport-source-backend` which get passed to it unmodified.
 * Now open all the backport requests (in browser tabs) and walk through them:
  * Delete bug # from `backports-hardy.txt` which are invalid. Set those to "Incomplete" in Launchpad, and provide necessary followup.
  * Check with `rmadison` if the current version is still the same that was approved and tested. If there is a newer one, set back to "Incomplete" and mention the newer version.
  * If a backport requires an actual upload due to source changes, these need to be approved differently. Remove the bug from `backports-hardy.txt`, but do not change the bug report.
  * Add appropriate backport options to `backports-hardy.txt`, e. g. if package should not be backported from the current development release.
 * Run the mass backport, on your client:

 {{{
  ./mass-sync.py < /tmp/backports-hardy.txt
  ./mass-sync.py --flush-backports
 }}}

 If you are not an archive admin with shell access to `cocoplum`, hand the file to someone who has.

=== backport-source-backend options ===

The most common options are:

 || '''Option''' || '''Description''' || '''Default''' ||
 || `-S` ''suite'' || Backport from particular suite (distrorelease), e. g. `intrepid` || current development release ||

=== Example input file ===
{{{
backport 12345 lintian
backport 23456 frozen-bubble -S intrepid
}}}

== Diffs for unapproved uploads ==

The "unapproved" queue holds packages while a release is frozen, i. e. while a
milestone or final freeze is in progress, or for post-release updates (like
hardy-proposed). Packages in these queues need to be scrutinized before they
get accepted.

This can be done with the
[[http://bazaar.launchpad.net/%7Eubuntu-archive/ubuntu-archive-tools/trunk/annotate/head%3A/queuediff|queuediff]]
tool in
[[https://code.launchpad.net/~ubuntu-archive/ubuntu-archive-tools/trunk/|lp:~ubuntu-archive/ubuntu-archive-tools/trunk]],
which generates a debdiff between the current version in the archive, and the
package sitting in the unapproved queue:

{{{
$ queue-diff -s hardy-updates hal
$ queue-diff -b human-icon-theme | view -
}}}

`-s` specifies the release pocket to compare against and defaults to the
current development release. Please note that the pocket of the unapproved
queue is not checked or regarded; i. e. if there is a `hal` package waiting in
hardy-proposed/unapproved, but the previous version already migrated to
`hardy-updates`, then you need to compare against hardy-updates, not -proposed.

Check `--help`, the tool has more options, such as specifying a different
mirror, or `-b` to open the referred Launchpad bugs in the webbrowser.

This tool works very fast if the new package does not change the orig.tar.gz, since then it only downloads the diff.gz. For native packages or new upstream versions it needs to download both tarballs and run debdiff on them, so for large packages you might want to do this manually in the data center.

= Useful runes =

This section contains some copy&paste shell bits which ease recurring jobs.

== Cleaning up NBS ==

Sometimes binary packages are not built by any source (NBS) any more. This usually happens with library SONAME changes, package renamings, etc. Those need to be removed from the archive from time to time, and right before a release, to ensure that the entire archive can be rebuilt by current sources.

Such packages are detected by `archive-cruft-check.py /srv/launchpad.net/ubuntu-archive/`. Apart from NBS packages it also prints out 'ASBA' ("Arch: all" superseded by "Arch: any"), but they are irrelevant for day-to-day archive administration. This tool does not check for reverse dependencies, though, so you should use `checkrdepends -b` for checking if it is safe to actually remove NBS packages from the archive:

As a first step, create a work directory and a list of all packages (one file per package) which are NBS and check their reverse dependencies:

 {{{
 mkdir /tmp/$SUDO_USER/cruft
cd /tmp/$SUDO_USER/cruft
for i in $(archive-cruft-check.py /srv/launchpad.net/ubuntu-archive/ 2>&1| grep '^ *o ' | sed 's/^.*://; s/,//g'); do checkrdepends -b $i hardy > $i; done }}}

Replace `hardy` with the name of the current development release. This will take a long time, so consider using screen. Please note that this list is [[http://people.ubuntu.com/~ubuntu-archive/NBS/|generated automatically]] twice a day.

Those packages which do not have any reverse dependencies can be removed safely in one go:

 {{{
 for p in $(find -empty | sed 's_^./__'); do lp-remove-package.py -yb -u $SUDO_USER -m "NBS" $p; rm $p; done
}}}
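The `find -empty` selection works because `checkrdepends -b` above wrote one file per package, empty when nothing depends on it; a small stand-alone illustration with hypothetical package names:

```shell
# Simulate the per-package files that checkrdepends produced
workdir=$(mktemp -d)
cd "$workdir"
echo "some-rdepend: needs libfoo1" > libfoo1   # has reverse dependencies: keep
: > libbar1                                    # empty: no reverse dependencies, safe to remove

# Empty files correspond to removal candidates
find . -empty | sed 's_^\./__'   # -> libbar1
```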

The rest needs to be taken care of by developers, by doing transition uploads for library SONAME changes, updating build dependencies, etc. The remaining files will list all the packages which still need the package in question.

Please refrain from removing NBS kernel packages for old ABIs until debian-installer and the seeds have been updated, otherwise daily builds of alternate and server CDs will be made uninstallable.

== partner archive ==

The Canonical partner archive is in a distro of its own, ubuntu-partner.

The queue tool works much the same:

 {{{
queue -d ubuntu-partner -s hardy info
}}}

Use `-j` to remove a package:

 {{{
lp-remove-package.py -u jr -m "request Brian Thomason" -d ubuntu-partner -s feisty realplay -j
}}}

Please get sign-off from the Ubuntu QA team (via Steve Beattie) before accepting packages into partner.

= reprocess-failed-to-move =

In some cases, binary packages fail to move from the incoming queue to the accepted queue. To fix this, run {{{~lp_buildd/reprocess-failed-to-move}}} as the `lp_buildd` user.

<<Anchor(SRU)>>
= Stable release updates =

Archive admins need to process StableReleaseUpdates manually. Run the `pending-sru` script to 'queue fetch' all currently unapproved uploads. Review them according to the following guidelines:

 * Reject uploads which do not conform to the StableReleaseUpdates policy. If the changelog refers to a bug number, follow up there with an explanation.

 * Only process an upload if the SRU team approved the update in the bug report trail.

 * Verify that the package delta matches the debdiff attached to the bug report and that there are no other unrelated changes.

 * If you accept a package into `-proposed`,
  1. Add a `verification-needed` tag to the bug report.
  1. Set the appropriate bug task to '''Status: Fix Committed'''.
  1. Subscribe the `sru-verification` team.

 * After the package in `-proposed` has been successfully tested and passed a minimum aging period of '''7 days''' (check the [[http://people.ubuntu.com/~ubuntu-archive/pending-sru.html|status page]]), and is approved by the SRU verification team, the package should be moved to `-updates`:
   1. Use `copy-package.py` to copy the source and binary packages from `-proposed` to `-updates`.
   1. Set the bug task to '''Status: Fix Released'''.
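The release step can be wrapped in a small helper that only composes the copy command for review rather than running it. A sketch: the `copy-package.py` syntax follows the usage shown later on this page, and the release and package names are examples.

```shell
# Compose (but do not run) the -proposed -> -updates copy for an SRU,
# so the command can be eyeballed before execution.
sru_copy_cmd() {
    release="$1"
    srcpkg="$2"
    echo "copy-package.py -vbs ${release}-proposed --to-suite=${release}-updates ${srcpkg}"
}

sru_copy_cmd hardy nasm
# prints: copy-package.py -vbs hardy-proposed --to-suite=hardy-updates nasm
```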

= Useful web pages =

Equally useful to the tools are the various auto-generated web pages in ubuntu-archive's `public_html` that can give you a feel for the state of the archive.

[[http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt]]

[[http://people.ubuntu.com/~ubuntu-archive/germinate-output/]]

[[http://people.ubuntu.com/~ubuntu-archive/priority-mismatches.txt]]

[[http://people.ubuntu.com/~ubuntu-archive/architecture-mismatches.txt]]

  Shows override discrepancies between architectures, which are generally bugs.

[[http://people.ubuntu.com/~ubuntu-archive/testing/karmic_probs.html]]

  Generated by the hourly run of `britney` and indicates packages that are uninstallable on karmic, usually due to missing dependencies or problematic conflicts.

[[http://people.ubuntu.com/~ubuntu-archive/testing/karmic_outdate.html]]

[[http://people.ubuntu.com/~ubuntu-archive/NBS/]]

  This contains a list of binary packages which are not built from source (NBS) any more. The files contain the list of reverse dependencies of those packages (output of `checkrdepends -b`). These packages need to be removed eventually, thus all reverse dependencies need to be fixed. This is updated twice a day.

<<Anchor(Chroot management)>>
= Chroot management =

/!\ Please note that chroot management is something generally handled by Canonical IS (and specifically by Adam Conrad). The following section documents the procedures required should one have to, for instance, remove all the chroots for a certain suite to stop the build queue in its tracks while a breakage is hunted down and fixed, but please don't take this as an open invitation to mess with the buildd chroots willy-nilly.

Soyuz stores one chroot per (suite, architecture).

`manage-chroot.py`, which runs only as 'lp_buildd' on cocoplum or cesium, allows the following actions on a specified chroot:

{{{
$ sudo -u lp_buildd -i
lp_buildd@cocoplum:~$ LPCONFIG=ftpmaster /srv/launchpad.net/codelines/current/scripts/ftpmaster-tools/manage-chroot.py
ERROR manage-chroot.py <add|update|remove|get>
}}}

Downloading (get) an existing chroot:

{{{
$ manage-chroot.py [-s SUITE] <-a ARCH> get
}}}

The chroot will be downloaded and stored on local disk as 'chroot-<DISTRIBUTION>-<SERIES>-<ARCHTAG>.tar.bz2'.

Uploading (add/update) a new chroot:

{{{
$ manage-chroot.py [-s SUITE] <-a ARCH> add -f <CHROOT_FILE>
}}}

The 'add' and 'update' actions are equivalent. The new chroot will be used immediately for the next build job on the corresponding architecture.

Disabling (remove) an existing chroot:

/!\ Unless you plan to create new chroots from scratch, it's better to download them to disk before removal (recovery is possible, but involves direct DB access).

{{{
$ manage-chroot.py [-s SUITE] <-a ARCH> remove
}}}

No builds will be dispatched for architectures with no chroot; the build farm will continue to function for the rest of the system.
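The local filename convention quoted above can be captured in a tiny helper; a sketch, where the distribution, series, and architecture values are examples:

```shell
# Local name used for a downloaded chroot tarball:
# chroot-<DISTRIBUTION>-<SERIES>-<ARCHTAG>.tar.bz2
chroot_tarball() {
    echo "chroot-$1-$2-$3.tar.bz2"
}

chroot_tarball ubuntu karmic i386
# prints: chroot-ubuntu-karmic-i386.tar.bz2
```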

= Archive days =

Current members with regular admin days are:
 * Monday: SteveLangasek; JamesWestby
 * Tuesday: JonathanRiddell
 * Wednesday: ColinWatson (morning); DustinKirkland (syncs, bug processing)
 * Thursday: SteveKowalik; MuharemHrnjadovic
 * Friday: JamieStrandboge

On an archive day, the following things should be done:
 * If we are not yet in the DebianImportFreeze, run `sync-source.py -a` to sync unmodified packages from Debian (see [[ArchiveAdministration#Syncs|Syncs]]).
 * Process all [[https://launchpad.net/~ubuntu-archive/+subscribedbugs|pending archive bugs]]. Most of those are syncs, removals, component fixes, but there might be other, less common, requests.
 * Process the NEW queues of the current development release and `*-backports` of all supported stable releases.
 * Run `process-removals.py` to review/remove packages which were removed in Debian.
 * Clean up component-mismatches, and poke people to fix dependencies/write MIRs.
 * Look at [[http://people.ubuntu.com/~ubuntu-archive/testing/karmic_probs.html]], fix archive-admin related issues (component mismatches, etc.), and prod maintainers to fix package related problems.
 * Remove NBS packages without reverse dependencies, and prod maintainers to rebuild/fix packages to eliminate reverse dependencies to NBS packages.

== Archive Administration and Freezes ==

Archive admins should be familiar with the FreezeExceptionProcess, however it is the bug submitter's and sponsor's responsibility to make sure that the process is being followed. Some things to keep in mind for common tasks:
 * When the archive is frozen (ie the week before a Milestone, or from one week before RC until the final release), you need an ACK from ubuntu-release for all main/restricted uploads
 * During the week before final release, you need an ACK from `motu-release` for any uploads to universe/multiverse
 * When the archive is not frozen, bugfix-only sync requests can be processed if filed by a `core-dev`, `ubuntu-dev` or `motu` (universe/multiverse only) or have an ACK by a sponsor from one of these groups, ubuntu-main-sponsors or ubuntu-universe-sponsors
 * After FeatureFreeze, all packages in main require an ACK from ubuntu-release for any FreezeException (eg FeatureFreeze, UserInterfaceFreeze, and [[MilestoneProcess|Milestone]]) and an ACK from motu-release for universe/multiverse
 * New packages in universe/multiverse need two ACKs after FeatureFreeze

See FreezeExceptionProcess for complete details.

3. NEW Processing

Both source packages and new binaries which have not yet been approved are not automatically accepted into the archive, but are instead held for checking and manual acceptance. Once accepted they'll be automatically approved from then on.

The current queue can be obtained with:

  • $ queue info

This is the NEW queue for ubuntu/feisty by default; you can change the queue with -Q, the distro with -D and the release using -s. To list the UNAPPROVED queue for ubuntu/edgy, for example:

  • $ queue -s edgy -Q unapproved info

Packages are placed in the UNAPPROVED queue if they're uploaded to a closed distribution, and are usually security updates or similar; this should be checked with the uploader.

You can give a string argument after info, which is interpreted as a substring match filter.

To obtain a report of the size of all the different queues for a particular release:

  • $ queue report

Back to the NEW queue for now, however. You'll see output that looks somewhat like this:

  • $ queue info
     Listing ubuntu/dapper (NEW) 4/4
    ---------|----|----------------------|----------------------|---------------
       25324 | S- | diveintopython-zh    | 5.4-0ubuntu1         | three minutes
             | * diveintopython-zh/5.4-0ubuntu1 Component: main Section: doc
       25276 | -B | language-pack-kde-co | 1:6.06+20060427      | 2 hours 20 minutes
             | * language-pack-kde-co-base/1:6.06+20060427/i386 Component: main Section: translations Priority: OPTIONAL
       23635 | -B | upbackup (i386)      | 0.0.1                | two days
             | * upbackup/0.0.1/i386 Component: main Section: admin Priority: OPTIONAL
             | * upbackup_0.0.1_i386_translations.tar.gz Format: ROSETTA_TRANSLATIONS
       23712 | S- | gausssum             | 1.0.3-2ubuntu1       | 45 hours
             | * gausssum/1.0.3-2ubuntu1 Component: main Section: science
    ---------|----|----------------------|----------------------|---------------
                                                                   4/4 total

The number at the start can be used with other commands instead of referring to a package by name. The next field shows you what is actually in the queue, "S-" means it's a new source and "-B" means it's a new binary. You then have the package name, the version and how long it's been in the queue.

New sources need to be checked to make sure they're well packaged, the licence details are correct and permissible for us to redistribute, etc. See PackagingGuide/Basic#NewPackages, PackagingGuide/Basic#Copyright and Debian's Reject FAQ. You can fetch a package from the queue for manual checking, be sure to do this in a directory of your own:

  • $ mkdir /tmp/$SUDO_USER
    $ cd /tmp/$SUDO_USER
    
    $ queue fetch 25324

The source is now in the current directory and ready for checking. Any problems should result in the rejection of the package (also send a mail to the uploader explaining the reason and Cc ubuntu-archive@lists.ubuntu.com):

  • $ queue reject 25324

If the package is fine, you should next check that it's going to end up in the right part of the archive. On the next line of the info output, you have details about the different parts of the package, including which component, section, etc. it is expected to head into. One of the important jobs is making sure that this information is actually correct through the application of overrides.

To alter the overrides for a source package, use:

  • $ queue override -c universe source ubuntustudio-menu

Where the override can be -c <component> and/or -x <section>

To alter the overrides for a binary package, use:

  • $ queue override -c universe binary ubuntustudio-menu

Where the override can be -c <component>, -x <section> and/or -p <priority>

Often a binary will be in the NEW queue because it is a shared library that has changed SONAME. In this case you'll probably want to check the existing overrides to make sure anything new matches. These can be found in ~/ubuntu/indices.
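For example, a grep over the indices files shows what overrides the previous SONAME carried. A sketch: the file name and the "package priority section" layout are assumptions for illustration, not the guaranteed format of the real indices.

```shell
d=$(mktemp -d); cd "$d"

# Hypothetical override index in "package  priority  section" form.
cat > override.karmic.main <<'EOF'
libfoo0         optional        libs
libfoo-dev      optional        libdevel
libbar2         optional        libs
EOF

# Check the overrides carried by the existing libfoo binaries before
# applying matching overrides to the NEW entry.
grep '^libfoo' override.karmic.main
```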

Currently a special case of this are the kernel packages, which change package names with each ABI update and build many distinct binary packages in different sections. A helper tool has been written to apply overrides to the queue based on the existing packages in hardy:

  • $ kernel-overrides [-s <sourcepackage>] <oldabi> <newabi>

Binary packages are not often rejected (rejections go into a black hole with no automatic notifications), but do check that the .deb contains files, run lintian on it, and file bugs when things are broken. The binaries also need to be put into universe etc. as appropriate, even if the source is already there.

If you're happy with a package, and the overrides are correct, accept it with:

  • $ queue accept 23712

In the case of language packs, add -M to not spam the changes lists with the new packages. You can also use queue accept binary-name which will accept it for all architectures.

4. Component Mismatches and Changing Overrides

Sadly packages just don't stay where they're put. SeedManagement details how packages get chosen for the main component, the various meta packages and presence on the CD. What it doesn't point out is that packages which fall out of the seeding process are destined for the universe component.

Every hour or so, the difference between what the seeds expect to be true and what the archive actually believes is evaluated by the component-mismatches tool, and the output placed at http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt.

This is split into four sections:

Source and binary promotions to main

  • These are source packages currently in universe that appear to need promoting to main. The usual reasons are that they are seeded, or that a package they build has become a dependency or build-dependency of a package in main. New sources need to be processed through the UbuntuMainInclusionQueue, and have been approved before they should be promoted. Also ensure that all of their dependencies (which will be in this list) are approved as well.

Binary only promotions to main

  • These are binary packages currently in universe that appear to need promoting to main, as above; except that their source package is already in main. An inclusion report isn't generally needed, though the package should be sanity checked. Especially check that all of the package's dependencies are already in main, or have been approved.

Source and binary demotions to universe

  • Sources and their binaries that are currently in main but are no longer seeded or depended on by another package. These either need to be seeded explicitly, or demoted.

Binary only demotions to universe

  • Binary packages in main that are no longer seeded or depended on, but the source is still to remain in main -- usually because another binary saves it. Often these tend to be -dev or -dbg packages and need to be seeded, rather than demoted; but not always.

Once you've determined what overrides need to be changed, use the change-override.py tool to do it.

To promote a binary package to main:

  • $ change-override.py -c main git-email

To demote a source package and all of its binaries to universe:

  • $ change-override.py -c universe -S tspc

Less-used are the options to move just a source, leaving its binaries where they are (usually just to repair a mistakenly forgotten -S):

  • $ change-override.py -c universe tspc
    ...oops, forgot the source...
    $ change-override.py -c universe -t tspc

and the option to move a binary and its source, but leave any other binaries where they are:

  • $ change-override.py -c universe -B flite

5. Removals

5.1. Manual

Sometimes packages just need removing entirely, because they are no longer required. This can be done with:

  • $ lp-remove-package.py -u $SUDO_USER -m "reason for removal" konserve

By default this removes the named source and binaries, to remove just a binary use -b:

  •   $ lp-remove-package.py -u $SUDO_USER -m "NBS" -b konserve

"NBS" is a common short-hand meaning that the binary is No-longer Built by the Source.

To remove just a source, use -S.

The tool tells you what it's going to do, and asks for confirmation before doing it, so it's reasonably safe to get the wrong options and say N.

5.2. Blacklisting

If you remove a source package which is in Debian and it is not meant to ever come back, add it to the blacklist at /srv/launchpad.net/dak/sync-blacklist.txt, document the reason, and bzr commit with an appropriate changelog message. This prevents the package from coming back into source NEW in the next round of autosyncs from Debian.

5.3. Removals in Debian

From time to time we should remove packages which were removed in Debian, to avoid accumulating cruft and unmaintained packages. This client-side tool (from ubuntu-archive-tools) will interactively go through the removals and ask for confirmation:

  • $ ./process-removals.py

Please note that we do need to keep some packages which were removed in Debian (e.g. "firefox", since we did not do the "firefox" → "iceweasel" renaming).

6. Syncs

Syncing packages with Debian is a reasonably common request, and currently annoyingly complicated to do. The tools help you prepare an upload, which you'll still need to check and put into incoming. The following recipe takes away some of the pain:

First go to LP to see the list of current sync requests

Review the bugs, and make sure that the sync request is ACK'd (or requested by) someone with MOTU or core-dev privileges. If past FeatureFreeze, check the changelog to make sure the new version has only bug fixes and not new features.

If there are pending sync requests, change into the ~/syncs directory and make sure the Debian sources lists are up to date:

  • lp_archive@...$ cd ~/syncs
    lp_archive@...$ update-sources

Now prepare the source packages to be uploaded:

  • lp_archive@...$ sync-source.py -b LPUID srcpkg

Replace LPUID with the Launchpad username of the sync requester, or the acknowledger if the requester is not an active developer, and srcpkg with the names of the sources they asked for.

This will fail if there are any Ubuntu changes; make sure the requester has asked for them to be overridden, and use -f to do so, e.g.

  • lp_archive@...$ sync-source.py -b LPUID -f dpkg

If the source comes from a non-standard component, such as 'contrib', you might need:

  • lp_archive@...$ sync-source.py -b LPUID -C contrib srcpkg

You'll now have a bunch of source packages in the ~/syncs directory of the lp_archive user which need uploading. To do that, just run

  • flush-syncs

To sync all the updates available in Debian

  • sync-source.py -a
    NOMAILS=-M flush-syncs

This does not import new packages from Debian that were not previously present in Ubuntu. To get a list of new packages available for sync, use the command

  • new-source [contrib|non-free]

which gives a list of packages that can be fed into sync-source.py on the commandline after review

To sync from Debian incoming, wget the sources, then:

  • apt-ftparchive sources ./ > Debian_incoming_main_Sources
    sync-source.py -S incoming <package>

Backports work much the same way; there is a client-side tool in ubuntu-archive-tools called backport.py. There's also a flush-backports tool that works the same way as flush-syncs above. Backports do not require any Sources files. Note that backported packages which did not exist in the target release will end up in NEW, which defaults to main, so universe packages need to have that override set.

Backports should reference the Launchpad username of the backporter who approved the backport, not the user requesting the backport.

7. Useful tools

There are other useful tools in your PATH which are invaluable.

7.1. Archive state checks

madison-lite (aliased to m) examines the current state of the archive for a given binary/source package:

  • $ madison-lite dpkg
          dpkg | 1.10.22ubuntu2 |         warty | source, amd64, i386, powerpc
          dpkg | 1.10.22ubuntu2.1 | warty-security | source, amd64, i386, powerpc
          dpkg | 1.10.27ubuntu1 |         hoary | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu1.1 | hoary-security | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu2 | hoary-updates | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.13.10ubuntu4 |        breezy | source, amd64, hppa, i386, ia64, powerpc, sparc
          dpkg | 1.13.11ubuntu5 |        dapper | source, amd64, hppa, i386, ia64, powerpc, sparc
    
    $ madison-lite dselect
       dselect | 1.10.22ubuntu2 |         warty | amd64, i386, powerpc
       dselect | 1.10.22ubuntu2.1 | warty-security | amd64, i386, powerpc
       dselect | 1.10.27ubuntu1 |         hoary | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu1.1 | hoary-security | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu2 | hoary-updates | amd64, i386, ia64, powerpc, sparc
       dselect | 1.13.10ubuntu4 |        breezy | amd64, hppa, i386, ia64, powerpc, sparc
       dselect | 1.13.11ubuntu5 |        dapper | amd64, hppa, i386, ia64, powerpc, sparc

Or when used with -S and a source package, the source and every binary built by it:

  • $ madison-lite -S dpkg
          dpkg | 1.10.22ubuntu2 |         warty | source, amd64, i386, powerpc
          dpkg | 1.10.22ubuntu2.1 | warty-security | source, amd64, i386, powerpc
          dpkg | 1.10.27ubuntu1 |         hoary | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu1.1 | hoary-security | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.10.27ubuntu2 | hoary-updates | source, amd64, i386, ia64, powerpc, sparc
          dpkg | 1.13.10ubuntu4 |        breezy | source, amd64, hppa, i386, ia64, powerpc, sparc
          dpkg | 1.13.11ubuntu5 |        dapper | source, amd64, hppa, i386, ia64, powerpc, sparc
      dpkg-dev | 1.10.22ubuntu2 |         warty | all
      dpkg-dev | 1.10.22ubuntu2.1 | warty-security | all
      dpkg-dev | 1.10.27ubuntu1 |         hoary | all
      dpkg-dev | 1.10.27ubuntu1.1 | hoary-security | all
      dpkg-dev | 1.10.27ubuntu2 | hoary-updates | all
      dpkg-dev | 1.13.10ubuntu4 |        breezy | all
      dpkg-dev | 1.13.11ubuntu5 |        dapper | all
      dpkg-doc | 1.10.22ubuntu2 |         warty | all
      dpkg-doc | 1.10.22ubuntu2.1 | warty-security | all
      dpkg-doc | 1.10.27ubuntu1 |         hoary | all
      dpkg-doc | 1.10.27ubuntu1.1 | hoary-security | all
      dpkg-doc | 1.10.27ubuntu2 | hoary-updates | all
       dselect | 1.10.22ubuntu2 |         warty | amd64, i386, powerpc
       dselect | 1.10.22ubuntu2.1 | warty-security | amd64, i386, powerpc
       dselect | 1.10.27ubuntu1 |         hoary | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu1.1 | hoary-security | amd64, i386, ia64, powerpc, sparc
       dselect | 1.10.27ubuntu2 | hoary-updates | amd64, i386, ia64, powerpc, sparc
       dselect | 1.13.10ubuntu4 |        breezy | amd64, hppa, i386, ia64, powerpc, sparc
       dselect | 1.13.11ubuntu5 |        dapper | amd64, hppa, i386, ia64, powerpc, sparc

checkrdepends lists the reverse dependencies of a given binary:

  • $ checkrdepends -b nm-applet dapper

or source package:

  • $ checkrdepends network-manager dapper

7.2. NEW handling

A lot of churn in NEW comes from Debian imports. Since they already went through NEW in Debian, we should not waste too much time on it, so there are some tools.

  • There are often duplicate source NEW entries in the queue if the auto-syncer runs twice in a row without the imported sources being cleared from NEW. These can be weeded out with:
     new-remove-duplicates > /tmp/$SUDO_USER/cmds
    sh /tmp/$SUDO_USER/cmds 

    (Please eyeball cmds before feeding it to the queue).

  • new-binary-debian-universe creates queue commands for overriding and accepting all binary NEW packages whose source was imported from Debian and is in universe. While it runs, it lintians all the imported .debs. Watch the output and note all particularly worrisome issues. Check the cmds file for obvious errors, and when you are happy, execute it with sh cmds.

    Warning: This command will fail when there are duplicates in the queue. Clean them up with new-remove-duplicates first.

     new-binary-debian-universe > /tmp/$SUDO_USER/cmds
    vi /tmp/$SUDO_USER/cmds
    sh /tmp/$SUDO_USER/cmds 
  • For bulk processing of source NEW imported from Debian you should do something like:
     cd /tmp/$SUDO_USER/
    q fetch
    for i in `ls *_source.changes| grep -v ubuntu`; do grep -q 'Changed-By: Ubuntu Archive Auto-Sync' $i || continue; egrep -q ' contrib|non-free' $i && continue ; echo "override source -c universe ${i%%_*}"; echo "accept ${i%%_*}"; done > cmds 

    Then go over the cmds list, verify on http://packages.qa.debian.org that all the packages mentioned are indeed in Debian main (and not in non-free, for example), and again feed it to the queue with q -e -f cmds.

  • When unpacking a source package for source NEW checks, you should run suspicious-source. This is basically a find -type f which ignores all files with a known-safe name (such as *.c, configure, *.glade). Every file that it outputs should be checked for being the preferred form of modification, as required by the GPL. This makes it easier to spot PDFs and other binary-only files that are not accompanied by a source. The licensecheck command is also useful for verifying the license status of source packages.
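The bulk-processing filter above can be exercised against made-up .changes files. A sketch: the file names and contents are hypothetical, shaped only to carry the fields the filter greps for.

```shell
d=$(mktemp -d); cd "$d"

# An auto-synced main package: should be picked up.
printf 'Changed-By: Ubuntu Archive Auto-Sync\nSection: libs\n' > foo_1.0-1_source.changes
# An auto-synced contrib package: skipped by the egrep.
printf 'Changed-By: Ubuntu Archive Auto-Sync\nSection: contrib/libs\n' > bar_2.0-1_source.changes
# An Ubuntu-modified upload: excluded by the filename filter.
printf 'Changed-By: Some Developer\n' > baz_1.0-1ubuntu1_source.changes

# Same selection logic as the one-liner above.
for i in $(ls *_source.changes | grep -v ubuntu); do
    grep -q 'Changed-By: Ubuntu Archive Auto-Sync' $i || continue
    egrep -q ' contrib|non-free' $i && continue
    echo "override source -c universe ${i%%_*}"
    echo "accept ${i%%_*}"
done
# prints: override source -c universe foo
#         accept foo
```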

7.3. Moving Packages to Updates

7.3.1. Standard case

Packages in -proposed can be moved to -updates once they are approved by someone from sru-verification, and have passed the minimum aging period of 7 days.

  • copy-package.py -vbs feisty-proposed --to-suite=feisty-updates kdebase

7.3.2. Special case: DDTP updates

  1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in /srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/.)

  2. Copy

    /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-proposed/component/i18n/* to the corresponding -updates directory, for all relevant components. This needs to happen as user lp_publish.

  3. Reenable publisher cron job.

7.3.3. Special case: debian-installer updates

  1. Disable publisher cron job and wait until it has finished. It must not run during the copy operation. (Alternatively, if the publisher is currently running and you know it will take some time yet to finish, you may make these changes in /srv/launchpad.net/ubuntu-archive/ubuntu/dists.new/.)

  2. As user lp_publish, copy /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-proposed/main/installer-architecture/version to the corresponding -updates directory, for all architectures and for the version of debian-installer being copied.

  3. As user lp_publish, update /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-updates/main/installer-architecture/current to point to the version of debian-installer being copied, for all architectures.

  4. As user lp_publish, make sure that at most three versions of the installer remain in /srv/launchpad.net/ubuntu-archive/ubuntu/dists/release-updates/main/installer-architecture, for all architectures.

  5. Reenable publisher cron job.

7.3.4. Resources

7.4. Publishing security uploads from the ubuntu-security private PPA

Warning /!\ Note that this action, unlike most archive actions, requires you to be logged in as the lp_publish user (and currently only on germanium).

Security uploads in Soyuz are first built, published, and tested in the Security Team's private PPA. To unembargo them, we use a tool that re-publishes them to the primary archive. Note that this should never be done without an explicit request from a member of the Security Team.

To publish nasm from the ubuntu-security PPA to the -security pocket of ubuntu's hardy release, you would do the following:

  • LPCONFIG=production /srv/launchpad.net/codelines/ppa/scripts/ftpmaster-tools/unembargo-package.py -p ubuntu-security -d ubuntu -s hardy-security nasm

7.5. Publishing packages from the ubuntu-mozilla-security public PPA

Mozilla (ie, firefox and thunderbird) uploads in Soyuz are first built, published, and tested in the Mozilla Security Team's public PPA. To publish them into the main archive, use copy-package.py. Note that pocket copies to the security pocket should never be done without an explicit request from a member of the Ubuntu Security Team (Mozilla Security Team is not enough), and copies to the proposed pocket should not be done without an explicit request from a member of the SRU Team. Keep in mind that firefox 2.0 and later (ie hardy and later) will always have a corresponding xulrunner package that needs copying.

To publish firefox-3.0 version 3.0.7+nobinonly-0ubuntu0.8.04.1 and xulrunner-1.9 version 1.9.0.7+nobinonly-0ubuntu0.8.04.1 from the ubuntu-mozilla-security PPA to the -security pocket of ubuntu's hardy release, you would do the following:

  • $ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 3.0.7+nobinonly-0ubuntu0.8.04.1 firefox-3.0
    $ copy-package.py -b --ppa=ubuntu-mozilla-security -s hardy --to-suite hardy-security -e 1.9.0.7+nobinonly-0ubuntu0.8.04.1 xulrunner-1.9
    $ security-firefox-overrides -S hardy-security

IMPORTANT: Due to current limitations of Launchpad, all packages copied from a PPA into the archive go to 'main'. For source packages with binaries wholly in universe (eg, firefox and xulrunner in 8.04 LTS and later, seamonkey everywhere, or firefox-3.5 and xulrunner-1.9.1 in 9.04), you can use change-override.py like normal to move them to universe. For packages with some binaries in main and some in universe, you can use the security-firefox-overrides script. The script currently knows how to fix firefox in dapper, firefox-3.0 in hardy through karmic and xulrunner-1.9 in hardy through karmic.

7.6. Copying security uploads to updates

Security uploads are distributed from a single system, security.ubuntu.com (consisting of one or a small number of machines in the Canonical datacentre). While this ensures much quicker distribution of security updates than is possible from a mirror network, it places a very high load on the machines serving security.ubuntu.com, as well as inflating Canonical's bandwidth expenses very substantially. Every new installation of a stable release of Ubuntu is likely to be shortly followed by downloading all security updates to date, which is a significant ongoing cost.

To mitigate this, we periodically copy security uploads to the -updates pocket, which is distributed via the regular mirror network. (In fact, the pooled packages associated with -security are mirrored too, but mirrored -security entries are not in the default /etc/apt/sources.list to avoid causing even more HTTP requests on every apt-get update.) This is a cheap operation, and has no effect on the timely distribution of security updates, other than to reduce the load on central systems.

The copy-report tool lists all security uploads that need to be copied to -updates. If the package in question is not already in -updates, it can be copied without further checks. Otherwise, copy-report will extract the changelogs (which may take a little while) and confirm that the package in -security is a descendant of the package in -updates. If that is not the case, it will report that the package needs to be merged by hand.

The output of the tool looks like this:

$ copy-report
The following packages can be copied safely:
--------------------------------------------

copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.3.5-6ubuntu2.1 tk8.3
copy-package.py -y -b -s feisty-security --to-suite feisty-updates -e 8.4.14-0ubuntu2.1 tk8.4

The block of output under "The following packages can be copied safely:" may be copied and pasted in its entirety. If there is a block headed "The following packages need to be merged by hand:", then make sure that the security team is aware of those cases.
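As a convenience, the ready-to-paste block can also be pulled out of the copy-report output mechanically. The helper below is hypothetical (not part of the archive tools); it is a minimal sketch assuming the two headings shown above:

```shell
# Hypothetical helper (not part of the archive tools): extract only the
# ready-to-paste copy-package.py lines from copy-report output on stdin,
# stopping before any "need to be merged by hand" block.
copy_safe_cmds() {
    awk '/^The following packages need to be merged by hand:/ { exit }
         /^copy-package\.py / { print }'
}
```

For example, `copy-report | copy_safe_cmds` would print only the commands that are safe to run.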

7.7. Syncs with mass-sync.py

7.7.1. Purpose

If you process a long list of sync requests from Launchpad bugs, using sync-source.py manually is tedious. To automate this, there is a client-side tool mass-sync.py which does the following:

  • Takes a list of sync request bug numbers and additional sync options as input.
  • For each sync request bug:
    • gets the source package name and requestor from Launchpad
    • calls sync-source.py with the requestor, the source package name, and all additional sync options from the input file

    • on success, closes the bug with the output of sync-source.py

7.7.2. Steps

  • Open the list of current sync requests in browser.

  • Starting from the first bug which is associated with a package (see limitation above), use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it syncs.txt.

  • syncs.txt is the input to mass-sync.py and must contain one line per sync request bug, with the word "sync" leftmost, followed by the bug number. If you place a package name after the bug number, it will be used for bugs not assigned to a package. Everything after the bug number (or package name, if given) is passed to sync-source.py unmodified as extra options.

  • Now open all the sync requests (in browser tabs) and walk through them:
    • Delete bug numbers from syncs.txt that are not yet approved or are invalid. Set those to "Incomplete" in Launchpad, and provide the necessary followup.

    • Use rmadison -u debian to verify the component to sync from (requestors often get it wrong, or unstable may have gained a newer version than experimental since the sync request was made)

    • Add appropriate sync options, e.g. if the package has Ubuntu changes or needs to be synced from experimental (see sync-source.py --help for options). For example:

        sync 123456 -S experimental
        sync 123457 -f -S testing
        sync 123458 -C contrib
        sync 123459 <new source package>
  • Update Sources files on cocoplum:

      cd ~/syncs
      update-sources
  • Run the mass sync, on your client:
      ./mass-sync.py < /tmp/syncs.txt
      ./mass-sync.py --flush-syncs

    If you are not an archive admin with shell access to cocoplum, hand the file to someone who has.
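Before handing the file over, it is worth checking that every line follows the expected format. This validator is hypothetical (not part of mass-sync.py), a sketch based on the format described above:

```shell
# Hypothetical sanity check (not part of mass-sync.py): print any line of a
# mass-sync input file that does not follow the "sync <bug number> [...]"
# format described above. No output means the file looks well-formed.
check_syncs() {
    grep -n -E -v '^sync [0-9]+( .*)?$' "$1"
}
```

For example, `check_syncs syncs.txt` would flag a stray "backport" line with its line number.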

7.7.3. sync-source.py options

The most common options are:

  -f, --force     Overwrite Ubuntu changes (default: abort if the Ubuntu package has modifications)
  -S suite        Sync from a particular suite (distrorelease), e.g. experimental (default: unstable)
  -C component    Sync from a particular component, e.g. non-free (default: main)

7.8. Backports with mass-sync.py

Since backports are very similar to syncs, mass-sync.py can also be used to do those. In this case, the source package name is mandatory, since backport requests are not filed against source packages but against release-backports products.

7.8.1. Steps

  • Open the list of current backport requests for a particular release (this URL is for hardy) in a browser. Note that this URL only lists bugs that are "In Progress", since that is what the backporters team sets when approving backports.

  • Use ctrl+mouse marking to select the column with the bug numbers. Paste them into a text file, let's call it backports-hardy.txt.

  • backports-hardy.txt is the input to mass-sync.py and must contain one line per backport request bug, with the word "backport" leftmost, followed by the bug number, followed by the source package name. Everything after the package name is passed to backport-source-backend unmodified as extra options.

  • Now open all the backport requests (in browser tabs) and walk through them:
    • Delete bug numbers from backports-hardy.txt that are invalid. Set those to "Incomplete" in Launchpad, and provide the necessary followup.

    • Check with rmadison if the current version is still the same that was approved and tested. If there is a newer one, set back to "Incomplete" and mention the newer version.

    • If a backport requires an actual upload due to source changes, these need to be approved differently. Remove the bug from backports-hardy.txt, but do not change the bug report.

    • Add appropriate backport options to backports-hardy.txt, e.g. if the package should not be backported from the current development release.

  • Run the mass backport, on your client:
      ./mass-sync.py < /tmp/backports-hardy.txt
      ./mass-sync.py --flush-backports

    If you are not an archive admin with shell access to cocoplum, hand the file to someone who has.

7.8.2. backport-source-backend options

The most common options are:

  -S suite        Backport from a particular suite (distrorelease), e.g. intrepid (default: current development release)

7.8.3. Example input file

backport 12345 lintian
backport 23456 frozen-bubble -S intrepid

7.9. Diffs for unapproved uploads

The "unapproved" queue holds packages while a release is frozen, i.e. while a milestone or final freeze is in progress, or for post-release updates (such as hardy-proposed). Packages in these queues need to be scrutinized before they are accepted.

This can be done with the queue-diff tool in lp:~ubuntu-archive/ubuntu-archive-tools/trunk, which generates a debdiff between the current version in the archive and the package sitting in the unapproved queue:

$ queue-diff -s hardy-updates hal
$ queue-diff -b human-icon-theme | view -

-s specifies the release pocket to compare against and defaults to the current development release. Please note that the pocket of the unapproved queue itself is not taken into account; i.e. if there is a hal package waiting in hardy-proposed/unapproved, but the previous version has already migrated to hardy-updates, then you need to compare against hardy-updates, not -proposed.

Check --help; the tool has more options, such as specifying a different mirror, or -b to open the referenced Launchpad bugs in the web browser.

This tool is very fast when the new package does not change the orig.tar.gz, since it then only downloads the diff.gz. For native packages or new upstream versions it needs to download both tarballs and run debdiff on them, so for large packages you might want to do this manually in the data center.

8. Useful runes

This section contains some copy&paste shell bits which ease recurring jobs.

8.1. Cleaning up NBS

Sometimes binary packages are not built by any source (NBS) any more. This usually happens with library SONAME changes, package renamings, etc. Those need to be removed from the archive from time to time, and right before a release, to ensure that the entire archive can be rebuilt by current sources.

Such packages are detected by archive-cruft-check.py /srv/launchpad.net/ubuntu-archive/. Apart from NBS packages it also prints out 'ASBA' ("Arch: all" superseded by "Arch: any") packages, but these are irrelevant for day-to-day archive administration. This tool does not check for reverse dependencies, so you should use checkrdepends -b to check whether it is safe to actually remove NBS packages from the archive:

As a first step, create a work directory and a list of all packages (one file per package) which are NBS and check their reverse dependencies:

  •  mkdir /tmp/$SUDO_USER/cruft
    cd /tmp/$SUDO_USER/cruft
    for i in $(archive-cruft-check.py /srv/launchpad.net/ubuntu-archive/ 2>&1| grep '^ *o ' | sed 's/^.*://; s/,//g'); do checkrdepends -b $i hardy > $i; done 

Replace hardy with the name of the current development release. This will take a long time, so consider using screen. Please note that this list is generated automatically twice a day.

Those packages which do not have any reverse dependencies can be removed safely in one go:

  •  for p in $(find -empty | sed 's_^./__'); do lp-remove-package.py -yb -u $SUDO_USER -m "NBS" $p; rm $p; done

The rest needs to be taken care of by developers, by doing transition uploads for library SONAME changes, updating build dependencies, etc. The remaining files will list all the packages which still need the package in question.
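To see at a glance which NBS packages are still blocked, and by how many reverse dependencies, the per-package files can be summarized. This helper is a hypothetical sketch (not one of the archive tools), assuming the layout produced by the checkrdepends loop above: one file per package, each line naming a reverse dependency.

```shell
# Hypothetical sketch: summarize which NBS packages still block removal,
# busiest first. Assumes the given directory (default: current directory)
# holds one file per package, each line naming a reverse dependency.
nbs_blockers() {
    dir=${1:-.}
    for f in $(find "$dir" -maxdepth 1 -type f ! -empty); do
        printf '%4d %s\n' "$(wc -l < "$f")" "${f##*/}"
    done | sort -rn
}
```

Running `nbs_blockers /tmp/$SUDO_USER/cruft` would then list the packages most in need of developer attention at the top.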

Please refrain from removing NBS kernel packages for old ABIs until debian-installer and the seeds have been updated, otherwise daily builds of alternate and server CDs will be made uninstallable.

8.2. Partner archive

The Canonical partner archive is in a distro of its own, ubuntu-partner.

The queue tool works much the same:

  • queue -d ubuntu-partner -s hardy info

Use lp-remove-package.py with -j to remove a package:

  • lp-remove-package.py -u jr -m "request Brian Thomason" -d ubuntu-partner -s feisty realplay -j

Please get sign-off from the Ubuntu QA team (via Steve Beattie) before accepting packages into partner.

9. reprocess-failed-to-move

In some cases, binary packages fail to move from the incoming queue to the accepted queue. To fix this, run ~lp_buildd/reprocess-failed-to-move as the lp_buildd user.

10. Stable release updates

Archive admins need to process StableReleaseUpdates manually. Run the pending-sru script to 'queue fetch' all currently unapproved uploads. Review them according to the following guidelines:

  • Reject uploads which do not conform to the StableReleaseUpdates policy. If the changelog refers to a bug number, follow up there with an explanation.

  • Only process an upload if the SRU team approved the update in the bug report trail.
  • Verify that the package delta matches the debdiff attached to the bug report and that there are no other unrelated changes.
  • If you accept a package into -proposed,

    1. Add a verification-needed tag to the bug report.

    2. Set the appropriate bug task to Status: Fix Committed.

    3. Subscribe the sru-verification team.

  • After the package in -proposed has been successfully tested and passed a minimum aging period of 7 days (check the status page), and is approved by the SRU verification team, the package should be moved to -updates:

    1. Use copy-package.py to copy the source and binary packages from -proposed to -updates.

    2. Set the bug task to Status: Fix Released.
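The copy in step 1 follows a fixed pattern, so it can be generated rather than typed. The helper below is hypothetical (not one of the archive tools) and deliberately prints the command instead of running it; the flags mirror the copy-package.py usage shown earlier on this page:

```shell
# Hypothetical dry-run helper: print (rather than run) the copy-package.py
# invocation for moving an SRU from -proposed to -updates. Flags mirror the
# copy-package.py usage shown elsewhere on this page.
sru_promote_cmd() {
    release=$1 version=$2 package=$3
    echo "copy-package.py -b -s ${release}-proposed --to-suite ${release}-updates -e ${version} ${package}"
}
```

For example, `sru_promote_cmd hardy 0.5.11-1ubuntu2.1 hal` prints the command to review before pasting it into a shell on cocoplum.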

11. Useful web pages

Equally useful to the tools are the various auto-generated web pages in ubuntu-archive's public_html that can give you a feel for the state of the archive.

http://people.ubuntu.com/~ubuntu-archive/component-mismatches.txt

  • As described above, this lists the differences between the archive and the output of the germinate script. Shows up packages that are in the wrong place, or need seeding.

http://people.ubuntu.com/~ubuntu-archive/germinate-output/

  • This is the output of the germinate script, split up into each release of each flavour of ubuntu.

http://people.ubuntu.com/~ubuntu-archive/priority-mismatches.txt

  • Shows discrepancies between priorities of packages and where they probably should go according to the seeds.

http://people.ubuntu.com/~ubuntu-archive/architecture-mismatches.txt

  • Shows override discrepancies between architectures, which are generally bugs.

http://people.ubuntu.com/~ubuntu-archive/testing/karmic_probs.html

  • Generated by the hourly run of britney and indicates packages that are uninstallable on karmic, usually due to missing dependencies or problematic conflicts.

http://people.ubuntu.com/~ubuntu-archive/testing/karmic_outdate.html

  • Lists differences between binary and source versions in the archive. This often shows up both build failures (where binaries are out of date for particular architectures) but also where a binary is no longer built from the source.

http://people.ubuntu.com/~ubuntu-archive/NBS/

  • This contains a list of binary packages which are not built from source (NBS) any more. The files contain the list of reverse dependencies of those packages (output of checkrdepends -b). These packages need to be removed eventually, thus all reverse dependencies need to be fixed. This is updated twice a day.

12. Chroot management

Warning /!\ Please note that chroot management is something generally handled by Canonical IS (and specifically by Adam Conrad). The following section documents the procedures required should one have to, for instance, remove all the chroots for a certain suite to stop the build queue in its tracks while a breakage is hunted down and fixed, but please don't take this as an open invitation to mess with the buildd chroots willy-nilly.

Soyuz stores one chroot per (suite, architecture) pair.

manage-chroot.py, which runs only as 'lp_buildd' on cocoplum or cesium, allows the following actions on a specified chroot:

$ sudo -u lp_buildd -i
lp_buildd@cocoplum:~$ LPCONFIG=ftpmaster /srv/launchpad.net/codelines/current/scripts/ftpmaster-tools/manage-chroot.py
ERROR   manage-chroot.py <add|update|remove|get>

Downloading (get) an existing chroot:

$ manage-chroot.py [-s SUITE] <-a ARCH> get

The chroot will be downloaded and stored on local disk as 'chroot-<DISTRIBUTION>-<SERIES>-<ARCHTAG>.tar.bz2'.

Uploading (add/update) a new chroot:

$ manage-chroot.py [-s SUITE] <-a ARCH> add -f <CHROOT_FILE>

The 'add' and 'update' actions are equivalent. The new chroot will be used immediately for the next build job on the corresponding architecture.

Disabling (remove) an existing chroot:

Warning /!\ Unless you plan to create new chroots from scratch, it is better to download chroots to disk before removal (recovery is possible, but involves direct DB access)

$ manage-chroot.py [-s SUITE] <-a ARCH> remove

No builds will be dispatched for architectures with no chroot; the build farm will remain functional for the rest of the system.
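Following the warning above, back up before disabling. This helper is hypothetical and deliberately a dry run: it prints the two manage-chroot.py calls, backup first, instead of executing them:

```shell
# Hypothetical dry-run helper: print the manage-chroot.py calls to back up a
# chroot and then disable it, in that order, per the warning above.
chroot_backup_then_remove() {
    suite=$1 arch=$2
    echo "manage-chroot.py -s ${suite} -a ${arch} get"
    echo "manage-chroot.py -s ${suite} -a ${arch} remove"
}
```

For example, `chroot_backup_then_remove karmic i386` prints the pair of commands to run as lp_buildd.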

13. Archive days

Current members with regular admin days are:

On an archive day, the following things should be done:

  • If we are not yet in the DebianImportFreeze, run sync-source.py -a to sync unmodified packages from Debian (see Syncs).

  • Process all pending archive bugs. Most of those are syncs, removals, component fixes, but there might be other, less common, requests.

  • Process the NEW queues of the current development release and *-backports of all supported stable releases.

  • Run process-removals.py to review/remove packages which were removed in Debian.

  • Clean up component-mismatches, and poke people to fix dependencies/write MIRs.
  • Look at http://people.ubuntu.com/~ubuntu-archive/testing/karmic_probs.html, fix archive-admin related issues (component mismatches, etc.), and prod maintainers to fix package related problems.

  • Remove NBS packages without reverse dependencies, and prod maintainers to rebuild/fix packages to eliminate reverse dependencies to NBS packages.

13.1. Archive Administration and Freezes

Archive admins should be familiar with the FreezeExceptionProcess, however it is the bug submitter's and sponsor's responsibility to make sure that the process is being followed. Some things to keep in mind for common tasks:

  • When the archive is frozen (i.e. the week before a Milestone, or from one week before RC until the final release), you need an ACK from ubuntu-release for all main/restricted uploads
  • During the week before final release, you need an ACK from motu-release for any uploads to universe/multiverse

  • When the archive is not frozen, bugfix-only sync requests can be processed if filed by a core-dev, ubuntu-dev, or motu (universe/multiverse only), or if they have an ACK from a sponsor in one of these groups, ubuntu-main-sponsors, or ubuntu-universe-sponsors

  • After FeatureFreeze, all packages in main require an ACK from ubuntu-release for any FreezeException (e.g. FeatureFreeze, UserInterfaceFreeze, and Milestone) and an ACK from motu-release for universe/multiverse

  • New packages in universe/multiverse need two ACKs after FeatureFreeze

See FreezeExceptionProcess for complete details.

ArchiveAdministration (last edited 2024-11-12 13:41:12 by tjaalton)