PublicationNotes
This page describes the publication process for packages when it deviates from SecurityTeam/UpdateProcedures.
Kernel (regular)
Security updates are now a part of the regular kernel cadence. As such, the tracking and publication differs from other updates, and is detailed in SecurityTeam/UpdatePublication/Kernel.
Kernel (emergency)
Generally speaking, the kernel follows a different procedure for publication and tracking, which is detailed in SecurityTeam/UpdatePublication/Kernel. In case of an emergency update done outside of this process, the procedures below can be used.
Patching
Patching and testing the kernel is primarily the responsibility of the Ubuntu Kernel team who follow KernelTeam/KernelMaintenance. Tracking kernel CVEs, building patched kernels and publishing those kernels is the responsibility of the Ubuntu Security team. As such, the Ubuntu Security team should:
- Enter kernel CVEs into the Ubuntu CVE Tracker
- Forward this information to the kernel team
- Coordinate the timing of kernel security updates (usually monthly, unless a high priority CVE warrants an earlier date)
- Coordinate the Ubuntu Kernel team's work with other vendors as appropriate
Building
Once the kernel team is satisfied with their patching and testing, they will provide packages on chinstrap.canonical.com, currently in chinstrap:~smb/security/srcpkg. Since the kernels can be large, the packages should be remotely signed and uploaded from chinstrap (see below). To verify, sign and upload:
- On chinstrap, copy the kernel team's packages to ~/sign:
$ cd ~smb/security/srcpkg/   # requires membership in the 'kernel_devs' group
$ test -d ~/sign/ || mkdir -m 0750 ~/sign/ ; chgrp ubuntu_security ~/sign/
$ cp * ~/sign/
- On your local system (may require setup, see below), verify and sign the packages in ~/sign on chinstrap:
$ $UST/package-tools/u-verify-chinstrap   # verify the signatures in ~/sign
$ $UST/package-tools/u-sign-chinstrap     # sign the packages in ~/sign
$ $UST/package-tools/u-verify-chinstrap   # reverify the signatures in ~/sign
- If needed, on chinstrap set up the kern-up symlink:
$ test -e ~/bin/kern-up || ln -s /home/jamie/bin/kern-up ~/bin/kern-up
- On chinstrap, perform a test upload with kern-up. Eg:
$ cd ~/sign
$ ~/bin/kern-up   # | sed 's/ security\-/ security-proposed-/'   # for proposed ppa
dput security-dapper linux-source-2.6.15_2.6.15-55.87_source.changes
dput security-hardy linux_2.6.24-28.75_source.changes
dput security-jaunty linux_2.6.28-19.64_source.changes
dput security-karmic linux_2.6.31-22.63_source.changes
dput security-karmic linux-mvl-dove_2.6.31-214.30_source.changes
dput security-karmic linux-ec2_2.6.31-307.17_source.changes
dput security-lucid linux_2.6.32-24.41_source.changes
dput security-lucid linux-mvl-dove_2.6.32-208.24_source.changes
dput security-lucid linux-meta-mvl-dove_2.6.32.208.11_source.changes
dput security-lucid linux-ec2_2.6.32-308.15_source.changes
dput security-lucid linux-ti-omap_2.6.33-502.10_source.changes
dput security-lucid linux-fsl-imx51_2.6.31-608.19_source.changes
Compare the output of kern-up with Kernel/Dev/ABIPackages. Ignore the netbook kernels because they are outside the archive. Also, linux-qcm-msm/lucid is abandoned. If there is an ABI bump, the ABI meta source package is also listed; otherwise it is not. Every "topic branch" (ie, source package, referred to as 'git branch' in Kernel/Dev/ABIPackages) has up to two "meta" packages that define the ABI, but normally there is just one. Sometimes there is an additional "ports" meta for the non-supported architectures. Kernel/Dev/ABIPackages always has the most up to date information, so consult it with each update (kern-up may need to be adjusted if the kernel team makes changes).
Compare the ABIs of the packages output by kern-up with the archive. If there is an ABI bump and the meta package is missing, contact the kernel team.
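As a rough aid for this comparison, the ABI number can be pulled out of a kern-up source.changes filename with a small helper. This is only a sketch: abi_of is a hypothetical name, and meta packages use a different version scheme, so they intentionally won't match.

```shell
# Hypothetical helper: extract the ABI number from a kernel
# source.changes filename, eg linux_2.6.32-24.41_source.changes -> 24
abi_of() {
    echo "$1" | sed -n 's/.*_[0-9.]*-\([0-9]*\)\.[0-9.]*_source\.changes$/\1/p'
}

abi_of linux_2.6.32-24.41_source.changes            # prints 24
abi_of linux-mvl-dove_2.6.31-214.30_source.changes  # prints 214
```

Compare the printed ABI against the version currently in the archive for the same source package; a difference means an ABI bump and therefore a meta package upload.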
- On chinstrap, upload the kernels (see the 'Setup' section below if publishing for the first time):
- If there is no ABI bump, run ~/bin/kern-up --real
- If ABI bump:
Take the output of ~/bin/kern-up and run the individual dput commands for each kernel and meta package, being careful to not upload any ABI-tracking packages at this time
- Wait for the kernels to build on all architectures
After the kernels are finished building, for each of the remaining ABI-tracking packages (as seen in the output of ~/bin/kern-up), run the dput commands for that package
If destined for the ubuntu-security-proposed PPA, take the output of ~/bin/kern-up from above (after uncommenting the pipe to sed) and run the individual dput commands
- On chinstrap, verify all the packages were uploaded by comparing the number of .source.changes files with the number of .upload files in the ~/sign directory:
$ ls -1 ~/sign/*_source.changes | wc -l
$ ls -1 ~/sign/*.upload | wc -l
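The count comparison can be wrapped in a tiny check. This is a sketch: check_uploads is a hypothetical helper, demonstrated on a throwaway directory standing in for ~/sign.

```shell
# Hypothetical helper: report whether every _source.changes file in a
# directory has a matching .upload file (ie, was actually uploaded)
check_uploads() {
    dir="$1"
    changes=$(ls -1 "$dir"/*_source.changes 2>/dev/null | wc -l)
    uploads=$(ls -1 "$dir"/*.upload 2>/dev/null | wc -l)
    if [ "$changes" -eq "$uploads" ]; then
        echo "ok: $changes uploads"
    else
        echo "MISMATCH: $changes changes vs $uploads uploads"
    fi
}

# demo on a throwaway directory standing in for ~/sign
d=$(mktemp -d)
touch "$d/linux_2.6.32-24.41_source.changes"
check_uploads "$d"    # reports a mismatch: no .upload yet
touch "$d/linux_2.6.32-24.41_source.changes.upload"
check_uploads "$d"    # reports ok
```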
Testing the kernel
Most testing is performed by the kernel team. The Ubuntu Security team should, at a minimum, do the following:
- Using copy_sppa_to_repos from UST, copy the kernels to your local repository. Please see the instructions at the top of copy_sppa_to_repos for different kernels, ABI-tracking packages and meta-packages.
- Using the meta packages, perform upgrade testing for all affected releases for both i386 and amd64. This can be done by ensuring linux-image-generic (linux-image-amd64-generic or linux-image-386 for Dapper) is installed, then performing an apt-get dist-upgrade to pull in the packages from your local repository.
- After upgrading, verify the following:
$ uname -a
$ cat /proc/version_signature   # for non-Dapper
- Verify the QRT test scripts for the kernel pass for both i386 and amd64. Run all $QRT/scripts/test-kernel*py scripts except test-kernel-hardening.py (as a convenience, $QRT/notes_testing/kernel/kernel-test-wrapper.sh can be used to automate these first 3 steps)
- Log into an Ubuntu desktop with the new kernel, and verify the basic desktop works (mouse, keyboard, display, networking and creating/editing a file)
If there are reproducers or test cases, try to forward them to the kernel team (or better yet, integrate them into QRT before they do their testing). Private reproducers will need to be tested by the Ubuntu Security team. When possible, include a regression test for the patched functionality along with the test to see if the bug is fixed (ie, "Did this fix the bug? Did this introduce a regression?"). It is probably a good idea to adjust test_updated_modules() in $QRT/scripts/test-kernel-root-ops.py for any modules that have been updated (this will perform a modinfo, modprobe and rmmod on the module).
Finally, using virtualization for testing is fine most of the time, but if the patch is for a problem with real hardware, every effort should be made to test the patch on that hardware.
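The "run all test-kernel*py scripts except test-kernel-hardening.py" step above can be scripted with a small loop. This is a sketch demonstrated on a throwaway directory; for real use, point it at $QRT/scripts and run the scripts instead of echoing.

```shell
# Stand-in for $QRT/scripts with illustrative script names
scripts=$(mktemp -d)
touch "$scripts/test-kernel-security.py" \
      "$scripts/test-kernel-hardening.py" \
      "$scripts/test-kernel-root-ops.py"

ran=""
for t in "$scripts"/test-kernel*py; do
    name=$(basename "$t")
    # skip the hardening tests, per the testing notes above
    [ "$name" = "test-kernel-hardening.py" ] && continue
    ran="$ran $name"
    echo "would run: $name"    # in real use: sudo python "$t" -v
done
```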
Publishing
In general, publication is the same as with other security updates. Keep in mind the following:
- Unembargo all non-meta packages at the same time, then after they are mirrored to security.ubuntu.com, upload the meta packages. This will ensure that people don't get a meta package that depends on a kernel that doesn't exist.
- While not required, you can use the pull-usn-desc.py tool from UCT. This is helpful since kernel updates typically have many CVEs to describe in the USN. Give it the CVE list that is in new-usn.sh and it will output example text that you can paste into new-usn.sh and edit. Eg:
$ cd $UCT
$ $UCT/scripts/pull-usn-desc.py --cve CVE-... --cve CVE-...
- The title for the USN should be 'Linux kernel vulnerabilities'
- The summary for the USN should be something like 'linux, linux-{ec2,fsl-imx51,mvl-dove,source-2.6.15,ti-omap} vulnerabilities'
- In the minimum binaries section of the USN, use linux-image*, omitting debug and dbgsym packages
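Filtering a binary list down to linux-image* without the debug packages can be sketched as follows (the package names below are illustrative, not from a real build):

```shell
# Illustrative binary package names from a kernel build; keep only
# the linux-image* packages, dropping debug/dbgsym variants
pkgs="linux-image-2.6.32-24-generic
linux-image-2.6.32-24-generic-dbgsym
linux-image-debug-2.6.32-24-generic
linux-headers-2.6.32-24"

keep=""
for p in $pkgs; do
    case "$p" in
        *dbgsym|linux-image-debug-*) ;;           # omit debug packages
        linux-image-*) keep="$keep $p"
                       echo "$p" ;;               # list in the USN
        *) ;;                                     # not an image package
    esac
done
```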
ABI bump for -security and -updates pockets
When a kernel is being built for -security that will introduce an ABI bump for both -security and -updates, the following items must be built in order:
- Build the ABI-bumping kernel in the security PPA.
- Build all ABI-tracked packages in the security PPA.
- Build the related kernel meta package in the security PPA.
When publishing, publish the kernel and ABI-tracked packages first, to avoid a situation where the meta package publishes successfully but something blocks the kernel packages. Once the kernel package publications are verified in the archive, the meta package can be safely published.
Once the kernel is published in -security, it can be pocket-copied normally to -updates.
ABI bump for -security pocket only
When a -proposed kernel has an ABI bump and makes it into -updates, then the next security update kernel will be an ABI bump for -security only users (since security fixes pull from -updates). Since this is an ABI bump for -security only users, the ABI meta source packages and ABI-tracking source packages must be copied from -updates to -security after all of the -security kernels are mirrored. Look at Kernel/Dev/ABIPackages for the list of packages to copy over. For example, if we have a security update for the 'master' kernel (ie, not arm, not backports, etc) and lucid-security currently has 2.6.32-25.45, lucid-updates has 2.6.32-26.47 and the pending lucid-security update has 2.6.32-26.48, then:
- Unembargo the pending lucid-security kernel as normal
- Wait for it to be fully mirrored to security.ubuntu.com
- Review Kernel/Dev/ABIPackages. In this example, the affected ABI meta and meta-tracking packages for the 'master' kernel are linux-meta, linux-ports-meta and linux-backports-modules-2.6.32.
- Have an archive admin copy the packages from -updates. In this example:
$ copy-package.py -vbs lucid-updates --to-suite=lucid-security linux-meta
$ copy-package.py -vbs lucid-updates --to-suite=lucid-security linux-ports-meta
$ copy-package.py -vbs lucid-updates --to-suite=lucid-security linux-backports-modules-2.6.32
Signed updates
From Ubuntu 14.04 forward, grub2 updates consist of two source packages: grub2 and grub2-signed. At a high level, publication requires:
- Uploading grub2 to -proposed or a signing ppa. As part of this build process, LP will generate 'signed efi artifacts'.
- Uploading grub2-signed source (with its Build-Depends updated for grub-efi-*-bin to use the new grub2 version) to -proposed or a signing PPA to be built. As part of this build process, the signed efi artifacts from the corresponding grub2 build will be pulled into the package.
- copying grub2 and grub2-signed binaries in lockstep to their destination (eg, -security, -updates or an ESM ppa)
In addition to grub2, the following packages use the same process:
- fwupd
- linux and linux-signed (though with different PPAs; for now only do this with kernel and release team assistance)
Publishing to the Ubuntu archive
- Upload grub2 source to one of the security team's PPAs in the normal way
- Upload grub2-signed source with an updated Build-Depends from the grub2 upload in step '1' to one of the security team's PPAs in the normal way
- At publication, copy grub2 source and binaries to -proposed to generate the signed efi artifacts, using the copy-package tool from ubuntu-archive-tools. Eg, for focal from the private security PPA:
$ cd $UAT && ./copy-package --include-binaries --from-suite focal --from ~ubuntu-security/ubuntu/ppa --to ubuntu --to-suite focal-proposed --unembargo -y grub2
- Accept the package from the Unapproved queue into -proposed via the Ubuntu archive queue. Eg, for focal, https://launchpad.net/ubuntu/focal/+queue?queue_state=1&queue_text=
- Accepting the package into -proposed in the last step triggers Launchpad to perform signing and generate the 'signed efi artifacts'. When they show up in the Unapproved queue, they must be accepted before moving to the next step
- Copy grub2-signed source (ie, omit --include-binaries) to -proposed (to build the grub2-signed binaries with the signed efi artifacts) using the copy-package tool from ubuntu-archive-tools. Once in -proposed, grub2-signed will build for each architecture the package supports; as part of the build, it pulls the signed efi artifacts for grub2 from -proposed into the resulting binaries. Eg, for focal from the private security PPA:
$ cd $UAT && ./copy-package --from-suite focal --from ~ubuntu-security/ubuntu/ppa --to ubuntu --to-suite focal-proposed --unembargo -y grub2-signed
- Once the grub2-signed package is built (verify with https://launchpad.net/ubuntu/+source/grub2-signed/+publishinghistory) and after performing any additional testing, copy the grub2 and grub2-signed source and binaries in lockstep from -proposed to the -security pocket. Eg, for focal:
$ cd $UAT && ./copy-package --from ubuntu --from-suite focal-proposed --to ubuntu --to-suite focal-security --unembargo --auto-approve -y grub2 grub2-signed
- Alternatively, simply use the sru-release tool. Eg:
$ cd $UAT && ./sru-release --security <release name> grub2 grub2-signed
Publishing to Ubuntu ESM
Publishing to ESM requires the same process as the Ubuntu archive except a) instead of copying to -proposed for signing the packages are copied to the ESM signing ppa and b) instead of publishing to the -security pocket the packages are copied to an ESM infra ppa.
Note: normally ESM packages are built in an ESM staging PPA, tested, and then pushed from the ESM staging PPA to ESM proper. While we could build in ESM infra staging, copy to ESM signing, and then push to ESM infra, we prefer for the signed packages to go to ESM infra staging for testing before pushing to ESM infra. As such, the below describes uploading first to the security private PPA (though any private PPA with only -security enabled will do), then copying to ESM signing, then pushing to ESM infra staging (for testing), and finally pushing to ESM infra.
- Upload grub2 source to one of the security team's non-ESM PPAs in the normal way
- Upload grub2-signed source with an updated Build-Depends from the grub2 upload in step '1' to one of the security team's non-ESM PPAs in the normal way
- At publication, copy grub2 source and binaries to the ESM signing PPA (https://launchpad.net/~canonical-signing/+archive/ubuntu/esm/+packages) using the copy-package tool from ubuntu-archive-tools. Once in the signing PPA, the grub2 package is shown with the 'gear icon'; as part of this process, the signed efi artifacts are generated behind the scenes (ie, unlike the archive queue, the signed artifacts can't be seen via the normal PPA pages). Eg, for trusty from the private security PPA:
$ cd $UAT && ./copy-package --include-binaries --from ~ubuntu-security/ubuntu/ppa --from-suite trusty --to ~canonical-signing/ubuntu/esm --to-suite trusty --unembargo --auto-approve -y grub2
- When the grub2 packages from step 3 are published to the ESM signing PPA (checkmark icon), copy grub2-signed source (ie, omit --include-binaries) to the ESM signing PPA (to build the grub2-signed binaries with the signed efi artifacts) using the copy-package tool from ubuntu-archive-tools. Once in the signing PPA, grub2-signed will build for each architecture the package supports; as part of the build, it pulls the signed efi artifacts for grub2 from the signing PPA into the resulting binaries. Eg, for trusty from the private security PPA:
$ cd $UAT && ./copy-package --from ~ubuntu-security/ubuntu/ppa --from-suite trusty --to ~canonical-signing/ubuntu/esm --to-suite trusty --unembargo --auto-approve -y grub2-signed
- Once the grub2-signed package is built, copy the grub2 and grub2-signed source and binaries in lockstep from the ESM signing PPA to the ESM infra staging PPA. Eg, for trusty:
$ cd $UAT && ./copy-package --include-binaries --from ~canonical-signing/ubuntu/esm --from-suite trusty --to ~ubuntu-esm/ubuntu/esm-infra-security-staging --to-suite trusty --unembargo --auto-approve -y grub2 grub2-signed
- Once verified and testing is complete, copy grub2 and grub2-signed from ESM infra staging to ESM infra in the normal way. Eg, for trusty:
$ cd $UAT && ./copy-package --include-binaries --from ~ubuntu-esm/ubuntu/esm-infra-security-staging --from-suite trusty --to ~ubuntu-esm/ubuntu/esm-infra-security --to-suite trusty -y grub2 grub2-signed
Image-based updates
Ubuntu Touch, Ubuntu Core and (the future) Ubuntu Personal do not use apt for upgrades and instead get their updates via system-image updates (Ubuntu Touch and Ubuntu Core 15.04) or kernel/OS snap updates (Ubuntu Core and Personal in 16 and higher). In general, like with archive updates, it is sufficient to test the security update on the appropriate channel with kvm and add device testing as needed (eg, testing NetworkManager on a phone image or an arm-specific bug on a beaglebone black Ubuntu Core image).
Ubuntu Core
Testing
Testing can normally happen in a VM. Series 16, 18 and 20 images are available on cdimage (see below for custom VM/image generation). Start with:
Ubuntu Core 16 and 18:
$ kvm -smp 2 -m 1500 -netdev user,id=mynet0,hostfwd=tcp::8022-:22,hostfwd=tcp::8090-:80 -device virtio-net-pci,netdev=mynet0 -drive file=ubuntu-core-16-amd64.img,format=raw
$ ssh -p 8022 ubuntu@localhost
Ubuntu Core 20:
$ kvm -smp 2 -m 1500 -netdev user,id=mynet0,hostfwd=tcp::8022-:22,hostfwd=tcp::8090-:80 -device virtio-net-pci,netdev=mynet0 -drive if=virtio,file=ubuntu-core-20-amd64.img,format=raw -bios /usr/share/OVMF/OVMF_CODE.fd
Series 16+
See https://developer.ubuntu.com/core/get-started/kvm for details.
Some tips:
- On first boot, configure the machine to use your LP-configured ssh key
- Perform local installs with:
$ sudo snap install --dangerous /path/to/snap
- Update snaps with (a systemd timer is set to do this automatically):
$ sudo snap refresh
- change the hostname by editing /etc/hostname
- change timezone by adjusting /etc/writable/timezone and running 'sudo ln -sf /usr/share/zoneinfo/YOUR/Timezone /etc/writable/localtime'
- configure NTP by adjusting /etc/systemd/timesyncd.conf and restarting systemd-timesyncd
- subiquity/cloud-init is used to set up the user with the ssh keys from Launchpad. This gives ssh access but not console logins; the user is not an admin user but can use passwordless sudo. Use 'sudo adduser --extrausers' to add new users (deluser/userdel don't understand --extrausers yet, so you'll have to adjust the files in /var/lib/extrausers directly to remove users).
- to enable console logins:
Set a password:
$ sudo passwd <username>
adjust /etc/sudoers.d/create-user-<username> to have (ie, use 'ALL' instead of 'NOPASSWD:ALL'):
<username> ALL=(ALL) ALL
- may want to adjust /etc/ssh/sshd_config for no passwords, logins from certain hosts, etc
- the serial vault can be used to configure with a system-user (and avoid connections to LP)
Most of the above will eventually be configurable via 'snap get core' / 'snap set core'.
On early series 16 images netplan was used but it wasn't complete: every reboot resulted in a new IP address even though the MAC didn't change. This shouldn't be an issue with newer images, but if it comes up it can be worked around with:
$ virsh net-edit default
# add 'host' entries to the dhcp section:
...
<dhcp>
  <range start='192.168.122.2' end='192.168.122.254'/>
  <host mac='52:54:00:52:ea:29' name='snappy-16-amd64' ip='192.168.122.64'/>
</dhcp>
...
$ virsh net-destroy default
$ virsh net-start default
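The tip above about removing users by editing the files in /var/lib/extrausers directly can be sketched as follows. This is a sketch demonstrated on a copy of the passwd file in a temporary directory, not the live database; on a real system the same pattern would be applied (as root) to passwd, shadow and group.

```shell
# Stand-in for /var/lib/extrausers with illustrative entries
db=$(mktemp -d)
printf '%s\n' \
    'alice:x:1001:1001::/home/alice:/bin/bash' \
    'bob:x:1002:1002::/home/bob:/bin/bash' > "$db/passwd"

user=alice
sed -i "/^$user:/d" "$db/passwd"   # drop the user's entry
cat "$db/passwd"                   # only bob's line remains
```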
15.04
Some tips:
- Disable autopilot if desired (not to be confused with autopilot GUI testing; this mechanism does not exist on series 16):
$ cat > /tmp/disable-autopilot.sh <<EOM
#!/bin/sh
snappy config ubuntu-core > "/tmp/core_config"
sed -i 's/autopilot: .*$/autopilot: false/' "/tmp/core_config"
sudo snappy config ubuntu-core "/tmp/core_config"
sudo reboot
EOM
$ sh /tmp/disable-autopilot.sh
After this you can perform manual updates with sudo snappy update.
- Perform local installs with:
$ sudo snappy install /path/to/snap
- change the hostname using 'snappy config', as with the autopilot setting above
- change the timezone using 'snappy config', as with the autopilot setting above
- copy your ssh key over with ssh-copy-id (not needed with series 16 subiquity, just specify launchpad id)
Integrating into uvt
- Download/uncompress an official image or generate an image (see below)
- convert the raw img into a qcow2. Eg: qemu-img convert -f raw ./snappy-16-amd64.img -O qcow2 snappy-16-amd64.qcow2
- virsh dumpxml an existing VM for the same release (eg, virsh dumpxml sec-xenial-amd64 > /tmp/xml)
- remove the uuid and the MAC address in /tmp/xml, then adjust the name to what you want (eg, snappy-16-amd64), then point the disk to your converted qcow2
- If using the core 18 release image, you also need to adjust the type='file' device='disk' disk tag, as the image does not support virtio disks:
...
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/snappy-18-amd64.qcow2'/>
  <target dev='hda' bus='ide'/>
  <address type='drive' controller='0' bus='0' unit='0'/>
</disk>
<controller type='ide' index='0'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
...
- If your xml defines a cdrom, you should either remove the cdrom stanza or adjust it to be 'hdb' (since the qcow2 is now hda) and increment the unit. Eg:
...
<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hdb' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
...
- If using the core 20 release image, you also need to configure the os loader tag, as the core 20 release must use OVMF for its BIOS:
...
<memory unit='KiB'>1572864</memory>
<currentMemory unit='KiB'>1572864</currentMemory>
<os>
  <type arch='x86_64' machine='pc-i440fx-focal'>hvm</type>
  <boot dev='hd'/>
  <loader type='rom'>/usr/share/OVMF/OVMF_CODE.fd</loader>
</os>
...
- virsh define /tmp/xml
- create a snapshot: uvt snapshot snappy-16-amd64
- At this point you can use it with uvt like normal. Some notes to consider:
- the core 18 release image may hang during boot because of a lack of entropy. Simply tap shift over and over until the boot continues
- the core 18 release image ships with broken symlinks for /etc/hostname, /etc/localtime and /etc/timezone
- the core 20 bootstrap process will reboot the image before presenting with the typical configure prompts
- the core 20 release image ships with broken symlinks for /etc/hostname, /etc/localtime and /etc/timezone
- the core 18 release image is very stripped down and does not ship with 'vim', but it does have 'vi'. If you want vim, you can:
$ snap install extract-deb
$ snap connect extract-deb:home
$ extract-deb.download --arch amd64 vim-tiny
$ ./unpack/usr/bin/vim.tiny
- A few things you might want to do (also see 'Image testing tips'):
- disable auto updates as per above (15.04)
- change the hostname
- change the timezone
- update your host's ~/.ssh/config. Eg:
Host snappy-16-amd64
    User <your-launchpad-id>
Image generation (16 and higher)
Using existing GA images
By far the easiest way to get started is to use the official images. There are official images for amd64, i386, pi2 (armhf), pi3 (armhf) and dragonboard (arm64).
The steps are:
- Download an image and flash it to a device, boot it with kvm or integrate it into uvt (see above)
- Start the image the first time, configure networking, etc
- Run 'sudo snap refresh' and reboot
If you want another channel (eg, 'edge'):
$ sudo snap refresh --edge core
$ sudo snap refresh --edge pc-kernel
$ sudo snap refresh --edge pc
Generating kvm images (16)
- Install ubuntu-image (currently needs the edge channel):
$ sudo snap install --channel=edge --devmode ubuntu-image
- Download the amd64 model assertion from http://people.canonical.com/~vorlon/official-models/
- Generate the image:
edge:
$ sudo ubuntu-image --image-size=8G -c edge -o snappy-amd64.img pc-amd64-model.assertion
beta:
$ sudo ubuntu-image --image-size=8G -c beta -o snappy-amd64.img pc-amd64-model.assertion
stable (emergency):
$ sudo ubuntu-image --image-size=8G -c stable -o snappy-amd64.img pc-amd64-model.assertion
See https://github.com/CanonicalLtd/snappy-docs/blob/master/core/images.md for more information.
http://cdimage.ubuntu.com/ubuntu-core/xenial/daily-preinstalled/current/ has pregenerated images.
Generating raspberry pi2 images (16)
Generating the image itself is the same as with 'Generating kvm images (16)' (above), except choose the pi2-model.assertion model assertion. Eg:
Generate the image:
$ sudo ubuntu-image -c beta -o snappy-pi2.img pi2-model.assertion
flash it: sudo dd if=snappy-pi2.img of=/dev/sdX bs=32M && sync
Note: depending on your host hardware, the device may either be /dev/sdX or /dev/mmcblkX
Note 2: need to connect a keyboard and monitor for console-conf. It downloads your ssh key from LP for the email address you provide
Note 3: may want to use the 'godd' snap for flashing. Eg: cat snappy-pi2.img | sudo godd - /dev/sdX
Generating dragonboard images (16)
Generating the image itself is the same as with 'Generating kvm images (16)' (above), except choose the dragonboard-model.assertion model assertion. Eg:
Generate the image:
$ sudo ubuntu-image -c beta -o snappy-dragonboard.img dragonboard-model.assertion
flash it: sudo dd if=snappy-dragonboard.img of=/dev/sdX bs=32M && sync
Note: depending on your host hardware, the device may either be /dev/sdX or /dev/mmcblkX
Note 2: make sure the SD card dipswitch is switched on for SD card booting
Note 3: may want to use the 'godd' snap for flashing. Eg: cat snappy-dragonboard.img | sudo godd - /dev/sdX
Note 4: need to connect a keyboard and monitor for console-conf. It downloads your ssh key from LP for the email address you provide
Generating beaglebone black images (16)
Beaglebone Black is a community kernel and gadget and is not officially supported. As such, you need to create your own model assertion and specify 'linux-generic-bbb' as the kernel snap and 'bbb' as the gadget snap. See https://docs.ubuntu.com/core/en/guides/build-device/board-enablement for details of creating a snap signing key and how to create the model assertion.
Assuming the model assertion is named 'my-bbb-model.assertion', then generating the image itself is the same as with 'Generating kvm images (16)' (above), except choose the my-bbb-model.assertion model assertion. Eg:
Generate the image:
$ sudo ubuntu-image -c edge -o snappy-bbb-stable.img my-bbb-model.assertion
flash it: sudo dd if=snappy-bbb.img of=/dev/sdX bs=32M && sync
http://people.canonical.com/~ogra/snappy/all-snaps/daily/current/ has pregenerated images.
Note: at the time of this writing, you need an SD card (eg, 8G) instead of internal storage
Note 2: depending on your host hardware, the device may either be /dev/sdX or /dev/mmcblkX
Note 3: it is recommended to remove the eMMC bootloader with sudo dd if=/dev/zero of=/dev/mmcblk1 bs=1024 count=1024. You may want to make a backup of this (eg, sudo dd if=/dev/mmcblk1 of=./bbb.emmc bs=1024 count=1024). See the snappy-devel mailing list for details.
Note 4: may want to use the 'godd' snap for flashing. Eg: cat snappy-bbb.img | sudo godd - /dev/sdX
Note 5: unless using a serial console, need to connect a keyboard and monitor for console-conf. It downloads your ssh key from LP for the email address you provide
Note 6: since you can't install an unasserted kernel over an asserted one, when testing a new kernel, you might need to build an image and kernel so you can later perform locally unasserted kernel installs
Using external drive for writable partition
Sometimes it might be handy to boot off of the SD card but use an external USB hard drive for the 'writable' partition. From irc:
#snappy: <@ogra> jdstrand, do you actually want to boot without SD ? (sorry, i'm at a trade show so only saw your ping now) ... if you can live with SD, just copy the writable partition over to the USB drive, re-label the SD one to "writable-old" and make sure the USB one is labeled "writable" ...
Eg, on a raspberry pi3, this procedure could be used:
- shut down the device, remove the SD card and insert it into another computer to relabel it. Eg, assuming the SD shows up as /dev/mmcblk0... on your computer:
$ sudo umount /dev/mmcblk0p1   # unmount the automounted partitions
$ sudo umount /dev/mmcblk0p2
$ sudo e2label /dev/mmcblk0p2  # show current label
writable
$ sudo e2label /dev/mmcblk0p2 writable-old
- partition an external drive with a single partition using gdisk. Here is some output showing the end result for an external disk (/dev/sda) that has been modified in this way:
$ sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.4

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 3906963456 sectors, 1.8 TiB
Model: easystore 25FC
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 5715C8FF-5530-490A-937D-5D1101095E0A
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3906963422
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      3906961407   1.8 TiB     8300
$ sudo e2label /dev/sda1 writable
- copy over everything from writable-old (eg, /dev/mmcblk0p2 in the above) to writable (eg, /dev/sda1 in the above)
- put the SD card in the device, plug the USB drive into the device and boot. Then check df output to confirm writable is on the external drive:
$ df
...
/dev/sda1      1921770696  1445492  1822634836   1% /writable
...
/dev/mmcblk0p1     129039    45023       84016  35% /boot/uboot
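The "copy over everything" step above can be sketched with cp -a, which preserves ownership, permissions and symlinks. This is demonstrated on temporary directories; on the device the sources would be the mounted writable-old and writable partitions (eg, hypothetical mount points such as /mnt/old and /mnt/new).

```shell
# Temporary directories standing in for the mounted partitions
old=$(mktemp -d)   # would be the mounted writable-old partition
new=$(mktemp -d)   # would be the mounted writable partition

mkdir -p "$old/system-data/etc"
echo myhost > "$old/system-data/etc/hostname"

# -a preserves ownership, permissions and symlinks; "$old/." copies
# the directory's contents (including dotfiles), not the directory
cp -a "$old/." "$new/"
cat "$new/system-data/etc/hostname"    # prints: myhost
```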
Image generation (15.04)
This is gleaned from the upstream snappy documentation.
Before you start with anything, you'll need to install some tools from the snappy-dev-tools PPA. See the getting started page for details.
You can see all the available images with (there may be additional historical channels in the output, but the below is what you should use):
$ ubuntu-device-flash query --list-channels --device=generic_amd64 | grep core
ubuntu-core/15.04/edge
ubuntu-core/15.04/stable
ubuntu-core/16/edge
ubuntu-core/16/stable
ubuntu-core/rolling/alpha
ubuntu-core/rolling/edge
Typically you'll use these channels for testing security updates:
- development branch: ubuntu-core/rolling/edge (or ubuntu-core/rolling/alpha when it exists)
- 15.04 release (batched updates): ubuntu-core/15.04/alpha (or ubuntu-core/15.04/edge if needed)
- 15.04 release (emergency updates): ubuntu-core/15.04/stable
Generating kvm images (15.04)
rolling release (trunk):
$ sudo ubuntu-device-flash core --channel=ubuntu-core/rolling/edge --size=8 --enable-ssh --output=rolling-edge.img
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=edge --size=8 --enable-ssh --output=rolling-edge.img rolling
15.04 edge (SRU development):
$ sudo ubuntu-device-flash core --channel=ubuntu-core/15.04/edge --size=8 --enable-ssh --output=15.04-edge.img
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=edge --size=8 --enable-ssh --output=15.04-edge.img 15.04
15.04 alpha (batched):
$ sudo ubuntu-device-flash core --channel=ubuntu-core/15.04/alpha --size=8 --enable-ssh --output=15.04-alpha.img
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=alpha --size=8 --enable-ssh --output=15.04-alpha.img 15.04
15.04 stable release (emergency):
$ sudo ubuntu-device-flash core --channel=ubuntu-core/15.04/stable --size=8 --enable-ssh --output=15.04-stable.img
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=stable --size=8 --enable-ssh --output=15.04-stable.img 15.04
Useful options:
- use --developer-mode to not require --allow-unauthenticated for sideloading
- use --device=generic_i386 for i386 kvm images (not primary target)
- you may want to choose a different stability level than 'alpha' depending on the situation
That's it (see 'Ubuntu Core/Testing' above).
Generating beaglebone black images (15.04)
devel release:
$ sudo ubuntu-device-flash core --channel=ubuntu-core/rolling/edge --oem=beagleblack --enable-ssh --output=rolling-edge.bbb
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=edge --oem=beagleblack --enable-ssh --output=rolling-edge.bbb rolling
$ sudo dd if=rolling-edge.bbb of=/dev/sdX bs=32M && sync
15.04 edge (SRU development):
$ sudo ubuntu-device-flash core --channel=ubuntu-core/15.04/edge --oem=beagleblack --enable-ssh --output=15.04-edge.bbb
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=edge --oem=beagleblack --enable-ssh --output=15.04-edge.bbb 15.04
$ sudo dd if=15.04-edge.bbb of=/dev/sdX bs=32M && sync
15.04 alpha (batched):
$ sudo ubuntu-device-flash core --channel=ubuntu-core/15.04/alpha --oem=beagleblack --enable-ssh --output=15.04-alpha.bbb
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=alpha --oem=beagleblack --enable-ssh --output=15.04-alpha.bbb 15.04
$ sudo dd if=15.04-alpha.bbb of=/dev/sdX bs=32M && sync
15.04 release (emergency):
$ sudo ubuntu-device-flash core --channel=ubuntu-core/15.04/stable --oem=beagleblack --enable-ssh --output=15.04-stable.bbb
# the above doesn't work due to LP: #1458006. Use this instead:
$ sudo ubuntu-device-flash core --channel=stable --oem=beagleblack --enable-ssh --output=15.04-stable.bbb 15.04
$ sudo dd if=15.04-stable.bbb of=/dev/sdX bs=32M && sync
Note: you need an sdcard (eg 8G) instead of using internal storage at the time of this writing
Note 2: depending on your host hardware, the device may either be /dev/sdX or /dev/mmcblkX
Note 3: it is recommended to remove the eMMC bootloader with sudo dd if=/dev/zero of=/dev/mmcblk1 bs=1024 count=1024. You may want to make a backup of this (eg, sudo dd if=/dev/mmcblk1 of=./bbb.emmc bs=1024 count=1024). See the snappy-devel mailing list for details.
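The backup-then-wipe pattern from Note 3 can be tried safely against a scratch file instead of the real eMMC device. Everything below (the bbb-emmc.img scratch file, its 4MiB size) is illustrative only; substitute /dev/mmcblk1 on real hardware:

```shell
# Safe sketch of Note 3: back up then zero the first 1MiB "bootloader" region
# of a scratch file (bbb-emmc.img), standing in for the real /dev/mmcblk1.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# stand-in for the eMMC: 4MiB of random data
dd if=/dev/urandom of=bbb-emmc.img bs=1024 count=4096 2>/dev/null

# 1. back up the first 1MiB (as in: sudo dd if=/dev/mmcblk1 of=./bbb.emmc bs=1024 count=1024)
dd if=bbb-emmc.img of=bbb.emmc bs=1024 count=1024 2>/dev/null

# 2. zero the same region (as in: sudo dd if=/dev/zero of=/dev/mmcblk1 bs=1024 count=1024)
#    conv=notrunc keeps the rest of the image intact
dd if=/dev/zero of=bbb-emmc.img conv=notrunc bs=1024 count=1024 2>/dev/null

# verify: the region is now all zeroes and the backup holds 1MiB of old bytes
dd if=bbb-emmc.img of=region.out bs=1024 count=1024 2>/dev/null
dd if=/dev/zero of=zeros.ref bs=1024 count=1024 2>/dev/null
cmp -s region.out zeros.ref && echo "bootloader region zeroed"
echo "backup size: $(stat -c %s bbb.emmc) bytes"
```

To restore, the same dd runs in reverse (dd if=bbb.emmc of=/dev/mmcblk1 conv=notrunc bs=1024 count=1024).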
Publishing (userspace)
In general, Ubuntu Core stable images/OS snaps are released on a cadence and therefore will bundle security updates every few weeks. See Note for cloud images, cloud archive, Ubuntu Core and/or Ubuntu Touch for details.
Triage
When packages are in the image PPA or the candidate images, the CVE for the affected release should be marked pending (<version>). Eg:
Candidate: CVE-2015-8317
References:
 http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8317
 http://www.ubuntu.com/usn/usn-2834-1
Description:
 The xmlParseXMLDecl function in parser.c in libxml2 before 2.9.3 allows
 context-dependent attackers to obtain sensitive information via an (1)
 unterminated encoding value or (2) incomplete XML declaration in XML data,
 which triggers an out-of-bounds heap read.
...
vivid_libxml2: released (2.9.2+dfsg1-3ubuntu0.2)
vivid/ubuntu-core_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
...
Notice that a USN was issued (2834-1) for vivid and vivid_libxml2 is marked released; however, vivid/ubuntu-core_libxml2 is marked pending (2.9.2+dfsg1-3ubuntu0.2) because that version has not yet been provided in an OTA update for the stable images.
When a new OTA stable update is available, the triager should:
- verify in the image or OS snap manifest what was included. On 15.04 images you can use dpkg -l (be sure to pipe the output to another program such as grep, or redirect it to a file, to avoid truncation); 16 and higher has this information in /usr/share/snappy/dpkg.list for the OS snap and /snaps/canonical-*-linux.canonical/current/dpkg.list for the kernel snap. Eg, check if the manifest for the latest OS snap included libxml2 2.9.2+dfsg1-3ubuntu0.2
- If the manifest has the new release, see what CVEs it fixes with cd $UCT ; grep 'vivid/ubuntu-core_<srcpkg>: pending (' ./active/CVE*. Eg:
$ grep 'vivid/ubuntu-core_libxml2: pending (' ./active/CVE*
./active/CVE-2015-7942:vivid/ubuntu-core_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
./active/CVE-2015-8241:vivid/ubuntu-core_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
./active/CVE-2015-8242:vivid/ubuntu-core_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
./active/CVE-2015-8317:vivid/ubuntu-core_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
- if and only if the image/OS snap has a fixed version, use $UCT/scripts/mass-cve-edit for any fixed CVEs. Eg:
$ cd $UCT ; ./scripts/mass-cve-edit -p libxml2 -r vivid/ubuntu-core -s released -v 2.9.2+dfsg1-3ubuntu0.2 CVE-2015-8317 CVE-2015-8242 CVE-2015-8241 CVE-2015-7942
CVE-2015-8317... vivid/ubuntu-core_libxml2 updated
CVE-2015-8242... vivid/ubuntu-core_libxml2 updated
CVE-2015-8241... vivid/ubuntu-core_libxml2 updated
CVE-2015-7942... vivid/ubuntu-core_libxml2 updated
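The pending-to-released flow above can be sketched end to end with throwaway files (a fake ./active/ directory rather than a real ubuntu-cve-tracker checkout; the file contents are illustrative):

```shell
# Collect the CVE IDs still marked pending for vivid/ubuntu-core so they can
# be handed to mass-cve-edit in a single invocation. Uses fake tracker files.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/active"
cd "$workdir"

# fake tracker entries: two still pending, one already released
echo 'vivid/ubuntu-core_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)'  > active/CVE-2015-8317
echo 'vivid/ubuntu-core_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)'  > active/CVE-2015-8242
echo 'vivid/ubuntu-core_libxml2: released (2.9.2+dfsg1-3ubuntu0.2)' > active/CVE-2015-7942

# same grep as in the triage step, reduced to the bare CVE IDs
cves=$(grep -l 'vivid/ubuntu-core_libxml2: pending (' ./active/CVE-* | xargs -n1 basename | sort)
echo "$cves"

# these IDs would then feed the real invocation:
#   ./scripts/mass-cve-edit -p libxml2 -r vivid/ubuntu-core -s released \
#       -v 2.9.2+dfsg1-3ubuntu0.2 $cves
```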
Publishing (kernel)
In general, Ubuntu Core stable images/kernel snaps are released on a cadence and therefore will bundle security updates every few weeks. See Note for cloud images, cloud archive, Ubuntu Core and/or Ubuntu Touch for details.
Touch
Testing
The summary here was gleaned from:
https://developer.ubuntu.com/en/start/ubuntu-for-devices/installing-ubuntu-for-devices/
https://developer.ubuntu.com/en/start/ubuntu-for-devices/image-channels/
You can see all the available images with (there may be additional historical channels in the output, but the below should be what you should use):
$ ubuntu-device-flash query --list-channels --device=mako | grep touch
ubuntu-touch/devel/ubuntu
ubuntu-touch/devel/ubuntu-developer
ubuntu-touch/devel-proposed/ubuntu-developer
ubuntu-touch/devel-proposed/ubuntu
ubuntu-touch/rc/ubuntu
ubuntu-touch/rc/ubuntu-developer
ubuntu-touch/rc-proposed/ubuntu
ubuntu-touch/rc-proposed/ubuntu-developer
ubuntu-touch/stable/ubuntu
ubuntu-touch/stable/ubuntu-developer
ubuntu-touch/stable-proposed/ubuntu
ubuntu-touch/stable-proposed/ubuntu-developer
Typically you'll use these channels for testing security updates:
- development branch: ubuntu-touch/devel-proposed/ubuntu
- staging branch (future overlay): ubuntu-touch/staging/ubuntu
- stable-phone-overlay (batched updates): ubuntu-touch/rc-proposed/ubuntu
- stable (emergency updates): ubuntu-touch/stable/ubuntu
Sideload click apps with:
$ pkcon install-local --allow-untrusted /path/to/click
Flashing devices
devel:
$ ubuntu-device-flash touch --channel=ubuntu-touch/devel-proposed/ubuntu # first time, add --bootstrap
stable-phone-overlay:
$ ubuntu-device-flash touch --channel=ubuntu-touch/rc-proposed/ubuntu # first time, add --bootstrap
stable:
$ ubuntu-device-flash touch --channel=ubuntu-touch/stable/ubuntu # first time, add --bootstrap
Generating emulator images
First install the necessary packages:
$ sudo apt-get install ubuntu-emulator ubuntu-emulator-runtime
Then create images:
devel:
$ sudo ubuntu-emulator create touch.devel --arch=i386 --channel=ubuntu-touch/devel-proposed/ubuntu --password=0000
stable-phone-overlay:
$ sudo ubuntu-emulator create touch.rc-proposed --arch=i386 --channel=ubuntu-touch/rc-proposed/ubuntu --password=0000
stable (NOTE: as of 2016/01/27 stable emulator images are extremely out of date due to Bug #1535583 and Bug #1517597):
$ sudo ubuntu-emulator create touch.stable --arch=i386 --channel=ubuntu-touch/stable/ubuntu --password=0000
A few potentially useful options:
- --memory=<n> if you want more than 512MB of ram
- --revision=## to specify a revision
- specify ubuntu-touch/<stability level>/ubuntu-developer for developer mode
You can run the emulator with:
$ ubuntu-emulator run stable.x86
$ ubuntu-emulator run rc-proposed.x86
$ ubuntu-emulator run --scale=0.75 devel-proposed.x86
Some things to keep in mind:
- it takes a while for the emulator to come up
- swiping can be done with the mouse, but you only have a few pixels to work with from the edge to register the swipe
- you need to do the initial setup of the device, unlock screen and enable developer mode in System Settings/About this phone/Developer mode to be able to connect with adb/phablet-shell
- the screen needs to be unlocked to connect with adb/phablet-shell
- the default username is 'ubuntu' with PIN of '0000'
Publishing (userspace)
In general, Ubuntu Touch images are released on a cadence and therefore will bundle security updates every few weeks. See Note for cloud images, cloud archive, Ubuntu Core and/or Ubuntu Touch for details.
Ubuntu Touch images are built using the Ubuntu archive as a base, with a PPA overlay for anything additional or updated that should be in the image. The current Ubuntu Touch images are built with Ubuntu 15.04 and the vivid packages from the stable-phone-overlay. Security and SRU updates to the Ubuntu archive for the base release will automatically flow into the daily rc-proposed images for the next OTA update if the updated package doesn't exist in the PPA. When the package does exist in the PPA, it will need to be updated using the standard silo process. The basic steps are (TODO: clean this up):
- build packages with stable-phone-overlay sources (both vivid (rc-proposed) and wily (devel-proposed))
- request silo in the direct upload way (document)
- upload to silo, wait for them to finish building
- do a 'watchonly' build step
- grab the packages and test in the emulator/on a device
- note the testing performed in a comment, edit the ticket to say 'QA requested'
- when they signoff, do 'Publish' and sign off on packaging changes
Triage
When packages are in the stable-phone-overlay the CVE for the affected release should be marked pending (<version>). Eg:
Candidate: CVE-2015-8317
References:
 http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8317
 http://www.ubuntu.com/usn/usn-2834-1
Description:
 The xmlParseXMLDecl function in parser.c in libxml2 before 2.9.3 allows
 context-dependent attackers to obtain sensitive information via an (1)
 unterminated encoding value or (2) incomplete XML declaration in XML data,
 which triggers an out-of-bounds heap read.
...
vivid_libxml2: released (2.9.2+dfsg1-3ubuntu0.2)
vivid/stable-phone-overlay_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
...
Notice that a USN was issued (2834-1) for vivid and vivid_libxml2 is marked released; however, vivid/stable-phone-overlay_libxml2 is marked pending (2.9.2+dfsg1-3ubuntu0.2) because that version has not yet been provided in an OTA update for the stable images.
When a new OTA stable update is available, the triager should:
- update a device or emulator to the latest OTA. Eg:
$ system-image-cli -i
current build number: 9
channel: ubuntu-touch/stable/meizu.en
version version: 9
version tag: OTA-9
...
- phablet-shell (or adb shell) in one terminal and search for the package with apt-cache policy <binpkg> or dpkg -l | grep <version> (be sure to always pipe dpkg -l output to another program such as grep, or redirect it to a file, to avoid truncation). Eg:
$ apt-cache policy libxml2
libxml2:
  Installed: 2.9.2+dfsg1-3ubuntu0.2
  Candidate: 2.9.2+dfsg1-3ubuntu0.2
  Version table:
 *** 2.9.2+dfsg1-3ubuntu0.2 0
        100 /var/lib/dpkg/status
- cd $UCT ; grep 'vivid/stable-phone-overlay_<srcpkg>: pending (' ./active/CVE* in one terminal. Eg:
$ grep 'vivid/stable-phone-overlay_libxml2: pending (' ./active/CVE*
./active/CVE-2015-7942:vivid/stable-phone-overlay_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
./active/CVE-2015-8241:vivid/stable-phone-overlay_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
./active/CVE-2015-8242:vivid/stable-phone-overlay_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
./active/CVE-2015-8317:vivid/stable-phone-overlay_libxml2: pending (2.9.2+dfsg1-3ubuntu0.2)
- if and only if the overlay has a fixed version, use $UCT/scripts/mass-cve-edit for any fixed CVEs. Eg:
$ cd $UCT ; ./scripts/mass-cve-edit -p libxml2 -r vivid/stable-phone-overlay -s released -v 2.9.2+dfsg1-3ubuntu0.2 CVE-2015-8317 CVE-2015-8242 CVE-2015-8241 CVE-2015-7942
CVE-2015-8317... vivid/stable-phone-overlay_libxml2 updated
CVE-2015-8242... vivid/stable-phone-overlay_libxml2 updated
CVE-2015-8241... vivid/stable-phone-overlay_libxml2 updated
CVE-2015-7942... vivid/stable-phone-overlay_libxml2 updated
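The version check in the steps above can be sketched with canned apt-cache policy output (no live device needed; the strings below are illustrative):

```shell
# Compare the installed version reported by apt-cache policy on the device
# against the version the tracker has marked pending; only flip the CVEs to
# released when they match. The policy text here is canned example data.
policy='libxml2:
  Installed: 2.9.2+dfsg1-3ubuntu0.2
  Candidate: 2.9.2+dfsg1-3ubuntu0.2'
pending='2.9.2+dfsg1-3ubuntu0.2'

# pull the version out of the "Installed:" line
installed=$(printf '%s\n' "$policy" | awk '/Installed:/ {print $2}')

if [ "$installed" = "$pending" ]; then
  echo "OTA contains $installed; ok to run mass-cve-edit"
else
  echo "still waiting: installed=$installed, pending=$pending"
fi
```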
Publishing (kernel)
In general, Ubuntu Touch images are released on a cadence and therefore will bundle security updates every few weeks. See Note for cloud images, cloud archive, Ubuntu Core and/or Ubuntu Touch for details.
Ubuntu Touch kernel images are vendor kernels and are not built from the Ubuntu archive. In general, the Ubuntu Security and Kernel teams will track and file bugs for these product kernels and alert the responsible internal Canonical team to update them. See the UCT documentation for more information on working with product kernels.
Mozilla
Patching and Building
Patching and building is currently the responsibility of the ubuntu-mozillateam, specifically ChrisCoulson as a backup. Mozilla products have a standing MicroReleaseException, so the ubuntu-mozillateam will get official tarballs from upstream, add/update the debian/ directory and push to the ubuntu-mozilla-security PPA. The ubuntu-mozillateam will also ask for a USN to put in the changelog prior to preparing updates, since upstream does not make security vulnerabilities public prior to release. People reading the changelog are then able to see the USN and look up the details online. Once packages are built, you can use the standard Ubuntu Security team's tools for publication (using the --ppa=ubuntu-mozilla-security option where appropriate).
The tarballs are built using a script in the packaging branches. To run the script, look in the upstream Mozilla tree to find the build tag of interest and then invoke debian/rules accordingly:
$ debian/rules get-orig-source DEBIAN_TAG=FIREFOX_42_0_BUILD2
The packaging branches can be found under the ~mozillateam's Launchpad code page:
- lp:~mozillateam/firefox/firefox.precise
- lp:~mozillateam/firefox/firefox.trusty
- lp:~mozillateam/firefox/firefox.vivid
- lp:~mozillateam/firefox/firefox.wily
- lp:~mozillateam/firefox/firefox.xenial
There are corresponding branches for thunderbird.
Once the tarball is created, build the first package:
$ bzr-buildpackage -S -sa
For the remaining packages, use:
$ bzr-buildpackage -S -sd
Upstream Mozilla does not give us access to their security bugs, and we get all of our information from upstream's MFSAs (Mozilla Foundation Security Advisories). As such, when an update fixes security issues that cannot yet be referenced publicly (ie, CVEs have not yet been assigned or disclosed), the developer that uploads packages to the ubuntu-mozilla-security PPA will file a placeholder bug to be used in the changelog. At time of publication, this placeholder bug is updated with a link to the USN. Changelog entries are therefore shortened and typically consist of something like:
firefox (14.0.1+build1-0ubuntu0.12.04.1) precise-security; urgency=low

  * New upstream stable release (FIREFOX_14_0_1_BUILD1)
    - see LP: #1024562 for USN information
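At publication time the placeholder bug has to be found again from the changelog so it can be updated with the USN link; a minimal sketch (with the changelog entry hard-coded rather than read from a source package):

```shell
# Extract the placeholder LP bug number from a changelog entry like the one
# above, e.g. to update that bug with the USN link at publication time.
changelog='firefox (14.0.1+build1-0ubuntu0.12.04.1) precise-security; urgency=low

  * New upstream stable release (FIREFOX_14_0_1_BUILD1)
    - see LP: #1024562 for USN information'

# grep the first "LP: #NNNNNNN" reference, then strip everything but digits
bug=$(printf '%s\n' "$changelog" | grep -o 'LP: #[0-9]*' | head -n1 | tr -dc '0-9')
echo "placeholder bug: LP #$bug"
```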
Firefox and Thunderbird, while they may share some of the same CVEs, will have separate '-1' preallocated USNs.
Mozilla Pretesting Schedule
Pretesting of Firefox and Thunderbird is performed on the Aurora (pre-beta) and Beta channels. The schedule below starts on the Mozilla release week.
Week | Day | Channel | Release | Arch | Notes
1 | Monday | Release | All | Both | Coordinate with ChrisCoulson to guarantee packages will be ready
1 | Tuesday | Release Day | | |
1 | Friday | Beta | Natty | i386 |
2 | Tuesday | Aurora | Lucid | amd64 |
2 | Friday | Beta | Oneiric | i386 |
3 | Tuesday | Aurora | Oneiric | amd64 |
4 | Tuesday | Beta | Natty | amd64 |
4 | Friday | Aurora | Precise | i386 |
5 | Tuesday | Beta | Precise | amd64 |
6 | Tuesday | Beta | Lucid | i386 |
The Launchpad PPAs corresponding to the various Mozilla channels are listed below:
Mozilla Channel | Ubuntu Source Pkg | PPA
Release | Firefox |
Release | Thunderbird |
Beta | Firefox |
Beta | Thunderbird |
Aurora | Firefox |
Aurora | Thunderbird |
Testing mozilla browsers
Verify that the QRT test-browser.py script passes for all affected products on both i386 and amd64. This script tests a variety of functionality: test pages, SSL, javascript, plugins, etc. Since some packages other than Firefox use XUL or NSS, the following gives basic testing procedures for when a particular source package is updated. These instructions should provide good enough test coverage for a particular update, but are not intended to be exhaustive. To know which packages to test, look in the ubuntu-mozilla-security PPA at the source packages to be tested, and look up the test procedure in the tables below for that release. Eg:
$ for i in firefox firefox-3.0 firefox-3.5 xulrunner-1.9.2 ; do copy_sppa_to_repos --ppa=ubuntu-mozilla-security $i ; done
It is recommended that your testing environment use the security team's vm-tools. Assuming your testing environment is properly set up, firefox testing with test-browser.py should not take more than 30 minutes per arch/release (eg, 4 hours total for 4 stable releases on both amd64 and i386, typically less).
Ubuntu 10.04 LTS and higher
Updated Source Package | Additional Affected Binaries | Testing procedure
firefox | N/A | sudo aa-enforce /etc/apparmor.d/usr.bin.firefox [1] && $QRT/scripts/test-browser.py -v [2]
thunderbird | N/A | see $QRT/notes_testing/nss/README and $QRT/notes_testing/thunderbird for email, then also test addressbook and feed reader
[1] be sure there are no new AppArmor denials after running the script
[2] this will test java, flash, totem, etc so those applications don't have to be tested separately, but be sure to test both icedtea6-plugin and sun-java6-plugin. Can test the java plugin alone with $QRT/scripts/test-browser.py -v -t java
Notes on test-browser.py
The test-browser.py script is an interactive script that guides the tester through a number of actions that, when completed, should demonstrate that the browser is functional for the most important use cases. Unlike other QRT scripts, it has a certain 'feel' to it, and if you have not used it before you should run the tests against the current version of the browser in the archive, then compare that to a test run against the version to be in the security update. Some things to keep in mind:
- it is useful to start libreoffice before running test-browser.py, so that the office documents tests don't have to start libreoffice from scratch each time
- use 'Ctrl+w' to close Firefox and Gnome applications. Use 'Ctrl+q' to close LibreOffice
- the classpath java test should show an animated image in the upper left, and may need a page reload to trigger it
- the clocks java test rarely shows all the clocks correctly. Just make sure some of them load
- you may not be able to use 'Ctrl+w' to close the browser during the java tests
- the 'plugins' tests (eg, for mp3, ogg, ogv, mpeg, etc) may require a page reload. A crash in the flash plugin when visiting youtube is rare, but possible. If this happens, do a page reload and try various other videos to make sure it isn't a persistent problem
- about:cache?device=memory works sporadically in gecko browsers
- 10.10 and higher uses Software Center for apt urls, but the firefox AppArmor profile blocks launching of Software Center
- 12.04 and higher does not display embedded ogv files (LP: #1043314)
Testing mozilla thunderbird
There is no QRT script for thunderbird at this point; however, there are notes in QRT for how to test thunderbird. Specifically:
- QRT/README.multipurpose-vm: documentation on how to set up an Ubuntu multi-purpose server VM (hardy) for testing various client applications, such as thunderbird
- contains setup information for a mail server for testing:
- POP3, POP3s, POP3/TLS (dovecot)
- IMAP, IMAPS, IMAP/TLS (dovecot)
- SMTP, SMTP+SMTPAUTH, SMTP+SMTPAUTH+TLS (postfix)
- contains setup for SSL CA and how to use it with dovecot and postfix
- QRT/notes_testing/thunderbird/README: list of things to do to test thunderbird. Eg:
- importing a CA (and related functions)
- various email functionality
- various addressbook functionality
- thunderbird folder views
- news and blogs (RSS) reader
- test upgrades and migrations
- etc
- QRT/notes_testing/nss/README: some additional notes on testing NSS wrt thunderbird, et al. Mostly supplemental information but can provide additional ideas for testing
It is recommended that your testing environment use the security team's vm-tools. Assuming your testing environment is properly set up, thunderbird testing should not take more than 60 minutes per arch/release (eg, 8 hours total for 4 stable releases on both amd64 and i386, typically less).
Mozilla Regressions
If during testing you find a regression in a Mozilla product, follow this procedure to alert the Ubuntu Desktop team:
- Immediately file a good bug in LP with as much info as possible including exact steps to reproduce and the testing environment. Also include any additional information that might help like if you plan to investigate more fully (when and how), if you are going offline and when you'll be back (for emergency bugs), etc
- If the bug will block publication to our stable releases:
- assign canonical-desktop to the bug
- subscribe ubuntu-security
- ping Chad Miller (chad) on IRC with the bug number to investigate. If he is not available, send Chad an email (CCing seb128 and security@) and ping Chad on IRC when he comes back online
- If the bug isn't serious enough to block publication to Ubuntu, simply ping Chad on IRC with the bug number
Mozilla Publishing
The publication procedure is essentially the same as for regular security updates except:
- Find the placeholder bug number used in the changelog, if one was created, and update the bug:
- Update the title to include the source package name and upstream version. An example is 'Stable update to Firefox 14.0.1'.
- Create tasks for each stable release that will be updated
- Mark any plugins that will be released as affected. For example, if lightning-extension and enigmail are going to be updated alongside Thunderbird, the Thunderbird placeholder bug would need to be opened against thunderbird, lightning-extension and enigmail.
The packages live in the ubuntu-mozilla-security PPA. When calling sis-changes or unembargo.py, you must use --ppa=ubuntu-mozilla-security. Eg:
$ export SRCPKG="firefox-3.0 firefox-3.5 firefox xulrunner-1.9.2"
$ $UCT/scripts/sis-changes --action check-build --ppa=ubuntu-mozilla-security $SRCPKG
WARN: sparc missing for hardy (Failed to build) (firefox-3.0)
BONUS: ia64 found for hardy (firefox-3.0)
OK: hardy (firefox-3.0)
WARN: sparc missing for karmic (Failed to build) (firefox-3.5)
BONUS: ia64 found for karmic (firefox-3.5)
OK: karmic (firefox-3.5)
BONUS: ia64 found for lucid (firefox)
OK: lucid maverick (firefox)
WARN: sparc missing for hardy (Failed to build) (xulrunner-1.9.2)
WARN: hppa missing for hardy (Failed to build) (xulrunner-1.9.2)
BONUS: ia64 found for hardy (xulrunner-1.9.2)
BONUS: ia64 found for karmic (xulrunner-1.9.2)
BONUS: ia64 found for lucid (xulrunner-1.9.2)
OK: hardy karmic lucid maverick (xulrunner-1.9.2)
Unembargoing is similar (with an up-to-date git tree as of 2012-08-30):
$ $UQT/security-tools/unembargo --ppa=ubuntu-mozilla-security $SRCPKG
Loading Ubuntu Distribution ...
Loading Ubuntu Archive ...
Loading ubuntu-mozilla-security 'ppa' PPA ...
Locating firefox-3.0 ...
Publishing firefox-3.0 3.6.14+build3+nobinonly-0ubuntu0.8.04.1 to ubuntu/primary hardy (Security)...
Loading Ubuntu Distribution ...
Loading Ubuntu Archive ...
Loading ubuntu-mozilla-security 'ppa' PPA ...
Locating firefox-3.5 ...
Publishing firefox-3.5 3.6.14+build3+nobinonly-0ubuntu0.9.10.1 to ubuntu/primary karmic (Security)...
Loading Ubuntu Distribution ...
Loading Ubuntu Archive ...
Loading ubuntu-mozilla-security 'ppa' PPA ...
Locating firefox ...
Publishing firefox 3.6.14+build3+nobinonly-0ubuntu0.10.04.1 to ubuntu/primary lucid (Security)...
Publishing firefox 3.6.14+build3+nobinonly-0ubuntu0.10.10.1 to ubuntu/primary maverick (Security)...
Loading Ubuntu Distribution ...
Loading Ubuntu Archive ...
Loading ubuntu-mozilla-security 'ppa' PPA ...
Locating xulrunner-1.9.2 ...
Publishing xulrunner-1.9.2 1.9.2.14+build3+nobinonly-0ubuntu0.8.04.1 to ubuntu/primary hardy (Security)...
Publishing xulrunner-1.9.2 1.9.2.14+build3+nobinonly-0ubuntu0.10.04.1 to ubuntu/primary lucid (Security)...
Publishing xulrunner-1.9.2 1.9.2.14+build3+nobinonly-0ubuntu0.10.10.1 to ubuntu/primary maverick (Security)...
Publishing xulrunner-1.9.2 1.9.2.14+build3+nobinonly-0ubuntu0.9.10.1 to ubuntu/primary karmic (Security)...
USN publication follows the standard procedures, with these exceptions:
- we may not release until upstream does
- the new-usn.sh template will not have any CVE references by default. Please refer to the MFSAs for CVE allocations, using '--cve CVE-YYYY-NNNN' in new-usn.sh for the CVEs that affect Ubuntu
- list the placeholder bug link, if one was created, in the references section of new-usn.sh using '--cve https://launchpad.net/bugs/NNNNNNN'
- since upstream gives little detail, it is ok to group like CVEs together in one paragraph in the USN text. Eg:
Jesse Ruderman, Andreas Gal, Nils, Brian Hackett, and Igor Bukanov discovered several memory issues in the browser engine. An attacker could exploit these to crash the browser or possibly run arbitrary code as the user invoking the program. (CVE-2010-3776, CVE-2010-3777, CVE-2010-3778)
- after publication, add the new CVEs to UCT with scripts/active_edit
- update the placeholder bug's description with a link to the USN. For example, 'USN information: http://www.ubuntu.com/usn/usn-NNNN-1/'
Chromium Browser
You can check for new chromium releases here: https://omahaproxy.appspot.com/
[Old steps] Upload and Publication
chromium packages are currently supported by Chad Miller (chad). These packages are large, so sponsoring should be done as follows:
- On your local machine, get the new sources with: wget .../ (don't forget the trailing '/'!)
- On your local machine, compare the new sources with the last published sources and verify the new packages follow MicroReleaseExceptions
- Once the packages are verified as ok, copy them to the ~/sign directory on chinstrap (you can wget --mirror -np these from chinstrap)
- Verify and sign the packages from your local machine (may require setup, see below):
$ $UST/package-tools/u-verify-chinstrap  # verify the signatures in ~/sign
$ $UST/package-tools/u-sign-chinstrap    # sign the packages in ~/sign
$ $UST/package-tools/u-verify-chinstrap  # reverify the signatures in ~/sign
- dput from chinstrap (may require setup, see below):
$ dput security-proposed-lucid chromium-browser_*.10.04.1_source.changes
$ dput security-proposed-maverick chromium-browser_*.10.10.1_source.changes
Rather than uploading directly to the security PPA, we basically use the Ubuntu Security team's sponsored upload procedures:
- Build in ubuntu-security-proposed
- Once done building, pocket copy them to -proposed and update the bug in Launchpad
- Verify the packages in -proposed using (at least) https://git.launchpad.net/qa-regression-testing:/scripts/test-browser.py and document it in the bug
- Pocket copy from -proposed to both -security and -updates. Unlike other packages in -proposed, these do not have to wait 7 days to be pocket copied (because it has an MRE), but part of the condition of the MRE is that the testing must be documented in a bug.
Sponsoring from Osomon updates
Chromium browser is updated and built in canonical-chromium-builds.
In order to sponsor it, follow these steps:
- Copy chromium-browser to your local repo to test it:
- eg, for a specific release: $UST/repo-tools/copy_sppa_to_repos --ppa=canonical-chromium-builds/stage chromium-browser --release=bionic
- Test it using the chromium test plan and the following Testing section with $QRT.
- After testing, unembargo it. The -n option is used here to first check that the copy will be ok; after checking, run the command again without the -n option:
- e.g: $UQT/security-tools/unembargo -n --ppa=canonical-chromium-builds/stage --release=bionic chromium-browser
Then continue with the standard procedures to announce the update.
Testing
As mentioned above, use https://git.launchpad.net/qa-regression-testing:/scripts/test-browser.py and document the results in the bug. For best results:
- Setup a VM or new user for the testing
- Launch chromium and choose your search provider
- Close then launch chromium again and choose whether to make chromium the default browser or not
Start the QRT scripts with:
$ ./test-browser.py -v -E -e chromium-browser
- Refer to the 'Notes on test-browser.py' under mozilla browser testing, and keep in mind:
- the java clock plugin only shows 4 clocks, not 8 like in gecko, and the 4 clocks may not always display right with openjdk
- the java reload tests don't always display correctly after reload with sun-java6
- the java dithering test page may not display correctly with openjdk (or may take a long time to load)
- ogg audio files don't advance the playback until mouseover (LP: #732976)
- you cannot import a crt system wide (test certificates)
- you cannot permanently allow a self-signed cert; it can only be allowed for the session (test certificate by IP)
IMPORTANT: be sure to ask #webapps to test the new packages before publication for 12.10 and later
Konqueror browser
Testing
Use https://git.launchpad.net/qa-regression-testing:/scripts/test-browser.py. For best results:
- Setup a VM or new user for the testing
- run sudo apt-get install konqueror kmplayer mplayer and install any other items from QRT-Depends in testlib_browser.py
- Launch, then close konqueror (so kdeinit4 is running)
Start the QRT scripts with:
$ ./test-browser.py -v -e konqueror
- Refer to the 'NOTE' under mozilla browser testing, and keep in mind:
- rtf is opened in the browser, not in openoffice.org
- images, graphics and colors look wrong (could fix in KDE control center most likely)
- Due to https://bugs.kde.org/show_bug.cgi?id=162485, you cannot import a crt system wide
- kmplayer may not be embedded in konqueror in the multimedia tests
- the 401 test on Ubuntu 9.10 never displays a page on 'Cancel' or when entering an invalid credential
- java is temperamental:
- if shows not enabled in the tests, go to Settings/Configure Konqueror/Web browsing/Java, then uncheck and recheck 'Enable java globally'
- java reloads don't work very well
- the java clocks don't display
- you have to click the guy with a trident in the upper left for him to move (and thus proving java works on the site)
Rekonq browser
Testing
Use https://git.launchpad.net/qa-regression-testing:/scripts/test-browser.py. For best results:
- Setup a VM or new user for the testing
- run sudo apt-get install rekonq kmplayer mplayer and install any other items from QRT-Depends in testlib_browser.py
- Launch, then close rekonq (so kdeinit4 is running)
Start the QRT scripts with:
$ ./test-browser.py -v -e rekonq
- Refer to the 'NOTE' under mozilla browser testing, and keep in mind:
- rtf is opened in the browser, not in openoffice.org
- images, graphics and colors look wrong in Ubuntu 10.10 and earlier (could fix in KDE control center most likely)
- Due to https://bugs.kde.org/show_bug.cgi?id=162485, you cannot import a crt system wide
- kmplayer may not be embedded in rekonq in the multimedia tests
- java is temperamental. For now, just the first java test page is tested
- tar file doesn't open (skipped in script)
- multimedia files don't work well with file:// (skipped in script)
- on Ubuntu 11.04, the first start of rekonq will have errors.
- on Ubuntu 10.10, rekonq fails to start due to segv
OpenJDK
Package Preparation
The typical approach for OpenJDK updates is:
- Ubuntu Foundations team prepares updates for openjdk-* for Debian
- Ubuntu Foundations syncs/merges the package from Debian into the Ubuntu development release
- Ubuntu Foundations pings the Ubuntu Security team if they are unable to prepare backported packages for stable releases
- If needed, Ubuntu Security creates backports for stable releases
- Ubuntu Security uploads/sponsors the stable release packages to one of the security PPAs
- Ubuntu Security verifies testsuite results between the new packages and what is currently in the archive
- Ubuntu Security smoketests the new packages
- If everything looks ok, Ubuntu Security publishes the USN
The points of contact on Ubuntu Foundations are (as of 2015/01/27) doko (falling back to slangasek).
Package backports to stable releases
If you have the package from the development release or Debian, you will need to update the debian/control file. This file is regenerated using debian/rules within an i386 schroot with the full build-depends installed. For example, for openjdk-7 (substitute 'openjdk-6' when preparing packages for it):
$ dpkg-source -x ./openjdk-7*.dsc  # unpack the new package
$ cd ./openjdk-7*/
$ dch -i  # adjust for stable release as appropriate
$ schroot -c <release>-i386 -u root
(<release>-i386)# apt-get install lsb-release
(<release>-i386)# apt-get build-dep openjdk-7
(<release>-i386)# su <your username>
(<release>-i386)$ touch debian/rules && debian/rules debian/control
...
debian/control did change, please restart the build
make: *** [debian/control] Error 1
(<release>-i386)$ exit
(<release>-i386)# exit
Then proceed to build the package as usual. Things to watch out for:
- Different patches are used for different releases. Be sure to check that 'fakeroot debian/rules patch && cd build && make patch' works in the schroot for this release, to catch problems before building (tip: do this in a different directory than your pristine source, otherwise autoconf files will be needlessly updated in your debdiff)
- If there are missing dependencies, adjust debian/rules as necessary for this release and forward the changes to doko
IMPORTANT: be sure to verify the following:
- the testsuite is enabled in the build. Look in debian/rules and make sure 'with_check = disabled for this build' is commented out
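A quick way to sanity-check this before building can be sketched as follows. This is a self-contained illustration with a stand-in debian/rules (the marker line is the one quoted above; the grep pattern is an assumption about how the line would appear when uncommented):

```shell
# Self-contained sketch: warn early if debian/rules disables the testsuite.
# The stand-in debian/rules below has the marker commented out, as desired.
work=$(mktemp -d); cd "$work"
mkdir debian
cat > debian/rules <<'EOF'
# with_check = disabled for this build
EOF
if grep -Eq '^[[:space:]]*with_check[[:space:]]*=[[:space:]]*disabled' debian/rules; then
    echo "testsuite DISABLED -- fix debian/rules before building"
else
    echo "testsuite enabled"
fi
```

Run the same grep against the real unpacked source tree before each build.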
The following are listed for historical reference in case something gets dropped and to see how to deal with various issues. They should no longer be needed as of 2015/01/27:
(openjdk-7 only) for stable releases that have icedtea-web < 1.4-2 (as of 2013-07-17, Ubuntu 13.04 and lower), be sure to adjust the Breaks line in debian/control and debian/control.in on icedtea-netx to use the updated versions found in USN-1907-2. Specifically:
12.04 should use: Breaks: icedtea-netx (<< 1.2.3-0ubuntu0.12.04.3)
12.10 should use: Breaks: icedtea-netx (<< 1.3.2-1ubuntu0.12.10.2)
13.04 should use: Breaks: icedtea-netx (<< 1.3.2-1ubuntu1.1)
(openjdk-7 precise only). Be sure this change is applied: Fix quoting of configure args for the zero build. Apply patch from 7u55-2.4.7-1ubuntu1~0.12.04.1_7u55-2.4.7-1ubuntu1~0.12.04.2 to debian/patches/it-aarch64-zero-default.diff
Patch backports to stable releases
While using new upstream releases with package backports is preferred, sometimes backporting an isolated patch is needed.
OpenJDK 6
Upstream Vcs can be found here:
http://icedtea.classpath.org/hg/release/icedtea6-1.11 (RedHat)
http://icedtea.classpath.org/hg/release/icedtea6-1.12 (Ubuntu)
USN-2033-1 cherrypicked patches from trunk and 1.11 because the 1.12 branch wasn't updated yet. Due to how IcedTea patches work, the patches could not be applied as simple distribution patches. Following upstream patching, the Makefile.* files must be modified before running the IcedTea configure step. As such, USN-2033-1 adjusted debian/rules to apply debian/patches/ubuntu-security-NNNN-* before running configure:
- added ubuntu-security and stamps/ubuntu-security-stamp targets to apply patches in debian/patches/ubuntu-security-*
- had stamps/icedtea-configure depend on stamps/ubuntu-security-stamp
- adjusted debian-clean to unapply ubuntu-security patches
As such, future updates can follow this procedure. On the latest stable release (or devel release):
$ debian/rules ubuntu-security  # applies all the ubuntu-security-* patches
$ cd .. ; cp -a openjdk-6-6b27-1.12.6 openjdk-6-6b27-1.12.6.orig
<apply patches>
$ diff -Naurp openjdk-6-6b27-1.12.6.orig openjdk-6-6b27-1.12.6 > /tmp/ubuntu-security-NNNN-....patch
$ rm -rf openjdk-6-6b27-1.12.6
$ cp -a openjdk-6-6b27-1.12.6.orig openjdk-6-6b27-1.12.6
$ cp /tmp/ubuntu-security-NNNN-....patch debian/patches
$ fakeroot debian/rules patch && cd build && make patch  # if this works, add to package and try to build it
The above may fail and you might have to adjust the following as needed:
- atk-wrapper-security.patch (all releases)
- java-access-bridge-security.patch (lucid)
If Makefile.am changes (which it most certainly will), you need to run autotools to update Makefile.in. icedtea 1.13 needs automake 1.14, which only exists in trusty and later, so on that release use 'fakeroot debian/rules patch && cd build && make patch', then create a patch to update ubuntu-security-9999-Makefile.in.patch
New package generation
The Foundations team normally handles generating new upstream versions so this isn't normally needed (and may be out of date).
For openjdk updates, generally we take the icedtea tarballs that Andrew Hughes produces. These will also be announced to the openjdk distro-pkg-dev list. To incorporate them, do the following:
- grab the 1.8.x (for armel), 1.11.x (most arches, lucid and newer) release tarballs
- use the script debian/generate-debian-orig.sh to generate the openjdk .orig tarball:
copy and unpack (dpkg-source -x) the current version of the source package you're going to update in a temporary directory
unpack (tar xvpf icedtea-1.x.x.tar.gz) the appropriate icedtea tarball into the same temporary working directory; i.e. not into the unpacked source directory below.
- OpenJDK 6
cd into the unpacked source dir and adjust the versions and paths in the debian/generate-debian-orig.sh script to point to the right places:
version= should be adjusted to refer to the new icedtea version (e.g. version=6b20-1.9.8)
hotspot= and cacaotb= may need to be adjusted to refer to the existing tarballs (e.g. hotspot=hotspot-hs19.tar.gz)
point icedtea_checkout= to the unpacked update directory (e.g. icedtea_checkout=../icedtea6-1.9.8)
debian_checkout= should point to the debian/ directory.
ensure that the bits to cp -p $hotspot $pkgdir.orig/ and cp -p $cacaotb $pkgdir.orig/ are enabled (i.e. not commented out)
for the armel packages (i.e. the 1.8.x version), ensure that base=openjdk-6b18 is set
run the script: sh debian/generate-debian-orig.sh; this creates two directories, one with and one without the copied in debian/ directory
- OpenJDK 7
adjust the versions and paths in openjdk-7*/debian/generate-debian-orig.sh script to point to the right places:
tarballdir= should be adjusted to refer to the new OpenJDK version, e.g. '7u9'. This version is announced with the upstream IcedTea release
version= should be adjusted to refer to the new OpenJDK and icedtea version (e.g. version=7u9-2.3.4)
jamvmtb= and cacaotb= may need to be adjusted to refer to the existing tarballs (e.g. jamvmtb=jamvm-0972452d441544f7dd29c55d64f1ce3a5db90d82.tar.gz)
point icedtea_checkout= to the unpacked update directory (e.g. icedtea_checkout=icedtea-2.3.4)
debian_checkout= should point to openjdk7 directory and copy the current unpacked source debian/ to openjdk7. Eg:
$ cp -a openjdk-7-7u9-2.3.3/debian openjdk7
create and populate the tarballdir. This is mostly what is in debian/README.source, but with a few changes since it is incomplete. Eg, if tarballdir=7u9:
$ export BUILD=7u9
$ mkdir -p $BUILD/
$ tar -zxvpf icedtea-2.3.4.tar.gz
$ cd icedtea-2.3.4
$ ./configure --enable-jamvm
$ make download  # this downloads all the tarballs
$ mv ./hotspot.tar.gz ./hotspot-default.tar.gz
$ cp ./*tar.gz ../$BUILD
$ ./configure --enable-jamvm --enable-zero
$ make download  # this overwrites hotspot.tar.gz with zero's, which is why this is two steps
$ cp ./hotspot.tar.gz ./hotspot-zero.tar.gz
$ cp ./hotspot-zero.tar.gz ../$BUILD
$ cd ..
$ bash openjdk7/generate-dfsg-zip.sh $BUILD/jdk.tar.gz  # create jdk-dfsg.tar.gz
$ bash openjdk7/generate-dfsg-zip.sh $BUILD/langtools.tar.gz  # create langtools-dfsg.tar.gz
NOTE: make download will handle finding the tarballs for you, but the upstream tarballs can be found at http://icedtea.classpath.org/hg/release/. The openjdk.tar.gz tarball is renamed from icedtea7-forest-2.3-hgrev.tar.gz.
run the script in the parent directory of the current unpacked source: chmod 755 openjdk7/update-shasum.sh && sh ./openjdk7/generate-debian-orig.sh; this creates two directories, one with and one without the copied in debian/ directory. The one with the debian/ directory has its sha256sums updated for jdk-dfsg.tar.gz and langtools-dfsg.tar.gz (using update-shasum.sh), then it will apply debian/patches/icedtea-patch.diff, run autogen.sh and remove autom4te.cache. See generate-debian-orig.sh for details
create a new orig.tar.gz: tar cvpzf ../openjdk-6_6b20-1.9.8.orig.tar.gz openjdk-6-6b20-1.9.8.orig. If you only want the orig.tar.gz and don't plan to use the changes in the debian/ directory, you can instead purge the fake unpacked package and rename the .orig directory in place; e.g. rm -rf openjdk-6-6b20-1.9.8 && mv openjdk-6-6b20-1.9.8.orig/ openjdk-6-6b20-1.9.8, then generate the orig tarball; e.g. tar cvpzf ../openjdk-6_6b20-1.9.8.orig.tar.gz openjdk-6-6b20-1.9.8
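The rename-in-place shortcut just described can be sketched end to end with stand-in directories (the version and contents here are placeholders, not real package data, and the tarball is created locally rather than in the parent directory):

```shell
# Self-contained sketch: promote the .orig tree and tar it up.
work=$(mktemp -d); cd "$work"
mkdir -p openjdk-6-6b20-1.9.8 openjdk-6-6b20-1.9.8.orig
echo 'stand-in upstream file' > openjdk-6-6b20-1.9.8.orig/README
# purge the fake unpacked package and rename the .orig directory in place:
rm -rf openjdk-6-6b20-1.9.8
mv openjdk-6-6b20-1.9.8.orig/ openjdk-6-6b20-1.9.8
# then generate the orig tarball from the renamed tree:
tar cpzf openjdk-6_6b20-1.9.8.orig.tar.gz openjdk-6-6b20-1.9.8
tar tzf openjdk-6_6b20-1.9.8.orig.tar.gz
```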
- due to the way patching happens in the openjdk packages, the packages have local changes (e.g. changes outside of the debian/ directory). You'll need to pull those forward as well.
Building
If you receive an orig.tar.gz from the Foundations team to apply to a security update (as opposed to a full source package), be sure to run debian/update-shasum.sh, apply debian/patches/icedtea-patch.diff, run sh autogen.sh, then rm -rf autom4te.cache before building. Check debian/generate-debian-orig.sh to make sure you've done everything required before building.
When building openjdk locally with UMT, depending on your build system's hardware configuration, you'll likely need to pass -C pkgbuild_ulimit_v="5242880" to umt build.
IMPORTANT: for some reason armel builds on lucid and oneiric sometimes will FTBFS with random segmentation faults (search for 'Segmentation' in the build log). If this happens, retry the build. Some armel builders don't seem to be able to handle openjdk well.
Testing
Review QRT/notes_testing/openjdk-6/README (it contains information on both openjdk-6 and openjdk-7). Basically, use test-openjdk.py --jdk=openjdk-[67] -v from QRT as well as QRT/notes_testing/openjdk-6/extract-test-results.sh. test-openjdk.py will test eclipse and netbeans as well as the web plugin on firefox and chromium-browser. extract-test-results.sh helps compare test suite runs in LP build logs. Running tests from the testsuite is documented in QRT/build_testing/openjdk/README.txt.
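As an illustration of the kind of comparison extract-test-results.sh enables, here is a hedged, self-contained sketch using fabricated summary files (the real script and its actual output format live in QRT; the file names and counts below are made up):

```shell
# Fabricated example: diff testsuite summaries pulled from two build logs.
work=$(mktemp -d); cd "$work"
printf 'Passed: 4980\nFailed: 12\nError: 3\n' > archive-build.summary
printf 'Passed: 4981\nFailed: 11\nError: 3\n' > ppa-build.summary
# a small, explainable delta between the current archive build and the
# security PPA build is what you want to see:
diff -u archive-build.summary ppa-build.summary || true
```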
If you find issues, use the upstream bug tracker.
Distro Patching
To patch openjdk, add patches to debian/patches, adjust the file paths in the patch, then add the patch to debian/rules. IcedTea has a couple of different targets:
DISTRIBUTION_PATCHES: these are applied first and affect both the bootstrap (ecj) build and normal build
DISTRIBUTION_ECJ_PATCHES: these are applied after DISTRIBUTION_PATCHES and only affect the bootstrap (ecj) build
PRECONFIGURE_DEBIAN_PATCHES: this is not an official IcedTea target and is used only within debian/rules. This was added to support patching things such as autoconf files (see 6b30-1.13.1-1ubuntu2 for details).
tzdata
tzdata updates are generally done by upgrading to the new upstream orig.tar.gz. There are two ways to do that:
- Grab the orig.tar.gz from upstream and verify its checksums against those published upstream. Untar it and copy the debian/ folder from the old version into the unpacked tree. Rename the directory to match the package name and version, then build it with umt.
- Or base the update on the core-devs' upgrade:
clone the tzdata git repository.
git fetch origin <branch_you_want>.
- You can either use gbp in the repository you cloned (not explained here)
- or copy the .orig.tar.bz2 of the new version, untar it and copy the debian/ folder from the git branch you cloned into that directory. Once done, check that the directory is named like the previous one plus the new version, and just umt build as you normally do.
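The first (orig.tar.gz-based) approach can be sketched in a self-contained way with stand-in files in place of the real upstream tarball and packaging (versions and contents are placeholders; the umt build step itself is not shown):

```shell
# Stand-ins for the old unpacked package and the new upstream orig tarball.
work=$(mktemp -d); cd "$work"
mkdir -p tzdata-2023a/debian
echo 'previous packaging' > tzdata-2023a/debian/changelog
mkdir upstream && echo 'new zone data' > upstream/zonedata
tar czf tzdata_2023c.orig.tar.gz -C upstream .
# (in a real update: verify the tarball's shasums against upstream first)
# unpack the new orig tarball into a directory named package-version:
mkdir tzdata-2023c
tar xzf tzdata_2023c.orig.tar.gz -C tzdata-2023c
# copy the old version's debian/ folder into the new tree:
cp -a tzdata-2023a/debian tzdata-2023c/
# the tree is now named package+version and ready for 'umt build'
ls tzdata-2023c
```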
Update Manager
Update Manager updates have some special post-publication steps that need to get done by an Archive Admin. See special instructions here.
Secure Boot
Please see the special instructions for performing and testing Secure Boot key databases.
Partner
After uploading to partner, an archive admin must process the upload.
Sponsoring MariaDB Security Updates
The MariaDB packages in Ubuntu receive consistent security support from Otto Kekäläinen. Otto provides the MariaDB security updates to the Ubuntu Security Team in the form of a git tree that he maintains for use with git-buildpackage. It is easy to take Otto's changes and consume them into the typical umt-based workflow.
The example instructions below will focus on mariadb-10.0 packages in Ubuntu 16.04 LTS. Adjust the Ubuntu release and MariaDB package name accordingly.
Download the existing MariaDB package to create a directory for working and to compare against the provided upload:
$ umt download -r xenial mariadb-10.0
$ cd mariadb-10.0/
Take note of Otto's git tree location and branch name. Clone his git tree, while specifying the branch name, move into the git tree, and make sure you're in the right branch:
$ gbp clone --debian-branch=ubuntu-16.04 https://salsa.debian.org/mariadb-team/mariadb-10.0.git xenial-sponsoring
$ cd xenial-sponsoring/
$ git branch
  master
  pristine-tar
* ubuntu-16.04
  upstream
Navigate to the mariadb-10.0 Launchpad source page to check if Otto has already uploaded the orig source tarball to Launchpad. He may have already uploaded it in an upload to the Ubuntu dev release.
Build the source package and tell gbp to use umt source. If Otto has already uploaded the orig source tarball then use the --force-no-orig option:
$ gbp buildpackage --git-builder="umt source --force-no-orig"
If Otto has not already uploaded the orig source tarball, you'll need to include it in your source package build and verify that the build fetched the tarball blessed by MariaDB upstream:
$ gbp buildpackage --git-builder="umt source"
While the source package is building, navigate to the MariaDB download page.
- Click the "Download X.Y.Z Stable" button where X.Y.Z is the version of the upstream MariaDB release that you're sponsoring.
- Find the "mariadb-X.Y.Z.tar.gz" row and click on the "Checksum" button. Save the hashes and detached PGP signature to a file (../mariadb-X.Y.Z.tar.gz.asc)
- Once the source package has been built, verify the SHA256 hash from ../mariadb-X.Y.Z.tar.gz.asc with the "orig.tar.gz" SHA256 hash in the "Checksums-Sha256" section of ../source/*.changes
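The hash comparison in the last step can be sketched as follows. This is self-contained: a stand-in tarball and a locally generated checksum line replace the real orig.tar.gz and the hash copied from the MariaDB checksum page:

```shell
# Compare the orig tarball's SHA256 against the hash saved from the
# upstream "Checksum" page (both are stand-ins here).
work=$(mktemp -d); cd "$work"
printf 'stand-in tarball contents\n' > mariadb-X.Y.Z.orig.tar.gz
# pretend this checksum line was copied from the upstream page:
sha256sum mariadb-X.Y.Z.orig.tar.gz > upstream.sha256
# sha256sum -c re-hashes the file and compares against the saved line:
sha256sum -c upstream.sha256 && echo "orig tarball matches upstream"
```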
Verify the signature. The signing key's fingerprint is 1993 69E5 404B D5FC 7D2F E43B CBCB 082A 1BB9 43DB.
$ gpg --verify ../mariadb-10.0.28.tar.gz.asc ../source/mariadb-*.orig.tar.gz
View the changes to the debian/ directory by comparing the dsc of the current mariadb package with the dsc of the package being sponsored:
$ debdiff ../xenial/*.dsc ../source/*.dsc | filterdiff --include "*/debian/*" | view -
If the changes look acceptable, sign and check the upload:
$ umt sign -k
$ umt check -S
Upload to the security ppa:
$ umt upload
Sponsoring on private-fileshare.canonical.com
Sometimes developers or teams will provide packages on private-fileshare.canonical.com. When these source packages are large, the packages can be remotely signed and uploaded from private-fileshare (you do not need nor should have your ~/.gnupg on private-fileshare!). The basic process is:
- If it doesn't already exist, create the ~/sign directory on private-fileshare with:
$ test -d ~/sign/ || mkdir -m 0750 ~/sign/
$ chgrp ubuntu_security ~/sign/
- Copy the packages to sponsor into the ~/sign directory
- On local system (may require setup, see below), verify and sign the packages in ~/sign on private-fileshare:
$ $UST/package-tools/u-verify-private-fileshare  # verify the signatures in ~/sign
$ $UST/package-tools/u-sign-private-fileshare    # sign the packages in ~/sign
$ $UST/package-tools/u-verify-private-fileshare  # reverify the signatures in ~/sign
- dput the packages from private-fileshare (may require setup, see below)
Setup
In order to upload packages from private-fileshare, you need to either use '--unchecked' with dput or import your signing key into your keyring on private-fileshare. Since you do not want your actual keys there, we can create a keyring specifically for private-fileshare. On your local machine:
$ mkdir -m 700 ~/gnupg.private-fileshare
$ gpg --keyserver keyserver.ubuntu.com --homedir=~/gnupg.private-fileshare --recv-keys 0x<your key>
$ scp -pr ~/gnupg.private-fileshare/ private-fileshare.canonical.com:~<your username on private-fileshare>/.gnupg
On private-fileshare, be sure to check ~/.gnupg to confirm that you have imported the key and that you didn't accidentally copy over the wrong files:
$ gpg --list-keys
$ test ! -s ~/.gnupg/secring.gpg || echo 'WARNING!!! secring.gpg is not empty'
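The emptiness check can be exercised locally before relying on it. This sketch uses a stand-in ~/.gnupg directory (the path is a local placeholder, and no real keyring is touched):

```shell
# Self-contained sketch of the "no secret keys copied" check.
fakehome=$(mktemp -d)/.gnupg
mkdir -m 700 -p "$fakehome"
: > "$fakehome/secring.gpg"   # an empty (zero-byte) secring is what you want
if test ! -s "$fakehome/secring.gpg"; then
    echo 'secring.gpg is empty: OK'
else
    echo 'WARNING!!! secring.gpg is not empty'
fi
```

The same `test ! -s` idiom is what the one-liner above relies on: it succeeds only when the file is missing or zero bytes long.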
Depending on how your ssh_config is setup on your local machine, u-sign-private-fileshare and u-verify-private-fileshare may not work immediately. However, you can create these symlinks and it should work ok:
$ ln -s $UST/package-tools/u-sign-private-fileshare ~/bin/u-sign-private-fileshare.canonical.com
$ ln -s $UST/package-tools/u-verify-private-fileshare ~/bin/u-verify-private-fileshare.canonical.com
Now instead of using $UST/package-tools/u-sign-private-fileshare, just use ~/bin/u-sign-private-fileshare.canonical.com
On private-fileshare, set up your ~/.dput.cf file as in SecurityTeam/UpdateProcedures. E.g.:
[security-trusty]
fqdn = ppa.launchpad.net
incoming = ~ubuntu-security/ppa/ubuntu/trusty
login = anonymous

[security-precise]
fqdn = ppa.launchpad.net
incoming = ~ubuntu-security/ppa/ubuntu/precise
login = anonymous

[security-lucid]
fqdn = ppa.launchpad.net
incoming = ~ubuntu-security/ppa/ubuntu/lucid
login = anonymous
SecurityTeam/PublicationNotes (last edited 2023-04-13 16:58:52 by leosilvab)