When testing security updates, it is important to test the update in a full Ubuntu environment for the release being tested. Put simply, an update for an Ubuntu 10.04 LTS package should be tested in a full install of Lucid. The Ubuntu Security team has created some scripts and put them into ubuntu-qa-tools. These tools use kvm and libvirt, the preferred virtualization technology in Ubuntu. KVM requires the virtualization extensions to be available and enabled in your BIOS. You can check whether these are available with the kvm-ok command. QEMU is an alternative and can be used with libvirt and vm-tools, but it is slow. If you cannot use kvm, then it is worth looking at another virtualization technology such as VirtualBox.
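If kvm-ok is not installed, a rough equivalent of its main check is to look for the hardware virtualization CPU flags yourself. This sketch only inspects /proc/cpuinfo and, unlike kvm-ok, cannot tell whether the extensions are present but disabled in the BIOS:

```shell
# Rough stand-in for kvm-ok: check /proc/cpuinfo for hardware
# virtualization flags (vmx = Intel VT-x, svm = AMD-V). This cannot
# detect extensions that exist but are disabled in the BIOS.
if grep -qE '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
    echo "virtualization extensions present"
else
    echo "virtualization extensions not found (or disabled)"
fi
```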
vm-tools are essentially wrapper scripts around virsh and virt-install that make VM creation repeatable and help batch commands across multiple VMs. Using vm-tools should not be considered mandatory, as all of this can be achieved via other means (though use of uvt new is encouraged and should save you time).
Setting up vm-tools on Precise
Much of this (and more) can be found in the vm-tools/README file.
Install the necessary software:
$ sudo apt-get install ubuntu-virt-mgmt ubuntu-virt-server genisoimage xorriso
Add your user to the libvirtd group:
$ sudo adduser <username> libvirtd
Download the ubuntu-qa-tools branch:
$ bzr branch lp:ubuntu-qa-tools
Add the UQT_VM_TOOLS environment variable to your startup scripts (eg ~/.bashrc) and have it point to the ubuntu-qa-tools branch.
Update your PATH to include the vm-tools directory (eg via ~/.bashrc).
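For example (the checkout path $HOME/bzr/ubuntu-qa-tools is an assumption; point it at wherever you branched ubuntu-qa-tools):

```shell
# Illustrative ~/.bashrc lines; $HOME/bzr/ubuntu-qa-tools is an assumed
# checkout location for the ubuntu-qa-tools branch.
export UQT_VM_TOOLS="$HOME/bzr/ubuntu-qa-tools"
export PATH="$PATH:$UQT_VM_TOOLS/vm-tools"
```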
Make sure $HOME/.uqt-vm-tools.conf has:
vm_release_list="hardy lucid natty oneiric precise quantal"
[OPTIONAL] While uvt will use reasonable defaults without a config file, you may also use uvt config to create $HOME/.uvt.conf. It can then be customized to have something like:
#
# Configuration file for uvt tool
#
#
# vm_dir_iso: sets the location where .iso images are to be found
#             and downloaded to. This defaults to $HOME/iso if unset.
#vm_dir_iso="<path>/isos"
#
# vm_dir_iso_cache: sets the location where preseeded iso images will be
#                   cached. This defaults to $HOME/iso/cache if unset.
#vm_dir_iso_cache="<path>/isos/cache"
#
# vm_path: sets the location where the libvirt virtual machines will be
#          stored. This defaults to $HOME/machines if unset
#vm_path="<path>/machines"
#
# vm_locale: This sets the default locale in the virtual machines. Default
#            setting is inherited from current user.
#vm_locale="en_US.UTF-8"
#
# vm_setkeyboard: If this is set to "true", keyboard configuration will
#                 be setup in the virtual machines using the four following
#                 options. Default is "true", with settings inherited from
#                 host.
#vm_setkeyboard="false"
#vm_xkbmodel="pc105"
#vm_xkblayout="ca"
#vm_xkbvariant=""
#vm_xkboptions="lv3:ralt_switch"
#
# vm_image_size: Default size for virtual machine images (hard disk size).
#                Default is 8GB
vm_image_size="8"
#
# vm_memory: Default size for virtual machine RAM. A minimum of 384MB is
#            needed for desktops, 256MB for servers. Default is 512MB.
vm_memory="768"
#
# vm_username: Username of the initial user that is set up in the virtual
#              machines. Default is current user.
#vm_username="awesomedude"
#
# vm_password: Password of the initial user that is set up in the virtual
#              machines. Default is hardcoded to "ubuntu".
#vm_password="ubuntu"
#
# vm_timezone: Timezone that is set up in the virtual machines. Default
#              is inherited from the host.
#vm_timezone="UTC"
#
# vm_ssh_key: SSH public key to copy over to the virtual machines. This
#             sets up SSH public key authentication to the VMs by default.
#             Default is $HOME/.ssh/id_rsa.pub
#vm_ssh_key="$HOME/.ssh/id_rsa.pub"
#
# vm_aptproxy: If set, this sets up an apt proxy in the virtual machine.
#              No default setting. Can be used to set up apt-cacher-ng, for
#              example.
#vm_aptproxy=""
#
# vm_mirror: This is used as the default mirror in the sources.list file.
#            Default is main archive.
#vm_mirror="http://archive.ubuntu.com/ubuntu"
#
# vm_security_mirror: This is used as the default security mirror in the
#                     sources.list file. Default is main security archive.
#vm_security_mirror="http://security.ubuntu.com/ubuntu"
#
# vm_src_mirror: This is used to override the archive used for source
#                packages. By default, source packages are obtained from
#                the archives set by vm_mirror and vm_security_mirror.
#vm_src_mirror="http://myownsources/ubuntu"
#
# vm_mirror_host: FIXME: used with mini iso
#vm_mirror_host="archive.ubuntu.com"
#
# vm_mirror_dir: FIXME: used with mini iso
#vm_mirror_dir="/ubuntu"
#
# vm_repo_url: This is used by the 'uvt repo' command to add or remove
#              a local software repository to a VM. Defaults to a repo
#              on the default libvirt host address:
#              http://192.168.122.1/debs/testing
#
#vm_repo_url="http://192.168.122.1/debs/testing"
#
# vm_latecmd: This allows specifying an additional latecommand that is
#             executed by the installer.
#vm_latecmd=""
#
# vm_extra_packages: A list of extra packages can be specified here to be
#                    installed in the VM, for example, "screen". Empty by
#                    default.
#vm_extra_packages=""
vm_extra_packages="screen vim"
Download the desktop CD images for each release and put them in the directory specified in the vm_dir_iso configuration option.
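For example, the default location can be prepared like this (the wget URL is illustrative; substitute the images for the releases you actually test):

```shell
# vm_dir_iso defaults to $HOME/iso when unset in ~/.uvt.conf
vm_dir_iso="$HOME/iso"
mkdir -p "$vm_dir_iso"
# then download each desktop image into it, eg (URL is illustrative):
# wget -P "$vm_dir_iso" http://releases.ubuntu.com/precise/ubuntu-12.04-desktop-amd64.iso
```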
Virtual machines for testing
The security team should have at least one virtual machine per release and one for the development release. The recommended method is to use kvm with libvirt, which is what is documented here.
kvm on a 64-bit host will allow 32-bit OS guests, so if running a 64-bit host OS you can have both i386 and amd64 installs. It's often useful to spend some time getting a pristine image exactly how you want it, then clone/snapshot off of it for development work and/or testing.
WARNING: in this configuration you need to take special care to isolate the virtual machines from hostile network traffic, since anyone could login as root or your user to a VM. One way to do this is by using NAT with libvirt (the default).
Networking with libvirt
In order to perform name lookups for virtual machines, you must make two changes on the host system:
Put a line into /etc/dhcp/dhclient.conf like so:
prepend domain-name-servers 192.168.122.1;
Disable the system dnsmasq, to prevent it from looping with libvirt's dnsmasq, by commenting out the dns=dnsmasq line in /etc/NetworkManager/NetworkManager.conf.
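On a stock install the line in question is dns=dnsmasq; after commenting it out, the relevant part of /etc/NetworkManager/NetworkManager.conf looks like:

```
[main]
#dns=dnsmasq
```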
Then you will be able to ssh into the machines with:
$ ssh sec-lucid-amd64
$ ssh sec-lucid-amd64.
or if avahi is installed in the guest:
$ ssh sec-lucid-amd64.local
Notice the '.' at the end of the second command. This may be needed due to a bug in dnsmasq when using NAT with some versions of Ubuntu.
IMPORTANT: When using the tools, keep in mind that you may connect to the same VM via different hostnames. Eg, you could connect to the sec-lucid-amd64 VM as 'sec-lucid-amd64.', 'sec-lucid-amd64.local' or 'sec-lucid-amd64'. uvt tests whether the VM is up by trying the hostnames in this order, and it is possible for the first to fail and the second to succeed. Therefore, you should login via ssh using at least the first two names (if not all three), so that the host keys for each name end up in your known_hosts. You may be hitting this problem if you see something like:
----- sec-maverick-amd64 -----
Starting sec-maverick-amd64 (snapshotted) ...
Waiting for 'sec-maverick-amd64' to come up
host is up
Command: ssh -t -o BatchMode=yes -l root sec-maverick-amd64.local "ls /tmp"
Host key verification failed.
In this case, the ssh host key was in ~/.ssh/known_hosts for 'sec-maverick-amd64.', but not for 'sec-maverick-amd64.local'.
vm-tools will by default try '.local' mDNS (avahi) addresses when connecting to VMs. This may not work in all environments and can be disabled by using the following in $HOME/.uqt-vm-tools.conf:
# Set to 'no' to disable '.local' mDNS (avahi) lookups for VMs
vm_host_use_avahi="no"
You'll know you are encountering this problem when hostname lookups fail for <machine>.local. This can also be caused by restrictive firewalls. If you have ufw enabled, then instead of disabling avahi lookups you can do something similar to:
$ sudo ufw allow in on virbr0
If you also have IPv6 enabled (the default in Ubuntu 11.10 and above), you can add something like the following to /etc/ufw/before6.rules before the 'COMMIT' line:
# allow MULTICAST mDNS for service discovery for libvirt
-A ufw6-before-forward -p udp -d ff02::fb --dport 5353 -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-request -j ACCEPT
Be sure to use sudo ufw reload after making these changes. Note, you may need to stop and start any VMs after making changes to your firewall.
Snapshotted virtual machines
With this method, VMs are created with an initial pristine libvirt snapshot. Eg, you might have a sec-<release>-<arch> VM for each release and architecture.
Create your VMs (best to do this sequentially, even though it takes a while):
. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    uvt new $i i386 sec
    uvt new $i amd64 sec
done
The basic idea is as follows:
uvt new after installing the OS will create the pristine snapshot
using uvt start <vm> starts the image normally, and uvt stop <vm> shuts it down cleanly. Changes to the VM are preserved across reboots
using uvt start -r <vm> reverts all changes made to the VM since the last snapshot, then starts the VM in the pristine state. Note that if you know you are going to revert to the previous snapshot, you can use uvt stop -f <vm> which does an unclean shutdown akin to pulling the plug.
revert to pristine snapshot and discard:
$ uvt start -r <vm>
... do your stuff ...
$ uvt stop -f <vm>
snapshot with persistence across stops:
$ uvt start -r <vm>   # revert all changes and start with a clean slate
... do your stuff ...
$ uvt stop <vm>       # no '-f' so a clean shutdown is performed
$ uvt start <vm>      # notice no '-r', so changes are not reverted
... do more stuff ...
$ uvt stop <vm>
... do even more stuff ...
$ uvt stop -f <vm>    # done with work, so pull the plug for a quick shutdown (assumes -r on next start)
IMPORTANT: Changes made in a snapshot will be lost if you use '-r' with uvt start or otherwise remove the snapshots.
To update the pristine image and make a new snapshot:
$ uvt start -r <vm>   # revert all changes and start with a clean slate
... make changes to the VM ...
$ uvt stop <vm>       # cleanly shut it down
$ uvt snapshot <vm>   # update the pristine snapshot
As a convenience, you can perform package upgrades using:
$ uvt update --autoremove <vm> # reverts to pristine snapshot, starts the VM, dist-upgrades, cleans up, then updates the pristine snapshot
Cloned virtual machines (deprecated)
NOTE: with the new uvt snapshot method, using cloning is less useful.
With this method, you have a set of clean VMs and another set cloned from them. Eg, you might have clean-<release>-<arch> virtual machines, which you then clone (eg with uvt clone, see below) to get the corresponding sec-<release>-<arch> machines.
The 'clean' machines should only ever be accessed for updates or fine-tuning while the 'sec' machines can be used, updated, mangled, discarded and recreated as needed.
To create the clean machines:
. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    uvt new $i i386 clean
    uvt new $i amd64 clean
done
If you use dnsmasq as above, you can clone these with:
$ uvt clone -p clean sec       # all in one go
$ uvt clone <oldvm> <newvm>    # individually
This creates 'sec-<release>-<arch>' machines that can be updated, tested etc and keeps a pristine copy of the virtual machines in 'clean-<release>-<arch>'. The uvt clone command is simply a wrapper for virt-clone which also updates the following files:
uvt clone is kind of brittle because it currently uses ssh commands, so if something goes wrong, just make sure the above files get updated.
If using dnsmasq as above, you can also use uvt cmd to do batch commands for the virtual machines, like so:
$ uvt cmd -p sec 'uname -a'
$ uvt cmd -r -p sec "apt-get update && apt-get -y upgrade"
uvt cmd uses 'vm_release_list' in $HOME/.uqt-vm-tools.conf and will ssh into all running sec-*-* machines and perform the specified command. Specifying -r to uvt cmd will login to the machine and run the command as root; otherwise it runs as non-root (ie your username in the guest).
Other useful commands:
uvt start: start a single VM or a group of VMs. Eg:
$ uvt start -r sec-lucid-amd64   # start a single VM, reverting to the last snapshot
$ uvt start -p sec -a i386       # start all i386 VMs starting with 'sec'
$ uvt start -v -p sec -a i386    # start all i386 VMs starting with 'sec' without virt-viewer
uvt stop: stop a single running VM or a group of running VMs. Eg:
$ uvt stop sec-lucid-amd64       # stop a single VM via ACPI
$ uvt stop -r sec-lucid-amd64    # stop a single VM via ACPI, and revert to pristine snapshot
$ uvt stop -f -p sec             # hard stop all VMs starting with 'sec'
uvt snapshot: snapshot a single running VM or a group of running VMs. Eg:
$ uvt snapshot sec-lucid-amd64   # snapshot a single VM
$ uvt snapshot -p sec            # snapshot all VMs starting with 'sec'
uvt revert: reverts a single running VM or a group of running VMs to pristine snapshot. Eg:
$ uvt revert sec-lucid-amd64     # revert a single VM to the pristine snapshot
$ uvt revert -p sec              # revert all VMs starting with 'sec'
uvt update: update and snapshot a single running VM or a group of running VMs. Eg:
$ uvt update sec-lucid-amd64     # dist-upgrade and snapshot a single VM
$ uvt update -p sec              # dist-upgrade and snapshot all VMs starting with 'sec'
uvt remove: remove a single VM or a group of VMs. Eg:
$ uvt remove sec-lucid-amd64     # delete a single VM
$ uvt remove -p sec              # delete all VMs starting with 'sec'
uvt repo: toggle the local repo (eg where 'umt repo' puts stuff) for a single running VM or a group of running VMs. Eg:
$ uvt repo -e sec-lucid-amd64    # enable the local repo for a single VM
$ uvt repo -d sec-lucid-amd64    # disable the local repo for a single VM
$ uvt repo -e -p sec             # enable the local repo for all running VMs starting with 'sec'
uvt view: connect to the VNC console of a single VM or a group of VMs using virt-viewer. Eg:
$ uvt view sec-lucid-amd64
$ uvt view -p sec
If you use uvt repo you may notice that while your apt archive is signed, the VM doesn't know about your key. This can be solved with:
$ gpg --armor --export <your key id> | ssh root@<vm> sudo apt-key add -
Many VM operations can be quite slow on spinning-metal hard drives, so be sure your ssh key is added to your ssh-agent for long enough to cover all operations. Consider adding the key without a timeout (ie with no -t option):
$ ssh-add
You can migrate VMs to use uvt in the following manner:
copy the disk image into vm_path (by default, $HOME/machines) as <domain>.qcow2. Eg, for a machine named 'my-vm':
$ cp <path to>/my-vm/disk0.qcow2 $HOME/machines/my-vm.qcow2
uvt only supports qcow2 images (due to its use of snapshots), so if the VM uses a raw image, do something like:
$ qemu-img convert -f raw <path to>/my-vm/disk0.img -O qcow2 $HOME/machines/my-vm.qcow2
update the libvirt XML to use the new path to the disk in vm_path (eg, virsh edit my-vm). Eg:
...
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='<vm_path>/my-vm.qcow2'/>
...
create the pristine snapshot:
$ uvt snapshot my-vm
You can verify that the snapshot was created with virsh snapshot-list <domain>. Eg:
$ virsh snapshot-list my-vm
 Name                 Creation Time             State
------------------------------------------------------------
 pristine             2012-10-09 11:09:27 -0500 shutoff