
When testing security updates, it is important to test the update in a full Ubuntu environment for the release being tested. Put simply, an update for an Ubuntu 10.04 LTS package should be tested in a full install of Lucid. The Ubuntu Security team has created some scripts and put them into ubuntu-qa-tools. These tools use kvm and libvirt, the preferred virtualization technology in Ubuntu. KVM requires the virtualization extensions to be available and enabled in your BIOS; you can test whether you have these with the kvm-ok command. QEMU is an alternative and can be used with libvirt, vm-tools and a newer vmbuilder (0.12.3-0ubuntu2 in Lucid is not enough), but it is slow. If you cannot use kvm, it is worth looking at another virtualization technology such as virtualbox.
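As a rough illustration of what kvm-ok checks (kvm-ok itself, from the cpu-checker package, is the authoritative test since it also looks at BIOS state), the CPU flags can be inspected directly:

```shell
# Look for the Intel VT-x (vmx) or AMD-V (svm) CPU flags. This is only an
# approximation of kvm-ok: it cannot tell whether the extensions were
# disabled in the BIOS.
if grep -qE '(vmx|svm)' /proc/cpuinfo; then
    echo "virtualization extensions present"
else
    echo "no vmx/svm flags; check the BIOS or use another technology"
fi
```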

vm-tools are essentially wrapper scripts for virsh and vmbuilder, both to make VM creation repeatable and to help run batch commands against multiple VMs. Using vm-tools should not be considered mandatory, as all of this can be achieved via other means (though use of vm-new is encouraged and should save you time).

Setting up vm-tools on Lucid

Much of this (and more) can be found in the vm-tools/README file.

  1. Install the necessary software:

    $ sudo apt-get install ubuntu-virt-mgmt ubuntu-virt-server python-vm-builder
  2. Download the ubuntu-qa-tools branch:

    $ bzr branch lp:ubuntu-qa-tools
  3. Add the UQT_VM_TOOLS environment variable to your startup scripts (eg ~/.bashrc) and have it point to the ubuntu-qa-tools branch:

    export UQT_VM_TOOLS="$HOME/bzr-pulls/ubuntu-qa-tools/vm-tools"
  4. Update your PATH to include the vm-tools directory (eg via ~/.bashrc):

    export PATH="$PATH:$UQT_VM_TOOLS"
  5. Create $HOME/.uqt-vm-tools.conf to have something like:

    # list of all active releases (including devel)
    vm_release_list="dapper hardy jaunty karmic lucid maverick"
    # used by vm-repo (ie 'umt repo' puts stuff in /var/www/debs/testing/..., so
    # vm_repo_url should be the URL to those files; by default the host's IP on
    # the virtual network is used, and guests fetch the packages from there)
    # vm-tools specific settings (normal: root_size:5120, swap_size:1024, ram:384)
    vm_path="/home/<username>/vms/kvm"      # where to store the VM images
    #vm_path="/dev/shm/<username>"          # shared memory
    #vm_mirror="http://<local mirror>/ubuntu"
    #vm_security_mirror="http://<local mirror>/ubuntu"
    vm_ssh_key=""                   # defaults to $HOME/.ssh/id_rsa.pub
    vm_flavor=""                    # blank for default, set to override (eg 'rt')
    vm_archs="amd64 i386"           # architectures to use when using '-p PREFIX'
    # list of packages to also install via postinstall.sh
    vm_extra_packages="screen ubuntu-desktop vim openoffice.org"
    # vm-new locale
    # vm-new keyboard layout
    #  Settings for Feisty+, and Dapper's xorg
    # Settings for Dapper's console

Virtual machines for testing

The security team should have at least one virtual machine per release and one for the development release. The recommended method is to use kvm with libvirt, which is what is documented here.

kvm on 64-bit will allow 32-bit OS guests, so if running a 64-bit host OS, you can have both i386 and amd64 installs. It's often useful to spend some time getting a pristine image exactly how you want it, then clone/snapshot off of it for development work and/or testing.

WARNING: in this configuration you need to take special care to isolate the virtual machines from hostile network traffic, since anyone could login as root or your user to a VM. One way to do this is by using NAT with libvirt (the default).

Networking with libvirt

If you also set up your host's resolv.conf to have:

nameserver ...
nameserver ...

Then you will be able to ssh into the machines with:

$ ssh sec-lucid-amd64.

Or if avahi is installed in the guest:

$ ssh sec-lucid-amd64.local

Notice the '.' at the end of the first command. This is due to a bug in dnsmasq when using NAT.
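The nameserver addresses to add are those of libvirt's NAT network, where dnsmasq listens; 192.168.122.1 is the usual default for the 'default' network, and you can confirm it by looking at 'virsh net-dumpxml default'. A sketch of extracting the address from that XML (the sample line stands in for real virsh output so the parsing can be shown on its own):

```shell
# 'virsh net-dumpxml default' prints the network definition; the dnsmasq
# address is in the <ip address="..."> element.
xml='<ip address="192.168.122.1" netmask="255.255.255.0">'
echo "$xml" | sed -n 's/.*ip address="\([0-9.]*\)".*/\1/p'   # prints 192.168.122.1
```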

IMPORTANT: When using the tools, keep in mind that you may connect to the same VM with different hostnames. Eg, you could connect to the sec-lucid-amd64 VM as 'sec-lucid-amd64.', 'sec-lucid-amd64.local' or 'sec-lucid-amd64'. vm_ping tests if the VM is up by testing hostnames in this order and it is possible for the first to fail and the second to succeed. Therefore, you should login via ssh to at least the first two (if not all three), so that you have the host keys for the host. You may be having this problem if you see something like:

----- sec-maverick-amd64 -----
Starting sec-maverick-amd64 (snapshotted) ...
Waiting for 'sec-maverick-amd64' to come up host is up
Command: ssh -t -o BatchMode=yes -l root sec-maverick-amd64.local "ls /tmp"
Host key verification failed.

In this case, the ssh host key was in ~/.ssh/known_hosts for 'sec-maverick-amd64.', but not for 'sec-maverick-amd64.local'.
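One way to avoid the mismatch is to collect the host key under all three names up front with ssh-keyscan (the VM name below is an example, and the VM must be up for the scan to return anything):

```shell
# Record the VM's ssh host key under every hostname variant that the tools
# may use, so none of them triggers "Host key verification failed".
vm=sec-maverick-amd64
mkdir -p "$HOME/.ssh"
for h in "$vm." "$vm.local" "$vm" ; do
    # unreachable names simply contribute nothing to known_hosts
    ssh-keyscan -H "$h" >> "$HOME/.ssh/known_hosts" 2>/dev/null || true
done
```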

Cloned virtual machines

With this method, you can have a set of clean VMs and another set that are cloned. Eg, you might have the following virtual machines:

  • clean-dapper-i386
  • clean-dapper-amd64
  • clean-hardy-i386
  • clean-hardy-amd64
  • clean-jaunty-i386
  • clean-jaunty-amd64
  • clean-karmic-i386
  • clean-karmic-amd64
  • clean-lucid-i386
  • clean-lucid-amd64
  • clean-maverick-i386
  • clean-maverick-amd64

Then clone the above (eg with virt-clone or vm-clone (see below)) and have:

  • sec-dapper-i386
  • sec-dapper-amd64
  • sec-hardy-i386
  • sec-hardy-amd64
  • sec-jaunty-i386
  • sec-jaunty-amd64
  • sec-karmic-i386
  • sec-karmic-amd64
  • sec-lucid-i386
  • sec-lucid-amd64
  • sec-maverick-i386
  • sec-maverick-amd64

The 'clean' machines should only ever be accessed for updates or fine-tuning while the 'sec' machines can be used, updated, mangled, discarded and recreated as needed.

To create the clean machines:

. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    vm-new $i i386 clean
    vm-new $i amd64 clean
done

After creating the machines, run 'sudo /postinstall.sh' in the new VMs (if it wasn't run already; the file is deleted after a successful run). You can login with:

$ ssh clean-lucid-amd64.local
Password: ubuntu

You can login as root with:

$ ssh root@clean-lucid-amd64.local
Password: ubuntu

Get them exactly the way you want them (eg, install ubuntu-desktop, disable tracker, disable the screensaver, etc), and then they can be used for cloning.

If you use dnsmasq as above, you can clone these with:

. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    vm-clone clean-${i}-i386 sec-${i}-i386
    vm-clone clean-${i}-amd64 sec-${i}-amd64
done

This creates 'sec-<release>-<arch>' machines that can be updated, tested etc and keeps a pristine copy of the virtual machines in 'clean-<release>-<arch>'. The vm-clone command is simply a wrapper for virt-clone which also updates the following files:

  • /etc/hostname
  • /etc/dhcp3/dhclient.conf
  • /etc/hosts

vm-clone is kind of brittle because it currently uses ssh commands, so if something goes wrong, just make sure the above files get updated.
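If vm-clone does fail partway, the same edits can be made by hand inside the clone; a minimal sketch, where the old and new names are examples and ROOT is a hypothetical knob added here so the commands can be dry-run safely (set ROOT=/ and run as root inside the guest to apply them for real):

```shell
# Hypothetical manual fix-up after a failed vm-clone; replace OLD and NEW
# with your actual old and new VM names. ROOT defaults to a scratch
# directory for a dry run.
OLD=clean-lucid-amd64
NEW=sec-lucid-amd64
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/etc/dhcp3"
touch "$ROOT/etc/hosts" "$ROOT/etc/dhcp3/dhclient.conf"
# set the new hostname and rewrite any references to the old name
echo "$NEW" > "$ROOT/etc/hostname"
sed -i "s/$OLD/$NEW/g" "$ROOT/etc/hosts" "$ROOT/etc/dhcp3/dhclient.conf"
```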

Snapshotted virtual machines

With this method, you just have a pristine set of VMs that are snapshotted. Eg, you might have the following virtual machines:

  • sec-dapper-i386
  • sec-dapper-amd64
  • sec-hardy-i386
  • sec-hardy-amd64
  • sec-jaunty-i386
  • sec-jaunty-amd64
  • sec-karmic-i386
  • sec-karmic-amd64
  • sec-lucid-i386
  • sec-lucid-amd64
  • sec-maverick-i386
  • sec-maverick-amd64

The basic idea is as follows:

  1. the pristine image is in <path>/disk0.pristine.qcow2

  2. the libvirt XML uses the disk at <path>/disk0.qcow2

  3. when using 'vm-start -s ...', <path>/disk0.qcow2 is created using qemu-img as a snapshot of <path>/disk0.pristine.qcow2. If <path>/disk0.qcow2 already exists, it is discarded

  4. 'vm-stop -u ...' will commit changes to any snapshots. 'vm-stop -f ...' will remove any existing snapshots ('-f' uses virsh destroy, which implies not caring about the contents). 'vm-stop ...' shuts down the machine without removing or committing existing snapshots.

Typical uses:

  • snapshot and discard:

    $ vm-start -s foo
    ... do your stuff ...
    $ vm-stop -f foo
  • snapshot with persistence across stops:

    $ vm-start -s foo
    ... do your stuff ...
    $ vm-stop foo                # no '-f' so snapshot is not removed
    $ vm-start foo               # notice no '-s', so existing snapshot is used
    ... do more stuff ...
    $ vm-stop foo
    ... do even more stuff ...
    $ vm-stop -f foo             # done with work, discard the snapshot with '-f'

Adjusting VMs to use snapshots:

$ vm-use-snapshots <vmname>

For a new VM:

$ vm-new ...
$ vm-use-snapshots <vmname>

So, to create your testing VMs for snapshots:

. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    vm-new $i i386 sec
    vm-use-snapshots sec-$i-i386
    vm-new $i amd64 sec
    vm-use-snapshots sec-$i-amd64
done

IMPORTANT: Changes made in a snapshot will be lost if you use '-f' with vm-stop or otherwise remove the snapshots. Also, because the libvirt XML references the snapshot name and not the pristine image, these machines cannot be started with virsh or virt-manager until the snapshot is created (because the disk appears to be missing).

To use in virt-manager, start a vm with a snapshot using vm-start, but don't use the vnc viewer:

$ vm-start -v -s foo

Then access the already started VM from within virt-manager.

You can also manually create the snapshot with qemu-img, like so:

$ qemu-img create -F qcow2 -b <pristine> -f qcow2 <snapshot>

And manually commit the changes with:

$ qemu-img commit <snapshot>

Batch commands

If using dnsmasq as above, you can also use vm-cmd to do batch commands for the virtual machines, like so:

$ vm-cmd -p sec uname -a
$ vm-cmd -r -p sec "apt-get update && apt-get -y upgrade"

vm-cmd uses 'vm_release_list' in $HOME/.uqt-vm-tools.conf and will ssh in to all running sec-*-* machines and perform the specified command. Specifying -r to vm-cmd will login to the machine and run the command as root; otherwise it runs as non-root (ie your username in the guest).

Other useful commands:

  • vm-start: start a single VM or a group of VMs. Eg:

    $ vm-start -s sec-lucid-amd64           # start a single VM, with a new snapshot
    $ vm-start -p sec -a i386               # start all i386 VMs starting with 'sec'
    $ vm-start -v -p sec -a i386            # start all i386 VMs starting with 'sec' without virt-viewer
  • vm-stop: stop a single running VM or a group of running VMs. Eg:

    $ vm-stop sec-lucid-amd64               # stop a single VM via ACPI
    $ vm-stop -u sec-lucid-amd64            # stop a VM via ACPI and commit the snapshot
    $ vm-stop -f -p sec                      # hard stop all VMs starting with 'sec'
  • vm-remove: remove a single VM or a group of VMs. Eg:

    $ vm-remove sec-lucid-amd64             # delete a single VM
    $ vm-remove -p sec                      # delete all VMs starting with 'sec'
  • vm-repo: toggle the local repo (eg where 'umt repo' puts stuff) for a single running VM or a group of running VMs. Eg:

    $ vm-repo -e -r lucid sec-lucid-amd64   # enable the local repo for a single VM
    $ vm-repo -d -r lucid sec-lucid-amd64   # disable the local repo for a single VM
    $ vm-repo -e -p sec                     # enable the local repo for all running VMs starting with 'sec'
  • vm-view: connect to the VNC console of a single VM or a group of VMs using virt-viewer. Eg:

    $ vm-view sec-lucid-amd64
    $ vm-view -p sec

SecurityTeam/TestingEnvironment (last edited 2020-06-29 17:13:38 by jdstrand)