TestingEnvironment

When testing security updates, it is important to test the update in a full Ubuntu environment for the release in question. Put simply, an update for an Ubuntu 10.04 LTS package should be tested in a full install of Lucid. The Ubuntu Security team has created scripts for this and put them into ubuntu-qa-tools. These tools use kvm and libvirt, the preferred virtualization technology in Ubuntu. KVM requires the virtualization extensions to be available and enabled in your BIOS; you can check whether you have them with the kvm-ok command. QEMU is an alternative that can be used with libvirt and vm-tools, but it is slow. If you cannot use kvm, it is worth looking at another virtualization technology such as virtualbox.
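
For example, a minimal check (on Ubuntu, kvm-ok is shipped in the cpu-checker package):

$ sudo apt-get install cpu-checker
$ kvm-ok    # reports whether KVM acceleration can be used on this machine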

vm-tools are essentially wrapper scripts around virsh and virt-install that make VM creation repeatable and help you run batch commands against multiple VMs. Using vm-tools should not be considered mandatory, as all of this can be achieved via other means (though use of uvt new is encouraged and should save you time).

Setting up vm-tools on Precise

Much of this (and more) can be found in the vm-tools/README file.

  1. Install the necessary software:

    $ sudo apt-get install ubuntu-virt-mgmt ubuntu-virt-server genisoimage xorriso
  2. Download the ubuntu-qa-tools branch:

    $ bzr branch lp:ubuntu-qa-tools
  3. Add the UQT_VM_TOOLS environment variable to your startup scripts (eg ~/.bashrc) and have it point to the vm-tools directory of the ubuntu-qa-tools branch:

    export UQT_VM_TOOLS="$HOME/bzr-pulls/ubuntu-qa-tools/vm-tools"
  4. Update your PATH to include the vm-tools directory (eg via ~/.bashrc):

    export PATH="$PATH:$UQT_VM_TOOLS"
  5. [OPTIONAL] While uvt will use reasonable defaults without a config file, you may also create $HOME/.uqt-vm-tools.conf to have something like:

    # list of all active releases (including devel)
    vm_release_list="hardy lucid natty oneiric precise quantal"
    
    vm_repo_url="http://192.168.122.1/debs/testing"
    
    #vm_path="<path to vm directory>"
    #vm_mirror="http://debmirror/ubuntu"
    #vm_security_mirror="http://debmirror/ubuntu"
    #vm_dir_iso="<path to isos directory>"
    #vm_dir_iso_cache="<path to iso cache directory>"
    
    vm_image_size="8" # in GB
    vm_memory="784"
    
    vm_connect="qemu:///system"
    vm_extra_packages="screen vim"
    
    # Set to 'no' to disable '.local' lookups for VMs
    vm_host_use_avahi="yes"
  6. Download the desktop CD images for each release and put them in the directory specified in the vm_dir_iso configuration option.
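
Once these steps are done, you can sanity-check the setup before building a full set of VMs; a small sketch (the release and the 'sec' prefix are only examples):

$ which uvt                   # should resolve to the copy inside $UQT_VM_TOOLS
$ uvt new precise amd64 sec   # optionally build a first VM, named 'sec-precise-amd64'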

Virtual machines for testing

The security team should have at least one virtual machine per release and one for the development release. The recommended method is to use kvm with libvirt, which is what is documented here.

kvm on 64-bit will allow 32-bit OS guests, so if you are running a 64-bit host OS you can have both i386 and amd64 installs. It's often useful to spend some time getting a pristine image exactly how you want it, and then clone or snapshot off of it for development work and/or testing.

WARNING: in this configuration you need to take special care to isolate the virtual machines from hostile network traffic, since anyone could log in to a VM as root or as your user. One way to do this is by using NAT with libvirt (the default).
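
To confirm that libvirt's default NAT network is present and active, virsh (installed above) can be used; this is only a sanity check:

$ virsh -c qemu:///system net-list --all       # 'default' should be listed and active
$ virsh -c qemu:///system net-dumpxml default  # shows the NAT subnet, 192.168.122.0/24 by default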

Networking with libvirt

If you also set up your host's /etc/resolv.conf to have:

nameserver 192.168.122.1
nameserver ...
nameserver ...

Then you will be able to ssh into the machines with:

$ ssh sec-lucid-amd64

or:

$ ssh sec-lucid-amd64.

or if avahi is installed in the guest:

$ ssh sec-lucid-amd64.local

Notice the '.' at the end of the second command. This may be needed due to a bug in dnsmasq when using NAT with some versions of Ubuntu.

resolvconf

If you use DHCP then /etc/resolv.conf gets overwritten, which is inconvenient. To make sure that 192.168.122.1 (dnsmasq) is always used first, even when there is no IP address, you can install the resolvconf package, then adjust /etc/resolvconf/resolv.conf.d/head (/etc/resolvconf/resolv.conf.d/base on Ubuntu 11.10 and earlier) to have:

# Make sure dnsmasq server is first (from /etc/resolvconf/resolv.conf.d/head)
nameserver 192.168.122.1
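
After editing that file, regenerate /etc/resolv.conf so the change takes effect; a short sketch using the resolvconf update command:

$ sudo resolvconf -u                 # rebuild /etc/resolv.conf from its configured sources
$ grep nameserver /etc/resolv.conf   # 192.168.122.1 should now be listed first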

Alternatively, you may be able to simply put a line into /etc/dhcp/dhclient.conf like:

prepend domain-name-servers 192.168.122.1;

IMPORTANT: When using the tools, keep in mind that you may connect to the same VM with different hostnames. Eg, you could connect to the sec-lucid-amd64 VM as 'sec-lucid-amd64.', 'sec-lucid-amd64.local' or 'sec-lucid-amd64'. uvt tests if the VM is up by trying these hostnames in this order, and it is possible for the first to fail and the second to succeed. Therefore, you should log in via ssh to at least the first two (if not all three), so that you have the ssh host keys for each name. You may be having this problem if you see something like:

----- sec-maverick-amd64 -----
Starting sec-maverick-amd64 (snapshotted) ...
Waiting for 'sec-maverick-amd64' to come up host is up
Command: ssh -t -o BatchMode=yes -l root sec-maverick-amd64.local "ls /tmp"
Host key verification failed.

In this case, the ssh host key was in ~/.ssh/known_hosts for 'sec-maverick-amd64.', but not for 'sec-maverick-amd64.local'.
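
To avoid this, you can seed ~/.ssh/known_hosts with all three name forms up front; a small sketch (the VM name is an example):

$ for h in sec-lucid-amd64. sec-lucid-amd64.local sec-lucid-amd64 ; do ssh "$h" true ; done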

mDNS (avahi)

vm-tools will by default try '.local' mDNS (avahi) addresses when connecting to VMs. This may not work in all environments and can be disabled by using the following in $HOME/.uqt-vm-tools.conf:

# Set to 'no' to disable '.local' mDNS (avahi) lookups for VMs
vm_host_use_avahi="no"

You'll know you are encountering this problem when host name lookups fail for <machine>.local. This can also be caused by restrictive firewalls. If you have ufw enabled, then instead of disabling avahi lookups you can do something similar to:

$ sudo ufw allow in on virbr0

If you also have IPv6 enabled (the default in Ubuntu 11.10 and above), you can add something like the following to /etc/ufw/before6.rules before the 'COMMIT' line:

# allow MULTICAST mDNS for service discovery for libvirt
-A ufw6-before-forward -p udp -d ff02::fb --dport 5353 -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
-A ufw6-before-forward -p icmpv6 --icmpv6-type echo-request -j ACCEPT

Be sure to use sudo ufw reload after making these changes. Note that you may need to stop and start any VMs after making changes to your firewall.
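
Whether avahi or the firewall is the culprit, you can check from the host whether '.local' lookups are working with something like the following (the VM name is an example; avahi-resolve comes from the avahi-utils package):

$ getent hosts sec-lucid-amd64.local      # resolves only if the host handles .local via NSS (libnss-mdns)
$ avahi-resolve -n sec-lucid-amd64.local  # queries mDNS directly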

Snapshotted virtual machines

With this method, we have VMs which are initially created with a pristine libvirt snapshot. Eg, you might have the following virtual machines:

  • sec-hardy-i386
  • sec-hardy-amd64
  • sec-lucid-i386
  • sec-lucid-amd64
  • sec-natty-i386
  • sec-natty-amd64
  • sec-oneiric-i386
  • sec-oneiric-amd64
  • sec-precise-i386
  • sec-precise-amd64
  • sec-quantal-i386
  • sec-quantal-amd64

Create your VMs (best to do this sequentially, even though it takes a while):

. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    uvt new $i i386 sec
    uvt new $i amd64 sec
done

The basic idea is as follows:

  1. after installing the OS, uvt new will create the pristine snapshot

  2. using uvt start <vm> starts the image normally, and uvt stop <vm> shuts it down cleanly. Changes to the VM are preserved across reboots

  3. using uvt start -r <vm> reverts all changes made to the VM since the last snapshot, then starts the VM in the pristine state. Note that if you know you are going to revert to the previous snapshot, you can use uvt stop -f <vm> which does an unclean shutdown akin to pulling the plug.

Typical uses:

  • revert to pristine snapshot and discard:

    $ uvt start -r <vm>
    ... do your stuff ...
    $ uvt stop -f <vm>
  • snapshot with persistence across stops:

    $ uvt start -r <vm>            # revert all changes and start with a clean slate
    ... do your stuff ...
    $ uvt stop <vm>                # no '-f' so a clean shutdown is performed
    $ uvt start <vm>               # notice no '-r', so changes are not reverted
    ... do more stuff ...
    $ uvt stop <vm>
    ... do even more stuff ...
    $ uvt stop -f <vm>             # done with work, so pull the plug for a quick shutdown (assumes -r on next start)

IMPORTANT: Changes made in a snapshot will be lost if you use '-r' with uvt start or otherwise remove the snapshots.

To update the pristine image and make a new snapshot:

$ uvt start -r <vm>            # revert all changes and start with a clean slate
... make changes to the VM ...
$ uvt stop <vm>                # cleanly shut it down
$ uvt snapshot <vm>            # update the pristine snapshot

As a convenience, you can perform package upgrades using:

$ uvt update --autoremove <vm> # starts the VM, dist-upgrades, cleans up, then updates the pristine snapshot

IMPORTANT: make sure the VM was properly shut down before using this command, because uvt update does not revert to the previous snapshot.
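
If you are unsure whether the VM was last shut down cleanly, one safe pattern is to revert to the pristine snapshot and shut it down cleanly before updating; a sketch built from the commands above:

$ uvt start -r <vm>            # revert to the pristine snapshot
$ uvt stop <vm>                # clean shutdown
$ uvt update --autoremove <vm> # the upgrade is now applied on top of a pristine state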

Cloned virtual machines (deprecated)

NOTE: with the new uvt snapshot method, using cloning is less useful.

With this method, you have a set of clean VMs and another set that are cloned from them. Eg, you might have the following virtual machines:

  • clean-dapper-i386
  • clean-dapper-amd64
  • clean-hardy-i386
  • clean-hardy-amd64
  • clean-jaunty-i386
  • clean-jaunty-amd64
  • clean-karmic-i386
  • clean-karmic-amd64
  • clean-lucid-i386
  • clean-lucid-amd64
  • clean-maverick-i386
  • clean-maverick-amd64

Then clone the above (eg with uvt clone, see below) and have:

  • sec-dapper-i386
  • sec-dapper-amd64
  • sec-hardy-i386
  • sec-hardy-amd64
  • sec-jaunty-i386
  • sec-jaunty-amd64
  • sec-karmic-i386
  • sec-karmic-amd64
  • sec-lucid-i386
  • sec-lucid-amd64
  • sec-maverick-i386
  • sec-maverick-amd64

The 'clean' machines should only ever be accessed for updates or fine-tuning while the 'sec' machines can be used, updated, mangled, discarded and recreated as needed.

To create the clean machines:

. $HOME/.uqt-vm-tools.conf
for i in $vm_release_list ; do
    uvt new $i i386 clean
    uvt new $i amd64 clean
done

If you use dnsmasq as above, you can clone these with:

$ uvt clone -p clean sec      # all in one go
$ uvt clone <oldvm> <newvm>   # individually

This creates 'sec-<release>-<arch>' machines that can be updated, tested, etc., while keeping a pristine copy of the virtual machines in 'clean-<release>-<arch>'. The uvt clone command is simply a wrapper for virt-clone which also updates the following files:

  • /etc/hostname
  • /etc/dhcp[3]/dhclient.conf
  • /etc/hosts

uvt clone is kind of brittle because it currently uses ssh commands, so if something goes wrong, just make sure the above files get updated.
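
If a clone does end up half-configured, those files can be fixed by hand from inside the guest; a rough sketch (the VM names are examples, and the dhclient.conf path is /etc/dhcp3/dhclient.conf on older releases):

$ ssh root@sec-lucid-amd64.local
# echo sec-lucid-amd64 > /etc/hostname
# sed -i 's/clean-lucid-amd64/sec-lucid-amd64/g' /etc/hosts /etc/dhcp/dhclient.conf
# reboot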

Batch commands

If using dnsmasq as above, you can also use uvt cmd to do batch commands for the virtual machines, like so:

$ uvt cmd -p sec 'uname -a'
$ uvt cmd -r -p sec "apt-get update && apt-get -y upgrade"

uvt cmd uses 'vm_release_list' in $HOME/.uqt-vm-tools.conf and will ssh into all running sec-*-* machines and run the specified command. Specifying -r to uvt cmd will log in to the machine and run the command as root; otherwise it runs as non-root (ie as your username in the guest).
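
For example, when verifying that a security update has landed, you might check the installed version of the affected package across every running 'sec' VM (the package name here is only an illustration):

$ uvt cmd -p sec 'lsb_release -sc ; dpkg-query -W openssl'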

Other useful commands:

  • uvt start: start a single VM or a group of VMs. Eg:

    $ uvt start -r sec-lucid-amd64           # start a single VM, reverting to the last snapshot
    $ uvt start -p sec -a i386               # start all i386 VMs starting with 'sec'
    $ uvt start -v -p sec -a i386            # start all i386 VMs starting with 'sec' without virt-viewer
  • uvt stop: stop a single running VM or a group of running VMs. Eg:

    $ uvt stop sec-lucid-amd64               # stop a single VM via ACPI
    $ uvt stop -f -p sec                     # hard stop all VMs starting with 'sec'
  • uvt snapshot: snapshot a single running VM or a group of running VMs. Eg:

    $ uvt snapshot sec-lucid-amd64           # snapshot a single VM
    $ uvt snapshot -p sec                    # snapshot all VMs starting with 'sec'
  • uvt update: update and snapshot a single running VM or a group of running VMs. Eg:

    $ uvt update sec-lucid-amd64            # dist-upgrade and snapshot a single VM
    $ uvt update -p sec                     # dist-upgrade and snapshot all VMs starting with 'sec'
  • uvt remove: remove a single VM or a group of VMs. Eg:

    $ uvt remove sec-lucid-amd64             # delete a single VM
    $ uvt remove -p sec                      # delete all VMs starting with 'sec'
  • uvt repo: toggle the local repo (eg where 'umt repo' puts stuff) for a single running VM or a group of running VMs. Eg:

    $ uvt repo -e -r lucid sec-lucid-amd64   # enable the local repo for a single VM
    $ uvt repo -d -r lucid sec-lucid-amd64   # disable the local repo for a single VM
    $ uvt repo -e -p sec                     # enable the local repo for all running VMs starting with 'sec'
  • uvt view: connect to the VNC console of a single VM or a group of VMs using virt-viewer. Eg:

    $ uvt view sec-lucid-amd64
    $ uvt view -p sec
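
Putting these together, a typical cycle for testing packages from the local repo in a single VM might look like the following sketch (the release, VM name and package are placeholders, and it assumes 'umt repo' has already published the packages):

$ uvt start -r sec-lucid-amd64             # boot from the pristine snapshot
$ uvt repo -e -r lucid sec-lucid-amd64     # enable the local testing repo
$ ssh root@sec-lucid-amd64 "apt-get update && apt-get install <package>"
... test the update ...
$ uvt stop -f sec-lucid-amd64              # discard the changes when done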
