The Ubuntu Hypervisor stack consists of qemu-kvm and libvirt at its core. QEMU provides the userspace emulation, KVM provides the kernel acceleration, and libvirt provides an abstraction layer for applications to interface with various hypervisors at an API level.

This page is dedicated to enumerating and tracking the testing of some of the basic and advanced features of this hypervisor stack.

For basic documentation, see:


Feature: Nested KVM

Nested KVM is possible for AMD machines, but not yet for Intel ones. Further (to my surprise), on AMD, you can only nest amd64 on amd64, or i386 on i386. You cannot nest an i386 VM inside an amd64 VM.

To start a VM in which you wish to nest another VM, pass the -enable-nesting flag to kvm. Then call kvm as usual inside that VM.

Nesting is not yet supported in libvirt by default. To use it, I did the following:

* Created a new wrapper called kvm.nested

cat > /usr/bin/kvm.nested << 'EOF'
#!/bin/sh
exec /usr/bin/kvm "$@" -enable-nesting
EOF
chmod +x /usr/bin/kvm.nested

* Allowed libvirt to use that wrapper

cat >> /etc/apparmor.d/abstractions/libvirt-qemu << EOF
  /usr/bin/kvm.nested rmix,
EOF
/etc/init.d/apparmor restart

* Then in your vm.xml, use:

    <emulator>/usr/bin/kvm.nested</emulator>

instead of the usual

    <emulator>/usr/bin/kvm</emulator>
* Finally, while not necessary, I used LVM partitions for the nested guest rather than container files.
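As a minimal sketch, the wrapper step above can be rehearsed in a temporary directory before touching /usr/bin (the temp path is purely illustrative; the real target is /usr/bin/kvm.nested):

```shell
# Build the kvm.nested wrapper in a temp dir so the step can be tried
# without root. The wrapper forwards all guest arguments to kvm and
# appends the nesting flag, as described above.
dir=$(mktemp -d)
cat > "$dir/kvm.nested" << 'WRAP'
#!/bin/sh
# Forward all guest arguments to kvm, then append the nesting flag.
exec /usr/bin/kvm "$@" -enable-nesting
WRAP
chmod +x "$dir/kvm.nested"
# Sanity check: the flag made it into the wrapper exactly once.
count=$(grep -c 'enable-nesting' "$dir/kvm.nested")
echo "$count"
```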

QEMU Feature: Serial Console

  • Command line:

    kvm -serial stdio
  • Additional setup: Add console=ttyS0 to the kernel boot parameter

  • Result: Serial console input/output is on stdio of the shell that launched the VM

libvirt serial console

  • Use the following snippet in your VM definition:

      <serial type='tcp'>
        <source mode='bind' host='' service='2445'/>
        <protocol type='telnet'/>
        <target port='0'/>
      </serial>
      <console type='tcp'>
        <source mode='bind' host='' service='2445'/>
        <protocol type='telnet'/>
        <target type='serial' port='0'/>
      </console>
  • Then telnet to port 2445 to see the console

QEMU Feature: VNC

  • Command line:

    kvm -vnc :1
  • Additional setup: Run vncviewer :1 from another command prompt

  • Result: VM's graphical display should be in a VNC window, rather than SDL

QEMU Feature: virtio disks

  • Command line:

    kvm -drive file=maverick.img,if=virtio,index=0,boot=on
  • Result: Image boots

QEMU Feature: virtio net

  • Command line:

    kvm -net nic,model=virtio -net user -redir tcp:2224::22
  • Result: Image is able to boot and access network, and host can ssh into guest using 'ssh -p 2224 localhost'


  • Fedora:

    • Installed 32-bit and 64-bit fedora 13, from livecds
  • Debian:

    • Installed 32-bit and 64-bit, from netboot cd images
  • CentOS 5.5:

    • Boots off livecd and dvd
    • Installs and boots from dvd
    • Installing with the 'Virtualization' option installs a xen kernel
      • This fails to boot
        • first needs 'noapic' boot argument to get past bios 'bug'
        • then appears to fail at device creation
      • Note this is only for an install as virtualization host, normal install boots fine


  • Windows 7 Ultimate x64 - passed
    • No sound (no soundcards emulated by qemu are supported in 64-bit windows 7)
    • Otherwise worked well

libvirt save/restore VM

  • succeeded with 0.8.1
  • save takes 50 seconds, restore 1 (512M ram, 100M save image)

libvirt+qemu hot-add

  • In guest:

    modprobe acpiphp
  • On host:

    virsh attach-disk 13 /home/serge/newdisk.img vda --type disk --mode shareable --driver file
  • Result:

    • Success (can fdisk, format, and mount new disk)
    • Cannot choose index (uses next available, i.e. can't use vdb if vda is unused)
    • Note there is a bug that can cause the loss of virtio NIC
      • this will be fixed with 0.8.2 libvirt merge

live migration

  • I created two maverick-server kvm guests (on a lucid host)
  • exported /srv/export from the host to the guests over NFS
  • In /srv/export, I placed a small (1G) debian.img I'd installed on the host
  • On the first maverick guest, I created a d32.xml and

    • virsh define d32.xml
      virsh start d32
      d=`virsh list | grep d32 | awk '{ print $1 }'`
      virsh migrate --live $d qemu+ssh://secondguest/session
      where secondguest is, of course, the name or IP address of the second guest.
  • Result:

    • Success - the host continued perfectly on the second guest
    • Caveat
      • This had to be done using qemu-kvm, not the maverick package
      • The maverick package has a bug (LP: #591423) which prevented non-kvm qemu booting (at all) in my guests
      • This means there may be other bugs in the maverick package preventing live migration
      • When 0.13.0 qemu is merged, that should pull in all fixes

libvirt NAT network

  • This is the default in Ubuntu libvirt setups, so it is always being tested. To confirm, check 'iptables -L' and 'ifconfig -a' output after installing and starting libvirt.

libvirt private network

  • Created /etc/libvirt/qemu/networks/private.xml as the default:

cat > /etc/libvirt/qemu/networks/private.xml << EOF
<network>
  <name>private</name>
  <bridge name="virbr%d" />
  <ip address="" netmask="">
    <dhcp>
      <range start="" end="" />
    </dhcp>
  </ip>
</network>
EOF
(cd $d; rm -f *; ln -s ../private.xml)
  • Rebooted, created a VM.
  • ifconfig -a shows virbr0 exists
  • iptables -L shows no forwarding or NAT rules
  • Starting up the VM, it can ping the host and other guests on the same network, but not the outside world.

kvm bridged network

  • Works.
  • (as root):

apt-get remove network-manager wicd

cat > /etc/network/interfaces << EOF
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
EOF
  • (reboot)
  • MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"; echo $MACADDR

  • kvm -drive file=server1.img,if=scsi,index=0,boot=on -m 1G -smp 2 -net nic,macaddr=$MACADDR,model=virtio -net tap,ifname=tap1,script=no,downscript=no
  • Hook the tap device into the bridge on the host:

/sbin/ifconfig tap1 up
brctl addif br0 tap1
  • And request an address on the guest:

dhclient eth0
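The MAC-generation one-liner above can be sanity-checked on its own, with no KVM involved. A small sketch regenerates the address and verifies it has the expected shape (six hex pairs under the 52:54 prefix conventionally used for KVM guests):

```shell
# Re-run the MAC generation from the bridged-network steps above and
# verify the result looks like a valid locally-administered MAC.
MACADDR="52:54:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"
echo "$MACADDR"
# Expect: 52:54 followed by four random hex pairs, colon-separated.
echo "$MACADDR" | grep -Eq '^52:54(:[0-9a-f]{2}){4}$' && echo format-ok
```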

save / restore


  • Took an installed qcow2 VM. Created a testing clone using
     qemu-img create -f qcow2 -b orig.img new.img
  • Ran it under kvm with a monitor on stdio:
     kvm -drive file=new.img,if=virtio,index=0,boot=on -m 512M -monitor stdio
  • Saved one snapshot, created a file, saved another snapshot, restored the first snapshot, confirmed the file was gone, restored the second snapshot, confirmed the file was there.
    <kvm monitor on terminal>               <VM console>
       savevm p1
                                             echo ab > ab
       savevm p2
       loadvm p1
                                             cat ab #(no such file)
       loadvm p2
                                             cat ab # (shows contents of ab)
  • Quit kvm, restarted with the same image, restored the second snapshot again:
    loadvm p2
  • Everything worked as expected. Save is slow, a known bug which should be fixed upstream.

libvirt

As of libvirt 0.8.1, a new snapshot API is supported. The backing file must be qcow2, and the disk definition for the VM must include a line defining it as type qcow2, i.e.:

<driver name='qemu' type='qcow2' cache='writethrough'/>

Then, while the machine is running, you can do

virsh snapshot-create VM-name

That command returns a snapshot ID, e.g. '12798656811'. You can list all snapshots with 'virsh snapshot-list VM-name', and you can restore from a snapshot with 'virsh snapshot-revert VM-name snapshot-name'. This, however, only snapshots memory, not disk. The most promising route for doing full snapshots from libvirt is the soon-to-come ability to send arbitrary qemu monitor commands through libvirt. This will likely not be available until after libvirt 0.8.3.
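For reference, a complete disk stanza meeting that qcow2 requirement might look like the following sketch (the source file path and target device are illustrative, not from the original test):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writethrough'/>
  <source file='/var/lib/libvirt/images/vm.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```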


Started up several KVM instances, and looked at


which went up.


PXE boot

  • Success
  • I tested it in a somewhat ugly way, as follows. First, installed syslinux and tftp on the host.
  • Set up a pxelinux.cfg on the host:

mkdir /var/lib/tftpboot/pxelinux.cfg
cat > /var/lib/tftpboot/pxelinux.cfg/default << eof
label default
  kernel vmlinuz-2.6.35-9-generic
  initrd initrd.img-2.6.35-9-generic
  append root=/dev/sda ro
eof

cp /boot/{vmlinuz-2.6.35-9-generic,initrd.img-2.6.35-9-generic} \
cp /usr/lib/syslinux/{memdisk,menu.c32,pxelinux.0,vesainfo.c32,vesamenu.c32} \
  • Create a dummy VM and set it up to boot from network (PXE).
  • On the host, kill libvirt's default dnsmasq, and restart it configured to serve tftp:

dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/ --conf-file= --listen-address --except-interface lo --dhcp-range, --dhcp-lease-max=253 --enable-tftp --tftp-root=/var/lib/tftpboot --dhcp-boot=pxelinux.0
  • The VM will boot, and land in busybox with an '(initramfs)' prompt, since it was unable to find a root fs. (Installing an NFS server to serve a root fs would get us past that.)

GPXE (Etherboot)

etherboot package (on lucid and maverick)

  • apt-get install etherboot
  • cp /usr/lib/etherboot/rtl8139.dsk.gz .; gunzip rtl8139.dsk.gz
  • kvm -fda rtl8139.dsk -net nic,model=rtl8139 -net user -bootp

  • FAIL (on lucid and maverick)
    • infinite loop at bios

gpxe upstream (on lucid)


git clone git://
cd gpxe/src
make
kvm -fda bin/gpxe.dsk -net nic -net user -bootp
  • boots.


  • (Have contacted the Debian maintainer to find out merge plans)

iscsi boot

apt-get install tgt open-iscsi-utils open-iscsi
dd if=/dev/zero of=/var/lib/tgtd/kvmguests/a.img bs=1M seek=10240 count=0
dd if=/dev/zero of=/var/lib/tgtd/kvmguests/shareddata.img bs=1M count=512
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2004-04.fedora:fedora13:iscsi.kvmguests
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /var/lib/tgtd/kvmguests/a.img 
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 --backing-store /var/lib/tgtd/kvmguests/shareddata.img 
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
cat > /root/pool.xml << EOF
<pool type='iscsi'>
  <name>kvmguests</name>
  <source>
    <host name='localhost'/>
    <device path='iqn.2004-04.fedora:fedora13:iscsi.kvmguests'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
EOF
virsh pool-define /root/pool.xml
virsh pool-start kvmguests
  • Used virt-viewer to create a VM with backing store on the iscsi pool
  • Installed ubuntu 10.04 server, booted a few times
  • Relevant xml from VM definition:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-path/ip-'/>
      <target dev='hda' bus='ide'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>


Not yet tested, but came across this page, which might be helpful:

USB passthrough

Using Libvirt

  • fired up a pre-existing vm

        virsh start maverick2
  • plugged in a usb drive
  • found the usb address using lsusb, which gave me

    Bus 002 Device 006: ID 1058:1023 Western Digital Technologies, Inc.

  • defined a xml file with the device info:

<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x1058'/>
    <product id='0x1023'/>
  </source>
</hostdev>
  • passed the usb drive to the vm

sudo virsh attach-device maverick2 /tmp/a.xml
  • HOWEVER this does not work with apparmor enabled. You must either disable apparmor, or add

/dev/bus/usb/*/[0-9]* rw,

to either /etc/apparmor.d/libvirt-qemu (which gives all guests full access to physical host devices) or to


which will give only the one guest that access. (Thanks to jdstrand for help getting that straight.)

Using KVM

Make sure to start kvm with the '-usb' flag and to open a monitor (say, using '-monitor stdio'). Then pass the usb device using the monitor command:

 usb_add host:1058:1023

using the same vendor/product ids as above, or by invoking on the command line like this:

  kvm -usb -usbdevice host:1390:0001 ...

VirtFeatureVerification (last edited 2011-01-22 15:36:50 by cpe-66-69-252-85)