VirtFeatureVerification

The Ubuntu Hypervisor stack consists of qemu-kvm and libvirt at its core. QEMU provides the userspace emulation, KVM provides the kernel acceleration, and libvirt provides an abstraction layer for applications to interface with various hypervisors at an API level.
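
As a quick sanity check of the stack, something like the following should work
on an Ubuntu host (a sketch; package names are the lucid/maverick-era ones, and
kvm-ok comes from the cpu-checker package):

    sudo apt-get install qemu-kvm libvirt-bin cpu-checker
    kvm-ok            # confirms the CPU and BIOS support KVM acceleration
    virsh version     # confirms libvirt can talk to the qemu driver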

This page is dedicated to enumerating and tracking the testing of some of the basic and advanced features of this hypervisor stack.

For basic documentation, see:

Results

QEMU Feature: Serial Console

  • Command line:

    kvm -serial stdio
  • Additional setup: Add console=ttyS0 to the guest's kernel boot parameters

  • Result: Serial console input/output is on stdio of the shell that launched the VM
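
To make this persistent in a GRUB 2 based guest, something along these lines
works (a sketch; the file paths are stock GRUB 2, not from the original page):

    # in the guest: route kernel console output to the first serial port
    sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&console=ttyS0 /' /etc/default/grub
    sudo update-grub
    # after a reboot, the guest's boot messages appear on the stdio of the
    # shell that launched 'kvm -serial stdio'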

QEMU Feature: VNC

  • Command line:

    kvm -vnc :1
  • Additional setup: Run vncviewer :1 from another command prompt

  • Result: VM's graphical display should be in a VNC window, rather than SDL
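
To view the display from another machine, the listen address can be given
explicitly (a sketch; the address 192.0.2.10 is a placeholder for the host's
IP):

    kvm -vnc 192.0.2.10:1 maverick.img     # on the host
    vncviewer 192.0.2.10:1                 # on the remote machine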

QEMU Feature: virtio disks

  • Command line:

    kvm -drive file=maverick.img,if=virtio,index=0,boot=on
  • Result: Image boots
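
The test image can be prepared up front with qemu-img (a sketch; the image
name, size, and installer ISO are illustrative):

    qemu-img create -f raw maverick.img 8G
    kvm -drive file=maverick.img,if=virtio,index=0,boot=on -cdrom maverick.iso
    # install onto the virtio disk, then boot again without -cdrom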

QEMU Feature: virtio net

  • Command line:

    kvm -net nic,model=virtio -net user -redir tcp:2224::22
  • Result: Image is able to boot and access network, and host can ssh into guest using 'ssh -p 2224 localhost'
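
Inside the guest, the virtio NIC should be visible on the PCI bus (a sketch of
the check, not from the original test):

    lspci | grep -i virtio      # should list a virtio network device
    lsmod | grep virtio_net     # the virtio_net driver should be loaded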

Distributions

  • Fedora:

    • Installed 32-bit and 64-bit Fedora 13, from livecds
  • Debian:

    • Installed 32-bit and 64-bit, from netboot cd images
  • CentOS 5.5:

    • Boots off livecd and dvd
    • Installs and boots from dvd
    • Installing with the 'Virtualization' option installs a xen kernel
      • This fails to boot
        • it first needs the 'noapic' boot argument to get past a BIOS 'bug'
        • it then appears to fail at device creation
      • Note this applies only to an install as a virtualization host; a normal install boots fine

libvirt save/restore VM

  • Succeeded with libvirt 0.8.1
  • save takes 50 seconds, restore 1 second (512M RAM, 100M save image)
  • QEMU_MONITOR_MIGRATE_TO_FILE_BS fix needed
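
The test boils down to the standard virsh save/restore cycle (a sketch; the
domain name and save path are hypothetical):

    time virsh save mydomain /var/tmp/mydomain.save    # ~50 seconds here
    time virsh restore /var/tmp/mydomain.save          # ~1 second here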

libvirt+qemu hot-add

  • In guest:

    modprobe acpiphp
  • On host:

    virsh attach-disk 13 --type disk /home/serge/newdisk.img --mode shareable --driver file vda
  • Result:

    • Success (can fdisk, format, and mount new disk)
    • Cannot choose index (uses next available, i.e. can't use vdb if vda is unused)
    • Note there is a bug that can cause the loss of virtio NIC
      • this will be fixed with 0.8.2 libvirt merge
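
Inside the guest, the hot-added disk can be exercised along these lines (a
sketch; this assumes the disk appeared as /dev/vda, per the attach-disk target
above):

    fdisk /dev/vda               # partition the new disk
    mkfs.ext3 /dev/vda1          # format the first partition
    mount /dev/vda1 /mnt         # and mount it
    # on the host, the disk can later be removed with: virsh detach-disk 13 vda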

live migration

  • I created two maverick-server kvm guests (on a lucid host)
  • exported /srv/export from the host to the guests over NFS
  • In /srv/export, I placed a small (1G) debian.img I'd installed on the host
  • On the first maverick guest, I created a d32.xml (a minimal sketch of such
    a domain definition appears after this list) and ran:

      virsh define d32.xml
      virsh start d32
      d=`virsh list | grep d32 | awk '{ print $1 }'`
      virsh migrate --live $d qemu+ssh://secondguest/session

    where secondguest is of course the name or IP address of the second guest.
  • Result:

    • Success - the migrated VM continued running perfectly on the second guest
    • Caveat
      • This had to be done using qemu-kvm, not the maverick qemu package
      • The maverick package has a bug (LP: #591423) which prevented non-kvm
        qemu from booting (at all) in my guests
      • This means there may be other bugs in the maverick package preventing
        live migration
      • When 0.13.0 qemu is merged, that should pull in all fixes
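
A minimal domain definition along these lines would suffice; this is an assumed
reconstruction for illustration, not the actual d32.xml from the test (name,
memory size, and the path of the NFS-mounted image are illustrative):

    <domain type='kvm'>
      <name>d32</name>
      <memory>262144</memory>
      <vcpu>1</vcpu>
      <os>
        <type arch='i686'>hvm</type>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <!-- the NFS export, mounted at the same path on both guests -->
          <source file='/srv/export/debian.img'/>
          <target dev='hda' bus='ide'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
        </interface>
        <graphics type='vnc' port='-1'/>
      </devices>
    </domain>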

libvirt NAT network

  • This is the default in Ubuntu libvirt setups, so it is always being
    tested. To confirm, check 'iptables -L' and 'ifconfig -a' output after
    installing and starting libvirt.
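
On a default install, the checks look something like this (a sketch; the
192.168.122.0/24 subnet is libvirt's stock default):

    ifconfig -a | grep virbr0                 # the NAT bridge exists
    iptables -t nat -L -n | grep MASQUERADE   # masquerading for the subnet
    virsh net-list                            # the 'default' network is active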

libvirt private network

  • Created /etc/libvirt/qemu/networks/private.xml as the default:
      cat >> /etc/libvirt/qemu/networks/private.xml << EOF
      <network>
              <name>default</name>
              <bridge name="virbr%d" />
              <ip address="192.168.152.1" netmask="255.255.255.0">
                <dhcp>
                  <range start="192.168.152.2" end="192.168.152.254" />
                </dhcp>
              </ip>
      </network>
      EOF
      d=/etc/libvirt/qemu/networks/autostart/
      (cd $d; rm -f *; ln -s ../private.xml)
  • Rebooted, created a VM.
  • ifconfig -a shows virbr0 exists
  • iptables -L shows no forwarding or NAT rules
  • Starting up the VM, it can ping the host and other guests on the same
    network, but not the outside world.
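
A quick way to confirm the isolation from inside a guest (a sketch; the first
address is the host end of virbr0 from private.xml above, the second is an
arbitrary outside address):

    ping -c 1 192.168.152.1     # host and same-network guests: should succeed
    ping -c 1 8.8.8.8           # outside world: should fail (no NAT rules)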
