OpenStackTestPlan
Overview
The purpose of this topic is to detail the test plan for testing OpenStack deployments during the Oneiric Ocelot release cycle.
Assumptions
- Single-node topology will initially be tested manually.
- Multi-node topology will be tested using the Ensemble formulas for OpenStack.
- Ensemble services will be unit/relation tested using the formula-tester formula.
- Deployed topologies will be tested functionally using XXXX.
Topologies
Single Node
Overview
The purpose of this topology is to allow easy enablement for OpenStack development and is not intended for use in production deployments.
A single node deployment consists of the following components:
- Nova (api, scheduler, compute, network)
- Hypervisor support: KVM and LXC (compute-kvm, compute-lxc)
- glance
- rabbitmq-server
- API Server
nova-network will be configured to use the default VLANManager.
In-scope
- Tests that will be validated against this topology
Out-of-scope
- Support for Xen hypervisor
- Use of MySQL database
- Deployment of swift
- Deployment of keystone
OpenStack-SingleNode-KVM/LXC
Basic Oneiric Server Installation
Perform a standard Oneiric Server amd64 installation; installing openssh-server is recommended to make configuring the node easier.
Test Procedure
STEP 1: Install bridge-utils:
sudo apt-get install bridge-utils
STEP 2: Edit /etc/network/interfaces (IP addresses will need to be changed to match the network configuration of the testing environment):
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#auto eth0
#iface eth0 inet dhcp
# This is an autoconfigured IPv6 interface
#iface eth0 inet6 auto

auto br0
iface br0 inet static
    address 192.168.0.10
    network 192.168.0.0
    netmask 255.255.255.0
    broadcast 192.168.0.255
    gateway 192.168.0.1
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off
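Before restarting networking or rebooting, a quick grep over the stanza can catch typos. A minimal sketch, run here against a temporary copy of the stanza rather than the live file (point `cfg` at /etc/network/interfaces on a real node):

```shell
# Write an illustrative copy of the bridge stanza to a scratch file
cfg=/tmp/interfaces.check
cat > "$cfg" <<'EOF'
auto br0
iface br0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports eth0
    bridge_stp off
EOF

# A usable bridge stanza must name its member port(s) and,
# for this single-node setup, disable spanning tree
grep -q 'bridge_ports eth0' "$cfg" && echo "bridge_ports ok"
grep -q 'bridge_stp off' "$cfg" && echo "stp ok"
```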
Setup LXC containers if required:
sudo mkdir /cgroups
sudo vi /etc/fstab
Add to end of /etc/fstab:
none /cgroups cgroup cpuacct,memory,devices,cpu,freezer,blkio 0 0
- Reboot
STEP 3: Validate that br0 is running
ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP qlen 1000
    link/ether 00:26:b9:14:09:67 brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:26:b9:14:09:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 brd 192.168.0.255 scope global br0
    inet6 2a01:348:2ff:0:222:ffff:ffff:ffff/64 scope global dynamic
       valid_lft 86297sec preferred_lft 14297sec
    inet6 fe80::226:b9ff:fe14:967/64 scope link
       valid_lft forever preferred_lft forever
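Rather than eyeballing the output, the br0 address can be extracted with awk. A sketch against a captured excerpt of the output above (on a live node, substitute `ip addr show br0` for the sample variable):

```shell
# Captured excerpt of 'ip addr show br0' output
sample='3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:26:b9:14:09:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.10/24 brd 192.168.0.255 scope global br0'

# Print the address/prefix from the first IPv4 'inet' line
br0_ip=$(echo "$sample" | awk '/inet /{print $2; exit}')
echo "$br0_ip"   # 192.168.0.10/24
```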
STEP 4: Install OpenStack packages
sudo apt-get install nova-compute nova-compute-kvm nova-scheduler nova-objectstore nova-network nova-api glance rabbitmq-server unzip
For the LXC test change nova-compute-kvm to nova-compute-lxc
sudo apt-get install nova-compute nova-compute-lxc nova-scheduler nova-objectstore nova-network nova-api glance rabbitmq-server unzip
TEST: Validate that relevant nova processes are running
prompt> pgrep -u nova -l | sort -k 2
1128 nova-api
1123 nova-compute
1112 nova-network
1122 nova-objectstor
1129 nova-scheduler
1109 su
1119 su
1121 su
1125 su
1127 su
TEST: Validate that relevant rabbitmq processes are running
prompt> pgrep -u rabbitmq -l | sort -k 2
1443 beam.smp
1592 cpu_sup
1423 epmd
1594 inet_gethost
1595 inet_gethost
1441 sh
1438 su
TEST: Validate that relevant glance processes are running
prompt> pgrep -u glance -l | sort -k 2
1089 glance-api
1086 glance-registry
1084 su
1088 su
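The three process checks above can be collapsed into one helper. A sketch, run here against a captured nova listing so it works without a live deployment; on a test node feed it `"$(pgrep -u nova -l)"` (and likewise for rabbitmq and glance) instead:

```shell
# check_procs LISTING NAME...: fail if any NAME is absent from a
# 'pgrep -l'-style listing ("pid name" per line)
check_procs() {
    listing="$1"; shift
    for name in "$@"; do
        echo "$listing" | awk '{print $2}' | grep -qx "$name" || {
            echo "MISSING: $name"; return 1; }
    done
    echo "all present"
}

# Captured sample listing (subset of the output shown above)
nova_listing='1128 nova-api
1123 nova-compute
1112 nova-network
1129 nova-scheduler'

check_procs "$nova_listing" nova-api nova-compute nova-network nova-scheduler
```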
STEP 5: Setup nova components
sudo nova-manage db sync
sudo nova-manage user admin admin
sudo nova-manage project create test-cloud-01 admin
sudo nova-manage network create private 10.0.0.0/8 3 16 --bridge_interface=br0
sudo nova-manage floating create 192.168.0.220/27
The final command should be adjusted to create an appropriate 'public' IP address range within the test network configuration.
This might be helpful - http://www.subnet-calculator.com/subnet.php?net_class=C.
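The arithmetic behind that /27, as a sketch: a /27 prefix leaves 5 host bits, so `192.168.0.220/27` denotes a 32-address block, 192.168.0.192-223:

```shell
prefix=27
count=$(( 1 << (32 - prefix) ))   # 2^5 = 32 addresses in a /27
mask=$(( 256 - count ))           # last-octet netmask: 224
base=$(( 220 & mask ))            # block base octet: 192
last=$(( base + count - 1 ))      # block end octet: 223
echo "$count addresses: 192.168.0.$base-192.168.0.$last"
```

This matches the range reported by `nova-manage floating list` in the next test.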
TEST: Validate that appropriate private network ranges have been created
prompt> sudo nova-manage network list
id   IPv4           IPv6   start address   DNS1   DNS2   VlanID   project   uuid
1    10.0.0.0/28    None   10.0.0.3        None   None   100      None      None
2    10.0.0.16/28   None   10.0.0.19       None   None   101      None      None
3    10.0.0.32/28   None   10.0.0.35       None   None   102      None      None
TEST: Validate that floating 'public' network addresses have been allocated
prompt> sudo nova-manage floating list
None 192.168.0.192 None
None 192.168.0.193 None
None 192.168.0.194 None
None 192.168.0.195 None
None 192.168.0.196 None
...
None 192.168.0.223 None
STEP 6: Download project access credentials and unzip
sudo nova-manage project zipfile test-cloud-01 admin
unzip nova.zip
. novarc
TEST: Zip file named 'nova.zip' created in current working directory containing
prompt> unzip -l nova.zip
Archive:  nova.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
     1184  2011-09-06 08:55   novarc
      887  2011-09-06 08:55   pk.pem
     2520  2011-09-06 08:55   cert.pem
     1029  2011-09-06 08:55   cacert.pem
---------                     -------
     5620                     4 files
Length, Date and Time will vary from the above.
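This check can also be scripted; a sketch that parses a captured `unzip -l` listing (on a test node, pipe the real `unzip -l nova.zip` output in instead):

```shell
# Captured sample listing (data rows only matter; 4th field is the name)
listing='  Length      Date    Time    Name
---------  ---------- -----   ----
     1184  2011-09-06 08:55   novarc
      887  2011-09-06 08:55   pk.pem
     2520  2011-09-06 08:55   cert.pem
     1029  2011-09-06 08:55   cacert.pem'

missing=0
for f in novarc pk.pem cert.pem cacert.pem; do
    echo "$listing" | awk '{print $4}' | grep -qx "$f" || {
        echo "MISSING: $f"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "credentials ok"
```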
STEP 7: Install cloud-utils, download a cloud image and upload to OpenStack
First:
sudo apt-get install cloud-utils
Then download the relevant test image from cloud-images.ubuntu.com and publish to OpenStack:
ARCH=amd64
CLOUD_TGZ=ubuntu-11.10-beta1-server-cloudimg-$ARCH.tar.gz
URL=http://cloud-images.ubuntu.com/releases/oneiric/beta-1/
TYPE=m1.tiny
BUCKET="cloudtest-$(date +%Y%m%d%H%M%S)"
[ $ARCH = "amd64" ] && IARCH=x86_64 || IARCH=i386
[ ! -e $CLOUD_TGZ ] && wget $URL/$CLOUD_TGZ
EMI=$(cloud-publish-tarball $CLOUD_TGZ $BUCKET $IARCH | awk -F \" '{print $2}') && echo $EMI
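The awk fragment above relies on cloud-publish-tarball printing the image id inside double quotes; splitting on `"` makes the id the second field. A minimal sketch of that extraction against an illustrative output line (the exact line format here is an assumption, not captured output):

```shell
# Illustrative output line with the image id in double quotes
line='emi="ami-00000002"'

# Split on '"' so the quoted id becomes field 2
EMI=$(echo "$line" | awk -F '"' '{print $2}')
echo "$EMI"   # ami-00000002
```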
TEST: Validate that cloud image is ready for use
prompt> euca-describe-images
IMAGE  ami-00000002  cloudtest-20110906144331/oneiric-server-cloudimg-amd64.img.manifest.xml  untarring  private  x86_64  machine  aki-00000001  instance-store
IMAGE  aki-00000001  cloudtest-20110906144331/oneiric-server-cloudimg-amd64-vmlinuz-virtual.manifest.xml  available  private  x86_64  kernel  instance-store
Then create a keypair (if required), and run an instance, allocating and associating a 'public' ip address:
if [ ! -e mykey.priv ]; then
    touch mykey.priv
    chmod 0600 mykey.priv
    euca-add-keypair mykey > mykey.priv
fi
euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
euca-authorize default -P icmp -t -1:-1 -s 0.0.0.0/0
INSTANCEID=$(euca-run-instances -k mykey $EMI -t $TYPE | awk '/^INSTANCE/ {print $2}') && echo $INSTANCEID
PUBLICIP=$(euca-allocate-address | awk '{print $2}')
euca-associate-address -i $INSTANCEID $PUBLICIP
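euca-run-instances returns before the instance finishes booting, so the SSH tests below can fail transiently if run immediately. A hedged polling sketch; `fake_status` is a stand-in used so the sketch runs anywhere, and on a live node you would instead poll a pipeline that extracts the instance state from euca-describe-instances:

```shell
# wait_for_state CMD WANT TRIES: poll CMD until it prints WANT,
# retrying up to TRIES times with a short sleep between attempts
wait_for_state() {
    cmd="$1"; want="$2"; tries="${3:-30}"
    while [ "$tries" -gt 0 ]; do
        [ "$($cmd)" = "$want" ] && return 0
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# Stand-in status source for this sketch; replace with a real
# euca-describe-instances query on a live node
fake_status() { echo running; }

wait_for_state fake_status running 3 && echo "instance running"
```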
TEST: Check that instance is accessible via SSH locally
IPADDR=$(euca-describe-instances | grep $INSTANCEID | grep running | awk '{print $4}')
ssh -i mykey.priv ubuntu@$IPADDR
TEST: Check that instance is accessible via SSH remotely
scp test-host:* .
. novarc
ssh -i mykey.priv ubuntu@<PUBLICIP>
TEST: Ensure instances terminate correctly
euca-terminate-instances $INSTANCEID
Ensure the instance moves to status 'terminated'.
Multi Node
Overview
In-scope
- Tests that will be validated against this topology
Out-of-scope
- Support for Xen hypervisor
ServerTeam/Oneiric/OpenStackTestPlan (last edited 2011-09-07 11:34:48 by james-page)