General Charm Updates
OpenStack Juno support for 14.04 and 14.10
All OpenStack charms now support deployment of OpenStack 2014.2 (Juno) on Ubuntu 14.04 LTS and Ubuntu 14.10; this support includes the following charms:
To deploy OpenStack Juno on Ubuntu 14.04, use the 'openstack-origin' configuration option, for example:
    cat > config.yaml << EOF
    nova-cloud-controller:
      openstack-origin: cloud:trusty-juno
    EOF
    juju deploy --config config.yaml nova-cloud-controller
OpenStack Juno is provided as the default OpenStack release on Ubuntu 14.10 so no additional configuration is required in 14.10 deployments.
Upgrading 14.04 deployments to Juno
WARNING: Upgrading an OpenStack deployment is always a non-trivial process. The OpenStack charms automate a lot of the process; however, always plan and test your upgrade before applying it to a production OpenStack environment.
Existing Icehouse deployments of OpenStack on Ubuntu 14.04 can be upgraded to Juno by issuing:
    juju upgrade-charm <charm-name>
    juju set <charm-name> openstack-origin=cloud:trusty-juno
for each OpenStack charm in your deployment.
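The two steps above can be scripted per charm; a minimal sketch, assuming a deployment containing the service names below (substitute the names used in your own environment):

```shell
# Hypothetical example: repeat the upgrade steps for each OpenStack
# charm in the deployment. The service names listed here are
# illustrative only - adjust to match your environment.
for charm in keystone glance nova-cloud-controller nova-compute cinder; do
    juju upgrade-charm "$charm"
    juju set "$charm" openstack-origin=cloud:trusty-juno
done
```

As noted above, test this sequence in a staging environment first; upgrade order across services matters in larger deployments.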
New Charm Features
Worker Thread Optimization
Where applicable, the OpenStack charms will automatically configure worker values for API and RPC processes to optimize use of the available CPU resources on deployed units. By default this is set to twice the number of cores; it can be tuned using the worker-multiplier option provided by supporting charms:
juju set neutron-api worker-multiplier=4
The above example raises the multiplier from the default of 2 to 4, i.e. from #cores x 2 workers to #cores x 4 workers.
Network Segregation Configuration
The OpenStack charms feature support for use of multiple networks for separation of traffic; specifically:
- os-data-network: Data network for tenant network traffic supporting instances
- os-admin-network: Admin network - used for Admin endpoint binding and registration in keystone
- os-public-network: Public network - used for Public endpoint binding and registration in keystone
- os-internal-network: Internal network - used for internal communication between OpenStack services and for Internal endpoint registration in keystone
In addition, the Ceph charms (ceph, ceph-osd) support splitting 'public' access traffic from 'cluster' admin and re-sync traffic via the ceph-public-network and ceph-cluster-network configuration options.
All network configuration options should be provided in standard CIDR format - for example 10.20.0.0/16.
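As a sketch of how these options fit together, the following assigns dedicated networks to a few services; the CIDRs and service names here are illustrative assumptions, not a recommended layout:

```shell
# Hypothetical example: segregate tenant data traffic and the three
# keystone endpoint networks. CIDRs and service names are illustrative.
juju set neutron-api os-data-network=10.20.0.0/16
juju set keystone os-admin-network=10.30.0.0/24 \
                  os-public-network=10.40.0.0/24 \
                  os-internal-network=10.50.0.0/24

# Equivalent split for Ceph: client access vs cluster replication traffic.
juju set ceph ceph-public-network=10.60.0.0/24 \
              ceph-cluster-network=10.70.0.0/24
```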
This feature also supports IPv6 networking, although IPv6 support should be considered a technical preview for this release (see below).
NOTE: this feature only works as described under the OpenStack Juno release and should be considered a technical preview for this cycle.
IPv6 Support

A subset of the OpenStack charms now have a feature to prefer IPv6 networking for binding API endpoints and service-to-service communication:
have been tested and are known to work in IPv6 configurations with the 'prefer-ipv6' configuration option enabled.
also have this flag, but currently require a patched version of swift to function in an IPv6 environment. Changes are also proposed to the mysql, percona-cluster and rabbitmq-server charms, which should land soon to enable this feature in other services supporting OpenStack.
Further enablement work will be done next cycle to complete this support across the charms, and hopefully have full upstream support for using IPv6 with OpenStack as well.
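For charms that carry the flag, enabling the preview is a single configuration change; a minimal sketch (the service name is illustrative):

```shell
# Hypothetical example: prefer IPv6 addressing on a supporting charm.
# Only charms listed above as tested should be expected to work.
juju set neutron-api prefer-ipv6=true
```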
Neutron Charm Split

The Neutron support in the OpenStack charms has been refactored into two new charms:
- neutron-api: Supporting API and central control operations.
- neutron-openvswitch: Supporting deployment of the Neutron ML2 plugin with Open vSwitch on nova-compute nodes.
These charms can be introduced into an existing OpenStack deployment:
    juju deploy neutron-api
    juju deploy neutron-openvswitch
    juju add-relation neutron-api mysql
    juju add-relation neutron-api keystone
    juju add-relation neutron-api rabbitmq-server
    juju add-relation neutron-api quantum-gateway
    juju add-relation neutron-api neutron-openvswitch
    juju add-relation neutron-api nova-cloud-controller
    juju add-relation neutron-openvswitch rabbitmq-server
    juju add-relation neutron-openvswitch nova-compute
Use of these two new charms also allows the message brokers to be split, so that Nova and Neutron can use separate RabbitMQ deployments.
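A sketch of such a split, assuming a fresh deployment (the service name 'neutron-rabbitmq' is an illustrative alias, and the relations shown replace the rabbitmq-server relations listed above rather than being added alongside them):

```shell
# Hypothetical example: a RabbitMQ deployment dedicated to Neutron,
# separate from the broker used by Nova.
juju deploy rabbitmq-server neutron-rabbitmq
juju add-relation neutron-api neutron-rabbitmq
juju add-relation neutron-openvswitch neutron-rabbitmq
```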
Use of these two new charms supports some additional features not enabled in the deprecated neutron support in nova-cloud-controller, specifically:
- Support for using the l2population driver for ARP table optimization at scale (l2-population configuration option - defaults to True).
- Support for using VXLAN overlay networks instead of GRE (overlay-network-type configuration option - defaults to GRE).
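Both options above are set on the neutron-api charm; a minimal sketch switching the overlay type to VXLAN while leaving l2population at its default:

```shell
# Hypothetical example: use VXLAN overlay networks; l2-population
# defaults to True and is shown here only for clarity.
juju set neutron-api overlay-network-type=vxlan l2-population=True
```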
Nova Cells

The Nova charms now support deployment in Nova Cell configurations using the new nova-cell charm; see the nova-cell charm for details of how this works and how to use it in an OpenStack deployment. A complete guide to this feature with example juju-deployer configurations will be posted soon.
HA Cluster Improvements

The hacluster charm has gone through significant refactoring to support changing configuration options post-deployment, allowing existing single-network clustered deployments to be upgraded to multi-network clustered deployments.
This charm also now supports direct configuration of the corosync bind interface and port, taking precedence over any configuration provided by the principal charm it is deployed with. Configuration of these options via the principal charm will be removed during the 15.04 cycle; users need to migrate to the direct configuration options prior to the next stable release alongside 15.04.
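As a sketch of the direct configuration path, assuming the hacluster charm's option names are corosync_bindiface and corosync_mcastport and that 'hacluster-keystone' is the deployed service name (both are assumptions; check `juju get` on your deployment for the actual names):

```shell
# Hypothetical example: bind corosync to a dedicated interface and
# multicast port directly on the subordinate, rather than via the
# principal charm.
juju set hacluster-keystone corosync_bindiface=eth1 \
                            corosync_mcastport=5406
```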