The 16.04 OpenStack Charm release includes updates for the following charms:

  • ceilometer
  • ceilometer-agent
  • ceph
  • ceph-mon
  • ceph-osd
  • ceph-radosgw
  • cinder
  • cinder-backup
  • cinder-ceph
  • glance
  • hacluster
  • heat
  • keystone
  • neutron-api
  • neutron-openvswitch
  • nova-cloud-controller
  • nova-compute
  • openstack-dashboard
  • neutron-gateway
  • rabbitmq-server
  • swift-proxy
  • swift-storage
  • percona-cluster
  • neutron-api-odl
  • openvswitch-odl
  • odl-controller

New Charm Features

Full Ubuntu 16.04 support

The OpenStack charms have been validated for Ubuntu 16.04 (Xenial). The Xenial series charms are available in the charm store. For example:

juju deploy cs:xenial/nova-compute

OpenStack Mitaka Support on 14.04 and 16.04

The charms provide full support for OpenStack Mitaka. For further details and documentation on OpenStack Mitaka, please refer to the upstream OpenStack Mitaka documentation.

To deploy OpenStack Mitaka on Ubuntu 14.04, use the 'openstack-origin' configuration option, for example:

cat > config.yaml << EOF
nova-cloud-controller:
  openstack-origin: cloud:trusty-mitaka
EOF
juju deploy --config config.yaml nova-cloud-controller

OpenStack Mitaka is part of the Ubuntu 16.04 release, so no additional configuration is required for deployment:

juju deploy cs:xenial/nova-cloud-controller

To upgrade an existing Liberty-based deployment on Ubuntu 14.04 to the Mitaka release, simply re-configure the charm with a new openstack-origin configuration:

juju set nova-cloud-controller openstack-origin=cloud:trusty-mitaka

Please ensure that ceph services are upgraded before services that consume ceph resources, such as cinder, glance and nova-compute.
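As a sketch of that ordering, assuming a typical Trusty deployment with default service names (note that the ceph charm uses the 'source' option rather than 'openstack-origin'):

```shell
# Upgrade the Ceph services first and wait for them to settle
juju set ceph source=cloud:trusty-mitaka

# ...then the services that consume Ceph resources
juju set cinder openstack-origin=cloud:trusty-mitaka
juju set glance openstack-origin=cloud:trusty-mitaka
juju set nova-compute openstack-origin=cloud:trusty-mitaka
```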

Ceph MON charm

The Ceph charm set has been refactored to split the Ceph MON personality into its own charm; for new deployments, please use the ceph-mon and ceph-osd charms:

juju deploy -n 3 ceph-mon
juju deploy -n 100 ceph-osd
juju add-relation ceph-mon ceph-osd

It's possible to install the ceph-mon charm in LXC/LXD (with Juju 2.0) containers under MAAS deployments.

The ceph-mon charm will automatically generate monitor keys and an 'fsid' if not provided via configuration (this is a change in behaviour from the ceph charm).
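If you do want to supply these values yourself (for example, to redeploy a cluster with known identifiers), they can be provided via charm configuration; the fsid and key values below are illustrative only:

```shell
cat > config.yaml << EOF
ceph-mon:
  fsid: 6547bd3e-1397-11e2-82e5-53567c8d32dc
  monitor-secret: AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==
EOF
juju deploy -n 3 --config config.yaml ceph-mon
```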

The ceph charm is still part of the Xenial charm release. However, the ceph charm will be deprecated once the migration path from ceph -> ceph-mon has been fully designed, coded, and tested.

The ceph-mon charm now has many new actions. Highlights include creating and removing cache tiers; create, rename and delete pool actions for replicated and erasure-coded pools; create/get/list/delete actions for erasure profiles; setting quotas on pools; snapshot create/remove; and pool get/set actions. Please see the actions.yaml file for complete details and descriptions of everything that is now possible.
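For example, pool creation might look like the following (the action name and 'name' parameter are assumptions here; check actions.yaml for the exact signatures):

```shell
juju action do ceph-mon/0 create-pool name=mypool
juju action fetch <action-id>   # retrieve the action result
```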

ceph-mon now provides rolling upgrades of the underlying Ceph service. This functionality is triggered by changing the 'source' setting for the charm. Once started, the monitor cluster will upgrade one monitor at a time, each waiting until the previous monitor has finished before proceeding. Should a monitor get stuck in the upgrade process for some reason, the next monitor will wait 10 minutes and then move on with its own upgrade.
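For example, the rolling upgrade can be triggered by re-pointing the 'source' setting at a newer archive pocket (the pocket shown is illustrative):

```shell
juju set ceph-mon source=cloud:trusty-mitaka
```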

Ceph OSD charm

The ceph-osd charm now supports encryption of osd disks using Ceph's dm-crypt.
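Assuming the option is named 'osd-encrypt' (as in the ceph-osd charm's configuration), encryption should be enabled at deploy time, before any disks are initialised:

```shell
cat > config.yaml << EOF
ceph-osd:
  osd-encrypt: true
EOF
juju deploy --config config.yaml ceph-osd
```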

ceph-osd now provides rolling upgrades of the underlying Ceph service. This functionality is triggered by changing the 'source' setting for the charm. Once started, the OSD units will upgrade one at a time, each waiting until the previous unit has finished before proceeding. Should an OSD unit get stuck in the upgrade process for some reason, the next unit will wait 10 minutes and then move on with its own upgrade.

Cinder Backup charm

We've added support for deploying the cinder-backup service using a new cinder-backup subordinate charm. This will allow cinder volumes to be backed up and restored to/from a number of backend storage devices. Initially, we only have support for the Ceph backup driver [0] which allows backup/restore to any Ceph cluster and supports incremental (differential) backups.

To use this charm, simply relate it to an existing Cinder and Ceph service and use the cinder backup-* commands to manage your backups.
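A minimal sketch of such a deployment (the relation endpoints and volume id are assumptions; see the cinder-backup charm README for the canonical interface names):

```shell
juju deploy cinder-backup
juju add-relation cinder-backup cinder
juju add-relation cinder-backup ceph-mon

# then, from a client with cinder credentials loaded:
cinder backup-create <volume-id>
```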


Nova/Neutron separation

Previous releases of the nova-cloud-controller and nova-compute charms provided legacy support for Nova charms managing Neutron components. This support has now been deprecated in the Xenial release. Deployments must make use of the neutron-api and neutron-openvswitch charms for Neutron support.

The following configuration options have been deprecated as a result of this change:



Bundles will need to be updated to support these changes.
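As a sketch, a deployment previously relying on the legacy Nova-managed Neutron support would instead include something like the following (relations to keystone, rabbitmq-server and the database are elided):

```shell
juju deploy neutron-api
juju deploy neutron-openvswitch   # subordinate charm
juju add-relation neutron-api neutron-openvswitch
juju add-relation neutron-api nova-cloud-controller
juju add-relation neutron-openvswitch nova-compute
```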

Pause/Resume actions

The majority of charms now support pause and resume actions. These actions can be used to place units of a charm into a state where maintenance operations can be carried out:

Executing the 'pause' action will shut down and disable services as appropriate for each charm:

juju action do nova-cloud-controller/0 pause

Executing the 'resume' action will restore services to a running state:

juju action do nova-cloud-controller/0 resume

A unit that has been 'paused' will reflect its current state in 'juju status' output. Note: the Ceph charms do not stop their running services; instead, they set the unit out of the Ceph cluster.

Internal API endpoint usages support

All OpenStack API services register Public and Internal endpoints; combined with either network space support with Juju 2.0 or use of the os-*-network configuration options, these may be on different subnets. By default, OpenStack services will use the Public endpoint for any internal API calls.

It's now possible to switch internal API calls to use internal endpoints using the 'use-internal-endpoints' configuration option:

juju set nova-compute use-internal-endpoints=True

As this is a behavioural change in its first release, it is not currently enabled by default; this may change in a future charm release.

Juju 2.0 Network Spaces support

The OpenStack charms provide initial support for Juju 2.0 Network Spaces; Network Spaces allow users to model network topologies and binding of network spaces to charms using MAAS and Juju, rather than directly injecting network cidr information via charm configuration options.

Existing deployments already using configuration options for binding services to different network segments will continue to function using the latest charm release - any user provided configuration is preferred over Juju managed network space bindings.
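With Juju 2.0, space bindings are supplied at deploy time via the --bind option; the space and endpoint names below are illustrative and must match spaces defined in MAAS:

```shell
juju deploy cs:xenial/keystone --bind "public=public-space internal=internal-space"
```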

See individual charm documentation for details on how each charm supports Network Spaces.

Keystone v3 API support

The keystone charm has a new preferred-api-version option. Setting preferred-api-version to 3 will trigger Keystone to enforce the v3 ACLs and create default and admin domains. The default value for preferred-api-version is 2 which preserves the current behaviour.
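For example, to enable v3 API enforcement on a deployed keystone service:

```shell
juju set keystone preferred-api-version=3
```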

Note: If Keystone v3 is being used then a token appropriate to the activity being performed needs to be acquired. This can be controlled by the credentials passed to the client tools. e.g.

Use admin_domain scoped token to administer domains/users/projects etc:

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://`juju-deployer -f keystone`:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_REGION_NAME=RegionOne
export OS_DOMAIN_NAME=admin_domain
export OS_USER_DOMAIN_NAME=admin_domain
# Swift needs this:

Use project scoped token to administer resources in a project:

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://`juju-deployer -f keystone`:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
# Swift needs this:

Keystone + mod_wsgi

For Liberty and above, the Keystone API is now served using the Apache mod_wsgi module rather than as a standalone service.

Nova LXD support

The OpenStack charms now support deployment of the Mitaka release on Ubuntu 16.04 using the Nova LXD driver; to deploy:

juju deploy cs:xenial/nova-compute
juju set nova-compute virt-type=lxd
juju deploy cs:xenial/lxd
juju add-relation nova-compute lxd

The lxd charm has a number of options around storage for containers - please check the charm configuration options for full details.

OpenDaylight support

OpenDaylight support is provided as part of this charm release using the following charms:

  • neutron-api-odl
  • openvswitch-odl
  • odl-controller

These charms have been validated as part of the JOID option by the OPNFV project. Please see the individual charms for details of use.

NOTE: The odl-controller charm does not yet provide full HA support.


Security Hardening

A number of the OpenStack charms now support additional security hardening of the base operating system, core services and workloads being managed by the charm.

Hardening profiles are taken from the GitHub dev-sec project. Application of hardening profiles is enabled using the 'harden' configuration option. For example:

juju set nova-compute harden="os ssh"

Supported profiles include:

  • os
  • ssh
  • apache
  • mysql



Upgrading

Please ensure that the keystone charm is upgraded first.

To upgrade an existing deployment to the latest charm version, simply use the 'upgrade-charm' command, e.g.:

juju upgrade-charm cinder

Note: The networking endpoint in keystone has been renamed from quantum to neutron in-line with upstream name changes.

Deprecation Notices

Minimum Juju version

OpenStack Charm CI testing no longer validates the OpenStack charms against Juju versions earlier than 1.24, which lack the leader-election feature used to determine leadership between peer units within a service.

Legacy leadership support will be removed from the charms over the next few development cycles, so please ensure that you are running on Juju >= 1.24.

Port MTU Settings

Using the network-device-mtu configuration option with the neutron-api charm to set the MTU on physical NICs is deprecated on Trusty deployments and will not work on Xenial deployments. The preferred method is to use MAAS to set the MTU.

neutron-gateway shared-db relation removed

The neutron-gateway shared-db relation has been removed. Please update bundles to reflect this.

Known Issues

SSL on Precise/Icehouse

No Ceilometer alarms on Mitaka

Bugs Fixed

For the full list of bugs resolved for the 16.04 release please refer to

OpenStack/OpenStackCharms/ReleaseNotes1604 (last edited 2016-06-24 11:03:06 by james-page)