ReleaseNotes1507

General Charm Updates

OpenStack Juno support for 14.04 and 14.10

All OpenStack charms now support deployment of OpenStack 2014.2 (Juno) on Ubuntu 14.04 LTS and Ubuntu 14.10; this support includes the following charms:

  • keystone
  • cinder
  • glance
  • nova-cloud-controller
  • nova-compute
  • quantum-gateway
  • swift-proxy
  • swift-storage
  • ceilometer
  • ceilometer-agent
  • heat
  • neutron-api
  • neutron-openvswitch
  • nova-cell

To deploy OpenStack Juno on Ubuntu 14.04, use the 'openstack-origin' configuration option, for example:

cat > config.yaml << EOF
nova-cloud-controller:
  openstack-origin: cloud:trusty-juno
EOF
juju deploy --config config.yaml nova-cloud-controller

OpenStack Juno is provided as the default OpenStack release on Ubuntu 14.10 so no additional configuration is required in 14.10 deployments.

Upgrading 14.04 deployments to Juno

WARNING: Upgrading an OpenStack deployment is always a non-trivial process. The OpenStack charms automate a lot of the process; however, always plan and test your upgrade before upgrading production OpenStack environments.

Existing Icehouse deployments of OpenStack on Ubuntu 14.04 can be upgraded to Juno by issuing:

juju upgrade-charm <charm-name>
juju set <charm-name> openstack-origin=cloud:trusty-juno

for each OpenStack charm in your deployment.
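Since the same two commands must be run for every charm, a loop can generate the full command list for review first. The charm names below are illustrative only; replace them with the services actually deployed in your environment.

```shell
# Illustrative charm list - replace with the OpenStack charms actually
# present in your deployment.
charms="keystone glance cinder nova-cloud-controller nova-compute"

cmds=""
for charm in $charms; do
    # Collect the commands for review before running them against a
    # live environment.
    cmds="$cmds
juju upgrade-charm $charm
juju set $charm openstack-origin=cloud:trusty-juno"
done

echo "$cmds"
```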

New Charm Features

Worker Thread Optimization

Where appropriate, the OpenStack charms will automatically configure worker values for API and RPC processes to optimize use of the CPU resources available on deployed units. By default this is set to twice the number of cores, but it can be tuned using the worker-multiplier option provided by supporting charms:

juju set neutron-api worker-multiplier=4

The example above increases the worker count from the default of 2 × cores to 4 × cores.
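The calculation the charms apply can be sketched as simple shell arithmetic; the formula (cores multiplied by the worker-multiplier value) is inferred from the description above rather than taken from the charm code.

```shell
# Number of CPU cores on the unit (Linux); fall back to 1 if undetectable.
cores=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)

multiplier=4                        # the worker-multiplier charm option
workers=$(( cores * multiplier ))   # workers configured for API/RPC processes
echo "$workers"
```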

Network Segregation Configuration

The OpenStack charms support the use of multiple networks to separate traffic; specifically:

  • os-data-network: Data network for tenant network traffic supporting instances
  • os-admin-network: Admin network - used for Admin endpoint binding and registration in keystone
  • os-public-network: Public network - used for Public endpoint binding and registration in keystone
  • os-internal-network: Internal network - used for internal communication between OpenStack services and for Internal endpoint registration in keystone

In addition, the Ceph charms (ceph, ceph-osd) support splitting 'public' access traffic from 'cluster' admin and re-sync traffic, via the ceph-public-network and ceph-cluster-network configuration options.

All network configuration options should be provided in standard CIDR format - for example 10.20.0.0/16.
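As a sketch of what such a setting implies, the check below tests whether a unit's IPv4 address falls inside a configured CIDR block. The helper names are hypothetical and the charms' actual implementation is not shown here; this is only an illustration of CIDR containment.

```shell
# Convert a dotted-quad IPv4 address to an integer.
ip_to_int() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# in_cidr ADDR NETWORK/PREFIX - succeed if ADDR lies within the CIDR block.
in_cidr() {
    addr=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    prefix=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
    [ $(( addr & mask )) -eq $(( net & mask )) ]
}

in_cidr 10.20.3.4 10.20.0.0/16 && echo "inside" || echo "outside"  # prints "inside"
```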

This feature also supports IPv6 networking, although that support should be considered a technical preview for this release (see below).

IPv6 Support

NOTE: this feature only works as described under the Juno OpenStack release and should be considered a technical preview for this cycle.

NOTE: this feature does not currently support IPv6 privacy extensions. In order for the charms to function correctly, privacy extensions must be disabled and a non-temporary address must be configured/available on your network interface.

A subset of the OpenStack charms can now be configured to prefer IPv6 networking for binding API endpoints and for service-to-service communication:

  • nova-cloud-controller
  • nova-compute
  • glance
  • keystone
  • ceph/ceph-osd
  • neutron-api
  • cinder
  • openstack-dashboard

have been tested and are known to work in IPv6 configurations with the 'prefer-ipv6' configuration option enabled.

  • swift-proxy
  • swift-storage

also have this flag, but currently require a patched version of swift to function in an IPv6 environment. Changes are also proposed for the mysql, percona-cluster and rabbitmq-server charms, and should land soon, enabling this feature in the other supporting OpenStack services.

Further enablement work will be done next cycle to complete this support across the charms, and hopefully have full upstream support for using IPv6 with OpenStack as well.
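Enabling the feature follows the same pattern as other charm options. The example below assumes a deployed keystone service, and the boolean value shown is an assumption; check the individual charm's configuration for the exact accepted form.

```shell
# Prefer IPv6 for endpoint binding on a tested charm (value string assumed).
juju set keystone prefer-ipv6=true
```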

Neutron

The Neutron support in the OpenStack charms has been refactored into two new charms:

  • neutron-api: Supporting API and central control operations.
  • neutron-openvswitch: Supporting deployment of the Neutron ML2 plugin with Open vSwitch on nova-compute nodes.

These charms can be introduced into an existing OpenStack deployment:

juju deploy neutron-api
juju deploy neutron-openvswitch
juju add-relation neutron-api mysql
juju add-relation neutron-api keystone
juju add-relation neutron-api rabbitmq-server
juju add-relation neutron-api quantum-gateway
juju add-relation neutron-api neutron-openvswitch
juju add-relation neutron-api nova-cloud-controller
juju add-relation neutron-openvswitch rabbitmq-server
juju add-relation neutron-openvswitch nova-compute

Use of these two new charms also allows the message brokers to be split, so that Nova and Neutron can use separate RabbitMQ deployments.

Use of these two new charms also enables some additional features not available with the deprecated Neutron support in nova-cloud-controller, specifically:

  • Support for using the l2population driver for ARP table optimization at scale (l2-population configuration option - defaults to True).
  • Support for using VXLAN overlay networks instead of GRE (overlay-network-type configuration option - defaults to GRE).
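These options can be set like any other charm configuration. The value string below ('vxlan') is an assumption based on the option description; check the neutron-api charm's configuration for the exact accepted values.

```shell
# Switch tenant overlay networks from the default GRE to VXLAN
# ('vxlan' value string is an assumption - see the charm's config).
juju set neutron-api overlay-network-type=vxlan
```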

Nova Cells

The Nova charms now support deployment in Nova Cell configurations using the new nova-cell charm; see the nova-cell charm for details of how this works and how to use it in an OpenStack deployment. A complete guide to this feature, with example juju-deployer configurations, will be posted soon.

Clustering

The hacluster charm has undergone significant refactoring to support changing configuration options post-deployment, including upgrading existing single-network clustered deployments to multi-network clustered deployments.

This charm also now supports direct configuration of the corosync bindiface and port, in preference to any configuration provided by the principal charm it is deployed with. Configuration of these options via the principal charm will be removed during the 15.04 cycle; users should migrate to the direct configuration options before the next stable release alongside 15.04.

OpenStack/OpenStackCharms/ReleaseNotes1507 (last edited 2016-06-20 13:04:30 by james-page)