ReleaseNotes1507

## Revision 22 as of 2015-01-30 11:13:30 (revised from revision 20 of 2014-12-05 17:58:38, editor: james-page)
## page was copied from TrustyTahr/ReleaseNotes/OpenStackCharms
## page was renamed from TrustyTahr/ReleaseNote/OpenStackCharms
<<TableOfContents>>

== Summary ==

The 15.01 OpenStack Charm release includes updates for the following charms:

 * ceilometer
 * ceilometer-agent
 * ceph
 * ceph-radosgw
 * cinder
 * cinder-ceph
 * glance
 * hacluster
 * heat
 * keystone
 * neutron-api
 * neutron-openvswitch
 * nova-cloud-controller
 * nova-compute
 * openstack-dashboard
 * quantum-gateway
 * rabbitmq-server
 * swift-proxy
 * swift-storage

This release has mainly been focused on bug fixing; however, some new features have been introduced.

== New Charm Features ==

=== Clustering ===

The trusty hacluster charm now supports running in multicast (default) and unicast modes, enabling use of the charm in environments where multicast UDP is not supported. To enable this feature:

{{{
juju set hacluster corosync_transport=unicast
}}}

At this time, the previous node entries for the multicast cluster have to be removed manually. To complete this action, list the nodes on one of the members of the cluster:

{{{
sudo crm node list
}}}

New unicast nodes will start at 1001; the original multicast node entries should be deleted using:

{{{
sudo crm configure
 > delete <id>
}}}

The trusty hacluster charm now supports a new ‘debug’ configuration option to increase the verbosity of logging from corosync and pacemaker.
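
For example, a sketch of turning that option on (assuming it is a boolean, as is typical for charm options of this kind):

{{{
juju set hacluster debug=true
}}}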

The trusty hacluster charm also includes a number of fixes to improve the way that quorum is handled by corosync and pacemaker.

=== Ceph ===

The ceph and ceph-osd charms now support setting sysctl options via charm configuration and provide a sensible default for the ‘kernel.pid_max’ sysctl option. This should support faster recovery in the event of a major outage in a Ceph deployment.
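
As an illustrative sketch of overriding a value (the exact YAML mapping format accepted by the ‘sysctl’ option is an assumption here; the charm's config description is authoritative):

{{{
juju set ceph sysctl="{ kernel.pid_max : 4194303 }"
}}}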

For charm authors, the ceph charm now has a Ceph broker API. This allows Ceph clients to request Ceph cluster actions, e.g. creating a new pool, via a new API rather than performing them on the client side. This makes it easier to add new functionality with reduced code impact, and removes the burden on each client to elect a leader to perform such actions, avoiding code duplication.
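
As a hedged sketch, a broker request is a small JSON document sent over the client relation; the field names below follow later charm-helpers conventions and may differ in this release:

{{{
{
  "api-version": 1,
  "ops": [{"op": "create-pool", "name": "cinder", "replicas": 3}]
}
}}}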

=== Ceph RADOS Gateway ===

The ceph-radosgw charm can now be deployed in a clustered configuration using a VIP as the object storage endpoint in conjunction with the hacluster charm:

{{{
juju deploy cs:trusty/hacluster hacluster-radosgw
juju deploy -n3 cs:trusty/ceph-radosgw
juju set ceph-radosgw vip=10.5.100.10
juju add-relation ceph-radosgw hacluster-radosgw
}}}

The ceph-radosgw charm now also supports using the embedded web container provided natively in Ceph:

{{{
juju set ceph-radosgw use-embedded-webserver=true
}}}

This avoids using Apache2 and mod_fastcgi, which, as provided in the Ubuntu archive, lacks support for chunked transfer encoding and 100-continue.

=== Keystone ===

The keystone charm contains a number of improvements to support use of SSL endpoints in highly available deployments.

For charm authors, the keystone charm now has an additional ‘identity-notifications’ relation type; this relation is used by keystone to notify other charms when entries in the keystone service catalog change, and was introduced to support use with Ceilometer.
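
For a charm that wants to consume these notifications, the relation would be declared in its metadata.yaml along these lines (the interface name shown is an assumption; check the keystone charm's metadata.yaml for the authoritative name):

{{{
requires:
  identity-notifications:
    interface: keystone-notifications
}}}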

=== Glance ===

The glance charm now supports use of custom end-user provided configuration flags via the ‘config-flags’ charm option.
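
For example (the comma-separated key=value format is the convention used by ‘config-flags’ style options in these charms, and the flag shown is purely illustrative, not a recommendation):

{{{
juju set glance config-flags="image_size_cap=10737418240"
}}}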

=== Ceilometer ===

The ceilometer charm can now be deployed in a clustered configuration using a VIP as the endpoint in conjunction with the hacluster charm:

{{{
juju deploy cs:trusty/hacluster hacluster-ceilometer
juju deploy -n3 cs:trusty/ceilometer
juju set ceilometer vip=10.5.100.20
juju add-relation ceilometer hacluster-ceilometer
}}}

The ceilometer charm now requires two relations to the keystone charm, to support both service catalog registration of the Ceilometer endpoint and notification of changes to the service catalog:

{{{
juju add-relation ceilometer keystone:identity-service
juju add-relation ceilometer keystone:identity-notifications
}}}

NOTE: relation endpoint types must now be specified to avoid ambiguity.

=== Neutron ===

The quantum-gateway charm now has a fast failover option for neutron resources when multiple gateway units are used with the hacluster charm:

{{{
juju deploy cs:trusty/hacluster hacluster-ngateway
juju deploy -n3 cs:trusty/quantum-gateway neutron-gateway
juju set neutron-gateway vip=10.5.100.30
juju set neutron-gateway ha-legacy-mode=True
juju add-relation neutron-gateway hacluster-ngateway
}}}

NOTE: This feature has been introduced to support a level of resilience in Icehouse based deployments. Future charm work will include enablement of native Neutron support for router HA for later OpenStack releases.

The neutron charms now also support VLAN and flat networking in addition to GRE and VXLAN for tenant networks.
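
For example, selecting VXLAN rather than the default GRE for overlay tenant networks uses the neutron-api charm's ‘overlay-network-type’ option (the option name is taken from the charm; whether flat and VLAN providers need additional options is not covered here):

{{{
juju set neutron-api overlay-network-type=vxlan
}}}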

The quantum-gateway charm now also supports setting sysctl options via charm configuration, in line with the ceph and nova-compute charms - see the ‘sysctl’ configuration option for details.

=== Swift ===

The swift-proxy charm now automatically rebalances rings on scale out of swift-storage services, allowing swift deployments to be managed more directly using Juju.

The swift-proxy charm also includes configuration options to set the minimum period between ring rebalances, as well as an option to completely disable rebalancing; the latter should be used when adding a number of new swift-storage service units to a deployment, to avoid rebalancing as each new set of storage is added.
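
A hedged sketch of the scale-out workflow this enables - the option name ‘disable-ring-balance’ is an assumption, so check the swift-proxy charm's configuration for the actual names:

{{{
juju set swift-proxy disable-ring-balance=true
juju add-unit -n 4 swift-storage
juju set swift-proxy disable-ring-balance=false
}}}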

=== Nova ===

The nova-cloud-controller charm can now use memcache to store tokens for instance console access, supporting use of instance consoles in HA configuration via the OpenStack dashboard:

{{{
juju deploy -n3 cs:trusty/memcached
juju deploy -n3 cs:trusty/nova-cloud-controller
juju add-relation nova-cloud-controller memcached
}}}

The nova-cloud-controller charm now includes a ‘nova-alchemy-flags’ configuration option to allow Nova database configuration options to be directly tuned by charm users.
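
For example (the flags shown are standard SQLAlchemy pool settings and are illustrative only, not recommendations):

{{{
juju set nova-cloud-controller nova-alchemy-flags="max_pool_size=10,pool_timeout=30"
}}}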

The nova-compute charm now has support for different storage backends; specifically, it now supports local disk (default), Ceph RBD or LVM backends:

{{{
juju set nova-compute libvirt-image-backend=rbd
juju set nova-compute rbd-pool=nova
juju set nova-compute ceph-osd-replication-count=3
juju add-relation ceph nova-compute
}}}

The nova-compute charm now supports configuring disk cache modes for instance disks - see the ‘disk-cachemodes’ configuration option in the nova-compute charm and the upstream documentation for this feature.
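
For example (the value format mirrors Nova's own disk_cachemodes setting; ‘writeback’ here is illustrative, not a recommendation):

{{{
juju set nova-compute disk-cachemodes="file=writeback"
}}}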

The nova-compute charm now also supports setting sysctl options via charm configuration, in line with the ceph and quantum-gateway charms - see the ‘sysctl’ configuration option for details.

=== OpenStack Dashboard ===

The openstack-dashboard charm now has improved support for use with multi-region clouds.

=== RabbitMQ ===

The rabbitmq-server charm now supports deployment in IPv6-only networks:

{{{
juju set rabbitmq-server prefer-ipv6=true
}}}

It also supports use of a specific network for access to the message broker - see the ‘access-network’ configuration option for more details.
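
For example, restricting broker access to a dedicated network, using the CIDR format used by the charms' other network options:

{{{
juju set rabbitmq-server access-network=192.168.20.0/24
}}}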

== Bugs Fixed ==

For the full list of bugs resolved for the 15.01 release please refer to https://launchpad.net/charms/+milestone/15.01

OpenStack/OpenStackCharms/ReleaseNotes1507 (last edited 2016-06-20 13:04:30 by james-page)