The 15.04 OpenStack Charm release includes updates for the following charms:

== New Charm Features ==
This release has mainly been focused on bug fixing; however, some new features have been introduced.
=== Clustering ===
The trusty hacluster charm now supports running in multicast (default) and unicast modes, supporting use of this charm in environments where multicast UDP is not supported. To enable this feature:
juju set hacluster corosync_transport=unicast
Currently, the node entries from the previous multicast cluster must be removed manually; to do this, run the following on one of the cluster members:
sudo crm node list
New unicast nodes will start at 1001; the original multicast node entries should be deleted using:
sudo crm configure
> delete <id>
The trusty hacluster charm now supports a new ‘debug’ configuration option to increase the verbosity of logging from corosync and pacemaker.
The trusty hacluster charm also includes a number of fixes to improve the way that quorum is handled by corosync and pacemaker.
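For example, to increase logging verbosity while diagnosing a cluster problem:
juju set hacluster debug=true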
=== Ceph ===
The ceph and ceph-osd charms now support setting sysctl options via charm configuration and provide a sensible default for the ‘kernel.pid_max’ sysctl option. This should support faster recovery in the event of a major outage in a Ceph deployment.
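For example, assuming the option accepts a YAML-style mapping of sysctl keys to values (the values shown here are purely illustrative):
juju set ceph sysctl="{ kernel.pid_max : 2097152 }"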
For charm authors, the ceph charm now has a Ceph broker API. This allows Ceph clients to request Ceph cluster actions (e.g. creating a new pool) via a new API rather than performing them on the client side. This makes it easier to add new functionality with reduced code impact, and removes the burden on clients of having to elect a leader to perform such actions, avoiding code duplication.
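As a sketch of what such a broker request might look like, assuming a JSON payload carrying an API version and a list of operations (the field names shown are illustrative - consult the ceph charm for the authoritative format):
{"api-version": 1, "ops": [{"op": "create-pool", "name": "mypool", "replicas": 3}]}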
=== Ceph RADOS Gateway ===
The ceph-radosgw charm can now be deployed in a clustered configuration using a VIP as the object storage endpoint in conjunction with the hacluster charm:
juju deploy cs:trusty/hacluster hacluster-radosgw
juju deploy -n3 cs:trusty/ceph-radosgw
juju set ceph-radosgw vip=10.5.100.10
juju add-relation ceph-radosgw hacluster-radosgw
The ceph-radosgw charm now also supports using an embedded web container option provided natively by Ceph:
juju set ceph-radosgw use-embedded-webserver=true
This avoids the use of Apache2 and mod_fastcgi; the version of mod_fastcgi provided in the Ubuntu archive lacks support for chunked transfer encoding and 100-continue.
=== Keystone ===
The keystone charm contains a number of improvements to support use of SSL endpoints in highly available deployments.
For charm authors, the keystone charm now has an additional ‘identity-notifications’ relation type; this relation is used by keystone to notify other charms when entries in the keystone service catalog change, and was introduced to support use with Ceilometer.
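For charm authors consuming this relation, the metadata.yaml entry would look something like the following (the interface name shown is an assumption - check the keystone charm's metadata.yaml for the authoritative value):
requires:
  identity-notifications:
    interface: keystone-notifications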
=== Glance ===
The glance charm now supports use of custom end-user provided configuration flags via the ‘config-flags’ charm option.
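For example, assuming the option takes comma-separated key=value pairs (the flag and value shown are illustrative):
juju set glance config-flags="image_cache_max_size=10737418240"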
=== Ceilometer ===
The ceilometer charm can now be deployed in a clustered configuration using a VIP as the endpoint in conjunction with the hacluster charm:
juju deploy cs:trusty/hacluster hacluster-ceilometer
juju deploy -n3 cs:trusty/ceilometer
juju set ceilometer vip=10.5.100.20
juju add-relation ceilometer hacluster-ceilometer
The ceilometer charm now requires two relations to the keystone charm, to support both service catalog registration of the Ceilometer endpoint and notification of changes to the service catalog:
juju add-relation ceilometer keystone:identity-service
juju add-relation ceilometer keystone:identity-notifications
NOTE: relation endpoint types must now be specified to avoid ambiguity.
=== Neutron ===
The quantum-gateway charm now provides fast failover of Neutron resources when multiple gateway units are used with the hacluster charm:
juju deploy cs:trusty/hacluster hacluster-ngateway
juju deploy -n3 cs:trusty/quantum-gateway neutron-gateway
juju set neutron-gateway ha-legacy-mode=True
juju add-relation neutron-gateway hacluster-ngateway
NOTE: This feature has been introduced to support a level of resilience in Icehouse based deployments. Future charm work will include enablement of native Neutron support for router HA for later OpenStack releases.
The neutron charms now also support VLAN and flat networking in addition to GRE and VXLAN for tenant networks.
The quantum-gateway charm now also supports setting sysctl options via charm configuration, in line with the ceph and nova-compute charms - see the ‘sysctl’ configuration option for details.
=== Swift ===
The swift-proxy charm now automatically rebalances rings on scale out of swift-storage services, allowing swift deployments to be managed more directly using Juju.
The swift-proxy charm also includes a configuration option to set the minimum period between ring rebalances, and an option to completely disable rebalancing; the latter should be used when adding a large number of new swift-storage service units to a deployment, to avoid rebalancing after every new batch of storage is added.
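With automatic rebalancing enabled, scaling out storage is then a plain Juju operation, for example:
juju add-unit -n 2 swift-storage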
=== Nova ===
The nova-cloud-controller charm can now use memcache to store tokens for instance console access, supporting use of instance consoles in HA configuration via the OpenStack dashboard:
juju deploy -n3 cs:trusty/memcached
juju deploy -n3 cs:trusty/nova-cloud-controller
juju add-relation nova-cloud-controller memcached
The nova-cloud-controller charm now includes a ‘nova-alchemy-flags’ configuration option to allow Nova database configuration options to be directly tuned by charm users.
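For example, assuming the option takes comma-separated key=value pairs in the same style as the other config-flags options (the values shown are illustrative):
juju set nova-cloud-controller nova-alchemy-flags="max_pool_size=20,max_overflow=30"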
The nova-compute charm now has support for different storage backends; specifically it now supports local disk (default), Ceph RBD or LVM backends:
juju set nova-compute libvirt-image-backend=rbd
juju set nova-compute rbd-pool=nova
juju set nova-compute ceph-osd-replication-count=3
juju add-relation ceph nova-compute
The nova-compute charm now supports configuring disk cachemodes, allowing configuration of Nova disk usage options - see the ‘disk-cachemodes’ configuration option in the nova-compute charm and the upstream documentation about this feature.
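For example (the cache modes shown are illustrative - see the upstream Nova documentation for valid values):
juju set nova-compute disk-cachemodes="file=unsafe,block=none"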
The nova-compute charm now also supports setting sysctl options via charm configuration, in line with the ceph and quantum-gateway charms - see the ‘sysctl’ configuration option for details.
=== OpenStack Dashboard ===
The openstack-dashboard charm now has improved support for use with multi-region clouds.
=== RabbitMQ ===
The rabbitmq-server charm now supports deployment in IPv6 only networks:
juju set rabbitmq-server prefer-ipv6=true
The charm can also be configured to use a specific network for access to the message broker - see the ‘access-network’ configuration option for more details.
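For example, assuming the option accepts a network in CIDR notation (the address shown is illustrative):
juju set rabbitmq-server access-network=10.5.0.0/16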
For the full list of bugs resolved for the 15.04 release please refer to https://launchpad.net/charms/+milestone/15.04