ReleaseNotes1607


Summary

The 16.07 OpenStack Charm release includes updates for the following charms:

  • ceilometer
  • ceilometer-agent
  • ceph
  • ceph-mon
  • ceph-osd
  • ceph-radosgw
  • cinder
  • cinder-backup
  • cinder-ceph
  • glance
  • hacluster
  • heat
  • keystone
  • neutron-api
  • neutron-openvswitch
  • nova-cloud-controller
  • nova-compute
  • openstack-dashboard
  • neutron-gateway
  • rabbitmq-server
  • swift-proxy
  • swift-storage
  • percona-cluster
  • neutron-api-odl
  • openvswitch-odl
  • odl-controller

New Charm Features

Nova compute apparmor (Preview)

Enable apparmor profiles for nova-compute services. Valid settings: 'complain', 'enforce' or 'disable'. Apparmor is disabled by default.

juju set nova-compute aa-profile-mode=enforce
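The other valid settings listed above are applied the same way; for example, to audit profile violations without blocking them, or to return to the default:

```shell
# 'complain' loads the profiles in audit-only mode (violations are logged,
# not blocked); 'disable' returns to the charm's default behaviour.
juju set nova-compute aa-profile-mode=complain
juju set nova-compute aa-profile-mode=disable
```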

openstack-dashboard + Keystone v3

The openstack-dashboard charm now supports the Keystone v3 API. To enable this feature, the openstack-dashboard charm needs to be related to a database backend for storing session data.

juju add-relation openstack-dashboard keystone
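The session-data backend mentioned above is supplied by a database relation; a minimal sketch, assuming the database application in the model is named mysql:

```shell
# Session storage backend for the dashboard (application name is an assumption)
juju add-relation openstack-dashboard mysql
```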

For details on Dashboard and v3 integration see:

https://wiki.openstack.org/wiki/Horizon/DomainWorkFlow

SR-IOV (Preview)

Support for SR-IOV has been added to the neutron-api, nova-cloud-controller and nova-compute charms. This is configured with the following options:

neutron-api (Preview)

enable-sriov:

  • Enable SR-IOV networking support across Neutron and Nova.

nova-compute

pci-passthrough-whitelist:

  • Sets the pci_passthrough_whitelist option in nova.conf, which is used to allow PCI passthrough of specific devices to VMs, for example for SR-IOV.

reserved-host-memory:

  • Amount of memory in MB to reserve for the host. Defaults to 512MB.

vcpu-pin-set:

  • Sets the vcpu_pin_set option in nova.conf, which defines which pCPUs instance vCPUs can or cannot use. For example, '^0,^2' excludes CPUs 0 and 2 from instance use, reserving them for the host.

Example

In the example below, compute nodes are configured with 60% of available RAM for hugepage use (decreasing memory fragmentation in virtual machines and improving performance). Nova is configured to reserve CPU cores 0 and 2 and 1024MB of RAM for host usage, and to treat the supplied PCI device whitelist as the set of PCI devices consumable by virtual machines, including the mapping to underlying provider network names (used for SR-IOV VF/PF port scheduling with Nova and Neutron's SR-IOV support).

    juju set neutron-api enable-sriov=True
    juju set nova-compute vcpu-pin-set="^0,^2"
    juju set nova-compute reserved-host-memory=1024
    juju set nova-compute pci-passthrough-whitelist='{"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}'

External Network Update

The neutron-gateway charm has been updated to use "new" style external networks when ext-port is not set.

Previously only a single external network could exist on a neutron-gateway unit. Now multiple networks can exist, and more complex network configurations such as VLANs can be set up. It is also possible to use the same physical connection with different segmentation IDs for both internal and external networks, as well as for multiple external networks.

For example, an alternative configuration with two external networks: one for public instance addresses and one for floating IP addresses. Both networks are on the same physical network connection (but they might be on different VLANs; that is configured later using neutron net-create).

    neutron-gateway:
        bridge-mappings:         physnet1:br-data
        data-port:               br-data:eth1
    neutron-api:
        flat-network-providers:  physnet1

    neutron net-create --provider:network_type vlan \
        --provider:segmentation_id 400 \
        --provider:physical_network physnet1 --shared external
    neutron net-create --provider:network_type vlan \
        --provider:segmentation_id 401 \
        --provider:physical_network physnet1 --shared --router:external=true \
        floating
    neutron router-gateway-set provider floating

This replaces the previous system of using ext-port, which always created a bridge called br-ex for external networks; that bridge was used implicitly by external router interfaces.

DNS HA (Preview)

Leverage DNS to provide high availability. DNS HA does not require the clustered nodes to be on the same subnet, but it has several prerequisites:

  • Currently the DNS HA feature is only available on Xenial (16.04) or greater, using MAAS 2.0 or greater environments.
  • MAAS 2.0 requires Juju 2.0 or greater.
  • The clustered nodes must have static or "reserved" IP addresses registered in MAAS.
  • The DNS hostname(s) must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true, and at least one of 'os-public-hostname', 'os-internal-hostname' or 'os-admin-hostname' must be set in order to use DNS HA. One or more of the above hostnames may be set.

juju set $app dns-ha=True
juju set $app os-public-hostname=$app.public.maas
juju set $app os-internal-hostname=$app.internal.maas
juju set $app os-admin-hostname=$app.admin.maas
juju add-relation $app hacluster

Ceph

ceph charms

User provided ceph config:

  • All ceph charms now have a patch that allows user-provided config sections in the ceph.conf file. WARNING: this is not the recommended way to configure the underlying services that this charm installs, and it is used at the user's own risk. This option is mainly provided as a stop-gap for users who either want to test the effect of modifying some config, or who have found a critical bug in the way the charm has configured their services and need it fixed immediately. Whenever this is used, please consider opening a bug against the charm at http://bugs.launchpad.net/charms explaining why the config was needed, so that we may consider it for inclusion as a natively supported config option in the charm.

ceph-osd

Limit OSD object name lengths for jewel + ext4. 53d09832e59b3cb268f7ae2d72335b7780905c7b

  • As of the Ceph Jewel release, certain limitations apply to OSD object name lengths: specifically, if ext4 is in use for block devices or a directory-based OSD is configured, OSDs must be configured to limit object name length:
    • osd max object name len = 256
    • osd max object namespace len = 64
    This may cause problems storing objects with long names via the ceph-radosgw charm or for direct users of RADOS.

User provided ceph config: 8f0347d69233bb7fae390dd35d0a03e586948a14

  • Adds a new config-flags option to the charm that supports setting a dictionary of Ceph configuration settings to be applied to ceph.conf. This implementation supports config sections, so settings can be applied to any section supported by the ceph.conf template in the charm.
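As a sketch of the dictionary format described above (the section and key shown here are purely illustrative, not a recommended setting):

```shell
# Apply a setting to the [osd] section of ceph.conf via config-flags.
juju set ceph-osd config-flags='{"osd": {"osd max write size": "100"}}'
```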

perf optimizations: 79c6c286498e8235e5d17c17e1a1d63bb0e21259

  • The ceph-osd charm is starting down the road to automated performance tuning. It now attempts to identify optimal settings for hard drives and network cards, and then persists them across reboots. The tuning is conservative but configurable via config.yaml settings, and should be safe to enable.

hammer to infernalis upgrade bug: 3e465ba4f8e3522a27c20dc74d93f5cb5a57984b

  • A bug was discovered when upgrading from Hammer to Infernalis, where the user the Ceph daemons run as changes from root to ceph. The charm didn't take this into account, and rolling upgrades would fail. This is now fixed.

New Charms (Preview)

designate and designate-bind

Designate provides DNSaaS services for OpenStack and can now be deployed using the designate charm. The designate-bind charm provides a bind backend to store DNS records generated by designate. The 'dns-slaves' option to the designate charm can also be used to specify a bind server that exists outside of the model.

To deploy designate with a bind backend:

juju deploy cs:~openstack-charmers-next/xenial/designate-bind
juju deploy cs:~openstack-charmers-next/xenial/designate
juju add-relation designate mysql
juju add-relation designate rabbitmq-server
juju add-relation designate keystone
juju add-relation designate designate-bind
juju add-relation designate nova-compute

When records are created within Designate they are pushed out to the designate-bind slaves. The designate-bind units should be used for DNS queries.

The designate charm can automatically generate DNS records when guests are booted or when floating IPs are assigned. To enable these features set the domain information config options in designate charm:

juju set designate nameservers='ns1.mojotest.com.'                                          
juju set designate nova-domain='nova.mojotest.com.'                                         
juju set designate nova-domain-email='test@nova.mojotest.com'                               
juju set designate neutron-domain='neutron.mojotest.com.'                                   
juju set designate neutron-domain-email='test@neutron.mojotest.com'                         
juju set designate nova-record-format='%(hostname)s.%(zone)s'  

Booting a guest will now result in a DNS record being created for the guest in the nova.mojotest.com. domain:

nova boot ... guest1

nova_zone_id=$(designate domain-list --all-tenants | awk '/nova.mojotest.com./ {print $2}')
designate record-list $nova_zone_id --all-tenants
+--------------------------------------+------+---------------------------+--------------------------------------------------------------------------+
| id                                   | type | name                      | data                                                                     |
+--------------------------------------+------+---------------------------+--------------------------------------------------------------------------+
| ba81097a-d6ea-41af-8cde-8f8b114450ad | NS   | nova.mojotest.com.        | ns1.mojotest.com.                                                        |
| 352ee45e-b502-4934-a3fd-a4f11a5323ed | SOA  | nova.mojotest.com.        | ns1.mojotest.com. test.nova.mojotest.com. 1469704587 3573 600 86400 3600 |
| 85a05cc7-12fd-48ba-93d3-1156dfb8a685 | A    | guest1.nova.mojotest.com. | 192.168.21.10                                                            |
+--------------------------------------+------+---------------------------+--------------------------------------------------------------------------+

bind_ip=$(juju status --format=oneline designate-bind | awk '{print $3}' | tail -1)
dig +short @${bind_ip} guest1.nova.mojotest.com.
192.168.21.10

Associating a floating IP will now result in a DNS record in the neutron.mojotest.com. domain:

neutron_zone_id=$(designate domain-list --all-tenants | awk '/neutron.mojotest.com./ {print $2}')
nova floating-ip-create
+--------------------------------------+------------+-----------+----------+---------+
| Id                                   | IP         | Server Id | Fixed IP | Pool    |
+--------------------------------------+------------+-----------+----------+---------+
| af3575db-eb17-4f0a-803c-0fdd5f97d52b | 10.5.150.8 | -         | -        | ext_net |
+--------------------------------------+------------+-----------+----------+---------+
nova floating-ip-associate 038dc11c-db7b-4fd8-a328-608124234b0c 10.5.150.8
designate record-list $neutron_zone_id --all-tenants
+--------------------------------------+------+----------------------------------+-----------------------------------------------------------------------------+
| id                                   | type | name                             | data                                                                        |
+--------------------------------------+------+----------------------------------+-----------------------------------------------------------------------------+
| 40c1778c-ce69-4130-a62a-12c9143af099 | NS   | neutron.mojotest.com.            | ns1.mojotest.com.                                                           |
| b60c6502-9637-41f5-8293-c5c9dbdb04a4 | SOA  | neutron.mojotest.com.            | ns1.mojotest.com. test.neutron.mojotest.com. 1469705482 3590 600 86400 3600 |
| a21b9c18-532e-44b9-95ea-5f5e246aa82b | A    | 10-5-150-8.neutron.mojotest.com. | 10.5.150.8                                                                  |
+--------------------------------------+------+----------------------------------+-----------------------------------------------------------------------------+
dig +short @${bind_ip} 10-5-150-8.neutron.mojotest.com.
10.5.150.8

Note: The designate charm only works on Mitaka or higher.

aodh

Aodh provides the Alarming service as part of OpenStack telemetry. To deploy aodh:

juju deploy cs:~openstack-charmers-next/xenial/aodh
juju add-relation aodh mysql                                                         
juju add-relation aodh rabbitmq-server                                    
juju add-relation aodh keystone

Upgrading

General

Please ensure that the keystone charm is upgraded first.

To upgrade an existing deployment to the latest charm version simply use the 'upgrade-charm' command, e.g.:

juju upgrade-charm cinder
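Combining the two points above, a sketch of an upgrade sequence that honours the keystone-first requirement (the application names after keystone are illustrative):

```shell
# Always upgrade keystone before the other OpenStack charms.
juju upgrade-charm keystone
juju upgrade-charm cinder
juju upgrade-charm glance
```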

Note: The networking endpoint in keystone has been renamed from quantum to neutron in-line with upstream name changes.

Deprecation Notices

Minimum Juju version

OpenStack Charm CI testing no longer validates the OpenStack charms against versions of Juju older than 1.24, which lack the leader-election feature used to determine leadership between peer units within a service.

Legacy leadership support will be removed from the charms over the next few development cycles, so please ensure that you are running on Juju >= 1.24.

Port MTU Settings

Using the network-device-mtu config option with the neutron-api charm to set MTU on physical NICs is deprecated on Trusty deployments and will not work on Xenial deployments. The preferred method is to use MAAS to set the MTU.
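As a sketch of the MAAS route (the CLI profile name and the fabric/VLAN identifiers are assumptions for illustration):

```shell
# Set the MTU on a VLAN via the MAAS 2.0 CLI; nodes deployed on that
# VLAN then pick up the MTU on their interfaces.
maas admin vlan update "$FABRIC_ID" "$VID" mtu=9000
```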

neutron-gateway shared-db relation removed

The neutron-gateway shared-db relation has been removed. Please update bundles to reflect this.
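For example, a relations entry like the following sketch would need to be removed from a bundle (application names vary by bundle):

```yaml
relations:
  - [ "neutron-gateway:shared-db", "mysql:shared-db" ]
```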

Known Issues

SSL on Precise/Icehouse

No Ceilometer alarms on Mitaka

Bugs Fixed

For the full list of bugs resolved for the 16.07 release please refer to https://launchpad.net/charms/+milestone/16.07

OpenStack/OpenStackCharms/ReleaseNotes1607 (last edited 2016-07-29 11:19:25 by localhost)