OpenStackHA

'''WORK IN PROGRESS'''

= Overview =
The Ubuntu OpenStack HA reference architecture is a current, best-practice deployment of OpenStack on Ubuntu 12.04, using a combination of tools and HA techniques to deliver high availability across the whole deployment.

The Ubuntu OpenStack HA reference architecture has been developed on Ubuntu 12.04 LTS, using the Ubuntu Cloud Archive for OpenStack Grizzly.

= Juju Deployment =

== Before you start ==

=== Juju + MAAS ===

The majority of OpenStack deployments are implemented on physical hardware; [[http://juju.ubuntu.com|Juju]] uses [[http://maas.ubuntu.com|MAAS]] (Metal-as-a-Service) to deploy Charms onto physical server infrastructure.

It's worth reading up on how to set up MAAS and Juju for your physical server environment before trying to deploy the Ubuntu OpenStack HA reference architecture using Juju.
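
For reference, a MAAS-backed Juju environment is defined in ~/.juju/environments.yaml along the lines of the sketch below; the server URL, OAuth key and secret are placeholders for values from your own MAAS installation, and the exact keys can vary slightly between Juju versions.

{{{
environments:
  maas:
    type: maas
    # replace with your MAAS endpoint and API key
    maas-server: 'http://<maas-server>/MAAS'
    maas-oauth: '<maas-api-key>'
    admin-secret: '<choose-a-secret>'
    default-series: precise
}}}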

=== Configuration ===

All configuration options should be placed in a file named 'config.yaml'; this is the default file that juju will use from the current working directory.
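
The file is keyed by Juju service name, with that service's charm options nested underneath. A trimmed sketch of the layout used in this guide (the full option sets appear in the sections below):

{{{
# config.yaml -- one top-level key per service
ceph:
  monitor-count: 3
mysql:
  vip: '192.168.77.8'
rabbitmq-server:
  vip: '192.168.77.11'
}}}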

=== Charms ===

Although all of the charms needed to deploy OpenStack are available from the Juju Charm Store, it's worth branching their bzr branches locally; this makes it much easier to tweak a charm for your specific deployment.

{{{
mkdir precise
(
  cd precise
  bzr branch lp:charms/ceph
  bzr branch lp:charms/ceph-osd
  bzr branch lp:~openstack-charmers/charms/precise/mysql/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/rabbitmq-server/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/hacluster/trunk
)
}}}
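
The local: charm URLs used in the deployment commands below assume Juju knows where to find these branches; depending on your Juju version you may need to point it at the directory containing the precise/ directory created above, for example via the repository flag or environment variable (the paths here are assumptions based on the layout above):

{{{
# run from the directory that contains 'precise/'
export JUJU_REPOSITORY=$(pwd)
juju deploy --repository=. -n 3 local:ceph
}}}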

== Base Services ==
=== Ceph ===

==== Overview ====

[[http://ceph.com|Ceph]] is a key infrastructure component of the Ubuntu OpenStack HA reference architecture; it provides network-accessible, resilient block storage to MySQL and RabbitMQ to support HA, as well as providing a natively resilient back-end for block storage (through Cinder) and image storage (through Glance).

==== Configuration ====

A Ceph deployment will typically consist of both Ceph Monitor (MON) nodes (responsible for mapping the topology of a Ceph storage cluster) and Ceph Object Storage Device (OSD) nodes (responsible for storing data on devices). Some basic configuration is required to support deployment of Ceph using the Juju charms for Ceph:

{{{
ceph:
  fsid: '6547bd3e-1397-11e2-82e5-53567c8d32dc'
  monitor-count: 3
  monitor-secret: 'AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ=='
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
ceph-osd:
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
}}}

In this example, Ceph is configured with the provided fsid and secret (these should be unique for your environment) and will use the '/dev/vdb' block device, if present, for object storage. Ceph is sourced ('source') from the Ubuntu Cloud Archive for Grizzly to ensure the latest features are available.

The Ceph MON function is provided by the 'ceph' charm; as monitor-count is set to '3', Ceph will not bootstrap itself and start responding to client requests until at least 3 service units have joined the ceph service. Note that the ceph charm will also pick up and run OSDs on any available storage devices; for large deployments you might not want to do this, but for proof-of-concept work it's fine to run with storage provided directly via the ceph service.

Additional storage is provided by the 'ceph-osd' charm; this allows additional service units to be spun up which provide object storage only, and is recommended for larger deployments.

==== Deployment ====

First, deploy the ceph charm with a unit count of 3 to build the Ceph MON cluster:

{{{
juju deploy -n 3 local:ceph
}}}

and then deploy some additional object storage nodes using the ceph-osd charm and relate them to the cluster.

{{{
juju deploy -n 3 local:ceph-osd
juju add-relation ceph ceph-osd
}}}

All of the above commands can be run in series with no pauses; the charms are clever enough to figure things out in the correct order.
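
Once the units have settled, the health of the cluster can be checked from any ceph unit; an optional sanity check, assuming juju ssh access to the units:

{{{
juju ssh ceph/0 'sudo ceph -s'        # overall cluster health
juju ssh ceph/0 'sudo ceph osd tree'  # OSDs and the hosts they map onto
}}}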

==== Bootnotes ====

By default, the CRUSH map (which tells Ceph where data should be placed for resilience, etc.) is OSD-centric; if you run multiple OSDs on a single server, Ceph will be resilient to device failure but not to server failure, as the default 3 replicas may all be mapped onto OSDs on a single host.

Read the upstream documentation on how to tune the CRUSH map for your deployment requirements; this may land as a charm feature later on, but for now it requires manual tuning.
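
The usual workflow is to dump the current CRUSH map, decompile it, edit the placement rules, and inject the result back into the cluster; a rough sketch, run from one of the ceph units (filenames are arbitrary):

{{{
sudo ceph osd getcrushmap -o crushmap.bin   # extract the current map
crushtool -d crushmap.bin -o crushmap.txt   # decompile into editable text
# edit crushmap.txt, e.g. to place replicas across hosts rather than OSDs
crushtool -c crushmap.txt -o crushmap.new   # recompile
sudo ceph osd setcrushmap -i crushmap.new   # inject the updated map
}}}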

=== MySQL ===

==== Overview ====

MySQL provides persistent data storage for all OpenStack services; to provide MySQL in a highly-available configuration, it is deployed with Pacemaker and Corosync (HA tools) in an Active/Passive configuration. Shared block storage is provided by Ceph.

NOTE: For 12.04, it's worth running the Quantal LTS kernel (3.5) to pick up improvements in the Ceph RBD kernel driver.
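
Installing the LTS backport kernel is a normal package install followed by a reboot; a minimal sketch, to be run on each unit that will host MySQL or RabbitMQ:

{{{
sudo apt-get update
sudo apt-get install -y linux-generic-lts-quantal
sudo reboot
}}}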

==== Configuration ====

The only additional configuration required by the MySQL charm is a VIP and its CIDR prefix length, which will be used as the access point for other services to reach the MySQL cluster:

{{{
mysql:
  vip: '192.168.77.8'
  vip_cidr: 19
}}}

==== Deployment ====

The MySQL charm is deployed in conjunction with the HACluster subordinate charm:

{{{
juju deploy -n 2 local:mysql
juju deploy local:hacluster mysql-hacluster
juju add-relation mysql ceph
juju add-relation mysql mysql-hacluster
}}}

After a period of time (it takes a while for all the relations to settle and for the cluster to configure and start), you should have a MySQL cluster listening on 192.168.77.8.
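
To verify, you can check the Pacemaker resources (including the VIP) and confirm the VIP answers on the MySQL port; an optional check, assuming the crm shell is available on the units via the Pacemaker install:

{{{
juju ssh mysql/0 'sudo crm status'   # Pacemaker resources, including the VIP
nc -zv 192.168.77.8 3306             # VIP answers on the MySQL port
}}}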

==== Bootnotes ====

Various Active/Active MySQL derivatives exist which could be used in place of MySQL; however, for the Raring/Grizzly release cycle, only MySQL itself is in Ubuntu and fully supported by Canonical. Future releases of this architecture may use alternative MySQL solutions.

=== RabbitMQ ===

==== Overview ====

RabbitMQ provides a centralized message broker which the majority of OpenStack components use to communicate control plane requests around an OpenStack deployment. RabbitMQ does provide a native Active/Active architecture; however, this is not yet well supported, so for the Raring/Grizzly cycle RabbitMQ is deployed in an Active/Passive configuration using Pacemaker and Corosync, with Ceph providing shared block storage.

==== Configuration ====

The only additional configuration required by the RabbitMQ charm is a VIP and its CIDR prefix length, which will be used as the access point for other services to reach the RabbitMQ cluster:

{{{
rabbitmq-server:
  vip: '192.168.77.11'
  vip_cidr: 19
}}}

==== Deployment ====

The RabbitMQ charm is deployed in conjunction with the HACluster subordinate charm:

{{{
juju deploy -n 2 local:rabbitmq-server rabbitmq-server
juju deploy local:hacluster rabbitmq-hacluster
juju add-relation rabbitmq-server ceph
juju add-relation rabbitmq-server rabbitmq-hacluster
}}}

RabbitMQ will be accessible using the VIP provided during configuration.
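
As with MySQL, the cluster state and the VIP can be checked once the relations have settled; an optional sanity check:

{{{
juju ssh rabbitmq-server/0 'sudo crm status'   # which unit currently holds the VIP
nc -zv 192.168.77.11 5672                      # VIP answers on the AMQP port
}}}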
