OpenStackHA

= Overview =

The Ubuntu OpenStack HA reference architecture is a current, best-practice deployment of OpenStack on Ubuntu 12.04, using a combination of tools and HA techniques to deliver high availability across an OpenStack deployment.

= Juju Deployment =

== Base Services ==

=== Ceph ===

==== Overview ====

Ceph is a key infrastructure component of the Ubuntu OpenStack HA reference architecture; it provides network-accessible block storage to MySQL and RabbitMQ to support HA, as well as a natively resilient back-end for block storage (through Cinder) and image storage (through Glance).

==== Configuration ====

A Ceph deployment will typically consist of both Ceph Monitor (MON) nodes, responsible for mapping the topology of a Ceph storage cluster, and Ceph Object Storage Device (OSD) nodes, responsible for storing data on devices. Some basic configuration is required to deploy Ceph using the Juju charms:
{{{
ceph:
  fsid: '6547bd3e-1397-11e2-82e5-53567c8d32dc'
  monitor-count: 3
  monitor-secret: 'AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ=='
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
ceph-osd:
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
}}}

In this example, Ceph is configured with the provided fsid and monitor secret (these should be unique for your environment) and will use the '/dev/vdb' block device, if found, for object storage. Ceph packages are sourced ('source') from the Ubuntu Cloud Archive for Grizzly to ensure the latest features are available.
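
Fresh values for both can be generated on any Ubuntu machine; a minimal sketch, assuming the 'uuid-runtime' and 'ceph-common' packages are installed:

{{{
# Generate a unique fsid for the new cluster
uuidgen

# Generate a new monitor secret key
ceph-authtool /dev/stdout --name=mon. --gen-key
}}}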

The Ceph MON function is provided by the 'ceph' charm; because monitor-count is set to '3', Ceph will not bootstrap itself and start responding to client requests until at least 3 service units have joined the ceph service.
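
Once the third unit has joined, the monitors should form quorum; one way to verify this (a sketch, assuming the first service unit is named ceph/0):

{{{
# Log into the first monitor unit and check cluster health and quorum
juju ssh ceph/0
sudo ceph -s
}}}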

Additional storage is provided by the 'ceph-osd' charm; this allows additional service units to be spun up which provide pure object storage capacity.

==== Deployment ====

First, deploy the ceph charm with a unit count of 3 to build the Ceph MON cluster:

{{{
juju deploy -n 3 local:ceph
}}}

and then deploy some additional object storage nodes using the ceph-osd charm and relate them to the cluster:

{{{
juju deploy -n 3 local:ceph-osd
juju add-relation ceph ceph-osd
}}}
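
Storage capacity can be grown later without redeploying; a sketch using juju's add-unit (the new unit joins the existing cluster through the established relation):

{{{
# Spin up one more pure object storage unit
juju add-unit ceph-osd
}}}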

All of the above commands can be run in series with no pauses; the charms are clever enough to figure things out in the correct order.

=== MySQL ===

Both MySQL and RabbitMQ are made highly available using the 'hacluster' subordinate charm, with Ceph providing the shared block storage needed to support failover. Each service is configured with a virtual IP (VIP) through which clients access whichever unit is currently active:

{{{
mysql:
  vip: '192.168.77.8'
  vip_cidr: 19
rabbitmq-server:
  vip: '192.168.77.11'
  vip_cidr: 19
}}}

Deploy two units each of RabbitMQ and MySQL, deploy an hacluster subordinate for each service, and then relate everything together:

{{{
juju deploy -n 2 local:rabbitmq-server rabbitmq-server
juju deploy -n 2 local:mysql mysql
juju deploy local:hacluster mysql-hacluster
juju deploy local:hacluster rabbitmq-hacluster
juju add-relation mysql ceph
juju add-relation mysql mysql-hacluster
juju add-relation rabbitmq-server ceph
juju add-relation rabbitmq-server rabbitmq-hacluster
}}}
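
Once the relations have settled, each service should be reachable on its VIP rather than on any individual unit address. A rough check (a sketch; assumes the mysql client is installed and the MySQL root password is known):

{{{
# Connect to MySQL via its virtual IP
mysql -h 192.168.77.8 -u root -p

# Check that RabbitMQ is listening on its VIP (AMQP port 5672)
nc -zv 192.168.77.11 5672
}}}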
