OpenStackHA

= Overview =

The Ubuntu OpenStack HA reference architecture is a current, best-practice deployment of OpenStack on Ubuntu 12.04, using a combination of tools and HA techniques to deliver high availability across an OpenStack deployment.

The Ubuntu OpenStack HA reference architecture has been developed on Ubuntu 12.04 LTS, using the Ubuntu Cloud Archive for OpenStack Grizzly.

= Juju Deployment =

== Before you start ==

=== Juju + MAAS ===

The majority of OpenStack deployments are implemented on physical hardware; Juju uses [[http://maas.ubuntu.com|MAAS]] (Metal-as-a-Service) to deploy charms onto physical server infrastructure.

It's worth reading up on how to set up MAAS and Juju for your physical server environment before trying to deploy the Ubuntu OpenStack HA reference architecture using Juju.

=== Configuration ===

All configuration options should be placed in a file named 'config.yaml'; this is the default file that Juju will read from the current working directory.
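
The file simply maps each service name to that service's charm options. A minimal sketch of the layout (the service and option names here are placeholders; the real values used in this walkthrough appear in the sections below):

{{{
some-service:
  some-option: 'a-string-value'
another-service:
  numeric-option: 3
}}}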

== Base Services ==

=== Ceph ===

==== Overview ====

Ceph is a key infrastructure component of the Ubuntu OpenStack HA reference architecture; it provides network-accessible, resilient block storage to MySQL and RabbitMQ to support HA, as well as providing a natively resilient back-end for block storage (through Cinder) and for image storage (through Glance).

==== Configuration ====

A Ceph deployment will typically consist of both Ceph Monitor (MON) nodes (responsible for mapping the topology of a Ceph storage cluster) and Ceph Object Storage Device (OSD) nodes (responsible for storing data on devices). Some basic configuration is required to support deployment of Ceph using the Juju charms for Ceph:
Line 34: Line 34:
{u'ceph': {u'fsid': u'6547bd3e-1397-11e2-82e5-53567c8d32dc',
           u'monitor-count': u'3',
           u'monitor-secret': u'AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==',
           u'osd-devices': u'/dev/vdb',
           u'osd-reformat': u'yes',
           u'source': u'cloud:precise-updates/grizzly'},
ceph:
  fsid: '6547bd3e-1397-11e2-82e5-53567c8d32dc'
  monitor-count: 3
  monitor-secret: 'AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ=='
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
ceph-osd:
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
}}}

In this example, Ceph is configured with the provided fsid and secret (these should be unique for your environment) and will use the '/dev/vdb' block device, if found, for object storage. Ceph is sourced ('source') from the Ubuntu Cloud Archive for Grizzly to ensure we get the latest features.
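
One way to generate values that are unique to your environment (a sketch, assuming the 'uuid-runtime' and 'ceph-common' packages are installed; any other source of a fresh UUID and Ceph key works just as well):

{{{
# Generate a unique fsid (any UUID is fine)
uuidgen

# Generate a monitor secret (a Ceph authentication key)
ceph-authtool /dev/stdout --name=mon. --gen-key
}}}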

The Ceph MON function is provided by the 'ceph' charm; as 'monitor-count' is set to 3, Ceph will not bootstrap itself and start responding to client requests until at least three service units have joined the ceph service.

Additional storage is provided by the 'ceph-osd' charm; this allows additional service units to be spun up that provide object storage only.

==== Deployment ====

First, deploy the ceph charm with a unit count of 3 to build the Ceph MON cluster:

{{{
juju deploy -n 3 local:ceph
}}}

Then deploy some additional object storage nodes using the ceph-osd charm and relate them to the cluster:

{{{
juju deploy -n 3 local:ceph-osd
juju add-relation ceph ceph-osd
}}}

All of the above commands can be run in series with no pauses; the charms are clever enough to figure things out in the correct order.
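
You can watch the cluster converge, and grow the storage pool later, using standard Juju commands (a sketch; the exact status output depends on your Juju version):

{{{
# Watch the MON and OSD units come up
juju status

# Add a further pure-storage unit when more capacity is needed
juju add-unit ceph-osd
}}}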

=== MySQL and RabbitMQ ===

{{{
mysql:
  vip: '192.168.77.8'
  vip_cidr: 19
rabbitmq-server:
  vip: '192.168.77.11'
  vip_cidr: 19
}}}
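
The 'vip' and 'vip_cidr' options define the virtual IP address (and its CIDR prefix length) on which each clustered service will be reachable; the subordinate hacluster charm manages that address using Corosync/Pacemaker. The commands below deploy two-unit MySQL and RabbitMQ services, attach an hacluster subordinate to each, and relate both services to Ceph for resilient shared storage: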

{{{
juju deploy -n 2 local:rabbitmq-server rabbitmq-server
juju deploy -n 2 local:mysql mysql
juju deploy local:hacluster mysql-hacluster
juju deploy local:hacluster rabbitmq-hacluster
juju add-relation mysql ceph
juju add-relation mysql mysql-hacluster
juju add-relation rabbitmq-server ceph
juju add-relation rabbitmq-server rabbitmq-hacluster
}}}
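
As with Ceph, these commands can be run back-to-back. Once the hacluster relations have converged, each service should answer on its virtual IP (a quick sanity check, assuming the example VIPs above):

{{{
# Confirm all service units reach a started state
juju status

# The MySQL VIP should respond once the cluster is up
ping -c 3 192.168.77.8
}}}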
