OpenStackHA
WORK IN PROGRESS
Overview
The Ubuntu OpenStack HA reference architecture is a current, best practice deployment of OpenStack on Ubuntu 12.04 using a combination of tools and HA techniques to deliver high availability across an OpenStack deployment.
The Ubuntu OpenStack HA reference architecture has been developed on Ubuntu 12.04 LTS, using the Ubuntu Cloud Archive for OpenStack Grizzly.
Juju Deployment
Before you start
Juju + MAAS
The majority of OpenStack deployments are implemented on physical hardware; Juju uses MAAS (Metal-as-a-Service) to deploy charms onto physical server infrastructure.
It's worth reading up on how to set up MAAS and Juju for your physical server environment before trying to deploy the Ubuntu OpenStack HA reference architecture using Juju.
Configuration
All configuration options should be placed in a file named 'config.yaml'; this is the default file that juju will use from the current working directory.
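Taken together, the per-service options shown throughout this document all live in this one file, keyed by service name. A minimal sketch of its shape (the values are the examples used on this page; trim to the services you deploy):

```yaml
# config.yaml -- one top-level key per service; juju reads matching
# sections when each service is deployed.
ceph:
  monitor-count: 3
  osd-devices: '/dev/vdb'
mysql:
  vip: '192.168.77.8'
  vip_cidr: 19
keystone:
  vip: '192.168.77.1'
  vip_cidr: 19
```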
Charms
Although all of the charms needed to deploy OpenStack are available from the Juju Charm Store, it's worth branching their bzr branches locally; this makes it much easier to tweak a charm for your specific deployment if you need to.
mkdir precise
(
  cd precise
  bzr branch lp:charms/ceph
  bzr branch lp:charms/ceph-osd
  bzr branch lp:~openstack-charmers/charms/precise/mysql/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/rabbitmq-server/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/hacluster/trunk
  bzr branch lp:~openstack-charmers/charms/precise/keystone/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/nova-cloud-controller/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/cinder/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/glance/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/quantum-gateway/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/swift-proxy/ha-support
  bzr branch lp:~openstack-charmers/charms/precise/swift-storage/ha-support
)
Base Services
Ceph
Overview
Ceph is a key infrastructure component of the Ubuntu OpenStack HA reference architecture; it provides network-accessible, resilient block storage to MySQL and RabbitMQ to support HA, as well as providing a natively resilient back-end for block storage (through Cinder) and for image storage (through Glance).
Configuration
A Ceph deployment will typically consist of both Ceph Monitor (MON) nodes (responsible for mapping the topology of a Ceph storage cluster) and Ceph Object Storage Device (OSD) nodes (responsible for storing data on devices). Some basic configuration is required to support deployment of Ceph using the Juju charms for Ceph:
ceph:
  fsid: '6547bd3e-1397-11e2-82e5-53567c8d32dc'
  monitor-count: 3
  monitor-secret: 'AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ=='
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
ceph-osd:
  osd-devices: '/dev/vdb'
  osd-reformat: 'yes'
  source: 'cloud:precise-updates/grizzly'
In this example, Ceph is configured with the provided fsid and monitor secret (these should be unique for your environment) and will use the '/dev/vdb' block device, if found, for object storage. Ceph is sourced ('source') from the Ubuntu Cloud Archive for Grizzly to ensure we get the latest features.
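The fsid is just a UUID; a quick way to generate a fresh one on any Linux host is shown below. (The ceph-authtool invocation for the monitor secret is shown only as a comment; it's the approach suggested in the ceph charm's documentation and assumes the ceph-common package is installed.)

```shell
# Generate a unique fsid for your deployment (any UUID generator will do;
# the Linux kernel exposes a random UUID here):
fsid=$(cat /proc/sys/kernel/random/uuid)
echo "fsid: '${fsid}'"

# The monitor secret is a cephx key; with ceph-common installed it can be
# generated with ceph-authtool (commented out as it needs ceph packages):
#   ceph-authtool /dev/stdout --name=mon. --gen-key
```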
The Ceph MON function is provided by the 'ceph' charm; as monitor-count is set to '3', Ceph will not bootstrap itself and start responding to client requests until at least 3 service units have joined the ceph service. Note that the ceph charm will also run OSDs on any available storage; for large deployments you may not want to do this, but for proof-of-concept work it's OK to just run with storage provided directly via the ceph service.
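The reason for an odd monitor-count is quorum: the MON cluster only operates while a strict majority of monitors are up, so n monitors tolerate the loss of (n-1)/2 of them. A small illustration of the arithmetic:

```shell
# Quorum needs a strict majority: with n monitors, quorum = n/2 + 1
# (integer division), and the cluster survives losing (n-1)/2 monitors.
for n in 3 5 7; do
  echo "monitors=$n quorum=$(( n / 2 + 1 )) tolerated-failures=$(( (n - 1) / 2 ))"
done
```

This is why 3 is the sensible minimum: 2 monitors tolerate no failures at all, since losing one leaves no majority.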
Additional storage is provided by the 'ceph-osd' charm; this allows additional service units to be spun up which purely provide object storage. Recommended for larger deployments.
Deployment
First, deploy the ceph charm with a unit count of 3 to build the Ceph MON cluster:
juju deploy -n 3 local:ceph
Then deploy some additional object storage nodes using the ceph-osd charm and relate them to the cluster:
juju deploy -n 3 local:ceph-osd
juju add-relation ceph ceph-osd
All of the above commands can be run in series with no pauses; the charms are clever enough to figure things out in the correct order.
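Once the relations have settled, cluster health can be checked from any MON unit; a sketch (ceph/0 assumes the default unit naming Juju applies):

```
# Look for HEALTH_OK and all three monitors in the quorum before
# relating other services; 'osd tree' shows the storage topology.
juju ssh ceph/0 "sudo ceph -s"
juju ssh ceph/0 "sudo ceph osd tree"
```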
BOOTNOTES
By default, the CRUSH map (which tells Ceph where blocks should be stored for resilience) is OSD-centric; if you run multiple OSDs on a single server, Ceph will be resilient to device failure but not to server failure, as the default 3 replicas may all be mapped onto OSDs on a single host.
Read the upstream documentation on how to tune the CRUSH map for your deployment requirements; this may land as a charm feature later on, but for now it requires manual tuning.
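For reference, the relevant knob is the chooseleaf step in the decompiled CRUSH map: replicas are spread across servers rather than devices when it selects by host. A sketch of the rule section, following the upstream CRUSH map syntax (rule and bucket names here are the usual defaults, and may differ in your cluster):

```
rule data {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    # 'type osd' spreads replicas over OSDs (device-failure resilience only);
    # 'type host' spreads them over servers (server-failure resilience):
    step chooseleaf firstn 0 type host
    step emit
}
```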
MySQL
Overview
MySQL provides persistent data storage for all OpenStack services; to provide MySQL in a highly-available configuration it's deployed with Pacemaker and Corosync (HA tools) in an Active/Passive configuration. Shared block storage is provided by Ceph.
NOTE: For 12.04, it's worth running with the Quantal LTS kernel (3.5) to pick up improvements in the Ceph rbd kernel driver.
Configuration
The only additional configuration required by the MySQL charm is a VIP and subnet mask which will be used as the access point for other services to access the MySQL cluster:
mysql:
  vip: '192.168.77.8'
  vip_cidr: 19
Deployment
The MySQL charm is deployed in conjunction with the HACluster subordinate charm:
juju deploy -n 2 local:mysql
juju deploy local:hacluster mysql-hacluster
juju add-relation mysql ceph
juju add-relation mysql mysql-hacluster
After a period of time (it takes a while for all the relations to settle and for the cluster to configure and start), you should have a MySQL cluster listening on 192.168.77.8.
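A quick way to confirm the cluster is answering on the VIP is to connect with the stock mysql client from any machine on the network (the root credential here is a placeholder; use whatever admin credential your deployment configured):

```
# A successful login and SELECT confirms the Active node is serving
# on the VIP; after a failover the same command should still work.
mysql -h 192.168.77.8 -u root -p -e "SELECT VERSION();"
```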
BOOTNOTES
Various Active/Active MySQL derivatives exist which could be used in place of MySQL; however for the Raring/Grizzly release cycle, only MySQL is in Ubuntu and fully supported by Canonical. Future releases of this architecture may use alternative MySQL solutions.
RabbitMQ
Overview
RabbitMQ provides a centralized message broker which the majority of OpenStack components use to communicate control plane requests around an OpenStack deployment. RabbitMQ does provide a native Active/Active architecture; however, this is not yet well supported, so for the Raring/Grizzly cycle RabbitMQ is deployed in an Active/Passive configuration using Pacemaker and Corosync, with Ceph providing shared block storage.
Configuration
The only additional configuration required by the RabbitMQ charm is a VIP and subnet mask which will be used as the access point for other services to access the RabbitMQ cluster:
rabbitmq-server:
  vip: '192.168.77.11'
  vip_cidr: 19
Deployment
The RabbitMQ charm is deployed in conjunction with the HACluster subordinate charm:
juju deploy -n 2 local:rabbitmq-server rabbitmq-server
juju deploy local:hacluster rabbitmq-hacluster
juju add-relation rabbitmq-server ceph
juju add-relation rabbitmq-server rabbitmq-hacluster
RabbitMQ will be accessible using the VIP provided during configuration.
OpenStack Services
Keystone
Overview
Keystone provides central authentication and authorization services for all OpenStack services. Keystone is generally stateless; in the reference architecture it can be scaled horizontally, with requests load balanced across all available service units.
Configuration
The keystone charm requires basic configuration to be deployed in HA mode:
keystone:
  admin-user: 'admin'
  admin-password: 'openstack'
  admin-token: 'ubuntutesting'
  vip: '192.168.77.1'
  vip_cidr: 19
The user, password and token should be specific to your deployment; the VIP and subnet mask are in line with other charms and form the access point for Keystone requests, which are load balanced across all available service units.
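All of the VIPs on this page sit in the same /19 network; vip_cidr is just the prefix length used when the VIP is brought up on an interface. The arithmetic can be sketched in shell (addresses are the examples from this page):

```shell
# Compute the /19 network containing the Keystone VIP 192.168.77.1;
# every VIP in this reference deployment lands in the same network.
vip=192.168.77.1; cidr=19
IFS=. read -r a b c d <<< "$vip"
ip=$(( (a << 24) | (b << 16) | (c << 8) | d ))                # address as a 32-bit int
mask=$(( (0xFFFFFFFF << (32 - cidr)) & 0xFFFFFFFF ))          # /19 netmask
net=$(( ip & mask ))                                          # network address
echo "$(( (net >> 24) & 255 )).$(( (net >> 16) & 255 )).$(( (net >> 8) & 255 )).$(( net & 255 ))/$cidr"
# -> 192.168.64.0/19
```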
Deployment
The Keystone charm is deployed in conjunction with the HACluster subordinate charm:
juju deploy -n 2 local:keystone
juju deploy local:hacluster keystone-hacluster
juju add-relation keystone keystone-hacluster
juju add-relation keystone mysql
BOOTNOTES
The keystone charm uses the stateless API HA model (see below). Some state is stored on local disk (specifically service usernames and passwords); this is synced between service units during hook execution using SSH + unison.
Cloud Controller
Overview
The Cloud Controller provides the API endpoints for the Nova (Compute) and Quantum (Networking) services. The APIs are stateless; in the reference architecture this service can be scaled horizontally, with API requests load balanced across all available service units.
Configuration
The nova-cloud-controller charm has a large number of configuration options; in line with other HA services, a VIP and subnet mask must be provided to host the API endpoints. In addition, configuration options for Quantum networking are also provided.
nova-cloud-controller:
  vip: '192.168.77.2'
  vip_cidr: 19
  network-manager: 'Quantum'
  conf-ext-net: 'no'
  ext-net-cidr: '192.168.64.0/19'
  ext-net-gateway: '192.168.64.1'
  pool-floating-start: '192.168.90.1'
  pool-floating-end: '192.168.95.254'
Note that the conf-ext-net option is currently disabled; unfortunately, configuring this during service build proved a bit racy, but the external (public) network can be configured after deployment of the charms:
juju set nova-cloud-controller conf-ext-net=yes
Deployment
The nova-cloud-controller charm is deployed in conjunction with the HACluster subordinate charm:
juju deploy -n 2 local:nova-cloud-controller
juju deploy local:hacluster ncc-hacluster
juju add-relation nova-cloud-controller ncc-hacluster
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller keystone
juju add-relation nova-cloud-controller rabbitmq-server
BOOTNOTES
The nova-cloud-controller charm uses the stateless API HA model (see below).
Image Storage (Glance)
Overview
Glance provides multi-tenant image storage services for an OpenStack deployment; by default, Glance uses local storage for uploaded images. The HA reference architecture uses Ceph in conjunction with Glance to provide highly-available object storage; this design relegates Glance to being a stateless API and image registry service.
Configuration
In line with other OpenStack charms, Glance simply requires a VIP and subnet mask to host the Glance HA API endpoint:
glance:
  vip: '192.168.77.4'
  vip_cidr: 19
Deployment
juju deploy -n 2 local:glance
juju deploy local:hacluster glance-hacluster
juju add-relation glance glance-hacluster
juju add-relation glance mysql
juju add-relation glance nova-cloud-controller
juju add-relation glance ceph
juju add-relation glance keystone
BOOTNOTES
The glance charm uses the stateless API HA model (see below).
Block Storage (Cinder)
Overview
Cinder provides block storage to tenant instances running within an OpenStack cloud. By default, Cinder uses local storage exposed via iSCSI, which is inherently not highly available. The HA reference architecture uses Ceph in conjunction with Cinder to provide highly-available, massively scalable block storage for tenant instances. Ceph block devices are accessed directly from compute nodes; this design relegates Cinder to being a stateless API and storage allocation service.
Configuration
In line with other OpenStack charms, Cinder requires a VIP and subnet mask to host the HA API endpoint. In addition, Cinder itself is explicitly configured not to use local block storage:
cinder:
  block-device: 'None'
  vip: '192.168.77.3'
  vip_cidr: 19
Deployment
juju deploy -n 2 local:cinder
juju deploy local:hacluster cinder-hacluster
juju add-relation cinder cinder-hacluster
juju add-relation cinder mysql
juju add-relation cinder keystone
juju add-relation cinder nova-cloud-controller
juju add-relation cinder rabbitmq-server
juju add-relation cinder ceph
juju add-relation cinder glance
BOOTNOTES
The cinder charm uses the stateless API HA model (see below).
Networking
Overview
Configuration
Deployment
BOOTNOTES
Swift
Overview
Configuration
Deployment
BOOTNOTES
Compute
Overview
Configuration
Deployment
BOOTNOTES
Dashboard
Overview
Configuration
Deployment
BOOTNOTES
HA Models
Stateless API Server
Leadership Election
Pre-clustering
Post-clustering
ServerTeam/OpenStackHA (last edited 2015-06-10 12:10:47 by mariosplivalo)