For the sample configuration we will use two nodes called ha01 and ha02, and we will serve an IP address, which we will call the Virtual IP address (VIP), in an active/passive configuration. Both nodes need name resolution configured, either through DNS or the /etc/hosts file.
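For example, if DNS is not used, both nodes can carry /etc/hosts entries like the following (the addresses are placeholders; substitute your own):

```
192.168.0.10    ha01
192.168.0.11    ha02
```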

Installing Pacemaker-Heartbeat

First, we need to add the repositories that provide Heartbeat. Add the following to /etc/apt/sources.list:

deb karmic main
deb-src karmic main

Then, we issue the following:

sudo apt-key adv --recv-keys --keyserver B64F0AFA

Now, we can install pacemaker-heartbeat:

sudo apt-get update && sudo apt-get install pacemaker-heartbeat


Pacemaker-Heartbeat needs three main configuration files: /etc/ha.d/authkeys, /etc/ha.d/, and /var/lib/heartbeat/crm/cib.xml. There is a fourth configuration file, /etc/, which contains the logging settings.

  • On the first host, edit /etc/ha.d/authkeys. This file must be identical on all nodes of a Pacemaker-Heartbeat cluster; it stores the authentication information used between the cluster nodes.

auth 1
1 md5 DesiredPassword
  • Then, restrict the file's permissions (Heartbeat will not start if authkeys is readable by others):

sudo chmod 600 /etc/ha.d/authkeys
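Instead of a fixed password, the md5 key can be generated randomly. A minimal sketch, written to ./authkeys here for illustration (move it to /etc/ha.d/authkeys afterwards):

```shell
# Generate a random md5 key and write an authkeys file with it.
# Assumes dd, md5sum and /dev/urandom are available, as on any Ubuntu system.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | cut -d' ' -f1)
printf 'auth 1\n1 md5 %s\n' "$KEY" > authkeys
chmod 600 authkeys
```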
  • Then, we need to create the file that holds the logging configuration (/etc/

logfile /var/log/ha-log

or, as root:

sed -e 's/#logfile*/logfile/' /usr/share/doc/heartbeat/ > /etc/
  • Then, edit /etc/ha.d/ with the following:

use_logd on
udpport 694
keepalive 2
warntime 15
deadtime 30
initdead 30
mcast eth0 694 1 0
node ha01 ha02
crm respawn
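The timing parameters above interact: heartbeats are sent every keepalive seconds; a peer that stays silent is flagged after warntime and declared dead after deadtime, with initdead used instead of deadtime while the cluster first starts up. Restated with comments (same values as above):

```
keepalive 2     # send a heartbeat every 2 seconds
warntime 15     # issue a "late heartbeat" warning after 15 seconds of silence
deadtime 30     # declare a node dead after 30 seconds of silence
initdead 30     # grace period at startup, while the network comes up
```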
  • Then, copy all three configuration files to the second node:

scp /etc/ha.d/authkeys ha02:~
scp /etc/ha.d/ ha02:~
scp /etc/ ha02:~
  • Then move them to the corresponding directories on the second node:

sudo mv authkeys /etc/ha.d/
sudo mv /etc/ha.d/
sudo mv /etc/
  • Then, start Heartbeat and wait for it to establish intra-cluster communication so that we can add the service to the cluster resource manager.

sudo /etc/init.d/heartbeat start
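Whether the nodes see each other can be checked with crm_mon, which ships with Pacemaker. A one-shot status query, run on either node once Heartbeat is up (this requires the live cluster, so no output is shown here):

```
sudo crm_mon -1
```

Both nodes should be listed as online before continuing.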
  • Once intra-cluster communication is established, specify some global options for the cluster resource manager:

crm_attribute --attr-name symmetric-cluster --attr-value true
crm_attribute --attr-name default-resource-stickiness --attr-value INFINITY
crm_attribute --attr-name stonith-enabled --attr-value false
  • Then, edit the file /tmp/resources.xml with the configuration parameters for the resource:

        <group id="group_1">
          <primitive class="ocf" id="VIP_eth0" provider="heartbeat" type="IPaddr2">
            <op id="VIP_eth0_mon" interval="5s" name="monitor" timeout="5s"/>
            <instance_attributes id="VIP_eth0_inst_attr">
              <nvpair id="VIP_eth0_attr_0" name="ip" value=""/>
              <nvpair id="VIP_eth0_attr_1" name="netmask" value="24"/>
              <nvpair id="VIP_eth0_attr_2" name="nic" value="eth0"/>
            </instance_attributes>
          </primitive>
        </group>
  • After creating the above file, you need to tell Heartbeat to add it to the cluster configuration:

cibadmin -o resources -C -x /tmp/resources.xml
  • Finally, we need to edit the file /tmp/resources.xml again to specify the constraints, which tell the Cluster Resource Manager how it should fail over the services:

         <rsc_location id="rsc_location_group_1" rsc="group_1">
           <rule id="prefered_location_group_1" score="100">
             <expression attribute="#uname" id="prefered_location_group_1_expr" operation="eq" value="ha01"/>
           </rule>
         </rsc_location>

After editing the above file, you need to tell Heartbeat to add it to the cluster configuration:

cibadmin -o constraints -C -x /tmp/resources.xml
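To verify the result, you can check where the VIP is active and force a failover. A sketch, assuming the VIP was configured on eth0 as above (the exact output varies by version, and these commands need the live cluster):

```
# On the active node, the VIP should show up as an extra address on eth0:
ip addr show eth0
# Force a failover by stopping Heartbeat on that node:
sudo /etc/init.d/heartbeat stop
# On the surviving node, within deadtime (30s above), the VIP should appear:
ip addr show eth0
sudo crm_mon -1
```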

UbuntuHighAvailabilityTeam/PacemakerHeartbeat (last edited 2009-07-13 22:44:55 by 190)