Apart from the resources and constraints that you need to define for your specific cluster setup, Geo clusters require additional resources and constraints as described below. You can either configure them with the crm shell (crmsh) as demonstrated in the examples below, or with the HA Web Konsole (Hawk2).
This chapter focuses on tasks specific to Geo clusters. For an introduction to your preferred cluster management tool and general instructions on how to configure resources and constraints with it, refer to one of the following chapters in the Administration Guide for SUSE Linux Enterprise High Availability Extension:
If you have set up your Geo cluster with the bootstrap scripts, the cluster resources needed for booth have been configured already (including a resource group for boothd). In this case, you can skip Section 6.2 and only need to execute the remaining steps below to complete the cluster resource configuration.
If you are setting up your Geo cluster manually, you need to execute all of the following steps:
The CIB is not automatically synchronized across cluster sites of a Geo cluster. All resources that must be highly available across the Geo cluster need to be configured for each site accordingly or need to be transferred to the other site or sites.
To simplify transfer, any resources with site-specific parameters can be configured in such a way that the parameters' values depend on the name of the cluster site where the resource is running (see also Chapter 3, Requirements, Other Requirements and Recommendations).
After you have configured the resources on one site, you can tag the resources that are needed on all cluster sites, export them from the current CIB, and import them into the CIB of another cluster site. For details, see Section 6.4, “Transferring the Resource Configuration to Other Cluster Sites”.
For Geo clusters, you can specify which resources depend on a certain ticket. Together with this special type of constraint, you can set a loss-policy that defines what should happen to the respective resources if the ticket is revoked. The attribute loss-policy can have the following values:

fence: Fence the nodes that are running the relevant resources.
stop: Stop the relevant resources.
freeze: Do nothing to the relevant resources.
demote: Demote relevant resources that are running in master mode to slave mode.
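For example, the demote policy can be combined with the Master role in the ticket dependency. The following is only a sketch; ms-rsc1 is a placeholder for a promotable (master/slave) resource that is not part of the configuration in this chapter:

crm(live)configure# rsc_ticket ms-rsc1-req-ticketA ticketA: \
  ms-rsc1:Master loss-policy="demote"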
On one of the nodes of cluster amsterdam, start a shell and log in as root or equivalent.
Enter crm configure to switch to the interactive crm shell.
Configure constraints that define which resources depend on a certain ticket. For example, to make a primitive resource rsc1 depend on ticketA:

crm(live)configure# rsc_ticket rsc1-req-ticketA ticketA: \
  rsc1 loss-policy="fence"

In case ticketA is revoked, the node running the resource should be fenced.
If you want other resources to depend on further tickets, create as many constraints as necessary with rsc_ticket.
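For example, a second constraint could tie a further resource to a further ticket, this time stopping the resource when the ticket is revoked. This is only a sketch; rsc2 and ticketB are placeholder names, not part of the setup in this chapter:

crm(live)configure# rsc_ticket rsc2-req-ticketB ticketB: \
  rsc2 loss-policy="stop"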
Review your changes with show.
If everything is correct, submit your changes with commit and leave the crm live configuration with exit.
The configuration is saved to the CIB.
Configuring a Resource Group for boothd
If you have set up your Geo cluster with the ha-cluster-init bootstrap scripts, you can skip the following procedure, as the resources and the resource group for boothd have already been configured in this case.
Each site needs to run one instance of boothd that communicates with the other booth daemons. The daemon can be started on any node, therefore it should be configured as a primitive resource. To make the boothd resource stay on the same node, if possible, add resource stickiness to the configuration. As each daemon needs a persistent IP address, configure another primitive with a virtual IP address. Group both primitives:
On one of the nodes of cluster amsterdam, start a shell and log in as root or equivalent.
Enter crm configure to switch to the interactive crm shell.
Enter the following to create both primitive resources and to add them to one group, g-booth:

crm(live)configure# primitive ip-booth ocf:heartbeat:IPaddr2 \
  params iflabel="ha" nic="eth1" cidr_netmask="24" \
  params rule #cluster-name eq amsterdam ip="192.168.201.100" \
  params rule #cluster-name eq berlin ip="192.168.202.100"
crm(live)configure# primitive booth-site ocf:pacemaker:booth-site \
  meta resource-stickiness="INFINITY" \
  params config="nfs" op monitor interval="10s"
crm(live)configure# group g-booth ip-booth booth-site
With this configuration, each booth daemon will be available at its individual IP address, independent of the node the daemon is running on.
Review your changes with show.
If everything is correct, submit your changes with commit and leave the crm live configuration with exit.
The configuration is saved to the CIB.
If a ticket has been granted to a site but all nodes of that site should fail to host the boothd resource group for any reason, a “split-brain” situation among the geographically dispersed sites may occur. In that case, no boothd instance would be available to safely manage failover of the ticket to another site. To avoid a potential concurrency violation of the ticket (the ticket being granted to multiple sites simultaneously), add an ordering constraint:
On one of the nodes of cluster amsterdam, start a shell and log in as root or equivalent.
Enter crm configure to switch to the interactive crm shell.
Create an ordering constraint, for example:

crm(live)configure# order o-booth-before-rsc1 inf: g-booth rsc1

It defines that rsc1 (which depends on ticketA) can only be started after the g-booth resource group.
For any other resources that depend on a certain ticket, define further ordering constraints.
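For example, assuming a further ticket-dependent resource rsc2 (a placeholder name, not configured in this chapter), a matching constraint could look like this:

crm(live)configure# order o-booth-before-rsc2 inf: g-booth rsc2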
Review your changes with show.
If everything is correct, submit your changes with commit and leave the crm live configuration with exit.
The configuration is saved to the CIB.
After having completed or changed your resource configuration for one cluster site, transfer it to the other sites of your Geo cluster.
To simplify the transfer, you can tag any resources that are needed on all cluster sites, export them from the current CIB, and import them into the CIB of another cluster site. Tagging does not create any colocation or ordering relationship between the resources.
Procedure 6.2, “Tagging and Exporting a Resource Configuration” and Procedure 6.3, “Importing a Tagged Resource Configuration” give an example of how to do so. They are based on the following prerequisites:
You have a Geo cluster with two sites: cluster amsterdam and cluster berlin.
The cluster names for each site are defined in the respective /etc/corosync/corosync.conf files:

totem {
    [...]
    cluster_name: amsterdam
}
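On site berlin, the same entry carries that site's name accordingly (shown here as a sketch based on the site names above):

totem {
    [...]
    cluster_name: berlin
}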
This can either be done manually (by editing /etc/corosync/corosync.conf) or with the YaST cluster module (by setting the cluster name in the communication channel settings). Afterward, stop and start the pacemaker service for the changes to take effect:

root # systemctl stop pacemaker
root # systemctl start pacemaker
The necessary resources for booth and for all services that should be highly available across your Geo cluster have been configured in the CIB on site amsterdam. They will be imported to the CIB on site berlin.
Log in to one of the nodes of cluster amsterdam.
Start the cluster with:

root # systemctl start pacemaker

Enter crm configure to switch to the interactive crm shell.
Review the current CIB configuration:

crm(live)configure# show
Mark the resources and constraints that are needed across the Geo cluster with the tag geo_resources:

crm(live)configure# tag geo_resources: \
  LIST_OF_RESOURCES_and_CONSTRAINTS_FOR_REQUIRED_SERVICES \
  rsc1-req-ticketA ip-booth booth-site g-booth o-booth-before-rsc1

LIST_OF_RESOURCES_and_CONSTRAINTS_FOR_REQUIRED_SERVICES stands for any resources and constraints of your specific setup that you need on all sites of the Geo cluster (for example, resources for DRBD as described at https://documentation.suse.com/sbp/all/html/SBP-DRBD/index.html).
The remaining entries are the resources and constraints for boothd (primitives, booth resource group, ticket dependency, additional ordering constraint), see Section 6.1 to Section 6.3.
Review your changes with show.
If the configuration is according to your wishes, submit your changes with commit and leave the crm live shell with exit.
Export the tagged resources and constraints to a file named exported.cib:

root # crm configure show tag:geo_resources geo_resources > exported.cib

The command crm configure show tag:TAGNAME shows all resources that belong to the tag TAGNAME.
To import the saved configuration file into the CIB of the second cluster site, proceed as follows:
Log in to one of the nodes of cluster berlin.
Start the cluster with:

root # systemctl start pacemaker

Copy the file exported.cib from cluster amsterdam to this node.
Import the tagged resources and constraints from the file exported.cib into the CIB of cluster berlin:

root # crm configure load update PATH_TO_FILE/exported.cib
When using the update parameter for the crm configure load command, crmsh tries to integrate the contents of the file into the current CIB configuration (instead of replacing the current CIB with the file contents).
View the updated CIB configuration with the following command:
root # crm configure show
The imported resources and constraints will appear in the CIB.
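For an additional check of the resource status on this site, you can run, for example:

root # crm status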