SUSE Linux Enterprise High Availability 12 SP5

Geo Clustering Quick Start

Publication Date: November 28, 2024

Geo clustering allows you to have multiple, geographically dispersed sites, each with a local cluster. Failover between these clusters is coordinated by a higher-level entity: the booth cluster ticket manager. This document guides you through the basic setup of a Geo cluster, using the Geo bootstrap scripts provided by the ha-cluster-bootstrap package.

Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see https://www.suse.com/company/legal/. All third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

1 Conceptual Overview

Geo clusters based on SUSE® Linux Enterprise High Availability can be considered overlay clusters where each cluster site corresponds to a cluster node in a traditional cluster. The overlay cluster is managed by the booth cluster ticket manager (referred to below as booth). Each of the parties involved in a Geo cluster runs a service, the booth daemon (boothd). It connects to the booth daemons running at the other sites and exchanges connectivity details. To make cluster resources highly available across sites, booth relies on cluster objects called tickets. A ticket grants the right to run certain resources on a specific cluster site. Booth guarantees that every ticket is granted to no more than one site at a time.

If the communication between two booth instances breaks down, it might be because of a network breakdown between the cluster sites or because of an outage of one cluster site. In this case, you need an additional instance (a third cluster site or an arbitrator) to reach consensus about decisions (such as failover of resources across sites). Arbitrators are single machines (outside of the clusters) that run a booth instance in a special mode. Each Geo cluster can have one or multiple arbitrators.
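Once the booth services are running on all parties (see Section 8, “Adding the Arbitrator” and Section 10, “Next Steps”), the ticket state can be inspected from any machine running a booth instance. The following is a minimal sketch: the booth list command is part of the booth client, but the output shown is illustrative and may differ between booth versions (ticket-nfs is the example ticket used later in this document):

    # booth list
    ticket: ticket-nfs, leader: 192.168.201.100, expires: ...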

Figure 1: Two-Site Cluster (2x2 Nodes + Arbitrator)

For more details on the concept, components and ticket management used for Geo clusters, see the Book “Geo Clustering Guide”, Chapter 2 “Conceptual Overview”.

2 Usage Scenario

In the following, we will set up a basic Geo cluster with two cluster sites and one arbitrator:

  • We assume the cluster sites are named amsterdam and berlin.

  • We assume that each site consists of two nodes. The nodes alice and bob belong to the cluster amsterdam. The nodes charlie and doro belong to the cluster berlin.

  • Site amsterdam will get the following virtual IP address: 192.168.201.100.

  • Site berlin will get the following virtual IP address: 192.168.202.100.

  • We assume that the arbitrator has the following IP address: 192.168.203.100.

Before you proceed, make sure the following requirements are fulfilled:

Requirements
Two Existing Clusters

You have at least two existing clusters that you want to combine into a Geo cluster (if you need to set up two clusters first, follow the instructions in the Article “Installation and Setup Quick Start”).

Meaningful Cluster Names

Each cluster has a meaningful name that reflects its location, for example: amsterdam and berlin. Cluster names are defined in /etc/corosync/corosync.conf.

Arbitrator

You have installed a third machine that is not part of any existing clusters and is to be used as arbitrator.

For detailed requirements on each item, see also Section 3, “Requirements”.

3 Requirements

Software Requirements
  • All machines (cluster nodes and arbitrators) that will be part of the Geo cluster have the following software installed:

    • SUSE® Linux Enterprise Server 12 SP5

    • SUSE Linux Enterprise High Availability 12 SP5

    • Geo Clustering for SUSE Linux Enterprise High Availability 12 SP5

Network Requirements
  • The virtual IPs to be used for each cluster site must be accessible across the Geo cluster.

  • The sites must be reachable on one UDP and TCP port per booth instance. Any firewalls or IPsec tunnels in between must be configured accordingly (see the firewall sketch after this list).

  • Other setup decisions may require opening more ports (for example, for DRBD or database replication).
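Booth's default port is 9929 (see Example 1). As a minimal sketch, assuming firewalld is the firewall in use (SUSE Linux Enterprise 12 installations may use SuSEfirewall2 instead; adapt the commands to your environment), the port could be opened as follows:

    # firewall-cmd --permanent --add-port=9929/tcp
    # firewall-cmd --permanent --add-port=9929/udp
    # firewall-cmd --reload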

Other Requirements and Recommendations
  • All cluster nodes on all sites should synchronize to an NTP server outside the cluster. For more information, see https://documentation.suse.com/sles-12/html/SLES-all/cha-netz-xntp.html.

    If nodes are not synchronized, log files and cluster reports are very hard to analyze.

  • Use an odd number of members in your Geo cluster. If the network connection breaks down, this ensures that there is still a majority of sites (to avoid a split-brain scenario). If you have an even number of cluster sites, use an arbitrator.

  • The cluster on each site has a meaningful name, for example: amsterdam and berlin.

    The cluster names for each site are defined in the respective /etc/corosync/corosync.conf files:

    totem {
        [...]
        cluster_name: amsterdam
    }

    This can either be done manually (by editing /etc/corosync/corosync.conf) or with the YaST cluster module (by switching to the Communication Channels category and defining a Cluster Name). Afterward, stop and start the pacemaker service for the changes to take effect:

    # systemctl stop pacemaker
    # systemctl start pacemaker
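    To check which cluster name is currently active, you can query the runtime configuration on any node. This is a hedged example; the output shown is illustrative:

    # corosync-cmapctl totem.cluster_name
    totem.cluster_name (str) = amsterdam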

4 Overview of the Geo Bootstrap Scripts

  • With ha-cluster-geo-init, turn a cluster into the first site of a Geo cluster. The script takes parameters such as the names of the clusters, the arbitrator, and one or multiple tickets, and creates /etc/booth/booth.conf from them. It copies the booth configuration to all nodes on the current cluster site. It also configures the cluster resources needed for booth on the current cluster site.

    For details, see Section 6, “Setting Up the First Site of a Geo Cluster”.

  • With ha-cluster-geo-join, add the current cluster to an existing Geo cluster. The script copies the booth configuration from an existing cluster site and writes it to /etc/booth/booth.conf on all nodes on the current cluster site. It also configures the cluster resources needed for booth on the current cluster site.

    For details, see Section 7, “Adding Another Site to a Geo Cluster”.

  • With ha-cluster-geo-init-arbitrator, turn the current machine into an arbitrator for the Geo cluster. The script copies the booth configuration from an existing cluster site and writes it to /etc/booth/booth.conf.

    For details, see Section 8, “Adding the Arbitrator”.

All bootstrap scripts log to /var/log/ha-cluster-bootstrap.log. Check the log file for details of the bootstrap process. Any options set during the bootstrap process can be modified later (by modifying booth settings, resources, etc.). For details, see the Book “Geo Clustering Guide”.

5 Installation as Extension

Support for using High Availability clusters across unlimited distances is available as a separate extension, called Geo Clustering for SUSE Linux Enterprise High Availability.

To set up a Geo cluster, you need the packages included in the following installation patterns:

  • High Availability

  • Geo Clustering for High Availability

Both patterns are only available if you have registered your system at SUSE Customer Center (or a local registration server) and have added the respective product channels or installation media as an extension. For information on how to install extensions, see the Deployment Guide for SUSE Linux Enterprise 12 SP5.

Procedure 1: Installing the Packages
  1. To install the packages from both patterns via command line, use Zypper:

    # zypper install -t pattern ha_sles ha_geo
  2. Alternatively, use YaST for a graphical installation:

    1. Start YaST as root user and select Software › Software Management.

    2. Click View › Patterns and activate the following patterns:

      • High Availability

      • Geo Clustering for High Availability

    3. Click Accept to start installing the packages.

Important
Important: Installing Software Packages on all Parties

The software packages needed for High Availability and Geo clusters are not automatically copied to the cluster nodes.

  • Install SUSE Linux Enterprise Server 12 SP5 and the High Availability and Geo Clustering for High Availability patterns on all machines that will be part of your Geo cluster.

  • Instead of manually installing the packages on all machines that will be part of your cluster, use AutoYaST to clone existing nodes. Find more information at Book “Administration Guide”, Chapter 3 “Installing SUSE Linux Enterprise High Availability”, Section 3.2 “Mass Installation and Deployment with AutoYaST”.

    However, the Geo clustering extension must be installed manually on all machines that will be part of the Geo cluster. AutoYaST support for Geo Clustering for SUSE Linux Enterprise High Availability is not yet available.

6 Setting Up the First Site of a Geo Cluster

Use the ha-cluster-geo-init script to turn an existing cluster into the first site of a Geo cluster.

Procedure 2: Setting Up the First Site (amsterdam) with ha-cluster-geo-init
  1. Define a virtual IP per cluster site that can be used to access the site. We assume that 192.168.201.100 and 192.168.202.100 are used for this purpose. You do not need to configure the virtual IPs as cluster resources yet. This will be done by the bootstrap scripts.

  2. Define the name of at least one ticket that will grant the right to run certain resources on a cluster site. Use a meaningful name that reflects the resources that will depend on the ticket (for example, ticket-nfs). The bootstrap scripts only need the ticket name—you can define the remaining details (ticket dependencies of the resources) later on, as described in Section 10, “Next Steps”.

  3. Log in to a node of an existing cluster (for example, on node alice of the cluster amsterdam).

  4. Run ha-cluster-geo-init. For example, use the following options:

    # ha-cluster-geo-init \
      --clusters "amsterdam=192.168.201.100 berlin=192.168.202.100" \
      --tickets ticket-nfs \
      --arbitrator 192.168.203.100

    --clusters

    The names of the cluster sites (as defined in /etc/corosync/corosync.conf) and the virtual IP addresses you want to use for each cluster site. In this case, we have two cluster sites (amsterdam and berlin) with a virtual IP address each.

    --tickets

    The name of one or multiple tickets.

    --arbitrator

    The host name or IP address of a machine outside of the clusters.

The bootstrap script creates the booth configuration file and synchronizes it across the cluster sites. It also creates the basic cluster resources needed for booth. Step 4 of Procedure 2 would result in the following booth configuration and cluster resources:

Example 1: Booth Configuration Created By ha-cluster-geo-init
# The booth configuration file is "/etc/booth/booth.conf". You need to
# prepare the same booth configuration file on each arbitrator and
# each node in the cluster sites where the booth daemon can be launched.

# "transport" means which transport layer booth daemon will use.
# Currently only "UDP" is supported.
transport="UDP"
port="9929"

arbitrator="192.168.203.100"
site="192.168.201.100"
site="192.168.202.100"
authfile="/etc/booth/authkey"
ticket="ticket-nfs"
expire="600"
Example 2: Cluster Resources Created by ha-cluster-geo-init
primitive booth-ip IPaddr2 \
  params rule #cluster-name eq amsterdam ip=192.168.201.100 \
  params rule #cluster-name eq berlin ip=192.168.202.100
primitive booth-site ocf:pacemaker:booth-site \
  meta resource-stickiness=INFINITY \
  params config=booth \
  op monitor interval=10s
group g-booth booth-ip booth-site \
  meta target-role=Stopped

booth-ip

A virtual IP address for each cluster site. It is required by the booth daemons, which need a persistent IP address on each cluster site.

booth-site

A primitive resource for the booth daemon. It communicates with the booth daemons on the other cluster sites. The daemon can be started on any node of the site. To keep the resource on the same node, if possible, resource-stickiness is set to INFINITY.

g-booth

A cluster resource group for both primitives. With this configuration, each booth daemon will be available at its individual IP address, independent of the node the daemon is running on.

meta target-role=Stopped

The cluster resource group is not started by default. After verifying the configuration of your cluster resources (and adding the resources you need to complete your setup), you need to start the resource group. See Required Steps to Complete the Geo Cluster Setup for details.
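To verify what the bootstrap script configured on your site, you can display the booth-related objects with crmsh. A minimal sketch; crm configure show accepts a list of object IDs:

    # crm configure show booth-ip booth-site g-booth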

7 Adding Another Site to a Geo Cluster

After you have initialized the first site of your Geo cluster, add the second cluster with ha-cluster-geo-join, as described in Procedure 3. The script needs SSH access to an already configured cluster site and will add the current cluster to the Geo cluster.

Procedure 3: Adding the Second Site (berlin) with ha-cluster-geo-join
  1. Log in to a node of the cluster site you want to add (for example, on node charlie of the cluster berlin).

  2. Run the ha-cluster-geo-join command. For example:

    # ha-cluster-geo-join \
      --cluster-node 192.168.201.100 \
      --clusters "amsterdam=192.168.201.100 berlin=192.168.202.100"

    --cluster-node

    Specifies where to copy the booth configuration from. Use the IP address or host name of a node in an already configured Geo cluster site. You can also use the virtual IP address of an already existing cluster site (as in this example). Alternatively, use the IP address or host name of an already configured arbitrator for your Geo cluster.

    --clusters

    The names of the cluster sites (as defined in /etc/corosync/corosync.conf) and the virtual IP addresses you want to use for each cluster site. In this case, we have two cluster sites (amsterdam and berlin) with a virtual IP address each.

The ha-cluster-geo-join script copies the booth configuration from the node specified with --cluster-node (see Example 1). In addition, it creates the cluster resources needed for booth (see Example 2).
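As a quick sanity check (not part of the official procedure), you can verify that both sites now use the same booth configuration by comparing checksums on one node per site, for example on alice and charlie:

    # sha256sum /etc/booth/booth.conf

The checksums should be identical on both sites.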

8 Adding the Arbitrator

After you have set up all sites of your Geo cluster with ha-cluster-geo-init and ha-cluster-geo-join, set up the arbitrator with ha-cluster-geo-init-arbitrator.

Procedure 4: Setting Up the Arbitrator with ha-cluster-geo-init-arbitrator
  1. Log in to the machine you want to use as arbitrator.

  2. Run the following command. For example:

    # ha-cluster-geo-init-arbitrator --cluster-node 192.168.201.100

    --cluster-node

    Specifies where to copy the booth configuration from. Use the IP address or host name of a node in an already configured Geo cluster site. Alternatively, use the virtual IP address of an already existing cluster site (as in this example).

The ha-cluster-geo-init-arbitrator script copies the booth configuration from the machine specified with --cluster-node (see Example 1). It also enables and starts the booth service on the arbitrator. Thus, the arbitrator is ready to communicate with the booth instances on the cluster sites as soon as the booth services are running there too.
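To confirm that the booth service is up on the arbitrator, you can query its status. This is a hedged example that assumes the default booth configuration name booth:

    # systemctl status booth@booth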

9 Monitoring the Cluster Sites

To view both cluster sites with the resources and the ticket that you have created during the bootstrapping process, use Hawk2. The Hawk2 Web interface allows you to monitor and manage multiple (unrelated) clusters and Geo clusters.

Prerequisites
  • All clusters to be monitored from Hawk2's Dashboard must be running SUSE Linux Enterprise High Availability 12 SP5.

  • If you have not yet replaced the self-signed certificate for Hawk2 on every cluster node with your own certificate (or a certificate signed by an official Certificate Authority), do the following: Log in to Hawk2 on every node in every cluster at least once. Verify the certificate (or add an exception in the browser to bypass the warning). Otherwise, Hawk2 cannot connect to the cluster.

Procedure 5: Using the Hawk2 Dashboard
  1. Start a Web browser and enter the virtual IP of your first cluster site, amsterdam:

    https://192.168.201.100:7630/

    Alternatively, use the IP address or host name of alice or bob. If you have set up both nodes with the bootstrap scripts, the hawk service should run on both nodes.

  2. Log in to the Hawk2 Web interface.

  3. From the left navigation bar, select Dashboard.

    Hawk2 shows an overview of the resources and nodes on the current cluster site. In addition, it shows any Tickets that have been configured for the Geo cluster. If you need information about the icons used in this view, click Legend.

    Figure 2: Hawk2 Dashboard with One Cluster Site (amsterdam)
  4. To add a dashboard for the second cluster site, click Add Cluster.

    1. Enter the Cluster name with which to identify the cluster in the Dashboard. In this case, berlin.

    2. Enter the fully qualified host name of one of the cluster nodes (in this case, charlie or doro).

    3. Click Add. Hawk2 will display a second tab for the newly added cluster site with an overview of its nodes and resources.

      Figure 3: Hawk2 Dashboard with Both Cluster Sites
  5. To view more details for a cluster site or to manage it, switch to the site's tab and click the chain icon.

    Hawk2 opens the Status view for this site in a new browser window or tab. From there, you can administer this part of the Geo cluster.

10 Next Steps

The Geo clustering bootstrap scripts provide a quick way to set up a basic Geo cluster that can be used for testing purposes. However, to turn the result into a functioning Geo cluster that can be used in production environments, more steps are required. See Required Steps to Complete the Geo Cluster Setup.

Required Steps to Complete the Geo Cluster Setup
Starting the Booth Services on Cluster Sites

After the bootstrap process, the arbitrator booth service cannot communicate with the booth services on the cluster sites yet, because they are not started by default.

The booth service for each cluster site is managed by the booth resource group g-booth (see Example 2, “Cluster Resources Created by ha-cluster-geo-init”). To start one instance of the booth service per site, start the respective booth resource group on each cluster site. This enables all booth instances to communicate with each other.
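For example, run the following crmsh command on one node of each cluster site to start the resource group:

    # crm resource start g-booth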

Configuring Ticket Dependencies and Ordering Constraints

To make resources depend on the ticket that you have created during the Geo cluster bootstrap process, configure constraints. For each constraint, set a loss-policy that defines what should happen to the respective resources if the ticket is revoked from a cluster site.

For details, see Book “Geo Clustering Guide”, Chapter 6 “Configuring Cluster Resources and Constraints”.
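As a hedged sketch with crmsh: a ticket dependency can be expressed as an rsc_ticket constraint. The resource name rsc-nfs below is hypothetical; replace it with a resource from your own setup:

    # crm configure rsc_ticket rsc-nfs-req-ticket-nfs ticket-nfs: rsc-nfs loss-policy=stop

With loss-policy=stop, the resource is stopped on a site as soon as that site loses the ticket.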

Initially Granting a Ticket to a Site

Before booth can manage a certain ticket within the Geo cluster, you initially need to grant it to a site manually. You can use either the booth client command line tool or Hawk2 to grant a ticket.

For details, see Book “Geo Clustering Guide”, Chapter 8 “Managing Geo Clusters”.
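For example, to grant ticket-nfs to the amsterdam site via its virtual IP, you could use the booth client. This is a hedged sketch; check the booth man page for the exact syntax of your booth version:

    # booth client grant -s 192.168.201.100 ticket-nfs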

The bootstrap scripts create the same booth resources on both cluster sites, and the same booth configuration files on all sites, including the arbitrator. If you extend the Geo cluster setup (to move to a production environment), you will probably fine-tune the booth configuration and also change the configuration of the booth-related cluster resources. Afterward, you need to synchronize the changes to the other sites of your Geo cluster for them to take effect.

Note
Note: Synchronizing Changes Across Cluster Sites
  • To synchronize changes in the booth configuration to all cluster sites (including the arbitrator), use Csync2. Find more information at Book “Geo Clustering Guide”, Chapter 5 “Synchronizing Configuration Files Across All Sites and Arbitrators”.

  • The CIB (Cluster Information Database) is not automatically synchronized across cluster sites of a Geo cluster. That means any changes in resource configuration that are required on all cluster sites need to be transferred to the other sites manually. Do so by tagging the respective resources, exporting them from the current CIB, and importing them to the CIB on the other cluster sites. For details, see Book “Geo Clustering Guide”, Chapter 6 “Configuring Cluster Resources and Constraints”, Section 6.4 “Transferring the Resource Configuration to Other Cluster Sites”.
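    A hedged sketch of this transfer with crmsh, assuming the resources in question are grouped under the hypothetical tag geo-resources. On the site with the up-to-date configuration, export the tagged objects:

    # crm configure show tag:geo-resources > geo-resources.txt

    Then copy the file to a node on the other site and import it there:

    # crm configure load update geo-resources.txt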

11 For More Information