SUSE Linux Enterprise High Availability Extension 12 SP5

Pacemaker Remote Quick Start

Abstract

This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. The term remote in pacemaker_remote does not refer to physical distance, but to the fact that such a node is not a member of the cluster.

Authors: Tanja Roth and Thomas Schraitle
Publication Date: July 25, 2024

1 Conceptual Overview and Terminology

A regular cluster may contain up to 32 nodes. With the pacemaker_remote service, High Availability clusters can be extended to include additional nodes beyond this limit.

The pacemaker_remote service can be operated as a physical node (called remote node) or as a virtual node (called guest node). Unlike normal cluster nodes, both remote and guest nodes are managed by the cluster as resources. As such, they are not bound to the 32 node limitation of the cluster stack. However, from the resource management point of view, they behave as regular cluster nodes.

Remote nodes do not need to have the full cluster stack installed, as they only run the pacemaker_remote service. The service acts as a proxy, allowing the cluster stack on the regular cluster nodes to connect to the service. Thus, the node that runs the pacemaker_remote service is effectively integrated into the cluster as a remote node (see Terminology).

Terminology
Cluster Node

A node that runs the complete cluster stack, see Figure 1, “Regular Cluster Stack (Two-Node Cluster)”.

Figure 1: Regular Cluster Stack (Two-Node Cluster)

A regular cluster node may perform the following tasks:

  • Run cluster resources.

  • Run all command line tools, such as crm, crm_mon.

  • Execute fencing actions.

  • Count toward cluster quorum.

  • Serve as the cluster's designated coordinator (DC).

Pacemaker Remote (systemd service: pacemaker_remote)

A service daemon that makes it possible to use a node as a Pacemaker node without deploying the full cluster stack. Note that pacemaker_remote is the name of the systemd service, while the daemon itself is called pacemaker_remoted (with a trailing d).

Remote Node

A physical machine that runs the pacemaker_remote daemon. A special resource (ocf:pacemaker:remote) needs to run on one of the cluster nodes to manage communication between the cluster node and the remote node (see Section 3, “Use Case 1: Setting Up a Cluster with Remote Nodes”).

Guest Node

A virtual machine that runs the pacemaker_remote daemon. A guest node is created using a resource agent such as ocf:pacemaker:VirtualDomain with the remote-node meta attribute (see Section 4, “Use Case 2: Setting Up a Cluster with Guest Nodes”).

For a physical machine that contains several guest nodes, the process is as follows:

  1. On the cluster node, virtual machines are launched by Pacemaker.

  2. The cluster connects to the pacemaker_remote service of the virtual machines.

  3. The virtual machines are integrated into the cluster by pacemaker_remote.

It is important to distinguish between several roles that a virtual machine can take in the High Availability cluster:

  • A virtual machine can run a full cluster stack. In this case, the virtual machine is a regular cluster node and is not itself managed by the cluster.

  • A virtual machine can be managed by the cluster as a resource, without the cluster being aware of the services that run inside the virtual machine. In this case, the virtual machine is opaque to the cluster.

  • A virtual machine can be a cluster resource and run pacemaker_remote, which allows the cluster to manage services inside the virtual machine. In this case, the virtual machine is a guest node and is transparent to the cluster.

Remote nodes and guest nodes can run cluster resources and most command line tools. However, they have the following limitations:

  • They cannot execute fencing actions.

  • They do not affect quorum.

  • They cannot serve as Designated Coordinator (DC).

2 Usage Scenario

The procedures in this document describe the process of setting up a minimal cluster with the following characteristics:

  • Two cluster nodes running SUSE Linux Enterprise High Availability Extension 12 GA or higher. In this guide, their host names are alice and bob.

  • Depending on the setup you choose, your cluster will end up with one of the following nodes:

    • One remote node running pacemaker_remote (the remote node is named charlie in this document).

      Or:

    • One guest node running pacemaker_remote (the guest node is named doro in this document).

  • Pacemaker to manage guest nodes and remote nodes.

  • Failover of resources from one node to the other if the active host breaks down (active/passive setup).

3 Use Case 1: Setting Up a Cluster with Remote Nodes

In the following example setup, a remote node charlie is used.

3.1 Preparing the Cluster Nodes and the Remote Node

To prepare the cluster nodes and remote node, proceed as follows:

  1. Install and set up a basic two-node cluster as described in the Installation and Setup Quick Start. This will lead to a two-node cluster with two physical hosts, alice and bob.

  2. On a physical host (charlie) that you want to use as a remote node, install SUSE Linux Enterprise Server 12 SP5 and add SUSE Linux Enterprise High Availability Extension 12 SP5 as an extension. However, do not install the High Availability installation pattern, because the remote node needs only individual packages (see Section 3.3).
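
     For example, if charlie is already registered, the extension could be added from the command line with SUSEConnect. The product identifier shown here is an assumption for SLE HA 12 SP5 on x86_64 (verify it with SUSEConnect --list-extensions), and REGCODE is a placeholder for your registration code; alternatively, use YaST:

     root # SUSEConnect -p sle-ha/12.5/x86_64 -r REGCODE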

  3. On all cluster nodes, check /etc/hosts and add an entry for charlie.
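
     For example, assuming charlie has the hypothetical IP address 192.168.1.103, the entry would look like this:

     192.168.1.103   charlie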

3.2 Configuring an Authentication Key

On the cluster node alice, proceed as follows:

  1. Create a specific authentication key for the pacemaker_remote service:

    root # mkdir -p --mode=0755 /etc/pacemaker
    root # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4k count=1

    The key for the pacemaker_remote service is different from the cluster authentication key that you create in the YaST cluster module.

  2. Synchronize the authentication key among all cluster nodes and your future remote node with scp:

    root # scp -r -p /etc/pacemaker/ bob:/etc
    root # scp -r -p /etc/pacemaker/ charlie:/etc

    The key needs to be kept synchronized all the time.
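
    To verify that the key is identical on all nodes, you can, for example, compare its checksum on each host:

    root # md5sum /etc/pacemaker/authkey
    root # ssh bob md5sum /etc/pacemaker/authkey
    root # ssh charlie md5sum /etc/pacemaker/authkey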

3.3 Configuring the Remote Node

The following procedure configures the physical host charlie as a remote node:

  1. On charlie, proceed as follows:

    1. In the firewall settings, open the TCP port 3121 for pacemaker_remote.
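
      For example, assuming the default SuSEfirewall2 is used and charlie's network interface is assigned to the external zone (adapt the zone and firewall solution to your setup), add the port to /etc/sysconfig/SuSEfirewall2 and restart the firewall:

      FW_SERVICES_EXT_TCP="3121"    # in /etc/sysconfig/SuSEfirewall2
      root # systemctl restart SuSEfirewall2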

    2. Install the pacemaker-remote and crmsh packages:

      root # zypper in pacemaker-remote crmsh
    3. Enable and start the pacemaker_remote service on charlie:

      root # systemctl enable pacemaker_remote
      root # systemctl start pacemaker_remote
  2. On alice or bob, verify the host connection to the remote node by using ssh:

    root # ssh -p 3121 charlie

    This SSH connection will fail, but how it fails shows if the setup is working:

    Working Setup
    ssh_exchange_identification: read: Connection reset by peer.
    Broken Setup
    ssh: connect to host charlie port 3121: No route to host
    ssh: connect to host charlie port 3121: Connection refused

    If you see either of those two messages, the setup does not work. Use the -v option for ssh and execute the command again to see debugging messages. This can be helpful to find connection, authentication, or configuration problems. Multiple -v options increase the verbosity.
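
    For example, to get detailed debugging output for the connection attempt to charlie:

    root # ssh -vvv -p 3121 charlie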

    If needed, add more remote nodes and configure them as described above.

3.4 Integrating the Remote Node into the Cluster

To integrate the remote node into the cluster, proceed as follows:

  1. Log in to each cluster node and make sure the Pacemaker service is already started:

    root # systemctl start pacemaker
  2. On node alice, create an ocf:pacemaker:remote primitive:

    root # crm configure
    crm(live)configure# primitive charlie ocf:pacemaker:remote \
         params server=charlie reconnect_interval=15m \
         op monitor interval=30s
    crm(live)configure# commit
    crm(live)configure# exit
  3. Check the status of the cluster with the command crm status. It should contain a running cluster with nodes that are all accessible:

    root # crm status
    [...]
    Online: [ alice bob ]
    RemoteOnline: [ charlie ]
    
    Full list of resources:
    charlie (ocf:pacemaker:remote): Started alice
     [...]

3.5 Starting Resources on the Remote Node

After the remote node is integrated into the cluster, you can start resources on the remote node in the same way as on any cluster node.
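
For example, as a quick test (fake-remote is a hypothetical resource name used only for illustration), you could create a dummy resource and move it to the remote node charlie:

    root # crm configure primitive fake-remote ocf:pacemaker:Dummy
    root # crm resource move fake-remote charlie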

Warning
Warning: Restrictions Regarding Groups and Constraints

Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint. This may lead to unexpected behavior on cluster transitions.

Fencing Remote Nodes.  Remote nodes are fenced in the same way as cluster nodes. Configure fencing resources for use with remote nodes in the same way as with cluster nodes.

Remote nodes do not take part in initiating a fencing action. Only cluster nodes can execute a fencing operation against another node.

4 Use Case 2: Setting Up a Cluster with Guest Nodes

In the following example setup, KVM is used for setting up the virtual guest node (doro).

4.1 Preparing the Cluster Nodes and the Guest Node

To prepare the cluster nodes and guest node, proceed as follows:

  1. Install and set up a basic two-node cluster as described in the Installation and Setup Quick Start. This will lead to a two-node cluster with two physical hosts, alice and bob.

  2. Create a KVM guest on alice. For details refer to https://documentation.suse.com/sles-12/html/SLES-all/cha-kvm-inst.html.
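
     As a minimal sketch only (memory size, disk size, and the installation ISO path are hypothetical placeholders; refer to the linked documentation for the full procedure), the guest could be created with virt-install:

     root # virt-install --name doro --memory 2048 --vcpus 2 \
       --disk size=20 --cdrom /path/to/SLES-12-SP5-installation.iso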

  3. On the KVM guest (doro) that you want to use as a guest node, install SUSE Linux Enterprise Server 12 SP5 and add SUSE Linux Enterprise High Availability Extension 12 SP5 as an extension. However, do not install the High Availability installation pattern, because the guest node needs only individual packages (see Section 4.3).

  4. On all cluster nodes, check /etc/hosts and add an entry for doro.
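
     To check that the name resolves afterward, you can, for example, run:

     root # getent hosts doro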

4.2 Configuring an Authentication Key

On the cluster node alice, proceed as follows:

  1. Create a specific authentication key for the pacemaker_remote service:

    root # mkdir -p --mode=0755 /etc/pacemaker
    root # dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4k count=1

    The key for the pacemaker_remote service is different from the cluster authentication key that you create in the YaST cluster module.

  2. Synchronize the authentication key among all cluster nodes and your guest node with scp:

    root # scp -r -p /etc/pacemaker/ bob:/etc
    root # scp -p /etc/pacemaker/ doro:/etc

    The key needs to be kept synchronized all the time.

4.3 Configuring the Guest Node

The following procedure configures doro as a guest node on your cluster node alice:

  1. On doro, proceed as follows:

    1. In the firewall settings, open the TCP port 3121 for pacemaker_remote.

    2. Install the pacemaker-remote and crmsh packages:

      root # zypper in pacemaker-remote crmsh
    3. Enable and start the pacemaker_remote service on doro:

      root # systemctl enable pacemaker_remote
      root # systemctl start pacemaker_remote
  2. On alice or bob, verify the host connection to the guest by running ssh:

    root # ssh -p 3121 doro

    This SSH connection will fail, but how it fails shows if the setup is working:

    Working Setup
    ssh_exchange_identification: read: Connection reset by peer.
    Broken Setup
    ssh: connect to host doro port 3121: No route to host
    ssh: connect to host doro port 3121: Connection refused

    If you see either of those two messages, the setup does not work. Use the -v option for ssh and execute the command again to see debugging messages. This can be helpful to find connection, authentication, or configuration problems. Multiple -v options increase the verbosity.

    If needed, add more guest nodes and configure them as described above.

  3. Shut down the guest node and proceed with Section 4.4, “Integrating a Guest Node into the Cluster”.
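
     For example, assuming doro is the name of the libvirt domain on alice, you can shut the guest down cleanly with:

     root # virsh shutdown doro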

4.4 Integrating a Guest Node into the Cluster

To integrate the guest node into the cluster, proceed as follows:

  1. Log in to each cluster node and make sure the Pacemaker service is already started:

    root # systemctl start pacemaker
  2. Dump the XML configuration of the KVM guest(s) that you need in the next step:

    root # virsh list --all
     Id    Name         State
    -----------------------------------
     -     doro       shut off
    root # virsh dumpxml doro > /etc/pacemaker/doro.xml
  3. On node alice, create a VirtualDomain resource to launch the virtual machine. Use the dumped configuration from Step 2:

    root # crm configure
    crm(live)configure# primitive vm-doro ocf:heartbeat:VirtualDomain \
      params hypervisor="qemu:///system" \
             config="/etc/pacemaker/doro.xml" \
      meta remote-node=doro
    crm(live)configure# commit
    crm(live)configure# exit

    Pacemaker will automatically monitor pacemaker_remote connections for failure, so it is not necessary to create a recurring monitor on the VirtualDomain resource.

  4. Check the status of the cluster with the command crm status. It should contain a running cluster with nodes that are all accessible.

4.5 Testing the Setup

To demonstrate how resources are executed, use a dummy resource. It is intended for testing purposes only.

  1. Create a dummy resource:

    root # crm configure primitive fake1 ocf:pacemaker:Dummy
  2. Check the cluster status with the crm status command. You should see something like the following:

    root # crm status
    [...]
    Online: [ alice bob ]
    GuestOnline: [ doro@alice ]
    
    Full list of resources:
    vm-doro (ocf:heartbeat:VirtualDomain): Started alice
    fake1           (ocf:pacemaker:Dummy): Started bob
  3. To move the Dummy primitive to the guest node (doro), use the following command:

    root # crm resource move fake1 doro

    The status will change to this:

    root # crm status
    [...]
    Online: [ alice bob ]
    GuestOnline: [ doro@alice ]
    
    Full list of resources:
    vm-doro (ocf:heartbeat:VirtualDomain): Started alice
    fake1           (ocf:pacemaker:Dummy): Started doro
  4. To test whether fencing works, kill the pacemaker_remoted daemon on doro:

    root # kill -9 $(pidof pacemaker_remoted)
  5. After a few seconds, check the status of the cluster again. It should look like this:

    root # crm status
    [...]
    Online: [ alice bob ]
    
    Full list of resources:
    vm-doro (ocf:heartbeat:VirtualDomain): Started alice
    fake1           (ocf:pacemaker:Dummy): Stopped
    
    Failed Actions:
    * doro_monitor_30000 on alice 'unknown error' (1): call=8, status=Error, exitreason='none',
        last-rc-change='Tue Jul 18 13:11:51 2017', queued=0ms, exec=0ms

5 Upgrading the Cluster and Pacemaker_remote Nodes

Upgrade all pacemaker_remote nodes to SUSE Linux Enterprise Server 12 SP5 one by one and make sure that SUSE Linux Enterprise High Availability Extension 12 SP5 has been added as an extension. Update the packages pacemaker-remote and crmsh including their dependencies. For details, see Section 5.3, “Updating Software Packages on Cluster Nodes”, in the Administration Guide.

The chapter Upgrading Your Cluster and Updating Software Packages of the Administration Guide also lists different scenarios and supported upgrade paths.
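
For example, the packages on a pacemaker_remote node could be updated as follows:

    root # zypper up pacemaker-remote crmsh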

6 Legal Notice

Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this publication has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
