SUSE Linux Enterprise High Availability 15 SP5

Installation and Setup Quick Start

Publication Date: December 12, 2024

This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the crm shell. This includes the configuration of a virtual IP address as a cluster resource and the use of SBD on shared storage as a node fencing mechanism.

Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see https://www.suse.com/company/legal/. All third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

1 Usage scenario

The procedures in this document will lead to a minimal setup of a two-node cluster with the following properties:

  • Two nodes: alice (IP: 192.168.1.1) and bob (IP: 192.168.1.2), connected to each other via network.

  • A floating, virtual IP address (192.168.1.10) that allows clients to connect to the service no matter which node it is running on. This IP address is used to connect to the graphical management tool Hawk2.

  • A shared storage device, used as the SBD fencing mechanism. This avoids split-brain scenarios.

  • Failover of resources from one node to the other if the active host breaks down (active/passive setup).

You can use the two-node cluster for testing purposes or as a minimal cluster configuration that you can extend later on. Before using the cluster in a production environment, see Book “Administration Guide” to modify the cluster according to your requirements.

2 System requirements

This section informs you about the key system requirements for the scenario described in Section 1. To adjust the cluster for use in a production environment, refer to the full list in Book “Administration Guide”, Chapter 2 “System requirements and recommendations”.

2.1 Hardware requirements

Servers

Two servers with software as specified in Section 2.2, “Software requirements”.

The servers can be bare metal or virtual machines. They do not require identical hardware (memory, disk space, etc.), but they must have the same architecture. Cross-platform clusters are not supported.

Communication channels

At least two TCP/IP communication media per cluster node. The network equipment must support the communication means you want to use for cluster communication: multicast or unicast. The communication media should support a data rate of 100 Mbit/s or higher. For a supported cluster setup, two or more redundant communication paths are required. This can be achieved via:

  • Network Device Bonding (preferred).

  • A second communication channel in Corosync.

Node fencing/STONITH

A node fencing (STONITH) device to avoid split-brain scenarios. This can be either a physical device (a power switch) or a mechanism like SBD (STONITH by disk) in combination with a watchdog. SBD can be used either with shared storage or in diskless mode. This document describes using SBD with shared storage, which requires a shared storage device that is accessible from all cluster nodes under a persistent, consistent device path.

For more information on STONITH, see Book “Administration Guide”, Chapter 12 “Fencing and STONITH”. For more information on SBD, see Book “Administration Guide”, Chapter 13 “Storage protection and SBD”.

2.2 Software requirements

All nodes that will be part of the cluster need at least the following modules and extensions:

  • Basesystem Module 15 SP5

  • Server Applications Module 15 SP5

  • SUSE Linux Enterprise High Availability 15 SP5

2.3 Other requirements and recommendations

Time synchronization

Cluster nodes must synchronize to an NTP server outside the cluster. Since SUSE Linux Enterprise High Availability 15, chrony is the default implementation of NTP. For more information, see the Administration Guide for SUSE Linux Enterprise Server 15 SP5.

The cluster might not work properly if the nodes are not synchronized, or even if they are synchronized but have different timezones configured. In addition, log files and cluster reports are very hard to analyze without synchronization. If you use the bootstrap scripts, you will be warned if NTP is not configured yet.
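
To check the time synchronization status on a node before running the bootstrap scripts, you can query the chronyd service and its configured time sources (a quick check that assumes the default chrony setup):

# systemctl status chronyd
# chronyc sources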

Host name and IP address
  • Use static IP addresses.

  • Only the primary IP address is supported.

  • List all cluster nodes in the /etc/hosts file with their fully qualified host name and short host name. It is essential that members of the cluster can find each other by name. If the names are not available, internal cluster communication will fail.
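
A minimal example of such /etc/hosts entries for the scenario in Section 1 might look as follows (the example.com domain is only a placeholder; use your own fully qualified names):

192.168.1.1    alice.example.com    alice
192.168.1.2    bob.example.com      bob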

SSH

All cluster nodes must be able to access each other via SSH. Tools like crm report (for troubleshooting) and Hawk2's History Explorer require passwordless SSH access between the nodes, otherwise they can only collect data from the current node.

If you use the bootstrap scripts for setting up the cluster, the SSH keys will automatically be created and copied.
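
To verify passwordless SSH access after it has been set up, you can run a non-interactive test from one node to the other (host names as in the example scenario; BatchMode makes ssh fail instead of prompting for a password):

# ssh -o BatchMode=yes bob hostname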

3 Overview of the bootstrap scripts

The following commands execute bootstrap scripts that require only a minimum of time and manual intervention.

  • With crm cluster init, define the basic parameters needed for cluster communication. This leaves you with a running one-node cluster.

  • With crm cluster join, add more nodes to your cluster.

  • With crm cluster remove, remove nodes from your cluster.

The options set by the bootstrap scripts might not be the same as the Pacemaker default settings. You can check which settings the bootstrap scripts changed in /var/log/crmsh/crmsh.log. Any options set during the bootstrap process can be modified later with the YaST cluster module. See Book “Administration Guide”, Chapter 4 “Using the YaST cluster module” for details.

The bootstrap script crm cluster init checks and configures the following components:

NTP

Checks if NTP is configured to start at boot time. If not, a message appears.

SSH

Creates SSH keys for passwordless login between cluster nodes.

Csync2

Configures Csync2 to replicate configuration files across all nodes in a cluster.

Corosync

Configures the cluster communication system.

SBD/watchdog

Checks if a watchdog exists and asks you whether to configure SBD as the node fencing mechanism.

Virtual floating IP

Asks you whether to configure a virtual IP address for cluster administration with Hawk2.

Firewall

Opens the ports in the firewall that are needed for cluster communication.

Cluster name

Defines a name for the cluster, by default hacluster. This is optional and mostly useful for Geo clusters. Usually, the cluster name reflects the geographical location and makes it easier to distinguish a site inside a Geo cluster.

QDevice/QNetd

Asks you whether to configure QDevice/QNetd to participate in quorum decisions. We recommend using QDevice and QNetd for clusters with an even number of nodes, and especially for two-node clusters.

This configuration is not covered here, but you can set it up later as described in Book “Administration Guide”, Chapter 14 “QDevice and QNetd”.

Note: Cluster configuration for different platforms

The crm cluster init script detects the system environment (for example, Microsoft Azure) and adjusts certain cluster settings based on the profile for that environment. For more information, see the file /etc/crm/profiles.yml.

4 Installing the High Availability packages

The packages for configuring and managing a cluster are included in the High Availability installation pattern. This pattern is only available after SUSE Linux Enterprise High Availability is installed.

You can register with the SUSE Customer Center and install SUSE Linux Enterprise High Availability while installing SUSE Linux Enterprise Server, or after installation. For more information, see the Deployment Guide for SUSE Linux Enterprise Server.
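
For example, on an already registered SUSE Linux Enterprise Server system, you can add the High Availability extension from the command line with SUSEConnect. The product identifier below assumes 15 SP5 on x86_64, and REGISTRATION_CODE is a placeholder for your registration code:

# SUSEConnect --list-extensions
# SUSEConnect -p sle-ha/15.5/x86_64 -r REGISTRATION_CODE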

Procedure 1: Installing the High Availability pattern
  1. Install the High Availability pattern from the command line:

    # zypper install -t pattern ha_sles
  2. Install the High Availability pattern on all machines that will be part of your cluster.

    Note: Installing software packages on all nodes

    For an automated installation of SUSE Linux Enterprise Server 15 SP5 and SUSE Linux Enterprise High Availability 15 SP5, use AutoYaST to clone existing nodes. For more information, see Book “Administration Guide”, Chapter 3 “Installing SUSE Linux Enterprise High Availability”, Section 3.2 “Mass installation and deployment with AutoYaST”.
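
To confirm that the pattern is installed on a node, you can query zypper, for example:

# zypper info -t pattern ha_sles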

5 Using SBD for node fencing

Before you can configure SBD with the bootstrap script, you must enable a watchdog on each node. SUSE Linux Enterprise Server ships with several kernel modules that provide hardware-specific watchdog drivers. SUSE Linux Enterprise High Availability uses the SBD daemon as the software component that feeds the watchdog.

The following procedure uses the softdog watchdog.

Important: Softdog limitations

The softdog driver assumes that at least one CPU is still running. If all CPUs are stuck, the code in the softdog driver that should reboot the system will never be executed. In contrast, hardware watchdogs keep working even if all CPUs are stuck.

Before using the cluster in a production environment, we highly recommend replacing the softdog module with the hardware module that best fits your hardware.

However, if no watchdog matches your hardware, softdog can be used as kernel watchdog module.
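
To check whether a hardware watchdog driver is already loaded before falling back to softdog, you can look for typical watchdog module names and for a watchdog device node (only a rough check, as module names vary by hardware):

# lsmod | grep -E "(wd|dog)"
# ls -l /dev/watchdog*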

Procedure 2: Enabling the softdog watchdog for SBD
  1. On each node, enable the softdog watchdog:

    # echo softdog > /etc/modules-load.d/watchdog.conf
    # systemctl restart systemd-modules-load
  2. Test if the softdog module is loaded correctly:

    # lsmod | grep dog
    softdog           16384  1

6 Setting up the first node

Set up the first node with the crm cluster init script. This requires only a minimum of time and manual intervention.

Procedure 3: Setting up the first node (alice) with crm cluster init
  1. Log in to the first cluster node as root, or as a user with sudo privileges.

    Important: sudo user SSH key access

    The cluster uses passwordless SSH access for communication between the nodes. The crm cluster init script checks for SSH keys and generates them if they do not already exist.

    If you intend to set up the first node as a user with sudo privileges, you must ensure the user's SSH keys exist (or will be generated) locally on the node, not on a remote system.

  2. Start the bootstrap script:

    # crm cluster init --name CLUSTERNAME

    Replace the CLUSTERNAME placeholder with a meaningful name, like the geographical location of your cluster (for example, amsterdam). This is especially helpful to create a Geo cluster later on, as it simplifies the identification of a site.

    If you need to use multicast instead of unicast (the default) for your cluster communication, use the option --multicast (or -U).

    The script checks for NTP configuration and a hardware watchdog service. If required, it generates the public and private SSH keys used for SSH access and Csync2 synchronization and starts the respective services.

  3. Configure the cluster communication layer (Corosync):

    1. Enter a network address to bind to. By default, the script proposes the network address of eth0. Alternatively, enter a different network address, for example the address of bond0.

    2. Accept the proposed port (5405) or enter a different one.

  4. Set up SBD as the node fencing mechanism:

    1. Confirm with y that you want to use SBD.

    2. Enter a persistent path to the partition of your block device that you want to use for SBD. The path must be consistent across all nodes in the cluster.

      The script creates a small partition on the device to be used for SBD.

  5. Configure a virtual IP address for cluster administration with Hawk2:

    1. Confirm with y that you want to configure a virtual IP address.

    2. Enter an unused IP address that you want to use as administration IP for Hawk2: 192.168.1.10

      Instead of logging in to an individual cluster node with Hawk2, you can connect to the virtual IP address.

  6. Choose whether to configure QDevice and QNetd. For the minimal setup described in this document, decline with n for now. You can set up QDevice and QNetd later, as described in Book “Administration Guide”, Chapter 14 “QDevice and QNetd”.

Finally, the script will start the cluster services to bring the cluster online and enable Hawk2. The URL to use for Hawk2 is displayed on the screen.
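
Besides Hawk2, you can also check the state of the new one-node cluster from the command line, for example:

# crm status
# corosync-cfgtool -s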

You now have a running one-node cluster. To view its status, proceed as follows:

Procedure 4: Logging in to the Hawk2 Web interface
  1. On any machine, start a Web browser and make sure that JavaScript and cookies are enabled.

  2. As URL, enter the virtual IP address that you configured with the bootstrap script:

    https://192.168.1.10:7630/
    Note: Certificate warning

    If a certificate warning appears when you try to access the URL for the first time, a self-signed certificate is in use. Self-signed certificates are not considered trustworthy by default.

    Ask your cluster operator for the certificate details to verify the certificate.

    To proceed anyway, you can add an exception in the browser to bypass the warning.

  3. On the Hawk2 login screen, enter the Username and Password of the user that was created by the bootstrap script (user hacluster, password linux).

    Important: Secure password

    Replace the default password with a secure one as soon as possible:

    # passwd hacluster
  4. Click Log In. The Hawk2 Web interface shows the Status screen by default:

    Figure 1: Status of the one-node cluster in Hawk2

7 Adding the second node

Add a second node to the cluster with the crm cluster join bootstrap script. The script only needs access to an existing cluster node, and completes the basic setup on the current machine automatically.

For more information, see the crm cluster join --help command.

Procedure 5: Adding the second node (bob) with crm cluster join
  1. Log in to the second node as root, or as a user with sudo privileges.

  2. Start the bootstrap script:

    If you set up the first node as root, you can run this command with no additional parameters:

    # crm cluster join

    If you set up the first node as a sudo user, you must specify that user with the -c option:

    > sudo crm cluster join -c USER@alice

    If NTP is not configured to start at boot time, a message appears. The script also checks for a hardware watchdog device. You are warned if none is present.

  3. If you did not already specify alice with -c, you will be prompted for the IP address of the first node.

  4. If you did not already configure passwordless SSH access between both machines, you will be prompted for the password of the first node.

    After logging in to the specified node, the script copies the Corosync configuration, configures SSH and Csync2, brings the current machine online as a new cluster node, and starts the service needed for Hawk2.

Check the cluster status in Hawk2. Under Status › Nodes you should see two nodes with a green status:

Figure 2: Status of the two-node cluster
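
Alternatively, you can confirm from the command line of either node that both nodes have joined the cluster; the output should list both alice and bob as online:

# crm status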

8 Testing the cluster

The following tests can help you identify issues with the cluster setup. However, a realistic test involves specific use cases and scenarios. Before using the cluster in a production environment, test it thoroughly according to your use cases.

8.1 Testing resource failover

As a quick test, the following procedure checks resource failover:

Procedure 6: Testing resource failover
  1. Open a terminal and ping 192.168.1.10, your virtual IP address:

    # ping 192.168.1.10
  2. Log in to Hawk2.

  3. Under Status › Resources, check which node the virtual IP address (resource admin_addr) is running on. This procedure assumes the resource is running on alice.

  4. Put alice into Standby mode:

    Figure 3: Node alice in standby mode
  5. Click Status › Resources. The resource admin_addr has been migrated to bob.

During the migration, you should see an uninterrupted flow of pings to the virtual IP address. This shows that the cluster setup and the floating IP work correctly. Cancel the ping command with Ctrl+C.
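
If you prefer the command line to Hawk2 for this test, the standby switch can also be done with the crm shell (remember to bring the node back online afterwards):

# crm node standby alice
# crm node online alice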

8.2 Testing with the crm cluster crash_test command

The command crm cluster crash_test triggers cluster failures to find problems. Before you use your cluster in production, it is recommended to use this command to make sure everything works as expected.

The command supports the following checks:

--split-brain-iptables

Simulates a split-brain scenario by blocking the Corosync port. Checks whether one node can be fenced as expected.

--kill-sbd / --kill-corosync / --kill-pacemakerd

Kills the daemons for SBD, Corosync, and Pacemaker. After running one of these tests, you can find a report in the directory /var/lib/crmsh/crash_test/. The report includes a test case description, action logging, and an explanation of possible results.

--fence-node NODE

Fences a specific node passed from the command line.

For more information, see crm cluster crash_test --help.

Example 1: Testing the cluster: node fencing
# crm_mon -1
Stack: corosync
Current DC: alice (version ...) - partition with quorum
Last updated: Fri Mar 03 14:40:21 2020
Last change: Fri Mar 03 14:35:07 2020 by root via cibadmin on alice

2 nodes configured
1 resource configured

Online: [ alice bob ]
Active resources:

 stonith-sbd    (stonith:external/sbd): Started alice

# crm cluster crash_test --fence-node bob

==============================================
Testcase:          Fence node bob
Fence action:      reboot
Fence timeout:     60

!!! WARNING WARNING WARNING !!!
THIS CASE MAY LEAD TO NODE BE FENCED.
TYPE Yes TO CONTINUE, OTHER INPUTS WILL CANCEL THIS CASE [Yes/No](No): Yes
INFO: Trying to fence node "bob"
INFO: Waiting 60s for node "bob" reboot...
INFO: Node "bob" will be fenced by "alice"!
INFO: Node "bob" was successfully fenced by "alice"

To watch bob change status during the test, log in to Hawk2 and navigate to Status › Nodes.

9 Next steps

The bootstrap scripts provide a quick way to set up a basic High Availability cluster that can be used for testing purposes. However, to expand this cluster into a functioning High Availability cluster that can be used in production environments, more steps are recommended.

Recommended steps to complete the High Availability cluster setup
Adding more nodes

Add more nodes to the cluster using one of the following methods:

  • For individual nodes, use the crm cluster join script as described in Section 7, “Adding the second node”.

  • For mass installation of multiple nodes, use AutoYaST as described in Book “Administration Guide”, Chapter 3 “Installing SUSE Linux Enterprise High Availability”, Section 3.2 “Mass installation and deployment with AutoYaST”.

A regular cluster can contain up to 32 nodes. With the pacemaker_remote service, High Availability clusters can be extended to include additional nodes beyond this limit. See Article “Pacemaker Remote Quick Start” for more details.

Configuring QDevice

If the cluster has an even number of nodes, configure QDevice and QNetd to participate in quorum decisions. QDevice provides a configurable number of votes, allowing a cluster to sustain more node failures than the standard quorum rules allow. For details, see Book “Administration Guide”, Chapter 14 “QDevice and QNetd”.

Enabling a hardware watchdog

Before using the cluster in a production environment, replace the softdog module with the hardware module that best fits your hardware. For details, see Book “Administration Guide”, Chapter 13 “Storage protection and SBD”, Section 13.6 “Setting up the watchdog”.

10 For more information

More documentation for this product is available at https://documentation.suse.com/sle-ha/. For further configuration and administration tasks, see the comprehensive Administration Guide.

A Basic iSCSI storage for SBD

Use the following procedures to configure basic iSCSI storage to use with SBD. These procedures are only recommended for testing purposes. Before using iSCSI in a production environment, see the Storage Administration Guide for SUSE Linux Enterprise Server.

Requirements
  • A SUSE Linux Enterprise Server virtual machine to act as the iSCSI target. This VM is not part of the cluster.

  • Two virtual storage devices on the VM: a 20 GB device for the system, and a 1 GB device for SBD.

  • Two SUSE Linux Enterprise Server nodes that have not been added to a High Availability cluster yet.

First, set up an iSCSI target on the virtual machine:

Procedure A.1: Configuring an iSCSI target
  1. Install the package yast2-iscsi-lio-server:

    # zypper install yast2-iscsi-lio-server
  2. Start the iscsi-lio-server module in YaST:

    # yast2 iscsi-lio-server
  3. In the Service tab, under After reboot, select Start on boot.

  4. Activate Open Port in Firewall.

  5. In the Global tab, activate Discovery Authentication.

  6. Under Authentication by Targets, enter a Username and Password.

  7. Under Authentication by Initiators, enter a Mutual Username and Mutual Password. This password must be different from the Authentication by Targets password.

  8. In the Targets tab, select Add.

  9. Change the Target name by replacing .com.example.

  10. The IP Address of this server should be filled automatically. If not, add the IP address now.

  11. Select Add.

  12. In the LUN Details window, enter the LUN Path to the 1 GB storage device (for example, /dev/vdb).

  13. Select OK.

  14. Select Next.

  15. Select Finish to close YaST.

  16. To check the target setup, switch to the target CLI:

    # targetcli

    Show the configuration:

    /> ls

Next, set up iSCSI initiators on the nodes. Repeat this procedure on both nodes:

Procedure A.2: Configuring an iSCSI initiator
  1. Install the required packages:

    # zypper install open-iscsi yast2-iscsi-client
  2. Start the iscsid service:

    # systemctl start iscsid
  3. Open the iscsi-client module in YaST:

    # yast2 iscsi-client
  4. In the Discovered Targets tab, select Discovery.

  5. Enter the IP address of the iSCSI target.

  6. Clear No Discovery Authentication.

  7. Under Authentication by Initiator, enter the initiator Username and Password.

  8. Under Authentication by Targets, enter the target Username and Password.

  9. Select Next.

  10. After YaST discovers the iSCSI target, select Connect.

  11. Under Startup, select onboot.

  12. Select Next.

  13. Select OK to close YaST.

  14. Check the iSCSI initiator:

    # lsscsi
    [0:0:1:0] cd/dvd QEMU QEMU DVD-ROM 2.5+ /dev/sr0
    [2:0:0:0] disk LIO-ORG IBLOCK 4.0 /dev/sda

    Look for a line with IBLOCK. In this example, the iSCSI device is /dev/sda.

  15. Check the status of the iscsid service:

    # systemctl status iscsid
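
You can also confirm that the node has an active session with the iSCSI target from the command line:

# iscsiadm -m session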

You can find the stable device name in /dev/disk/by-id/. Usually, an iSCSI device starts with scsi-SLIO-ORG_IBLOCK.

If you have multiple disks, you can run the command lsblk -o name,serial to confirm which stable device name corresponds to which short name (for example, /dev/sda).
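
For example, to list the stable names and match them against the short device names (the grep pattern assumes the LIO target configured in Procedure A.1):

# ls -l /dev/disk/by-id/ | grep LIO
# lsblk -o name,serial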

When you configure the cluster, specify the stable device name using one of these methods:

  • When you run crm cluster init, enter the stable device name when prompted.

  • Before running crm cluster init, add the stable device name to /etc/sysconfig/sbd:

    SBD_DEVICE=/dev/disk/by-id/scsi-SLIO-ORG_IBLOCK_DEVICE_ID_STRING

    When you run crm cluster init, answer n for this question:

    SBD is already configured to use /dev/disk/by-id/scsi-SLIO-ORG_IBLOCK_... - overwrite (y/n)?