documentation.suse.com / SUSE Linux Enterprise High Availability Documentation / Administration Guide / Installation and setup / Using the bootstrap scripts
Applies to SUSE Linux Enterprise High Availability 15 SP7

6 Using the bootstrap scripts

SUSE Linux Enterprise High Availability includes bootstrap scripts to simplify the installation of a cluster. You can use these scripts to set up the cluster on the first node, add more nodes to the cluster, remove nodes from the cluster, and adjust certain settings in an existing cluster.

6.1 Overview of the crm cluster init script

The crm cluster init command executes a bootstrap script that defines the basic parameters needed for cluster communication, resulting in a running one-node cluster. The script checks and configures the following components:

NTP/Chrony

Checks if NTP/Chrony is configured to start at boot time. If not, a message appears.

SSH

Detects or generates SSH keys for passwordless login between cluster nodes.

Csync2

Configures Csync2 to replicate configuration files across all nodes in a cluster.

Corosync

Configures the cluster communication system.

SBD/watchdog

Checks if a watchdog exists and asks you whether to configure SBD as the node fencing mechanism.

Virtual floating IP

Asks you whether to configure a virtual IP address for cluster administration with Hawk2.

Firewall

Opens the ports in the firewall that are needed for cluster communication.

Cluster name

Defines a name for the cluster, by default hacluster. This is optional for a basic cluster but is required when using DLM. A unique cluster name is also useful for Geo clusters. Usually, the cluster name reflects the geographical location and makes it easier to distinguish a site inside a Geo cluster.

QDevice/QNetd

Asks you whether to configure QDevice/QNetd to participate in quorum decisions. We recommend using QDevice and QNetd for clusters with an even number of nodes, and especially for two-node clusters.
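The watchdog check mentioned above can be sketched in shell. This is only an illustration of the kind of test the script performs (checking for a watchdog character device), not the bootstrap script's actual code:

```shell
# Minimal sketch of a watchdog presence check (illustrative only;
# the bootstrap script's real logic may differ).
if [ -c /dev/watchdog ]; then
    echo "watchdog: present"
else
    echo "watchdog: absent"
fi
```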

Note: Pacemaker default settings

The options set by the bootstrap script might not be the same as the Pacemaker default settings. You can check which settings the bootstrap script changed in /var/log/crmsh/crmsh.log. Any options set during the bootstrap process can be modified later with YaST or crmsh.

Note: Cluster configuration for different platforms

The crm cluster init script detects the system environment (for example, Microsoft Azure) and adjusts certain cluster settings based on the profile for that environment. For more information, see the file /etc/crm/profiles.yml.
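As a rough orientation, the profiles file maps environment names to key/value overrides for cluster settings. The excerpt below sketches its shape; the keys and values are examples, not the shipped defaults:

```yaml
# Illustrative excerpt of /etc/crm/profiles.yml (example values only;
# see the file on your system for the actual profiles)
microsoft-azure:
  corosync.totem.token: 30000
  sbd.watchdog_timeout: 60
```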

6.2 Setting up the first node with crm cluster init

Setting up the first node with the crm cluster init script requires only minimal time and manual intervention.

This procedure describes multiple configuration options. We recommend reviewing the whole procedure before starting the script. Alternatively, for a minimal setup with only the default options, see Installation and Setup Quick Start.

Procedure 6.1: Setting up the first node with crm cluster init
  1. Log in to the first cluster node as root, or as a user with sudo privileges.

  2. Start the bootstrap script:

    You can start the script without specifying any options. This prompts you for input for certain settings and uses crmsh's default values for other settings.

    • If you logged in as root, you can run this command with no additional parameters:

      # crm cluster init
    • If you logged in as a sudo user without SSH agent forwarding, run this command with sudo:

      > sudo crm cluster init
    • If you logged in as a sudo user with SSH agent forwarding enabled, you must preserve the environment variable SSH_AUTH_SOCK and tell the script to use your local SSH keys instead of generating keys on the node:

      > sudo --preserve-env=SSH_AUTH_SOCK crm cluster init --use-ssh-agent

    Alternatively, you can specify additional options as part of the initialization command. You can include multiple options in the same command. Some examples are shown below. For more options, run crm cluster init --help.

    Cluster name

    The default cluster name is hacluster. To choose a different name, use the option --name (or -n). For example:

    # crm cluster init --name CLUSTERNAME

    Choose a meaningful name, like the geographical location of the cluster. This is especially helpful if you create a Geo cluster later, as it simplifies the identification of a site.

    Multicast

    Unicast is the default transport type for cluster communication. To use multicast instead, use the option --multicast (or -U). For example:

    # crm cluster init --multicast
    SBD disks

    In a later step, the script asks if you want to set up SBD and prompts you for a disk to use. To configure the cluster with multiple SBD disks, use the option --sbd-device (or -s) multiple times. For example:

    # crm cluster init -s /dev/disk/by-id/ID1 -s /dev/disk/by-id/ID2

    This option is also useful because you can use tab completion for the device ID, which is not available later when the script prompts you for the path.

    Redundant communication channel

    Supported clusters must have two communication channels. The preferred method is to use network device bonding. If you cannot use bonding, you can set up a redundant communication channel in Corosync (also known as a second ring or heartbeat line). By default, the script prompts you for a network address for a single ring. To configure the cluster with two rings, use the option --interface (or -i) twice. For example:

    # crm cluster init -i eth0 -i eth1
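    With two interfaces, the script writes two rings into the Corosync configuration. The following excerpt sketches the result; the addresses and node ID are placeholders, not values the script generates:

    ```
    # Illustrative /etc/corosync/corosync.conf excerpt for a two-ring setup
    nodelist {
        node {
            ring0_addr: 192.168.1.10   # address reached via eth0
            ring1_addr: 192.168.2.10   # address reached via eth1
            nodeid: 1
        }
    }
    ```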

    After you start the script, it prompts you for the following information. Some of these steps might differ slightly if you included additional options with the init command.

  3. Configure the cluster communication layer (Corosync):

    1. Enter a network address to bind to. By default, the script proposes the network address of eth0. Alternatively, enter a different network address, for example, the address of bond0.

    2. Accept the proposed port (5405) or enter a different one.

    3. If you started the script with an option that configures a redundant communication channel, enter y to accept a second heartbeat line, then either accept the proposed network address and port or enter different ones.

  4. Choose whether to set up SBD as the node fencing mechanism. If you are using a different fencing mechanism or want to set up SBD later, enter n to skip this step.

    If you chose y, select the type of SBD to use:

    • To use diskless SBD, enter none.

    • To use disk-based SBD, enter a persistent path to the partition of the block device you want to use. The path must be consistent across all nodes in the cluster, for example, /dev/disk/by-id/ID.
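    For disk-based SBD, the device path is stored in /etc/sysconfig/sbd. The excerpt below is illustrative; the IDs are placeholders, and multiple devices are separated by semicolons:

    ```
    # Illustrative excerpt of /etc/sysconfig/sbd (placeholder device IDs)
    SBD_DEVICE="/dev/disk/by-id/ID1;/dev/disk/by-id/ID2"
    SBD_WATCHDOG_DEV="/dev/watchdog"
    ```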

  5. Choose whether to configure a virtual IP address for cluster administration with Hawk2. Instead of logging in to an individual cluster node with Hawk2, you can connect to the virtual IP address.

    If you chose y, enter an unused IP address to use for Hawk2.

  6. Choose whether to configure QDevice and QNetd. If you do not need to use QDevice or have not set up the QNetd server yet, enter n to skip this step. You can set up QDevice and QNetd later if required.

    If you chose y, provide the following information:

    1. Enter the host name or IP address of the QNetd server. The cluster node must have SSH access to this server to complete the configuration.

      For the remaining fields, you can accept the default values or change them as required:

    2. Accept the proposed port (5403) or enter a different one.

    3. Choose the algorithm that determines how votes are assigned.

    4. Choose the method to use when a tie-breaker is required.

    5. Choose whether to enable TLS.

    6. Enter heuristics commands to affect how votes are determined. To skip this step, leave the field blank.
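    The answers from this step are written to the quorum section of the Corosync configuration. The excerpt below sketches the result with the default port and example values; the host name is a placeholder:

    ```
    # Illustrative /etc/corosync/corosync.conf excerpt for QDevice
    quorum {
        provider: corosync_votequorum
        device {
            model: net
            net {
                host: qnetd.example.com   # placeholder QNetd server
                port: 5403
                algorithm: ffsplit        # example algorithm
                tie_breaker: lowest       # example tie-breaker
                tls: on
            }
        }
    }
    ```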

The script checks for NTP/Chrony configuration and a hardware watchdog service. If required, it generates the public and private SSH keys used for passwordless SSH access and Csync2 synchronization and starts the respective services. Finally, the script starts the cluster services to bring the cluster online and enables Hawk2. The URL to use for Hawk2 is displayed on the screen.

To log in to Hawk2, see Section 9.4.2, “Logging in”.

Important: Secure password for hacluster

The crm cluster init script creates a default user (hacluster) and password (linux). Replace the default password with a secure one as soon as possible:

# passwd hacluster

6.3 Adding nodes with crm cluster join

You can add more nodes to the cluster with the crm cluster join bootstrap script. The script only needs access to an existing cluster node and completes the basic setup on the current machine automatically.

For more information, run the crm cluster join --help command.

Procedure 6.2: Adding nodes with crm cluster join
  1. Log in to this node as the same user you used to set up the first node.

  2. Start the bootstrap script:

    How you start the script depends on how you set up the first node. Review the following options and use a command that matches the first node's setup. If required, you can include multiple options in the same command.

    • If you set up the first node as root and with only one network interface (or bonded device), you can run this command with no additional parameters:

      # crm cluster join
    • If you set up the first node with two network interfaces, you must specify the same interfaces for this node by using the option --interface (or -i) twice:

      # crm cluster join -i eth0 -i eth1
    • Optionally, you can specify the first node with --cluster-node (or -c):

      # crm cluster join -c [USER]@NODE1

      If you set up the first node as root, you do not need to specify the user.

    • If you set up the first node as a sudo user, you must specify the user and node with the -c option:

      > sudo crm cluster join -c USER@NODE1
    • If you set up the first node as a sudo user with SSH agent forwarding, you must also tell the script to use your local SSH keys instead of generating keys on the node:

      > sudo --preserve-env=SSH_AUTH_SOCK \
      crm cluster join --use-ssh-agent -c USER@NODE1

    If NTP/Chrony is not configured to start at boot time, a message appears. The script also checks for a hardware watchdog device. You are warned if none is present.

  3. If you did not already specify the first cluster node with the -c option, you are prompted for its IP address.

  4. If you did not already configure passwordless SSH access between the cluster nodes, you are prompted for the password of the first node.

    After logging in to the specified node, the script copies the Corosync configuration, configures SSH and Csync2, brings the current machine online as a new cluster node, and starts the service needed for Hawk2.

Repeat this procedure for each node. You can check the status of the cluster at any time with the crm status command, or by logging in to Hawk2 and navigating to Status › Nodes.

6.4 Removing nodes with crm cluster remove

You can remove nodes from the cluster with the crm cluster remove bootstrap script.

If you run crm cluster remove with no additional parameters, you are prompted for the IP address or host name of the node to remove. Alternatively, you can specify the node when you run the command:

# crm cluster remove NODE

On the specified node, this stops all cluster services and removes the local cluster configuration files. On the rest of the cluster nodes, the specified node is removed from the cluster configuration.

In most cases, you must run crm cluster remove from a different node, not from the node you want to remove. However, to remove the last node and delete the cluster, you can use the option --force (or -F):

# crm cluster remove --force LASTNODE

For more information, run crm cluster remove --help.
