Applies to SUSE Linux Enterprise High Availability Extension 11 SP4

3 Installation and Basic Setup

This chapter describes how to install and set up SUSE® Linux Enterprise High Availability Extension 11 SP4 from scratch. Choose between an automatic setup or a manual setup. The automatic setup enables you to have a cluster up and running within a few minutes (with the choice to adjust any options later on), whereas the manual setup allows you to set your individual options right at the beginning.

Refer to Appendix E, Upgrading Your Cluster and Updating Software Packages, if you want to migrate a cluster that runs an older version of SUSE Linux Enterprise High Availability Extension, or if you want to update software packages on nodes that belong to a running cluster.

3.1 Definition of Terms

This chapter uses several terms that are defined below.

Existing Cluster

The term existing cluster is used to refer to any cluster that consists of at least one node. Existing clusters have a basic Corosync configuration that defines the communication channels, but they do not necessarily have resource configuration yet.

Multicast

A technology for one-to-many communication within a network that can be used for cluster communication. Corosync supports both multicast and unicast. If multicast does not comply with your corporate IT policy, use unicast instead.

Note
Note: Switches and Multicast

If you want to use multicast for cluster communication, make sure your switches support multicast.

Multicast Address (mcastaddr)

IP address to be used for multicasting by the Corosync executive. The IP address can either be IPv4 or IPv6. If IPv6 networking is used, node IDs must be specified. You can use any multicast address in your private network.

Multicast Port (mcastport)

The port to use for cluster communication. Corosync uses two ports: the specified mcastport for receiving multicast messages, and mcastport - 1 (the specified port minus one) for sending them.

Unicast

A technology for sending messages to a single network destination. Corosync supports both multicast and unicast. In Corosync, unicast is implemented as UDP-unicast (UDPU).

Bind Network Address (bindnetaddr)

The network address the Corosync executive should bind to. To ease sharing configuration files across the cluster, OpenAIS uses the network interface's netmask to mask only the address bits that are used for routing the network. For example, if the local interface is 192.168.5.92 with netmask 255.255.255.0, set bindnetaddr to 192.168.5.0. If the local interface is 192.168.5.92 with netmask 255.255.255.192, set bindnetaddr to 192.168.5.64.

Note
Note: Network Address for All Nodes

As the same Corosync configuration will be used on all nodes, make sure to use a network address as bindnetaddr, not the address of a specific network interface.
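
Taken together, bindnetaddr, mcastaddr, and mcastport appear in the totem section of /etc/corosync/corosync.conf. The following minimal excerpt is an illustration only; the addresses and port are example values:

   totem {
     interface {
       # first (and only) communication channel
       ringnumber: 0
       # network address, not the address of a specific interface
       bindnetaddr: 192.168.5.0
       # example multicast address and port
       mcastaddr: 239.255.1.1
       mcastport: 5405
     }
   }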

Redundant Ring Protocol (RRP)

Allows the use of multiple redundant local area networks for resilience against partial or total network faults. This way, cluster communication can still be kept up as long as a single network is operational. Corosync supports the Totem Redundant Ring Protocol. A logical token-passing ring is imposed on all participating nodes to deliver messages in a reliable and sorted manner. A node is allowed to broadcast a message only if it holds the token. For more information, refer to http://corosync.github.io/corosync/doc/icdcs02.ps.gz.

After you have defined redundant communication channels in Corosync, use RRP to tell the cluster how to use these interfaces. RRP can have three modes (rrp_mode):

  • If set to active, Corosync uses both interfaces actively.

  • If set to passive, Corosync sends messages alternately over the available networks.

  • If set to none, RRP is disabled.
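
As an illustration, a totem section with two rings and passive RRP might look like the following excerpt (addresses and ports are example values only):

   totem {
     # send messages alternately over both rings
     rrp_mode: passive
     interface {
       ringnumber: 0
       bindnetaddr: 192.168.5.0
       mcastaddr: 239.255.1.1
       mcastport: 5405
     }
     interface {
       ringnumber: 1
       bindnetaddr: 10.0.0.0
       mcastaddr: 239.255.2.1
       mcastport: 5407
     }
   }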

Csync2

A synchronization tool that can be used to replicate configuration files across all nodes in the cluster, and even across Geo clusters. Csync2 can handle any number of hosts, sorted into synchronization groups. Each synchronization group has its own list of member hosts and its include/exclude patterns that define which files should be synchronized in the synchronization group. The groups, the host names belonging to each group, and the include/exclude rules for each group are specified in the Csync2 configuration file, /etc/csync2/csync2.cfg.

For authentication, Csync2 uses the IP addresses and pre-shared keys within a synchronization group. You need to generate one key file for each synchronization group and copy it to all group members.
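
As an illustration, a /etc/csync2/csync2.cfg with one synchronization group for the example nodes alice and bob might look like the following sketch (host names, key file, and include patterns are examples):

   group ha_group
   {
     host alice;
     host bob;
     # pre-shared key; must be identical on all group members
     key /etc/csync2/key_hagroup;
     include /etc/corosync/corosync.conf;
     include /etc/csync2/csync2.cfg;
   }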

For more information about Csync2, refer to http://oss.linbit.com/csync2/paper.pdf.

conntrack Tools

Allow interaction with the in-kernel connection tracking system for enabling stateful packet inspection for iptables. Used by the High Availability Extension to synchronize the connection status between cluster nodes. For detailed information, refer to http://conntrack-tools.netfilter.org/.

AutoYaST

AutoYaST is a system for installing one or more SUSE Linux Enterprise systems automatically and without user intervention. On SUSE Linux Enterprise you can create an AutoYaST profile that contains installation and configuration data. The profile tells AutoYaST what to install and how to configure the installed system to get a ready-to-use system in the end. This profile can then be used for mass deployment in different ways (for example, to clone existing cluster nodes).

For detailed instructions on how to use AutoYaST in various scenarios, see the SUSE Linux Enterprise 11 SP4 Deployment Guide, available at https://documentation.suse.com/. Refer to chapter Automated Installation.

3.2 Overview

The following basic steps are needed for installation and initial cluster setup.

  1. Installation as Add-on:

    Install the software packages with YaST. Alternatively, you can install them from the command line with zypper:

    root # zypper in -t pattern ha_sles
  2. Initial Cluster Setup:

    After installing the software on all nodes that will be part of your cluster, the following steps are needed to initially configure the cluster.

    1. Defining the Communication Channels

    2. Optional: Defining Authentication Settings

    3. Transferring the Configuration to All Nodes. Whereas the configuration of Csync2 is done on one node only, the services Csync2 and xinetd need to be started on all nodes.

    4. Optional: Synchronizing Connection Status Between Cluster Nodes

    5. Configuring Services

    6. Bringing the Cluster Online. The OpenAIS/Corosync service needs to be started on all nodes.

The cluster setup steps can either be executed automatically (with bootstrap scripts) or manually (with the YaST cluster module or from command line).

You can also use a combination of both setup methods, for example: set up one node with the YaST cluster module and then use sleha-join to integrate more nodes.

Existing nodes can also be cloned for mass deployment with AutoYaST. The cloned nodes will have the same packages installed and the same system configuration. For details, refer to Section 3.6, “Mass Deployment with AutoYaST”.

3.3 Installation as Add-on

The packages needed for configuring and managing a cluster with the High Availability Extension are included in the High Availability installation pattern. This pattern is only available after SUSE Linux Enterprise High Availability Extension has been installed as an add-on to SUSE® Linux Enterprise Server. For information on how to install add-on products, see the SUSE Linux Enterprise 11 SP4 Deployment Guide, available at https://documentation.suse.com/. Refer to chapter Installing Add-On Products.

Procedure 3.1: Installing the High Availability Pattern
  1. To install the packages via command line, use Zypper:

    root # zypper in -t pattern ha_sles
  2. Alternatively, start YaST as root user and select Software › Software Management.

    It is also possible to start the YaST module as root on a command line with yast2 sw_single.

  3. From the Filter list, select Patterns and activate the High Availability pattern in the pattern list.

  4. Click Accept to start installing the packages.

    Note
    Note: Installing Software Packages on All Nodes

    The software packages needed for High Availability clusters are not automatically copied to the cluster nodes.

  5. Install the High Availability pattern on all machines that will be part of your cluster.

    If you do not want to install SUSE Linux Enterprise Server 11 SP4 and SUSE Linux Enterprise High Availability Extension 11 SP4 manually on all nodes that will be part of your cluster, use AutoYaST to clone existing nodes. For more information, refer to Section 3.6, “Mass Deployment with AutoYaST”.

3.4 Automatic Cluster Setup (sleha-bootstrap)

The sleha-bootstrap package provides everything you need to get a one-node cluster up and running, to make other nodes join, and to remove nodes from an existing cluster:

Automatically Setting Up the First Node

With sleha-init, define the basic parameters needed for cluster communication and (optionally) set up a STONITH mechanism to protect your shared storage. This leaves you with a running one-node cluster.

Adding Nodes to an Existing Cluster

With sleha-join, add more nodes to your cluster.

Removing Nodes From An Existing Cluster

With sleha-remove, remove nodes from your cluster.

All commands execute bootstrap scripts that require only a minimum of time and manual intervention. The bootstrap scripts for initialization and joining automatically open the ports in the firewall that are needed for cluster communication. The configuration is written to /etc/sysconfig/SuSEfirewall2.d/services/cluster. Any options set during the bootstrap process can be modified later with the YaST cluster module.

Before starting the automatic setup, make sure that the following prerequisites are fulfilled on all nodes that will participate in the cluster:

Prerequisites
Procedure 3.2: Automatically Setting Up the First Node

The sleha-init command checks for configuration of NTP and guides you through configuration of the cluster communication layer (Corosync), and (optionally) through the configuration of SBD to protect your shared storage. Follow the steps below. For details, refer to the sleha-init man page.
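
If you intend to configure SBD, determine a persistent path to your shared storage device beforehand, for example by listing the stable device links under /dev/disk/by-id (which path is suitable depends on your hardware):

root # ls -l /dev/disk/by-id/

Unlike device names such as /dev/sdb, the links under /dev/disk/by-id remain stable across reboots.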

  1. Log in as root to the physical or virtual machine you want to use as cluster node.

  2. Start the bootstrap script by executing

    root # sleha-init

    If NTP has not been configured to start at boot time, a message appears.

    If you decide to continue anyway, the script will automatically generate keys for SSH access and for the Csync2 synchronization tool and start the services needed for both.

  3. To configure the cluster communication layer (Corosync):

    1. Enter a network address to bind to. By default, the script will propose the network address of eth0. Alternatively, enter a different network address, for example the address of bond0.

    2. Enter a multicast address. The script proposes a random address that you can use as default.

    3. Enter a multicast port. The script proposes 5405 as default.

  4. To configure SBD (optional), enter a persistent path to the partition of your block device that you want to use for SBD. The path must be consistent across all nodes in the cluster.

    Finally, the script will start the OpenAIS service to bring the one-node cluster online and enable the Web management interface Hawk. The URL to use for Hawk is displayed on the screen.

  5. For any details of the setup process, check /var/log/sleha-bootstrap.log.

You now have a running one-node cluster. Check the cluster status with crm status:

root # crm status
   Last updated: Thu Jul  3 11:04:10 2014
   Last change: Thu Jul  3 10:58:43 2014
   Current DC: alice (175704363) - partition with quorum
   1 Nodes configured
   0 Resources configured
      
   Online: [ alice ]
Important
Important: Secure Password

The bootstrap procedure creates a Linux user named hacluster with the password linux. You need it for logging in to Hawk. Replace the default password with a secure one as soon as possible:

root # passwd hacluster
Procedure 3.3: Adding Nodes to an Existing Cluster

If you have a cluster up and running (with one or more nodes), add more cluster nodes with the sleha-join bootstrap script. The script only needs access to an existing cluster node and will complete the basic setup on the current machine automatically. Follow the steps below. For details, refer to the sleha-join man page.

If you have configured the existing cluster nodes with the YaST cluster module, make sure the following prerequisites are fulfilled before you run sleha-join:

If you are logged in to the first node via Hawk, you can follow the changes in cluster status and view the resources being activated in the Web interface.

  1. Log in as root to the physical or virtual machine supposed to join the cluster.

  2. Start the bootstrap script by executing:

    root # sleha-join

    If NTP has not been configured to start at boot time, a message appears.

  3. If you decide to continue anyway, you will be prompted for the IP address of an existing node. Enter the IP address.

  4. If you have not already configured a passwordless SSH access between both machines, you will also be prompted for the root password of the existing node.

    After logging in to the specified node, the script will copy the Corosync configuration, configure SSH and Csync2, and will bring the current machine online as new cluster node. Apart from that, it will start the service needed for Hawk. If you have configured shared storage with OCFS2, it will also automatically create the mountpoint directory for the OCFS2 file system.

  5. Repeat the steps above for all machines you want to add to the cluster.

  6. For details of the process, check /var/log/sleha-bootstrap.log.

Check the cluster status with crm status. If you have successfully added a second node, the output will be similar to the following:

root # crm status
   Last updated: Thu Jul  3 11:07:10 2014
   Last change: Thu Jul  3 10:58:43 2014
   Current DC: alice (175704363) - partition with quorum
   2 Nodes configured
   0 Resources configured
   
   Online: [ alice bob ]
Important
Important: Check no-quorum-policy

After adding all nodes, check if you need to adjust the no-quorum-policy in the global cluster options. This is especially important for two-node clusters. For more information, refer to Section 4.1.2, “Option no-quorum-policy”.
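
For example, the policy can be set with the crm shell; whether ignore (often used for two-node clusters) or another value is appropriate depends on your setup, as described in Section 4.1.2:

root # crm configure property no-quorum-policy=ignore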

Procedure 3.4: Removing Nodes From An Existing Cluster

If you have a cluster up and running (with at least two nodes), you can remove single nodes from the cluster with the sleha-remove bootstrap script. You need to know the IP address or host name of the node you want to remove from the cluster. Follow the steps below. For details, refer to the sleha-remove man page.

  1. Log in as root to one of the cluster nodes.

  2. Start the bootstrap script by executing:

    root # sleha-remove -c IP_ADDR_OR_HOSTNAME

    The script enables sshd, stops the OpenAIS service on the specified node, and propagates the files to be synchronized with Csync2 across the remaining nodes.

    If you specified a host name and the node to remove cannot be contacted (or the host name cannot be resolved), the script will inform you and ask whether to remove the node anyway. If you specified an IP address and the node cannot be contacted, you will be asked to enter the host name and to confirm whether to remove the node anyway.

  3. To remove more nodes, repeat the step above.

  4. For details of the process, check /var/log/sleha-bootstrap.log.

If you need to re-add the removed node at a later point in time, add it with sleha-join. For details, refer to Procedure 3.3, “Adding Nodes to an Existing Cluster”.

Procedure 3.5: Removing the High Availability Extension Software From a Machine

To remove the High Availability Extension software from a machine that you no longer need as cluster node, proceed as follows.

  1. Stop the cluster service:

    root # rcopenais stop
  2. Remove the High Availability Extension add-on:

    root # zypper rm -t products sle-hae

3.5 Manual Cluster Setup (YaST)

See Section 3.2, “Overview” for an overview of all steps for initial setup.

3.5.1 YaST Cluster Module

The following sections guide you through each of the setup steps, using the YaST cluster module. To access it, start YaST as root and select High Availability › Cluster. Alternatively, start the module from command line with yast2 cluster.

If you start the cluster module for the first time, it appears as wizard, guiding you through all the steps necessary for basic setup. Otherwise, click the categories on the left panel to access the configuration options for each step.

YaST Cluster Module—Overview
Figure 3.1: YaST Cluster Module—Overview

Note that some options in the YaST cluster module apply only to the current node, whereas others may automatically be transferred to all nodes. Find detailed information about this in the following sections.

3.5.2 Defining the Communication Channels

For successful communication between the cluster nodes, define at least one communication channel.

Important
Important: Redundant Communication Paths

It is highly recommended to set up cluster communication via two or more redundant paths. This can be done via network device bonding or via a second communication channel (ring) in Corosync, as described in Procedure 3.7, “Defining a Redundant Communication Channel”.

If possible, choose network device bonding.

Procedure 3.6: Defining the First Communication Channel

For communication between the cluster nodes, use either multicast (UDP) or unicast (UDPU).

  1. In the YaST cluster module, switch to the Communication Channels category.

  2. To use multicast:

    1. Set the Transport protocol to UDP.

    2. Define the Bind Network Address. Set the value to the subnet you will use for cluster multicast.

    3. Define the Multicast Address.

    4. Define the Multicast Port.

      With the values entered above, you have now defined one communication channel for the cluster. In multicast mode, the same bindnetaddr, mcastaddr, and mcastport will be used for all cluster nodes. All nodes in the cluster will know each other by using the same multicast address. For different clusters, use different multicast addresses.

      YaST Cluster—Multicast Configuration
      Figure 3.2: YaST Cluster—Multicast Configuration
  3. To use unicast:

    1. Set the Transport protocol to UDPU.

    2. Define the Bind Network Address. Set the value to the subnet you will use for cluster unicast.

    3. Define the Multicast Port.

    4. For unicast communication, Corosync needs to know the IP addresses of all nodes in the cluster. For each node that will be part of the cluster, click Add and enter the following details:

      • IP Address

      • Redundant IP Address (only required if you use a second communication channel in Corosync)

      • Node ID (only required if the option Auto Generate Node ID is disabled)

      To modify or remove any addresses of cluster members, use the Edit or Del buttons.

      YaST Cluster—Unicast Configuration
      Figure 3.3: YaST Cluster—Unicast Configuration
  4. The option Auto Generate Node ID is enabled by default. Node IDs are optional when using IPv4 addresses but required when using IPv6 addresses. To automatically generate a unique ID for every cluster node (which is less error-prone than specifying IDs manually for each node), keep this option enabled.

  5. If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST writes the configuration to /etc/corosync/corosync.conf.

  6. If needed, define a second communication channel as described below. Or click Next and proceed with Procedure 3.8, “Enabling Secure Authentication”.
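
If you chose unicast (UDPU) in Step 3, the totem section written to /etc/corosync/corosync.conf contains one member entry per node and looks similar to the following excerpt (addresses and port are example values; compare /etc/corosync/corosync.conf.example.udpu):

   totem {
     # unicast instead of multicast
     transport: udpu
     interface {
       ringnumber: 0
       bindnetaddr: 192.168.5.0
       mcastport: 5405
       member {
         memberaddr: 192.168.5.92
       }
       member {
         memberaddr: 192.168.5.93
       }
     }
   }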

Procedure 3.7: Defining a Redundant Communication Channel

If network device bonding cannot be used for any reason, the second best choice is to define a redundant communication channel (a second ring) in Corosync. That way, two physically separate networks can be used for communication. In case one network fails, the cluster nodes can still communicate via the other network.

Important
Important: Redundant Rings and /etc/hosts

If multiple rings are configured, each node can have multiple IP addresses. This needs to be reflected in the /etc/hosts file of all nodes.

  1. In the YaST cluster module, switch to the Communication Channels category.

  2. Activate Redundant Channel. The redundant channel must use the same protocol as the first communication channel you defined.

  3. If you use multicast, define the Bind Network Address, the Multicast Address and the Multicast Port for the redundant channel.

    If you use unicast, define the Bind Network Address, the Multicast Port and enter the IP addresses of all nodes that will be part of the cluster.

    Now you have defined an additional communication channel in Corosync that will form a second token-passing ring. In /etc/corosync/corosync.conf, the primary ring (the first channel you have configured) gets the ringnumber 0, the second ring (redundant channel) the ringnumber 1.

  4. To tell Corosync how and when to use the different channels, select the rrp_mode you want to use (active or passive). For more information about the modes, refer to Redundant Ring Protocol (RRP) or click Help. As soon as RRP is used, the Stream Control Transmission Protocol (SCTP) is used for communication between the nodes instead of TCP. The High Availability Extension monitors the status of the current rings and automatically re-enables redundant rings after faults. Alternatively, you can check the ring status manually with corosync-cfgtool (see the example at the end of this section); view the available options with -h.

    If only one communication channel is defined, rrp_mode is automatically disabled (value none).

  5. If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST writes the configuration to /etc/corosync/corosync.conf.

  6. For further cluster configuration, click Next and proceed with Section 3.5.3, “Defining Authentication Settings”.

Find an example file for a UDP setup in /etc/corosync/corosync.conf.example. An example for UDPU setup is available in /etc/corosync/corosync.conf.example.udpu.
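
To check the ring status manually (as mentioned in Step 4 of Procedure 3.7), run corosync-cfgtool with the -s option. The output below is an illustration; node ID and ring addresses depend on your setup:

root # corosync-cfgtool -s
   Printing ring status.
   Local node ID 175704363
   RING ID 0
           id      = 192.168.5.92
           status  = ring 0 active with no faults
   RING ID 1
           id      = 10.0.0.92
           status  = ring 1 active with no faults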

3.5.3 Defining Authentication Settings

The next step is to define the authentication settings for the cluster. You can use HMAC/SHA1 authentication, which requires a shared secret to protect and authenticate messages. The authentication key (password) you specify will be used on all nodes in the cluster.

Procedure 3.8: Enabling Secure Authentication
  1. In the YaST cluster module, switch to the Security category.

  2. Activate Enable Security Auth.

  3. For a newly created cluster, click Generate Auth Key File. An authentication key is created and written to /etc/corosync/authkey.

    YaST Cluster—Security
    Figure 3.4: YaST Cluster—Security

    If you want the current machine to join an existing cluster, do not generate a new key file. Instead, copy the /etc/corosync/authkey from one of the nodes to the current machine (either manually or with Csync2).

  4. If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST writes the configuration to /etc/corosync/corosync.conf.

  5. For further cluster configuration, click Next and proceed with Section 3.5.4, “Transferring the Configuration to All Nodes”.

3.5.4 Transferring the Configuration to All Nodes

Instead of copying the resulting configuration files to all nodes manually, use the csync2 tool for replication across all nodes in the cluster.

This requires the following basic steps: configuring Csync2 with YaST (Procedure 3.9) and synchronizing the configuration files across the nodes (Procedure 3.10).

Csync2 helps you to keep track of configuration changes and to keep files synchronized across the cluster nodes:

  • You can define a list of files that are important for operation.

  • You can show changes of these files (against the other cluster nodes).

  • You can synchronize the configured files with a single command.

  • With a simple shell script in ~/.bash_logout, you can be reminded about unsynchronized changes before logging out of the system.
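
The following sketch for ~/.bash_logout shows one possible way to implement such a reminder. It assumes that csync2 -cr marks locally changed files as dirty and that csync2 -M lists the files currently marked dirty; check the csync2 man page for the options available in your version:

   # Sketch for ~/.bash_logout: warn about unsynchronized Csync2 changes
   csync2 -cr /
   DIRTY=$(csync2 -M 2>/dev/null)
   if [ -n "$DIRTY" ]; then
     echo "Csync2 reports unsynchronized files:"
     echo "$DIRTY"
     echo "Run 'csync2 -xv' to push the changes to the other nodes."
   fi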

Find detailed information about Csync2 at http://oss.linbit.com/csync2/ and http://oss.linbit.com/csync2/paper.pdf.

Procedure 3.9: Configuring Csync2 with YaST
  1. In the YaST cluster module, switch to the Csync2 category.

  2. To specify the synchronization group, click Add in the Sync Host group and enter the local host names of all nodes in your cluster. For each node, you must use exactly the strings that are returned by the hostname command.

  3. Click Generate Pre-Shared-Keys to create a key file for the synchronization group. The key file is written to /etc/csync2/key_hagroup. After it has been created, it must be copied manually to all members of the cluster.

  4. To populate the Sync File list with the files that usually need to be synchronized among all nodes, click Add Suggested Files.

    YaST Cluster—Csync2
    Figure 3.5: YaST Cluster—Csync2
  5. If you want to Edit, Add or Remove files from the list of files to be synchronized, use the respective buttons. You must enter the absolute path name for each file.

  6. Activate Csync2 by clicking Turn Csync2 ON. This will execute chkconfig csync2 to start Csync2 automatically at boot time.

  7. If you modified any options for an existing cluster, confirm your changes and close the cluster module. YaST then writes the Csync2 configuration to /etc/csync2/csync2.cfg. To start the synchronization process now, proceed with Procedure 3.10, “Synchronizing the Configuration Files with Csync2”.

  8. For further cluster configuration, click Next and proceed with Section 3.5.5, “Synchronizing Connection Status Between Cluster Nodes”.

Procedure 3.10: Synchronizing the Configuration Files with Csync2

To successfully synchronize the files with Csync2, make sure that the following prerequisites are met:

  • The same Csync2 configuration is available on all nodes. Copy the file /etc/csync2/csync2.cfg manually to all nodes after you have configured it as described in Procedure 3.9, “Configuring Csync2 with YaST”. It is recommended to include this file in the list of files to be synchronized with Csync2.

  • Copy the file /etc/csync2/key_hagroup you have generated on one node in Step 3 to all nodes in the cluster. It is needed for authentication by Csync2. However, do not regenerate the file on the other nodes as it needs to be the same file on all nodes.

  • Both Csync2 and xinetd must be running on all nodes.

    Note
    Note: Starting Services at Boot Time

    Execute the following commands on all nodes to make both services start automatically at boot time and to start xinetd now:

    root # chkconfig csync2 on
    chkconfig xinetd on
    rcxinetd start
  1. On the node that you want to copy the configuration from, execute the following command:

    root # csync2 -xv

    This will synchronize all the files once by pushing them to the other nodes. If all files are synchronized successfully, Csync2 will finish with no errors.

    If one or several files that are to be synchronized have been modified on other nodes (not only on the current one), Csync2 will report a conflict. You will get an output similar to the one below:

    While syncing file /etc/corosync/corosync.conf:
    ERROR from peer hex-14: File is also marked dirty here!
    Finished with 1 errors.
  2. If you are sure that the file version on the current node is the best one, you can resolve the conflict by forcing this file and resynchronizing:

    root # csync2 -f /etc/corosync/corosync.conf
    csync2 -x

For more information on the Csync2 options, run csync2 -help.

Note
Note: Pushing Synchronization After Any Changes

Csync2 only pushes changes. It does not continuously synchronize files between the nodes.

Each time you update files that need to be synchronized, you need to push the changes to the other nodes: run csync2 -xv on the node where you made the changes. If you run the command on any of the other nodes with unchanged files, nothing will happen.

3.5.5 Synchronizing Connection Status Between Cluster Nodes

To enable stateful packet inspection for iptables, configure and use the conntrack tools.

  1. Configuring the conntrackd with YaST.

  2. Configuring a resource for conntrackd (class: ocf, provider: heartbeat). If you use Hawk to add the resource, use the default values proposed by Hawk.

After configuring the conntrack tools, you can use them for Load Balancing with Linux Virtual Server.
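
For step 2 in the list above, a possible crm shell configuration is sketched below. The resource and clone names as well as the operation values are illustrative; the agent's parameters are left at their defaults (as Hawk would propose), and the resource is cloned so that conntrackd runs on every node:

root # crm configure primitive conntrackd ocf:heartbeat:conntrackd \
     op monitor interval=30 timeout=30
root # crm configure clone cl-conntrackd conntrackd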

Procedure 3.11: Configuring the conntrackd with YaST

Use the YaST cluster module to configure the user-space conntrackd. It needs a dedicated network interface that is not used for other communication channels. The daemon can be started via a resource agent afterward.

  1. In the YaST cluster module, switch to the Configure conntrackd category.

  2. Select a Dedicated Interface for synchronizing the connection status. The IPv4 address of the selected interface is automatically detected and shown in YaST. It must already be configured and it must support multicast.

  3. Define the Multicast Address to be used for synchronizing the connection status.

  4. In Group Number, define a numeric ID for the group to synchronize the connection status to.

  5. Click Generate /etc/conntrackd/conntrackd.conf to create the configuration file for conntrackd.

  6. If you modified any options for an existing cluster, confirm your changes and close the cluster module.

  7. For further cluster configuration, click Next and proceed with Section 3.5.6, “Configuring Services”.

YaST Cluster—conntrackd
Figure 3.6: YaST Cluster—conntrackd

3.5.6 Configuring Services

In the YaST cluster module, define whether to start certain services on a node at boot time. You can also use the module to start and stop the services manually. To bring the cluster nodes online and start the cluster resource manager, OpenAIS must be running as a service.

Procedure 3.12: Enabling the Cluster Services
  1. In the YaST cluster module, switch to the Service category.

  2. To start OpenAIS each time this cluster node is booted, select the respective option in the Booting group. If you select Off in the Booting group, you must start OpenAIS manually each time this node is booted. To start OpenAIS manually, use the rcopenais start command.

    Note
    Note: No-Start-on-Boot Parameter for OpenAIS

    Disabling the cluster service (including the related start/stop scripts) at boot time can sometimes break the cluster configuration, but enabling it unconditionally at boot time may also lead to unwanted effects with regard to fencing.

    To fine-tune this, insert the START_ON_BOOT parameter to /etc/sysconfig/openais. Setting START_ON_BOOT=No will prevent the OpenAIS service from starting at boot time (allowing you to start it manually whenever you want to start it). The default is START_ON_BOOT=Yes.

  3. If you want to use the Pacemaker GUI for configuring, managing and monitoring cluster resources, activate Enable mgmtd. This daemon is needed for the GUI.

  4. To start or stop OpenAIS immediately, click the respective button.

  5. To open the ports in the firewall that are needed for cluster communication on the current machine, activate Open Port in Firewall. The configuration is written to /etc/sysconfig/SuSEfirewall2.d/services/cluster.

  6. If you modified any options for an existing cluster node, confirm your changes and close the cluster module. Note that the configuration only applies to the current machine, not to all cluster nodes.

    If you have done the initial cluster setup exclusively with the YaST cluster module, you have now completed the basic configuration steps. Proceed with Section 3.5.7, “Bringing the Cluster Online”.

    YaST Cluster—Services
    Figure 3.7: YaST Cluster—Services

3.5.7 Bringing the Cluster Online

After the initial cluster configuration is done, start the OpenAIS/Corosync service on each cluster node to bring the stack online:

Procedure 3.13: Starting OpenAIS/Corosync and Checking the Status
  1. Log in to an existing node.

  2. Check if the service is already running:

    root # rcopenais status

    If not, start OpenAIS/Corosync now:

    root # rcopenais start
  3. Repeat the steps above for each of the cluster nodes.

  4. On one of the nodes, check the cluster status with the crm status command. If all nodes are online, the output should be similar to the following:

    root # crm status
          Last updated: Thu Jul  3 11:07:10 2014
          Last change: Thu Jul  3 10:58:43 2014
          Current DC: alice (175704363) - partition with quorum
          2 Nodes configured
          0 Resources configured
          
          Online: [ alice bob ]

    This output indicates that the cluster resource manager is started and is ready to manage resources.

After the basic configuration is done and the nodes are online, you can start to configure cluster resources, using one of the cluster management tools like the crm shell, the Pacemaker GUI, or the HA Web Konsole. For more information, refer to the following chapters.
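
For example, a first simple resource such as an administrative IP address could be added with the crm shell (the resource name, IP address, and netmask are examples only):

root # crm configure primitive admin-ip ocf:heartbeat:IPaddr2 \
     params ip=192.168.5.200 cidr_netmask=24 \
     op monitor interval=10s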

3.6 Mass Deployment with AutoYaST

The following procedure is suitable for deploying cluster nodes which are clones of an already existing node. The cloned nodes will have the same packages installed and the same system configuration.

Procedure 3.14: Cloning a Cluster Node with AutoYaST
Important
Important: Identical Hardware

This scenario assumes you are rolling out SUSE Linux Enterprise High Availability Extension 11 SP4 to a set of machines with identical hardware configurations.

If you need to deploy cluster nodes on non-identical hardware, refer to chapter Automated Installation, section Rule-Based Autoinstallation in the SUSE Linux Enterprise 11 SP4 Deployment Guide, available at https://documentation.suse.com/.

  1. Make sure the node you want to clone is correctly installed and configured. For details, refer to Section 3.3, “Installation as Add-on”, and Section 3.4, “Automatic Cluster Setup (sleha-bootstrap)” or Section 3.5, “Manual Cluster Setup (YaST)”, respectively.

  2. Follow the description outlined in the SUSE Linux Enterprise 11 SP4 Deployment Guide for simple mass installation. This includes the following basic steps:

    1. Creating an AutoYaST profile. Use the AutoYaST GUI to create and modify a profile based on the existing system configuration. In AutoYaST, choose the High Availability module and click the Clone button. If needed, adjust the configuration in the other modules and save the resulting control file as XML.

    2. Determining the source of the AutoYaST profile and the parameter to pass to the installation routines for the other nodes.

    3. Determining the source of the SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension installation data.

    4. Determining and setting up the boot scenario for autoinstallation.

    5. Passing the command line to the installation routines, either by adding the parameters manually or by creating an info file.

    6. Starting and monitoring the autoinstallation process.

After the clone has been successfully installed, execute the following steps to make the cloned node join the cluster:

Procedure 3.15: Bringing the Cloned Node Online
  1. Transfer the key configuration files from the already configured nodes to the cloned node with Csync2 as described in Section 3.5.4, “Transferring the Configuration to All Nodes”.

  2. To bring the node online, start the OpenAIS service on the cloned node as described in Section 3.5.7, “Bringing the Cluster Online”.

The cloned node will now join the cluster because the /etc/corosync/corosync.conf file has been applied to the cloned node via Csync2. The CIB is automatically synchronized among the cluster nodes.
