Applies to SUSE Linux Enterprise High Availability Extension 11 SP4

2 System Requirements and Recommendations

The following section informs you about system requirements and prerequisites for SUSE® Linux Enterprise High Availability Extension. It also includes recommendations for cluster setup.

2.1 Hardware Requirements

The following list specifies hardware requirements for a cluster based on SUSE® Linux Enterprise High Availability Extension. These requirements represent the minimum hardware configuration. Additional hardware might be necessary, depending on how you intend to use your cluster.

  • 1 to 32 Linux servers with software as specified in Section 2.2, “Software Requirements”. The servers do not require identical hardware (memory, disk space, etc.), but they must have the same architecture. Cross-platform clusters are not supported.

  • At least two TCP/IP communication media per cluster node. Cluster nodes use multicast or unicast for communication, so the network equipment must support the communication means you want to use. The communication media should support a data rate of 100 Mbit/s or higher. Preferably, the Ethernet channels should be bonded as described in Chapter 11, Network Device Bonding. Alternatively, use the second interface for a redundant communication channel in Corosync. See also Procedure 3.7, “Defining a Redundant Communication Channel”.

  • Optional: A shared disk subsystem, connected to all servers in the cluster that need access to it. See Section 2.3, “Storage Requirements”.

  • A STONITH mechanism. A STONITH device is a power switch which the cluster uses to reset nodes that are thought to be dead or behaving strangely. Resetting such nodes is the only reliable way to ensure that no data corruption is caused by nodes that hang and only appear to be dead.

2.2 Software Requirements

Ensure that the following software requirements are met:

  • SUSE® Linux Enterprise Server 11 SP4 (with all available online updates) is installed on all nodes that will be part of the cluster.

  • SUSE Linux Enterprise High Availability Extension 11 SP4 (with all available online updates) is installed on all nodes that will be part of the cluster.

  • If you want to use Geo clusters, make sure that Geo Clustering for SUSE Linux Enterprise High Availability Extension 11 SP4 (with all available online updates) is installed on all nodes that will be part of the cluster.

2.3 Storage Requirements

To make data highly available, a shared disk system (Storage Area Network, or SAN) is recommended for your cluster. If a shared disk subsystem is used, ensure the following:

  • The shared disk system is properly set up and functional according to the manufacturer’s instructions.

  • The disks contained in the shared disk system should be configured to use mirroring or RAID to add fault tolerance. Hardware-based RAID is recommended. Host-based software RAID is not supported for all configurations.

  • If you are using iSCSI for shared disk system access, ensure that you have properly configured iSCSI initiators and targets.

  • When using DRBD* to implement a mirroring RAID system that distributes data across two machines, make sure to access only the device provided by DRBD, never the backing device. Use the same (bonded) NICs that the rest of the cluster uses, to benefit from the redundancy provided there. A minimal resource definition sketch follows this list.
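
The following sketch shows what a minimal DRBD resource definition might look like. The resource name, host names, IP addresses, and device paths are placeholders and must be adapted to your setup:

resource r0 {
  device    /dev/drbd0;
  disk      /dev/sda7;
  meta-disk internal;
  on alice {
    address 192.168.1.10:7788;
  }
  on bob {
    address 192.168.1.11:7788;
  }
}

In this example, file systems are created on and mounted from /dev/drbd0 only; the backing device /dev/sda7 is never accessed directly.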

2.4 Other Requirements and Recommendations

For a supported and useful High Availability setup, consider the following recommendations:

Number of Cluster Nodes

Each cluster must consist of at least two cluster nodes.

Important
Important: Odd Number of Cluster Nodes

It is strongly recommended to use an odd number of cluster nodes with a minimum of three nodes.

A cluster needs quorum (more than half of the nodes) to keep services running. Therefore a three-node cluster can tolerate the failure of only one node at a time, whereas a five-node cluster can tolerate failures of two nodes, and so on.

STONITH
Important
Important: No Support Without STONITH

A cluster without STONITH is not supported.

For a supported High Availability setup, ensure the following:

  • Each node in the High Availability cluster must have at least one STONITH device (usually a piece of hardware). We strongly recommend multiple STONITH devices per node, unless SBD is used. SBD provides a way to enable STONITH and fencing in clusters without external power switches, but it requires shared storage.

  • The global cluster options stonith-enabled and startup-fencing must be set to true. As soon as you set them to any other value, you will lose support.
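
To set or check these options explicitly, you can use the crm shell, for example:

root # crm configure property stonith-enabled="true"
root # crm configure property startup-fencing="true"
root # crm configure show | grep -E 'stonith-enabled|startup-fencing'

Both options default to true, so the last command may show no output unless the properties have been set explicitly.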

Redundant Communication Paths

For a supported High Availability setup, it is required to set up cluster communication via two or more redundant paths. This can be done via:

  • Network device bonding as described in Chapter 11, Network Device Bonding.

  • A second communication channel in Corosync. See Procedure 3.7, “Defining a Redundant Communication Channel”.

If possible, choose network device bonding. A minimal sketch of the second option follows.
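
For the second option, a redundant ring is defined in /etc/corosync/corosync.conf. The following totem snippet is only a sketch; the network and multicast addresses are placeholders and must match your environment (see Procedure 3.7 for the full procedure):

totem {
    version: 2
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.0.0.0
        mcastaddr: 239.255.2.1
        mcastport: 5405
    }
}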

Time Synchronization

Cluster nodes should synchronize to an NTP server outside the cluster. For more information, see the Administration Guide for SUSE Linux Enterprise Server 11 SP4, available at https://documentation.suse.com/. Refer to the chapter Time Synchronization with NTP.

If nodes are not synchronized, log files and cluster reports are very hard to analyze.
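
For example, assuming a time source named ntp.example.com (a placeholder for your site's NTP server), the following commands add it to the NTP configuration and enable the service on a SUSE Linux Enterprise Server 11 node:

root # echo "server ntp.example.com iburst" >> /etc/ntp.conf
root # chkconfig ntp on
root # rcntp restart
root # ntpq -p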

NIC Names

Must be identical on all nodes.

Host Name and IP Address

Configure host name resolution by editing the /etc/hosts file on each server in the cluster. To ensure that cluster communication is not slowed down or tampered with by any DNS:

  • Use static IP addresses.

  • List all cluster nodes in this file with their fully qualified host name and short host name. It is essential that members of the cluster can find each other by name. If the names are not available, internal cluster communication will fail.
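
For example, for a three-node cluster with nodes alice, bob, and charlie (host names and IP addresses are placeholders), the entries in /etc/hosts could look like this:

192.168.1.10   alice.example.com     alice
192.168.1.11   bob.example.com       bob
192.168.1.12   charlie.example.com   charlie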

For more information, see the Administration Guide for SUSE Linux Enterprise Server 11 SP4, available at https://documentation.suse.com/. Refer to chapter Basic Networking, section Configuring a Network Connection with YaST > Configuring Host Name and DNS.

Storage Requirements

Some services may require shared storage. For requirements, see Section 2.3, “Storage Requirements”. You can also use an external NFS share or DRBD. If using an external NFS share, it must be reliably accessible from all cluster nodes via redundant communication paths. See Redundant Communication Paths.

When using SBD as STONITH device, additional requirements apply for the shared storage. For details, see http://linux-ha.org/wiki/SBD_Fencing, section Requirements.
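
For example, once a small shared partition is available, it is initialized and verified with the sbd command. The device path below is a placeholder; always use a stable /dev/disk/by-id/ name:

root # sbd -d /dev/disk/by-id/EXAMPLE-DEVICE-part1 create
root # sbd -d /dev/disk/by-id/EXAMPLE-DEVICE-part1 dump

The dump command prints the header written to the device, which confirms that initialization succeeded.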

SSH

All cluster nodes must be able to access each other via SSH. Tools like hb_report or crm_report (for troubleshooting) and Hawk's History Explorer require passwordless SSH access between the nodes, otherwise they can only collect data from the current node. If you use a non-standard SSH port, use the -X option (see the man page). For example, if your SSH port is 3479, invoke hb_report with:

root # hb_report -X "-p 3479" [...]
Note
Note: Regulatory Requirements

If passwordless SSH access does not comply with regulatory requirements, you can use the following work-around for hb_report:

Create a user that can log in without a password (for example, using public key authentication). Configure sudo for this user so that no root password is required. Start hb_report from the command line with the -u option to specify the user's name. For more information, see the hb_report man page.
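
For example, assuming the dedicated user is named hbuser (a placeholder), an invocation could look like this, with the start time and destination adapted to your needs:

root # hb_report -f 2pm -u hbuser /tmp/report_example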

For Hawk's History Explorer, there is currently no alternative to passwordless login.
