Applies to SUSE Linux Enterprise High Availability Extension 11 SP4

20 Troubleshooting

Strange problems may occur that are not easy to understand, especially when starting to experiment with High Availability. However, there are several utilities that allow you to take a closer look at the High Availability internal processes. This chapter recommends various solutions.

20.1 Installation and First Steps

Troubleshooting difficulties when installing the packages or bringing the cluster online.

Are the HA packages installed?

The packages needed for configuring and managing a cluster are included in the High Availability installation pattern, available with the High Availability Extension.

Check if High Availability Extension is installed as an add-on to SUSE Linux Enterprise Server 11 SP4 on each of the cluster nodes and if the High Availability pattern is installed on each of the machines as described in Section 3.3, “Installation as Add-on”.
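
One quick way to verify this on each node is to query the installed patterns with zypper. The pattern name ha_sles used below is an assumption and may differ depending on your product version:

root # zypper search -i -t pattern ha_sles

If the pattern is not listed as installed, install it as described in Section 3.3, “Installation as Add-on”.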

Is the initial configuration the same for all cluster nodes?

To communicate with each other, all nodes belonging to the same cluster need to use the same bindnetaddr, mcastaddr and mcastport as described in Section 3.5, “Manual Cluster Setup (YaST)”.

Check if the communication channels and options configured in /etc/corosync/corosync.conf are the same for all cluster nodes.

In case you use encrypted communication, check if the /etc/corosync/authkey file is available on all cluster nodes.

All corosync.conf settings except for nodeid must be the same; authkey files on all nodes must be identical.
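
These settings live in the interface sub-section of the totem section in /etc/corosync/corosync.conf. As a minimal sketch (the addresses and the port are examples only, not values to copy), the section could look like this, identically on every node:

totem {
    # other totem options omitted
    interface {
        ringnumber:  0
        # network address of the interface used for cluster communication
        bindnetaddr: 192.168.1.0
        # multicast address and UDP port, identical on all nodes
        mcastaddr:   239.255.1.1
        mcastport:   5405
    }
}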

Does the firewall allow communication via the mcastport?

If the mcastport used for communication between the cluster nodes is blocked by the firewall, the nodes cannot see each other. When configuring the initial setup with YaST or the bootstrap scripts as described in Section 3.5, “Manual Cluster Setup (YaST)” and Section 3.4, “Automatic Cluster Setup (sleha-bootstrap)”, the firewall settings are usually automatically adjusted.

To make sure the mcastport is not blocked by the firewall, check the settings in /etc/sysconfig/SuSEfirewall2 on each node. Alternatively, start the YaST firewall module on each cluster node. After clicking Allowed Service › Advanced, add the mcastport to the list of allowed UDP Ports and confirm your changes.
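
For example, assuming the cluster uses mcastport 5405 (Corosync also uses the port directly below the configured one), the check on each node could look like this; the port numbers are examples only:

root # grep "^FW_SERVICES_EXT_UDP" /etc/sysconfig/SuSEfirewall2
FW_SERVICES_EXT_UDP="5404 5405"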

Is OpenAIS started on each cluster node?

Check the OpenAIS status on each cluster node with /etc/init.d/openais status. In case OpenAIS is not running, start it by executing /etc/init.d/openais start.
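
If you have passwordless SSH access as root, you can check all nodes in one go with a simple loop like the following; the node names alice and bob are placeholders for your own host names:

root # for node in alice bob; do \
    ssh root@$node "/etc/init.d/openais status"; done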

20.2 Logging

Where to find the log files?

For the Pacemaker log files, check the settings configured in the logging section of /etc/corosync/corosync.conf. If the log file specified there is ignored by Pacemaker, also check the logging settings in /etc/sysconfig/pacemaker, Pacemaker's own configuration file: if PCMK_logfile is configured there, Pacemaker uses the path defined by this parameter.
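
As a rough sketch (the log file path below is only an example), the logging section in /etc/corosync/corosync.conf could look like this:

logging {
    # write cluster messages to a dedicated log file and to syslog
    to_logfile:       yes
    logfile:          /var/log/cluster/corosync.log
    to_syslog:        yes
    syslog_facility:  daemon
    timestamp:        on
}

If /etc/sysconfig/pacemaker additionally contains a line such as PCMK_logfile=/var/log/pacemaker.log (again only an example path), Pacemaker logs to that file instead.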

If you need a cluster-wide report showing all relevant log files, see How can I create a report with an analysis of all my cluster nodes? in Section 20.5, “Miscellaneous” for more information.

I enabled monitoring but there is no trace of monitoring operations in the log files?

The lrmd daemon does not log recurring monitor operations unless an error occurred. Logging all recurring operations would produce too much noise. Therefore recurring monitor operations are logged only once an hour.

I only get a failed message. Is it possible to get more information?

Add the --verbose parameter to your commands. Repeating the parameter increases the verbosity up to debug output. See /var/log/messages for useful hints.
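
Many of the Pacemaker command line tools, for example crm_verify, accept the parameter and let you repeat it; the following checks the running configuration with increased verbosity:

root # crm_verify --live-check --verbose --verbose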

How can I get an overview of all my nodes and resources?

Use the crm_mon command. The following displays the resource operation history (option -o) and inactive resources (option -r):

root # crm_mon -o -r

The display is refreshed when the status changes (to cancel this press Ctrl+C). An example may look like this:

Example 20.1: Stopped Resources
Last updated: Fri Aug 15 10:42:08 2014
Last change: Fri Aug 15 10:32:19 2014
Stack: corosync
Current DC: bob (175704619) - partition with quorum
Version: 1.1.12-ad083a8
2 Nodes configured
3 Resources configured

Online: [ alice bob ]

Full list of resources:

my_ipaddress    (ocf::heartbeat:Dummy): Started bob
my_filesystem   (ocf::heartbeat:Dummy): Stopped
my_webserver    (ocf::heartbeat:Dummy): Stopped

Operations:
* Node bob:
    my_ipaddress: migration-threshold=3
      + (14) start: rc=0 (ok)
      + (15) monitor: interval=10000ms rc=0 (ok)
* Node alice:

The Pacemaker Explained PDF, available at http://www.clusterlabs.org/doc/, covers three different recovery types in the How are OCF Return Codes Interpreted? section.


20.3 Resources

How can I clean up my resources?

Use the following commands:

root # crm resource list
crm resource cleanup rscid [node]

If you leave out the node, the resource is cleaned on all nodes. More information can be found in Section 7.4.2, “Cleaning Up Resources”.

How can I list my currently known resources?

Use the command crm resource list to display your current resources.

I configured a resource, but it always fails. Why?

To check an OCF script use ocf-tester, for instance:

ocf-tester -n ip1 -o ip=YOUR_IP_ADDRESS \
  /usr/lib/ocf/resource.d/heartbeat/IPaddr

Use -o multiple times for more parameters. The list of required and optional parameters can be obtained by running crm ra info AGENT, for example:

root # crm ra info ocf:heartbeat:IPaddr

Before running ocf-tester, make sure the resource is not managed by the cluster.
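
One way to do this is to put the resource into unmanaged mode for the duration of the test and hand it back to the cluster afterwards, for example (using the resource name ip1 from the example above):

root # crm resource unmanage ip1
root # ocf-tester -n ip1 -o ip=YOUR_IP_ADDRESS \
  /usr/lib/ocf/resource.d/heartbeat/IPaddr
root # crm resource manage ip1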

Why do resources not fail over and why are there no errors?

If your cluster is a two-node cluster, terminating one node will leave the remaining node without quorum. Unless you set the no-quorum-policy property to ignore, nothing happens. For two-node clusters you need:

property no-quorum-policy="ignore"
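
This property can be set from the crm shell, for example:

root # crm configure property no-quorum-policy="ignore"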

Another possibility is that the terminated node is considered unclean. Then it is necessary to fence it. If the STONITH resource is not operational or does not exist, the remaining node will wait for the fencing to happen. The fencing timeouts are typically high, so it may take quite a while to see any obvious sign of problems (if ever).

Yet another possible explanation is that a resource is simply not allowed to run on this node. That may be because of a failure that happened in the past and was not cleaned up. Or it may be the result of an earlier administrative action, that is, a location constraint with a negative score. Such a location constraint is, for instance, inserted by the crm resource migrate command.

Why can I never tell where my resource will run?

If there are no location constraints for a resource, its placement is subject to an (almost) random node choice. You are well advised to always express a preferred node for resources. That does not mean that you need to specify location preferences for all resources. One preference suffices for a set of related (colocated) resources. A node preference looks like this:

location rsc-prefers-alice rsc 100: alice

20.4 STONITH and Fencing

Why does my STONITH resource not start?

The start (or enable) operation includes checking the status of the device. If the device is not ready, the STONITH resource will fail to start.

At the same time the STONITH plugin will be asked to produce a host list. If this list is empty, there is no point in running a STONITH resource which cannot shoot anything. The name of the host on which STONITH is running is filtered from the list, since the node cannot shoot itself.

If you want to use single-host management devices such as lights-out devices, make sure that the STONITH resource is not allowed to run on the node which it is supposed to fence. Use an infinitely negative location node preference (constraint). The cluster will move the STONITH resource to another place where it can start, but not before informing you.
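
Such a constraint can be expressed in the crm shell as follows; the resource name st-alice (a STONITH resource that fences node alice) is only a placeholder:

location st-alice-not-on-alice st-alice -inf: alice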

Why does fencing not happen, although I have the STONITH resource?

Each STONITH resource must provide a host list. This list may be inserted by hand in the STONITH resource configuration or retrieved from the device itself, for instance from outlet names. That depends on the nature of the STONITH plugin. stonithd uses the list to find out which STONITH resource can fence the target node. Only if the node appears in the list can the STONITH resource shoot (fence) the node.
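
For plugins that take a static host list, the list is usually supplied as a resource parameter. A minimal sketch using the external/ssh plugin (suitable for testing only; the node names are placeholders) could look like this:

primitive st-ssh stonith:external/ssh \
    params hostlist="alice bob"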

If stonithd does not find the node in any of the host lists provided by running STONITH resources, it will ask stonithd instances on other nodes. If the target node does not show up in the host lists of other stonithd instances, the fencing request ends in a timeout at the originating node.

Why does my STONITH resource fail occasionally?

Power management devices may give up if there is too much broadcast traffic. Space out the monitor operations. Given that fencing is necessary only once in a while (and hopefully never), checking the device status once every few hours is more than enough.
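
Continuing the sketch from above, a sparse monitor operation on a STONITH resource could be configured like this; the interval is only an example:

primitive st-ssh stonith:external/ssh \
    params hostlist="alice bob" \
    op monitor interval="4h" timeout="60s"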

Also, some of these devices may refuse to talk to more than one party at the same time. This may be a problem if you keep a terminal or browser session open while the cluster tries to test the status.

20.5 Miscellaneous

How can I run commands on all cluster nodes?

Use the command pssh for this task. If necessary, install pssh. Create a file (for example, hosts.txt) that lists the IP addresses or host names of all the machines you want to reach. Make sure you can log in with ssh to each host listed in your hosts.txt file. If everything is correctly prepared, execute pssh and use the hosts.txt file (option -h) and the interactive mode (option -i) as shown in this example:

pssh -i -h hosts.txt "ls -l /etc/corosync/*.conf"
[1] 08:28:32 [SUCCESS] root@venus.example.com
-rw-r--r-- 1 root root 1480 Nov 14 13:37 /etc/corosync/corosync.conf
[2] 08:28:32 [SUCCESS] root@192.168.2.102
-rw-r--r-- 1 root root 1480 Nov 14 13:37 /etc/corosync/corosync.conf

What is the state of my cluster?

To check the current state of your cluster, use either the crm_mon program or the crm status command. This displays the current DC and all the nodes and resources known by the current node.

Why can several nodes of my cluster not see each other?

There could be several reasons:

  • Look first in the configuration file /etc/corosync/corosync.conf. Check if the multicast or unicast address is the same for every node in the cluster (look in the interface section with the key mcastaddr).

  • Check your firewall settings.

  • Check if your switch supports multicast or unicast addresses.

  • Check if the connection between your nodes is broken. Most often, this is the result of a badly configured firewall. This also may be the reason for a split brain condition, where the cluster is partitioned.
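
A quick way to see whether Corosync itself considers its communication ring healthy on a node is corosync-cfgtool; the node ID and IP address in the output below are examples only:

root # corosync-cfgtool -s
Printing ring status.
Local node ID 175704363
RING ID 0
        id      = 192.168.2.101
        status  = ring 0 active with no faults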

Why can an OCFS2 device not be mounted?

Check /var/log/messages for lines like the following:

Jan 12 09:58:55 alice lrmd: [3487]: info: RA output: [...] 
  ERROR: Could not load ocfs2_stackglue
Jan 12 16:04:22 alice modprobe: FATAL: Module ocfs2_stackglue not found.

In this case the Kernel module ocfs2_stackglue.ko is missing. Install the package ocfs2-kmp-default, ocfs2-kmp-pae or ocfs2-kmp-xen, depending on the installed Kernel.
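
For example, on a node running the default Kernel, the missing package could be installed like this (adapt the package name to your Kernel flavor):

root # zypper install ocfs2-kmp-default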

How can I create a report with an analysis of all my cluster nodes?

On the crm shell, use crm report to create a report. This tool compiles:

  • Cluster-wide log files,

  • Package states,

  • DLM/OCFS2 states,

  • System information,

  • CIB history,

  • Parsing of core dump reports, if a debuginfo package is installed.

Usually run crm report with the following command:

root # crm report -f 0:00 -n alice -n bob

The command extracts all information since 0am on the hosts alice and bob and creates a *.tar.bz2 archive named crm_report-DATE.tar.bz2 in the current directory, for example, crm_report-Wed-03-Mar-2012.tar.bz2. If you are only interested in a specific time frame, add the end time with the -t option.
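
For instance, to limit the report to a two-hour window, something like the following could be used; the exact time formats accepted are described in the crm report man page, and the values below are placeholders only:

root # crm report -f "2014/08/15 10:00" -t "2014/08/15 12:00" -n alice -n bob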

Warning
Warning: Remove Sensitive Information

The crm report tool tries to remove any sensitive information from the CIB and the peinput files, however, it cannot do everything. If you have more sensitive information, supply additional patterns. The log files and the crm_mon, ccm_tool, and crm_verify output are not sanitized.

Before sharing your data in any way, check the archive and remove all information you do not want to expose.

Customize the command execution with further options. For example, if you have an OpenAIS cluster, you certainly want to add the option -A. In case you have another user who has permissions to the cluster, use the -u option and specify this user (in addition to root and hacluster). In case you have a non-standard SSH port, use the -X option to add the port (for example, with the port 3479, use -X "-p 3479"). Further options can be found in the man page of crm report.
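
Putting some of these options together, a call could look like this; the user name and port below are examples only:

root # crm report -f 0:00 -A -u USER -X "-p 3479" -n alice -n bob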

After crm report has analyzed all the relevant log files and created the directory (or archive), check the log files for an uppercase ERROR string. The most important files in the top level directory of the report are:

analysis.txt

Compares files that should be identical on all nodes.

corosync.txt

Contains a copy of the Corosync configuration file.

crm_mon.txt

Contains the output of the crm_mon command.

description.txt

Contains all cluster package versions on your nodes. There is also the sysinfo.txt file, which is node-specific; it is linked to the top directory.

This file can be used as a template to describe the issue you encountered and post it to https://github.com/ClusterLabs/crmsh/issues.

members.txt

Contains a list of all nodes.

sysinfo.txt

Contains a list of all relevant package names and their versions. It also lists configuration files that differ from those shipped in the original RPM packages.

Node-specific files are stored in a subdirectory named by the node's name. It contains a copy of the directory /etc of the respective node.
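
Assuming nodes alice and bob, the top level of an unpacked report could then look roughly like this; the exact contents depend on your setup:

root # ls -F crm_report-Wed-03-Mar-2012/
alice/  analysis.txt  bob/  corosync.txt  crm_mon.txt
description.txt  members.txt  sysinfo.txt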

ERROR: Tag Not Supported by the RNG Schema

See Note: Upgrading the CIB Syntax Version.

20.6 For More Information

For additional information about high availability on Linux, including configuring cluster resources and managing and customizing a High Availability cluster, see http://clusterlabs.org/wiki/Documentation.
