26 Executing maintenance tasks
To perform maintenance tasks on a cluster node, you might need to stop the resources running on that node, move them to another node, or shut down or reboot the node. It might also be necessary to temporarily take over control of resources from the cluster, or even to stop the cluster services while resources remain running.
This chapter explains how to manually take down a cluster node without negative side-effects. It also gives an overview of different options the cluster stack provides for executing maintenance tasks.
26.1 Implications of taking down a cluster node
In SUSE Linux Enterprise Server High Availability Extension 15 SP2, the way cluster services are started and stopped has changed. SUSE recommends using the crm shell. The old systemctl commands are still available, but are recommended for experienced users only. For details, refer to the SUSE blog article https://www.suse.com/c/suse-high-availability-cluster-services-how-to-stop-start-or-view-the-status/.
The recommended way to start, stop, restart, or view the status of the cluster services is:
crm cluster start
Start cluster services on one node
crm cluster stop
Stop cluster services on one node
crm cluster restart
Restart cluster services on one node
crm cluster status
View status of the cluster stack on one node
Execute the above commands as the user root or as a user with the required privileges.
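If you need to act on all cluster nodes at once, recent crmsh versions also accept an --all option for these commands (a hedged sketch, assuming your crmsh version provides it; otherwise, run the per-node commands on each node):
crm cluster start --all
Start cluster services on all nodes
crm cluster stop --all
Stop cluster services on all nodes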
When you shut down or reboot a cluster node (or stop the cluster services on a node), the following processes will be triggered:
The resources that are running on the node will be stopped or moved off the node.
If stopping the resources should fail or time out, the STONITH mechanism will fence the node and shut it down.
If your aim is to move the services off the node in an orderly fashion before shutting down or rebooting the node, proceed as follows:
1. On the node you want to reboot or shut down, log in as root or equivalent.
2. Put the node into standby mode:
# crm -w node standby
That way, services can migrate off the node without being limited by the shutdown timeout of the cluster services.
3. Check the cluster status with:
# crm status
It shows the respective node in standby mode:
[...]
Node bob: standby
[...]
4. Stop the cluster services on that node:
# crm cluster stop
5. Reboot the node.
To check if the node joins the cluster again:
1. Log in to the node as root or equivalent.
2. Check if the cluster services have started:
# crm cluster status
If not, start them:
# crm cluster start
3. Check the cluster status with:
# crm status
It should show the node coming online again.
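As a compact sketch, the complete take-down and rejoin sequence from the two procedures above can be run as follows. The example assumes you put the node into standby mode first; crm node online reverts the standby mode (see Section 26.6):
Before the reboot, on the node:
# crm -w node standby
# crm cluster stop
# reboot
After the node is up again:
# crm cluster start
# crm node online
# crm status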
26.2 Different options for maintenance tasks
Pacemaker offers a variety of options for performing system maintenance:
- Putting the cluster in maintenance mode
The global cluster property maintenance-mode puts all resources into maintenance state at once. The cluster stops monitoring them and becomes oblivious to their status. Note that only the resource management by Pacemaker is disabled. Corosync and SBD are still functional. Use maintenance mode for any tasks involving cluster resources. For any tasks involving infrastructure such as storage or networking, the safest method is to stop the cluster services completely. See Section 26.1, “Implications of taking down a cluster node”.
- Putting a node in maintenance mode
This option allows you to put all resources running on a specific node into maintenance state at once. The cluster will cease monitoring them and thus become oblivious to their status.
- Putting a node in standby mode
A node that is in standby mode can no longer run resources. Any resources running on the node will be moved away or stopped (in case no other node is eligible to run the resource). Also, all monitoring operations will be stopped on the node (except for those with role="Stopped").
You can use this option if you need to stop a node in a cluster while continuing to provide the services running on another node.
- Putting a resource into maintenance mode
When this mode is enabled for a resource, no monitoring operations will be triggered for the resource.
Use this option if you need to manually touch the service that is managed by this resource and do not want the cluster to run any monitoring operations for the resource during that time.
- Putting a resource into unmanaged mode
The
is-managed
meta attribute allows you to temporarily “release” a resource from being managed by the cluster stack. This means you can manually touch the service that is managed by this resource (for example, to adjust any components). However, the cluster will continue to monitor the resource and to report any failures.If you want the cluster to also cease monitoring the resource, use the per-resource maintenance mode instead (see Putting a resource into maintenance mode).
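For quick reference, each of these options maps to a crm shell command that is described in detail in the sections below:
crm configure property maintenance-mode=true
Put the whole cluster in maintenance mode
crm node maintenance NODENAME
Put a node in maintenance mode
crm node standby NODENAME
Put a node in standby mode
crm resource maintenance RESOURCE_ID true
Put a resource into maintenance mode
crm resource unmanage RESOURCE_ID
Put a resource into unmanaged mode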
26.3 Preparing and finishing maintenance work
If you need to do testing or maintenance work, follow the general steps below.
Otherwise you risk unwanted side effects, like resources not starting in an orderly fashion, unsynchronized CIBs across the cluster nodes, or even data loss.
1. Before you start, choose which of the options outlined in Section 26.2 is appropriate for your situation.
2. Apply this option with Hawk2 or crmsh.
3. Execute your maintenance task or tests.
4. After you have finished, put the resource, node, or cluster back to “normal” operation.
26.4 Putting the cluster in maintenance mode
When putting a cluster into maintenance mode, only the resource management by Pacemaker is disabled. Corosync and SBD are still functional. Depending on your maintenance tasks, this might lead to fence operations.
Use maintenance mode for any tasks involving cluster resources. For any tasks involving infrastructure such as storage or networking, the safest method is to stop the cluster services completely. See Section 26.1, “Implications of taking down a cluster node”.
To put the cluster in maintenance mode with the crm shell, use the following command:
# crm configure property maintenance-mode=true
To put the cluster back to normal mode after your maintenance work is done, use the following command:
# crm configure property maintenance-mode=false
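To verify the current setting afterward, one way is to filter the cluster configuration for the property (a minimal sketch, assuming grep is available on the node):
# crm configure show | grep maintenance-mode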
1. Start a Web browser and log in to the cluster as described in Section 6.2, “Logging in”.
2. In the left navigation bar, select the cluster configuration screen.
3. In the group of cluster configuration attributes, select the maintenance-mode attribute from the empty drop-down box and click the plus icon to add it.
4. To set maintenance-mode=true, activate the check box next to maintenance-mode and confirm your changes.
5. After you have finished the maintenance task for the whole cluster, deactivate the check box next to the maintenance-mode attribute.
From this point on, High Availability Extension will take over cluster management again.
26.5 Putting a node in maintenance mode
To put a node in maintenance mode with the crm shell, use the following command:
# crm node maintenance NODENAME
To put the node back to normal mode after your maintenance work is done, use the following command:
# crm node ready NODENAME
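To check the node's state afterward, crm status flags nodes that are in maintenance mode. As an alternative sketch, the flag is kept as a node attribute named maintenance, assuming your crmsh version provides the node attribute subcommand:
# crm status
# crm node attribute NODENAME show maintenance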
1. Start a Web browser and log in to the cluster as described in Section 6.2, “Logging in”.
2. In the left navigation bar, select the cluster status screen.
3. In one of the individual nodes' views, click the wrench icon next to the node and select the maintenance option.
4. After you have finished your maintenance task, click the wrench icon next to the node and select the option that returns the node to normal operation.
26.6 Putting a node in standby mode
To put a node in standby mode with the crm shell, use the following command:
# crm node standby NODENAME
To bring the node back online after your maintenance work is done, use the following command:
# crm node online NODENAME
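The standby command also accepts an optional lifetime argument in some crmsh versions (a hedged sketch, assuming your version supports it). With the lifetime reboot, the node automatically leaves standby mode once it has been rebooted and rejoins the cluster; the default lifetime forever keeps it in standby until you bring it back online manually:
# crm node standby NODENAME reboot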
1. Start a Web browser and log in to the cluster as described in Section 6.2, “Logging in”.
2. In the left navigation bar, select the cluster status screen.
3. In one of the individual nodes' views, click the wrench icon next to the node and select the standby option.
4. Finish the maintenance task for the node.
5. To deactivate the standby mode, click the wrench icon next to the node and select the option that brings the node back online.
26.7 Putting a resource into maintenance mode
To put a resource into maintenance mode with the crm shell, use the following command:
# crm resource maintenance RESOURCE_ID true
To put the resource back into normal mode after your maintenance work is done, use the following command:
# crm resource maintenance RESOURCE_ID false
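These commands toggle the resource's maintenance meta attribute. To inspect the current value, a minimal sketch assuming your crmsh version provides the resource meta subcommand:
# crm resource meta RESOURCE_ID show maintenance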
1. Start a Web browser and log in to the cluster as described in Section 6.2, “Logging in”.
2. In the left navigation bar, select the cluster status screen.
3. Select the resource you want to put in maintenance mode or unmanaged mode, click the wrench icon next to the resource and select the entry for editing the resource.
4. Open the meta attributes category.
5. From the empty drop-down list, select the maintenance attribute and click the plus icon to add it.
6. Activate the check box next to maintenance to set the maintenance attribute to yes.
7. Confirm your changes.
8. After you have finished the maintenance task for that resource, deactivate the check box next to the maintenance attribute for that resource.
From this point on, the resource will be managed by the High Availability Extension software again.
26.8 Putting a resource into unmanaged mode
To put a resource into unmanaged mode with the crm shell, use the following command:
# crm resource unmanage RESOURCE_ID
To put it into managed mode again after your maintenance work is done, use the following command:
# crm resource manage RESOURCE_ID
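Internally, these commands toggle the resource's is-managed meta attribute. To inspect or set it directly, a sketch assuming your crmsh version provides the resource meta subcommand:
# crm resource meta RESOURCE_ID show is-managed
# crm resource meta RESOURCE_ID set is-managed false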
1. Start a Web browser and log in to the cluster as described in Section 6.2, “Logging in”.
2. From the left navigation bar, select the cluster status screen and go to the resource list.
3. In the operations column, click the arrow down icon next to the resource you want to modify and select the entry for editing the resource. The resource configuration screen opens.
4. Below the meta attributes, select the is-managed entry from the empty drop-down box.
5. Set its value to No and apply your changes.
6. After you have finished your maintenance task, set is-managed to Yes (which is the default value) and apply your changes.
From this point on, the resource will be managed by the High Availability Extension software again.
26.9 Rebooting a cluster node while in maintenance mode
If the cluster or a node is in maintenance mode, you can use tools external to the cluster stack (for example, systemctl) to manually operate the components that are managed by the cluster as resources. The High Availability Extension will not monitor them or attempt to restart them.
If you stop the cluster services on a node, all daemons and processes (originally started as Pacemaker-managed cluster resources) will continue to run.
If you attempt to start cluster services on a node while the cluster or node is in maintenance mode, Pacemaker will initiate a single one-shot monitor operation (a “probe”) for every resource to evaluate which resources are currently running on that node. However, it will take no further action other than determining the resources' status.
1. On the node you want to reboot or shut down, log in as root or equivalent.
2. If you have a DLM resource (or other resources depending on DLM), make sure to explicitly stop those resources before stopping the cluster services:
crm(live)resource# stop RESOURCE_ID
The reason is that stopping Pacemaker also stops the Corosync service, on whose membership and messaging services DLM depends. If Corosync stops, the DLM resource will assume a split brain scenario and trigger a fencing operation.
3. Stop the cluster services on that node:
# crm cluster stop
4. Shut down or reboot the node.
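Putting the steps together, a compact sketch of the sequence on the node might look as follows. It assumes the cluster is already in maintenance mode; DLM_RESOURCE_ID is a placeholder for your DLM resource, and you can skip that line if you have none:
# crm resource stop DLM_RESOURCE_ID
# crm cluster stop
# reboot
After the node is up again, start the cluster services and, once all maintenance work is done, take the cluster out of maintenance mode again as described in Section 26.4:
# crm cluster start
# crm configure property maintenance-mode=false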