This chapter covers two different scenarios: upgrading a cluster to another version of SUSE Linux Enterprise High Availability Extension (either a major release or a service pack), and updating individual software packages on cluster nodes. See Section 5.2, “Upgrading your Cluster to the Latest Product Version” and Section 5.3, “Updating Software Packages on Cluster Nodes”, respectively.
If you want to upgrade your cluster, check Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo” and Section 5.2.2, “Required Preparations Before Upgrading” before you start.
In the following, find definitions of the most important terms used in this chapter:
Major Release
A major release is a new product version that brings new features and tools and decommissions previously deprecated components. It comes with backward-incompatible changes.
Offline Migration
If a new product version includes major changes that are backward incompatible, the cluster needs to be upgraded by an offline migration: you take all nodes offline and upgrade the cluster as a whole before you can bring all nodes back online.
Rolling Upgrade
In a rolling upgrade, one cluster node at a time is upgraded while the rest of the cluster is still running. You take the first node offline, upgrade it, and bring it back online to join the cluster. Then you continue node by node until all cluster nodes run the new product version.
Service Pack (SP)
Combines several patches into a form that is easy to install or deploy. Service packs are numbered and usually contain security fixes, updates, upgrades, or enhancements of programs.
Update
Installation of a newer minor version of a package, which usually contains security fixes and other important fixes.
Upgrade
Installation of a newer major version of a package or distribution, which brings new features. See also Offline Migration versus Rolling Upgrade.
Which upgrade path is supported, and how to perform the upgrade, depend on the current product version and on the target version you want to migrate to.
Rolling upgrades are only supported within the same major release (from the GA of a product version to the next service pack, and from one service pack to the next).
Offline migrations are required to upgrade from one major release to the next (for example, from SLE HA 12 to SLE HA 15) or from a service pack within one major release to the next major release (for example, from SLE HA 12 SP3 to SLE HA 15).
The table below gives an overview of the supported upgrade paths for SLE HA (Geo). The column For Details lists the specific upgrade documentation you should refer to (including the documentation for the base system and for Geo Clustering for SUSE Linux Enterprise High Availability Extension). This documentation is available from the SUSE documentation Web site.
Mixed clusters running on SUSE Linux Enterprise High Availability Extension 12/SUSE Linux Enterprise High Availability Extension 15 are not supported.
After upgrading to product version 15, reverting to product version 12 is not supported.
Upgrade From ... To | Upgrade Path | For Details
---|---|---
SLE HA 11 SP3 to SLE HA (Geo) 12 | Offline Migration |
SLE HA (Geo) 11 SP4 to SLE HA (Geo) 12 SP1 | Offline Migration |
SLE HA (Geo) 12 to SLE HA (Geo) 12 SP1 | Rolling Upgrade |
SLE HA (Geo) 12 SP1 to SLE HA (Geo) 12 SP2 | Rolling Upgrade |
SLE HA (Geo) 12 SP2 to SLE HA (Geo) 12 SP3 | Rolling Upgrade |
SLE HA (Geo) 12 SP3 to SLE HA (Geo) 12 SP4 | Rolling Upgrade |
SLE HA (Geo) 12 SP3 to SLE HA (Geo) 15 | Offline Migration |
SLE HA (Geo) 15 to SLE HA (Geo) 15 SP1 | Rolling Upgrade |
Ensure that your system backup is up to date and restorable. In addition, consider keeping a copy of your current cluster configuration, as sketched below.
Test the upgrade procedure on a staging instance of your cluster setup first, before performing it in a production environment. This gives you an estimate of the time frame required for the maintenance window. It also helps to detect and resolve any unexpected problems that might arise.
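For example, in addition to the system backup you can save a copy of the cluster configuration before you start (a minimal sketch, not a replacement for a full system backup; the target paths are arbitrary):
root # cibadmin -Q > /root/cib-backup.xml
root # cp /etc/corosync/corosync.conf /root/corosync.conf.backup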
This section applies to the following scenarios:
Upgrading from SLE HA 11 SP3 to SLE HA 12—for details see Procedure 5.1, “Upgrading from Product Version 11 to 12: Cluster-Wide Offline Migration”.
Upgrading from SLE HA 11 SP4 to SLE HA 12 SP1—for details see Procedure 5.1, “Upgrading from Product Version 11 to 12: Cluster-Wide Offline Migration”.
Upgrading from SLE HA 12 SP3 to SLE HA 15—for details see Procedure 5.2, “Upgrading from Product Version 12 to 15: Cluster-Wide Offline Migration”.
If your cluster is still based on an older product version than the ones listed above, first upgrade it to a version of SLES and SLE HA that can be used as a source for upgrading to the desired target version.
The High Availability Extension 12 cluster stack comes with major changes in various components (for example, /etc/corosync/corosync.conf, disk formats of OCFS2). Therefore, a rolling upgrade from any SUSE Linux Enterprise High Availability Extension 11 version is not supported. Instead, all cluster nodes must be offline and the cluster needs to be migrated as a whole as described in Procedure 5.1, “Upgrading from Product Version 11 to 12: Cluster-Wide Offline Migration”.
Log in to each cluster node and stop the cluster stack with:
root # rcopenais stop
For each cluster node, perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension—see Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.
After the upgrade process has finished, reboot each node with the upgraded version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension.
If you use OCFS2 in your cluster setup, update the on-device structure by executing the following command:
root # o2cluster --update PATH_TO_DEVICE
It adds additional parameters to the disk. They are needed for the updated OCFS2 version that is shipped with SUSE Linux Enterprise High Availability Extension 12 and 12 SPx.
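If you are not sure which block devices contain an OCFS2 file system, you can list them by file system type first and then run the update on each of them. The device name in the second command is only an example:
root # blkid -t TYPE=ocfs2
root # o2cluster --update /dev/sdb1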
To update /etc/corosync/corosync.conf for Corosync version 2:
Log in to one node and start the YaST cluster module.
Switch to the Communication Channels category and enter values for the following new parameters: Cluster Name and Expected Votes. For details, see Procedure 4.1, “Defining the First Communication Channel (Multicast)” or Procedure 4.2, “Defining the First Communication Channel (Unicast)”, respectively.
If YaST detects any other options that are invalid or missing according to Corosync version 2, it will prompt you to change them.
Confirm your changes in YaST. YaST will write them to /etc/corosync/corosync.conf.
If Csync2 is configured for your cluster, use the following command to push the updated Corosync configuration to the other cluster nodes:
root # csync2 -xv
For details on Csync2, see Section 4.5, “Transferring the Configuration to All Nodes”.
Alternatively, synchronize the updated Corosync configuration by manually copying /etc/corosync/corosync.conf to all cluster nodes.
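For example, a manual copy could look like this (the node names node2 and node3 are placeholders for your actual cluster nodes):
root # for node in node2 node3; do scp /etc/corosync/corosync.conf root@${node}:/etc/corosync/; done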
Log in to each node and start the cluster stack with:
root # crm cluster start
Check the cluster status with crm status or with Hawk2.
Configure the following services to start at boot time:
root # systemctl enable pacemaker
root # systemctl enable hawk
root # systemctl enable sbd
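To verify the result, you can query the enablement state of the services (an optional check):
root # systemctl is-enabled pacemaker hawk sbd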
Sometimes new features are only available with the latest CIB syntax version. When you upgrade to a new product version, your CIB syntax version will not be upgraded by default.
Check your version with:
root # cibadmin -Q | grep validate-with
Upgrade to the latest CIB syntax version with:
root # cibadmin --upgrade --force
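The output of the query contains the schema version the CIB currently validates against, as part of the cib element, for example (the version string below is illustrative only):
root # cibadmin -Q | grep validate-with
<cib validate-with="pacemaker-2.10" ...>
After running the upgrade command, repeat the query to confirm that the CIB now validates against the latest schema.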
If you decide to install the cluster nodes from scratch (instead of upgrading them), see Section 2.2, “Software Requirements” for the list of modules required for SUSE Linux Enterprise High Availability Extension 15 SP1. Find more information about modules, extensions and related products in the release notes for SUSE Linux Enterprise Server 15. They are available at https://www.suse.com/releasenotes/.
Before starting the offline migration to SUSE Linux Enterprise High Availability Extension 15, manually upgrade the CIB syntax in your current cluster as described in Note: Upgrading the CIB Syntax Version.
Log in to each cluster node and stop the cluster stack with:
root # crm cluster stop
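Optionally, before proceeding, you can verify that the cluster services are really stopped on each node (a quick check assuming the default systemd units):
root # systemctl is-active pacemaker corosync
Both services should be reported as inactive.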
For each cluster node, perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension—see Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.
After the upgrade process has finished, boot each node with the upgraded version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension.
If you use Cluster LVM, you need to migrate from clvmd to lvmlockd. See the man page of lvmlockd, section changing a clvm VG to a lockd VG, and Section 21.4, “Online Migration from Mirror LV to Cluster MD”.
Start the cluster stack with:
root # crm cluster start
Check the cluster status with crm status or with Hawk2.
This section applies to the following scenarios:
Upgrading from SLE HA 12 to SLE HA 12 SP1
Upgrading from SLE HA 12 SP1 to SLE HA 12 SP2
Upgrading from SLE HA 12 SP2 to SLE HA 12 SP3
Upgrading from SLE HA 15 to SLE HA 15 SP1
Before starting an upgrade for a node, stop the cluster stack on that node.
If the cluster resource manager on a node is active during the software update, this can lead to unpredictable results like fencing of active nodes.
Log in as root on the node that you want to upgrade and stop the cluster stack:
root # crm cluster stop
Perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension. To find the details for the individual upgrade processes, see Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.
Start the cluster stack on the upgraded node to make the node rejoin the cluster:
root # crm cluster start
Take the next node offline and repeat the procedure for that node.
Check the cluster status with crm status or with Hawk2.
The new features shipped with the latest product version will only be available after all cluster nodes have been upgraded to the latest product version. Mixed version clusters are only supported for a short time frame during the rolling upgrade. Complete the rolling upgrade within one week.
Hawk2 also shows a warning if different CRM versions are detected for your cluster nodes.
Before starting an update for a node, either stop the cluster stack on that node or put the node into maintenance mode, depending on whether the cluster stack is affected or not. See Step 1 for details.
If the cluster resource manager on a node is active during the software update, this can lead to unpredictable results like fencing of active nodes.
Before installing any package updates on a node, check the following:
Does the update affect any packages belonging to SUSE Linux Enterprise High Availability Extension or Geo Clustering for SUSE Linux Enterprise High Availability Extension? (For one way to check which packages a pending update affects, see the example after this list.) If yes: Stop the cluster stack on the node before starting the software update:
root # crm cluster stop
Does the package update require a reboot? If yes: Stop the cluster stack on the node before starting the software update:
root # crm cluster stop
If none of the situations above apply, you do not need to stop the cluster stack. In that case, put the node into maintenance mode before starting the software update:
root # crm node maintenance NODE_NAME
For more details on maintenance mode, see Section 16.2, “Different Options for Maintenance Tasks”.
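For example, with Zypper you can list the pending updates and check whether any of them belong to the cluster stack. The package name pattern below is illustrative, not exhaustive:
root # zypper list-updates | grep -E 'pacemaker|corosync|crmsh|hawk|sbd'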
Install the package update using either YaST or Zypper.
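For example, to install all needed patches with Zypper (a minimal sketch; you can instead update only selected packages or use YaST Online Update):
root # zypper refresh
root # zypper patch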
After the update has been successfully installed:
Either start the cluster stack on the respective node (if you stopped it in Step 1):
root # crm cluster start
or remove the maintenance flag to bring the node back to normal mode:
root # crm node ready NODE_NAME
Check the cluster status with crm status or with Hawk2.
For detailed information about any changes and new features of the product you are upgrading to, refer to its release notes. They are available from https://www.suse.com/releasenotes/.