Applies to SUSE Linux Enterprise High Availability Extension 15 SP1

5 Upgrading Your Cluster and Updating Software Packages

Abstract

This chapter covers two different scenarios: upgrading a cluster to another version of SUSE Linux Enterprise High Availability Extension (either a major release or a service pack), and updating individual packages on cluster nodes. See Section 5.2, “Upgrading your Cluster to the Latest Product Version” and Section 5.3, “Updating Software Packages on Cluster Nodes”, respectively.

If you want to upgrade your cluster, check Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo” and Section 5.2.2, “Required Preparations Before Upgrading” before starting to upgrade.

5.1 Terminology

In the following, find definitions of the most important terms used in this chapter:

Major Release, General Availability (GA) Version

A major release is a new product version that brings new features and tools, and decommissions previously deprecated components. It comes with backward incompatible changes.

Offline Migration

If a new product version includes major changes that are backward incompatible, the cluster needs to be upgraded by an offline migration. You need to take all nodes offline and upgrade the cluster as a whole, before you can bring all nodes back online.

Rolling Upgrade

In a rolling upgrade, one cluster node at a time is upgraded while the rest of the cluster is still running. You take the first node offline, upgrade it, and bring it back online to join the cluster. Then you continue one by one until all cluster nodes are upgraded to the desired target version.

Service Pack (SP)

Combines several patches into a form that is easy to install or deploy. Service packs are numbered and usually contain security fixes, updates, upgrades, or enhancements of programs.

Update

Installation of a newer minor version of a package, which usually contains security fixes and other important fixes.

Upgrade

Installation of a newer major version of a package or distribution, which brings new features. See also Offline Migration versus Rolling Upgrade.
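The distinction between an update and an upgrade can be sketched as a simple version comparison. The following is a minimal shell sketch with hypothetical version numbers, assuming that a change of the major version counts as an upgrade and anything else as an update:

```shell
# Sketch: classify a version change as an update or an upgrade.
# A changed major version means an upgrade; otherwise it is an update.
classify_change() {
  old_major=${1%%.*}   # major part of the old version, e.g. 12 from 12.3
  new_major=${2%%.*}   # major part of the new version
  if [ "$old_major" = "$new_major" ]; then
    echo "update"
  else
    echo "upgrade"
  fi
}

classify_change 12.3 12.4   # prints "update" (new service pack, same major release)
classify_change 12.4 15.0   # prints "upgrade" (new major release)
```

Note that a service pack (for example, 12 SP3 to 12 SP4) counts as an upgrade of the product, but stays within the same major release, which is why a rolling upgrade is possible there.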

5.2 Upgrading your Cluster to the Latest Product Version

Which upgrade path is supported, and how to perform the upgrade, depends on the current product version and on the target version you want to migrate to.

  • Rolling upgrades are only supported within the same major release (from the GA of a product version to the next service pack, and from one service pack to the next).

  • Offline migrations are required to upgrade from one major release to the next (for example, from SLE HA 12 to SLE HA 15) or from a service pack within one major release to the next major release (for example, from SLE HA 12 SP3 to SLE HA 15).

Section 5.2.1 gives an overview of the supported upgrade paths for SLE HA (Geo). The column For Details lists the specific upgrade documentation you should refer to (including the documentation for the base system and for Geo Clustering for SUSE Linux Enterprise High Availability Extension). This documentation is available from:

Important
Important: No Support for Mixed Clusters and Reversion After Upgrade
  • Mixed clusters running on SUSE Linux Enterprise High Availability Extension 12 and SUSE Linux Enterprise High Availability Extension 15 are not supported.

  • After upgrading to product version 15, reverting to product version 12 is not supported.

5.2.1 Supported Upgrade Paths for SLE HA and SLE HA Geo

Upgrade From ... To / Upgrade Path / For Details

SLE HA 11 SP3 to SLE HA (Geo) 12
Upgrade Path: Offline Migration

SLE HA (Geo) 11 SP4 to SLE HA (Geo) 12 SP1
Upgrade Path: Offline Migration

SLE HA (Geo) 12 to SLE HA (Geo) 12 SP1
Upgrade Path: Rolling Upgrade
For Details:
  • Base System: Deployment Guide for SLES 12 SP1, part Updating and Upgrading SUSE Linux Enterprise
  • SLE HA: Performing a Cluster-wide Rolling Upgrade
  • SLE HA Geo: Geo Clustering Quick Start for SLE HA 12 SP1, section Upgrading to the Latest Product Version

SLE HA (Geo) 12 SP1 to SLE HA (Geo) 12 SP2
Upgrade Path: Rolling Upgrade

SLE HA (Geo) 12 SP2 to SLE HA (Geo) 12 SP3
Upgrade Path: Rolling Upgrade
For Details:
  • Base System: Deployment Guide for SLES 12 SP3, part Updating and Upgrading SUSE Linux Enterprise
  • SLE HA: Performing a Cluster-wide Rolling Upgrade
  • SLE HA Geo: Geo Clustering Guide for SLE HA 12 SP3, section Upgrading to the Latest Product Version

SLE HA (Geo) 12 SP3 to SLE HA (Geo) 12 SP4
Upgrade Path: Rolling Upgrade
For Details:
  • Base System: SUSE Linux Enterprise Server 12 SP4 Deployment Guide, part Updating and Upgrading SUSE Linux Enterprise
  • SLE HA: Performing a Cluster-wide Rolling Upgrade
  • SLE HA Geo: Geo Clustering for SUSE Linux Enterprise High Availability Extension 12 SP4 Geo Clustering Quick Start, section Upgrading to the Latest Product Version

SLE HA (Geo) 12 SP3 to SLE HA (Geo) 15
Upgrade Path: Offline Migration

SLE HA (Geo) 15 to SLE HA (Geo) 15 SP1
Upgrade Path: Rolling Upgrade

5.2.2 Required Preparations Before Upgrading

Backup

Ensure that your system backup is up to date and restorable.

Testing

Test the upgrade procedure on a staging instance of your cluster setup first, before performing it in a production environment. This gives you an estimate of the time frame required for the maintenance window. It also helps to detect and solve any unexpected problems that might arise.

5.2.3 Offline Migration

This section applies to the following scenarios:

  • Upgrading from SLE HA 11 to SLE HA 12 (see Procedure 5.1)

  • Upgrading from SLE HA 12 to SLE HA 15 (see Procedure 5.2)

If your cluster is still based on an older product version than the ones listed above, first upgrade it to a version of SLES and SLE HA that can be used as a source for upgrading to the desired target version.

The High Availability Extension 12 cluster stack comes with major changes in various components (for example, /etc/corosync/corosync.conf, disk formats of OCFS2). Therefore, a rolling upgrade from any SUSE Linux Enterprise High Availability Extension 11 version is not supported. Instead, all cluster nodes must be offline and the cluster needs to be migrated as a whole as described in Procedure 5.1, “Upgrading from Product Version 11 to 12: Cluster-Wide Offline Migration”.

Procedure 5.1: Upgrading from Product Version 11 to 12: Cluster-Wide Offline Migration
  1. Log in to each cluster node and stop the cluster stack with:

    root # rcopenais stop
  2. For each cluster node, perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension—see Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.

  3. After the upgrade process has finished, reboot each node with the upgraded version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension.

  4. If you use OCFS2 in your cluster setup, update the on-device structure by executing the following command:

    root # o2cluster --update PATH_TO_DEVICE

    It adds additional parameters to the disk that are needed for the updated OCFS2 version shipped with SUSE Linux Enterprise High Availability Extension 12 and 12 SPx.

  5. To update /etc/corosync/corosync.conf for Corosync version 2:

    1. Log in to one node and start the YaST cluster module.

    2. Switch to the Communication Channels category and enter values for the following new parameters: Cluster Name and Expected Votes. For details, see Procedure 4.1, “Defining the First Communication Channel (Multicast)” or Procedure 4.2, “Defining the First Communication Channel (Unicast)”, respectively.

      If YaST detects any other options that are invalid or missing according to Corosync version 2, it will prompt you to change them.

    3. Confirm your changes in YaST. YaST will write them to /etc/corosync/corosync.conf.

    4. If Csync2 is configured for your cluster, use the following command to push the updated Corosync configuration to the other cluster nodes:

      root # csync2 -xv

      For details on Csync2, see Section 4.5, “Transferring the Configuration to All Nodes”.

      Alternatively, synchronize the updated Corosync configuration by manually copying /etc/corosync/corosync.conf to all cluster nodes.

  6. Log in to each node and start the cluster stack with:

    root # crm cluster start
  7. Check the cluster status with crm status or with Hawk2.

  8. Configure the following services to start at boot time:

    root # systemctl enable pacemaker
    root # systemctl enable hawk
    root # systemctl enable sbd
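The manual copy of /etc/corosync/corosync.conf mentioned as an alternative in Step 5 can be scripted. The following is a minimal sketch, where the node names are placeholders for your actual cluster nodes; the commands are echoed first so you can preview them before running the real scp:

```shell
# Sketch: push /etc/corosync/corosync.conf to all other cluster nodes.
# node2 and node3 are placeholder host names; replace them with yours.
# Drop the leading "echo" to actually perform the copy.
NODES="node2 node3"
for node in $NODES; do
  echo scp /etc/corosync/corosync.conf root@"$node":/etc/corosync/corosync.conf
done
```

If Csync2 is configured, prefer csync2 -xv as shown above, since it also keeps the other synchronized files consistent.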
Note
Note: Upgrading the CIB Syntax Version

Sometimes new features are only available with the latest CIB syntax version. When you upgrade to a new product version, your CIB syntax version will not be upgraded by default.

  1. Check your version with:

    root # cibadmin -Q | grep validate-with
  2. Upgrade to the latest CIB syntax version with:

    root # cibadmin --upgrade --force
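The validate-with value returned by the check above can also be extracted in a script. The following is a minimal sketch, where the sample line stands in for real cibadmin -Q output and the schema version shown is hypothetical:

```shell
# Sketch: extract the CIB schema version from the cib element.
# The sample line below stands in for output of `cibadmin -Q`;
# on a live cluster you would pipe the real output instead.
sample='<cib validate-with="pacemaker-2.10" epoch="5" num_updates="0">'
schema=$(printf '%s\n' "$sample" | grep -o 'validate-with="[^"]*"' | cut -d'"' -f2)
echo "$schema"   # prints "pacemaker-2.10"
```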
Procedure 5.2: Upgrading from Product Version 12 to 15: Cluster-Wide Offline Migration
Important
Important: Installation from Scratch

If you decide to install the cluster nodes from scratch (instead of upgrading them), see Section 2.2, “Software Requirements” for the list of modules required for SUSE Linux Enterprise High Availability Extension 15 SP1. Find more information about modules, extensions and related products in the release notes for SUSE Linux Enterprise Server 15. They are available at https://www.suse.com/releasenotes/.

  1. Before starting the offline migration to SUSE Linux Enterprise High Availability Extension 15, manually upgrade the CIB syntax in your current cluster as described in Note: Upgrading the CIB Syntax Version.

  2. Log in to each cluster node and stop the cluster stack with:

    root # crm cluster stop
  3. For each cluster node, perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension—see Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.

  4. After the upgrade process has finished, log in to each node and boot it with the upgraded version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension.

  5. If you use Cluster LVM, you need to migrate from clvmd to lvmlockd. See the man page of lvmlockd, section changing a clvm VG to a lockd VG and Section 21.4, “Online Migration from Mirror LV to Cluster MD”.

  6. Start the cluster stack with:

    root # crm cluster start
  7. Check the cluster status with crm status or with Hawk2.

5.2.4 Rolling Upgrade

This section applies to the following scenarios:

  • Upgrading from SLE HA 12 to SLE HA 12 SP1

  • Upgrading from SLE HA 12 SP1 to SLE HA 12 SP2

  • Upgrading from SLE HA 12 SP2 to SLE HA 12 SP3

  • Upgrading from SLE HA 15 to SLE HA 15 SP1

Warning
Warning: Active Cluster Stack

Before starting an upgrade for a node, stop the cluster stack on that node.

If the cluster resource manager on a node is active during the software update, this can lead to unpredictable results like fencing of active nodes.

Procedure 5.3: Performing a Cluster-wide Rolling Upgrade
  1. Log in as root on the node that you want to upgrade and stop the cluster stack:

    root # crm cluster stop
  2. Perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability Extension. To find the details for the individual upgrade processes, see Section 5.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.

  3. Start the cluster stack on the upgraded node to make the node rejoin the cluster:

    root # crm cluster start
  4. Take the next node offline and repeat the procedure for that node.

  5. Check the cluster status with crm status or with Hawk2.
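The per-node cycle above can be sketched as a loop. In the following sketch, the node names and the ssh-based orchestration are assumptions, and the upgrade itself (Step 2) is deliberately left as a placeholder comment; the commands are echoed so you can preview the sequence:

```shell
# Sketch: rolling upgrade, one node at a time.
# node1..node3 are placeholder host names; replace them with yours.
# Drop the leading "echo" to run the commands via ssh for real.
for node in node1 node2 node3; do
  echo ssh root@"$node" "crm cluster stop"
  echo "# ... perform the OS and SLE HA upgrade on $node (Step 2) ..."
  echo ssh root@"$node" "crm cluster start"
done
```

Wait for each node to rejoin the cluster (check with crm status) before moving on to the next one.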

Important
Important: Time Limit for Rolling Upgrade

The new features shipped with the latest product version only become available after all cluster nodes have been upgraded to that version. Mixed-version clusters are only supported for a short time frame during the rolling upgrade. Complete the rolling upgrade within one week.

The Hawk2 Status screen also shows a warning if different CRM versions are detected for your cluster nodes.
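A mixed-version cluster can also be detected from the command line. The following is a minimal sketch, where the sample values are hypothetical stand-ins for per-node crm_feature_set values collected from the cluster (for example, from cibadmin -Q):

```shell
# Sketch: warn if the nodes report more than one crm_feature_set value.
# The values below are hypothetical stand-ins for per-node data.
versions="3.0.14
3.1.0"
unique=$(printf '%s\n' "$versions" | sort -u | wc -l)
if [ "$unique" -gt 1 ]; then
  echo "warning: mixed-version cluster detected"
else
  echo "all nodes on the same version"
fi
```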

5.3 Updating Software Packages on Cluster Nodes

Warning
Warning: Active Cluster Stack

Before starting an update for a node, either stop the cluster stack on that node or put the node into maintenance mode, depending on whether the cluster stack is affected or not. See Step 1 for details.

If the cluster resource manager on a node is active during the software update, this can lead to unpredictable results like fencing of active nodes.

  1. Before installing any package updates on a node, check the following:

    • Does the update affect any packages belonging to SUSE Linux Enterprise High Availability Extension or Geo Clustering for SUSE Linux Enterprise High Availability Extension? If yes: Stop the cluster stack on the node before starting the software update:

      root # crm cluster stop
    • Does the package update require a reboot? If yes: Stop the cluster stack on the node before starting the software update:

      root # crm cluster stop
    • If none of the situations above apply, you do not need to stop the cluster stack. In that case, put the node into maintenance mode before starting the software update:

      root # crm node maintenance NODE_NAME

      For more details on maintenance mode, see Section 16.2, “Different Options for Maintenance Tasks”.

  2. Install the package update using either YaST or Zypper.

  3. After the update has been successfully installed:

    • Either start the cluster stack on the respective node (if you stopped it in Step 1):

      root # crm cluster start
    • or remove the maintenance flag to bring the node back to normal mode:

      root # crm node ready NODE_NAME
  4. Check the cluster status with crm status or with Hawk2.
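The decision in Step 1 can be summarized as a small sketch. The two flags are placeholders you would set per update, and NODE_NAME is the placeholder used throughout this section:

```shell
# Sketch of the decision logic from Step 1 above.
# Set these two flags for the update at hand (placeholders):
affects_cluster_stack=no   # does the update touch SLE HA / Geo packages?
needs_reboot=no            # does the update require a reboot?

if [ "$affects_cluster_stack" = yes ] || [ "$needs_reboot" = yes ]; then
  action="crm cluster stop"                 # stop the cluster stack first
else
  action="crm node maintenance NODE_NAME"   # maintenance mode is enough
fi
echo "Before updating, run: $action"
```

After the update, undo whichever action was taken: crm cluster start in the first case, crm node ready NODE_NAME in the second.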

5.4 For More Information

For detailed information about any changes and new features of the product you are upgrading to, refer to its release notes. They are available from https://www.suse.com/releasenotes/.
