Applies to SUSE Linux Enterprise High Availability 15 SP2

28 Upgrading Your Cluster and Updating Software Packages

This chapter covers two different scenarios: upgrading a cluster to another version of SUSE Linux Enterprise High Availability (either a major release or a service pack), and updating individual packages on cluster nodes. See Section 28.2, “Upgrading your Cluster to the Latest Product Version” and Section 28.3, “Updating Software Packages on Cluster Nodes”, respectively.

If you want to upgrade your cluster, check Section 28.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo” and Section 28.2.2, “Required Preparations Before Upgrading” before starting to upgrade.

28.1 Terminology

In the following, find definitions of the most important terms used in this chapter:

Major Release, General Availability (GA) Version

A major release is a new product version that brings new features and tools, and decommissions previously deprecated components. It comes with backward incompatible changes.

Cluster Offline Upgrade

If a new product version includes major changes that are backward incompatible, the cluster needs to be upgraded by a cluster offline upgrade. You need to take all nodes offline and upgrade the cluster as a whole, before you can bring all nodes back online.

Cluster Rolling Upgrade

In a cluster rolling upgrade one cluster node at a time is upgraded while the rest of the cluster is still running. You take the first node offline, upgrade it and bring it back online to join the cluster. Then you continue one by one until all cluster nodes are upgraded to the target version.

Service Pack (SP)

Combines several patches into a form that is easy to install or deploy. Service packs are numbered and usually contain security fixes, updates, upgrades, or enhancements of programs.

Update

Installation of a newer minor version of a package, which usually contains security fixes and other important fixes.

Upgrade

Installation of a newer major version of a package or distribution, which brings new features. See also Cluster Offline Upgrade versus Cluster Rolling Upgrade.

28.2 Upgrading your Cluster to the Latest Product Version

Which upgrade path is supported, and how to perform the upgrade, depends on the current product version as well as on the target version you want to migrate to.

SUSE Linux Enterprise High Availability has the same supported upgrade paths as the underlying base system. For a complete overview, see the section Supported Upgrade Paths to SUSE Linux Enterprise Server 15 SP2 in the SUSE Linux Enterprise Server Upgrade Guide.

In addition, the following rules apply, as the High Availability cluster stack offers two methods for upgrading the cluster:

  • Cluster Rolling Upgrade: A cluster rolling upgrade is only supported within the same major release (from one service pack to the next, or from the GA version of a product to SP1).

  • Cluster Offline Upgrade: A cluster offline upgrade is required to upgrade from one major release to the next (for example, from SLE HA 12 to SLE HA 15) or from a service pack within one major release to the next major release (for example, from SLE HA 12 SP3 to SLE HA 15).

Section 28.2.1 lists the supported upgrade paths and methods for SLE HA (Geo), moving from one version to the next. For each path, the For Details entries list the specific upgrade documentation you should refer to (including the base system and Geo Clustering for SUSE Linux Enterprise High Availability). This documentation is available from https://documentation.suse.com/.

Important
Important: No Support for Mixed Clusters and Reversion After Upgrade
  • Mixed clusters running on SUSE Linux Enterprise High Availability 12/SUSE Linux Enterprise High Availability 15 are not supported.

  • After the upgrade process to product version 15, reverting back to product version 12 is not supported.

28.2.1 Supported Upgrade Paths for SLE HA and SLE HA Geo

SLE HA 11 SP3 to SLE HA (Geo) 12

  Upgrade Path: Cluster Offline Upgrade

SLE HA (Geo) 11 SP4 to SLE HA (Geo) 12 SP1

  Upgrade Path: Cluster Offline Upgrade

SLE HA (Geo) 12 to SLE HA (Geo) 12 SP1

  Upgrade Path: Cluster Rolling Upgrade

  For Details:

    • Base System: Deployment Guide for SLES 12 SP1, part Updating and Upgrading SUSE Linux Enterprise

    • SLE HA: Performing a Cluster Rolling Upgrade

    • SLE HA Geo: Geo Clustering Quick Start for SLE HA 12 SP1, section Upgrading to the Latest Product Version

SLE HA (Geo) 12 SP1 to SLE HA (Geo) 12 SP2

  Upgrade Path: Cluster Rolling Upgrade

SLE HA (Geo) 12 SP2 to SLE HA (Geo) 12 SP3

  Upgrade Path: Cluster Rolling Upgrade

  For Details:

    • Base System: Deployment Guide for SLES 12 SP3, part Updating and Upgrading SUSE Linux Enterprise

    • SLE HA: Performing a Cluster Rolling Upgrade

    • SLE HA Geo: Geo Clustering Guide for SLE HA 12 SP3, section Upgrading to the Latest Product Version

SLE HA (Geo) 12 SP3 to SLE HA (Geo) 12 SP4

  Upgrade Path: Cluster Rolling Upgrade

  For Details:

    • Base System: Deployment Guide for SLES 12 SP4, part Updating and Upgrading SUSE Linux Enterprise

    • SLE HA: Performing a Cluster Rolling Upgrade

    • SLE HA Geo: Geo Clustering Guide for SLE HA 12 SP4, section Upgrading to the Latest Product Version

SLE HA (Geo) 12 SP3 to SLE HA 15

  Upgrade Path: Cluster Offline Upgrade

SLE HA (Geo) 12 SP4 to SLE HA (Geo) 12 SP5

  Upgrade Path: Cluster Rolling Upgrade

SLE HA (Geo) 12 SP4 to SLE HA 15 SP1

  Upgrade Path: Cluster Offline Upgrade

SLE HA (Geo) 12 SP5 to SLE HA 15 SP2

  Upgrade Path: Cluster Offline Upgrade

SLE HA 15 to SLE HA 15 SP1

  Upgrade Path: Cluster Rolling Upgrade

SLE HA 15 SP1 to SLE HA 15 SP2

  Upgrade Path: Cluster Rolling Upgrade

Note
Note: Skipping Service Packs

The easiest upgrade path is consecutively installing all service packs. For the SUSE Linux Enterprise 15 product line (GA and the subsequent service packs) it is also supported to skip one service pack when upgrading. For example, upgrading from SLE HA 15 GA to 15 SP2 or from SLE HA 15 SP1 to 15 SP3 is supported.

28.2.2 Required Preparations Before Upgrading

Backup

Ensure that your system backup is up to date and restorable.

Testing

Test the upgrade procedure on a staging instance of your cluster setup first, before performing it in a production environment. This gives you an estimation of the time frame required for the maintenance window. It also helps to detect and solve any unexpected problems that might arise.

28.2.3 Cluster Offline Upgrade

This section applies to the following scenarios:

  • Upgrading from product version 11 to product version 12

  • Upgrading from product version 12 to product version 15

If your cluster is still based on an older product version than the ones listed above, first upgrade it to a version of SLES and SLE HA that can be used as a source for upgrading to the desired target version.

Procedure 28.1: Upgrading from Product Version 11 to 12: Cluster Offline Upgrade

The SUSE Linux Enterprise High Availability 12 cluster stack comes with major changes in various components (for example, /etc/corosync/corosync.conf, disk formats of OCFS2). Therefore, a cluster rolling upgrade from any SUSE Linux Enterprise High Availability 11 version is not supported. Instead, all cluster nodes must be offline and the cluster needs to be upgraded as a whole, as described below.

  1. Log in to each cluster node and stop the cluster stack with:

    # rcopenais stop
  2. For each cluster node, perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability—see Section 28.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.

  3. After the upgrade process has finished, reboot each node with the upgraded version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability.

  4. If you use OCFS2 in your cluster setup, update the on-device structure by executing the following command:

    # o2cluster --update PATH_TO_DEVICE

    The command adds additional parameters to the disk that are needed for the updated OCFS2 version shipped with SUSE Linux Enterprise High Availability 12 and 12 SPx.

  5. To update /etc/corosync/corosync.conf for Corosync version 2:

    1. Log in to one node and start the YaST cluster module.

    2. Switch to the Communication Channels category and enter values for the following new parameters: Cluster Name and Expected Votes. For details, see Procedure 4.1, “Defining the First Communication Channel (Multicast)” or Procedure 4.2, “Defining the First Communication Channel (Unicast)”, respectively. An example of the resulting configuration excerpt is shown after this procedure.

      If YaST detects any other options that are invalid or missing for Corosync version 2, it prompts you to change them.

    3. Confirm your changes in YaST. YaST will write them to /etc/corosync/corosync.conf.

    4. If Csync2 is configured for your cluster, use the following command to push the updated Corosync configuration to the other cluster nodes:

      # csync2 -xv

      For details on Csync2, see Section 4.7, “Transferring the configuration to all nodes”.

      Alternatively, synchronize the updated Corosync configuration by manually copying /etc/corosync/corosync.conf to all cluster nodes.

  6. Log in to each node and start the cluster stack with:

    # crm cluster start
  7. Check the cluster status with crm status or with Hawk2.

  8. Configure the following services to start at boot time:

    # systemctl enable pacemaker
    # systemctl enable hawk
    # systemctl enable sbd
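
For orientation, the Corosync version 2 parameters entered in Step 5 end up in /etc/corosync/corosync.conf in a form similar to the following excerpt. This is only an illustrative sketch: the cluster name and the vote count are placeholder values, and the file written by YaST contains further options (for example, the interface definitions).

    totem {
        version: 2
        # cluster_name is new in Corosync version 2; hacluster is an example value
        cluster_name: hacluster
    }

    quorum {
        # the votequorum provider and expected_votes are also new in Corosync version 2
        provider: corosync_votequorum
        # adjust expected_votes to the number of nodes in your cluster
        expected_votes: 2
    }
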
Note
Note: Upgrading the CIB Syntax Version

Sometimes new features are only available with the latest CIB syntax version. When you upgrade to a new product version, your CIB syntax version will not be upgraded by default.

  1. Check your version with:

    # cibadmin -Q | grep validate-with
  2. Upgrade to the latest CIB syntax version with:

    # cibadmin --upgrade --force
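
In the first command above, cibadmin prints the root element of the CIB, which carries the validate-with attribute. The output looks similar to the following abbreviated line; the schema version shown here is only an example and differs depending on your cluster:

    # cibadmin -Q | grep validate-with
    <cib ... validate-with="pacemaker-2.10" ...>
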
Procedure 28.2: Upgrading from Product Version 12 to 15: Cluster Offline Upgrade
Important
Important: Installation from Scratch

If you decide to install the cluster nodes from scratch (instead of upgrading them), see Section 2.2, “Software Requirements” for the list of modules required for SUSE Linux Enterprise High Availability 15 SP2. Find more information about modules, extensions and related products in the release notes for SUSE Linux Enterprise Server 15. They are available at https://www.suse.com/releasenotes/.

  1. Before starting the offline upgrade to SUSE Linux Enterprise High Availability 15, manually upgrade the CIB syntax in your current cluster as described in Note: Upgrading the CIB Syntax Version.

  2. Log in to each cluster node and stop the cluster stack with:

    # crm cluster stop
  3. For each cluster node, perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability—see Section 28.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.

  4. After the upgrade process has finished, log in to each node and boot it with the upgraded version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability.

  5. If you use Cluster LVM, you need to migrate from clvmd to lvmlockd. See the man page of lvmlockd, section Changing a clvm/clustered VG to a shared VG. A minimal command sketch is shown after this procedure.

    If you also use cmirrord, we highly recommend migrating to Cluster MD. See Section 23.4, “Online Migration from Mirror LV to Cluster MD”.

  6. Start the cluster stack with:

    # crm cluster start
  7. Check the cluster status with crm status or with Hawk2.
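
For the clvmd-to-lvmlockd migration mentioned in Step 5, the central conversion command looks similar to the following sketch. This is not a complete procedure: vg1 is a placeholder volume group name, all logical volumes in the group must be inactive during the conversion, and you should follow the exact sequence described in the lvmlockd man page for your setup:

    # vgchange --lock-type dlm vg1
    # vgchange --lock-start vg1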

28.2.4 Cluster Rolling Upgrade

This section applies to the following scenarios:

  • Upgrading from SLE HA 12 to SLE HA 12 SP1

  • Upgrading from SLE HA 12 SP1 to SLE HA 12 SP2

  • Upgrading from SLE HA 12 SP2 to SLE HA 12 SP3

  • Upgrading from SLE HA 12 SP3 to SLE HA 12 SP4

  • Upgrading from SLE HA 12 SP4 to SLE HA 12 SP5

  • Upgrading from SLE HA 15 to SLE HA 15 SP1

  • Upgrading from SLE HA 15 SP1 to SLE HA 15 SP2

Use one of the following procedures for your scenario:

Warning
Warning: Active Cluster Stack

Before starting an upgrade for a node, stop the cluster stack on that node.

If the cluster resource manager on a node is active during the software update, this can lead to results such as fencing of active nodes.

Important
Important: Time Limit for Cluster Rolling Upgrade

The new features shipped with the latest product version will only be available after all cluster nodes have been upgraded to the latest product version. Mixed version clusters are only supported for a short time frame during the cluster rolling upgrade. Complete the cluster rolling upgrade within one week.

Once all the online nodes are running the upgraded version, it is not possible for any other nodes with the old version to (re-)join without having been upgraded.
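
As stated in the warning above, the cluster stack must be stopped on a node before it is upgraded. One minimal way to verify this is to query the state of the cluster services with systemd. The output below is what you can expect after crm cluster stop; depending on your setup, there may be further services to check:

    # systemctl is-active pacemaker corosync
    inactive
    inactive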

Procedure 28.3: Performing a Cluster Rolling Upgrade
  1. Log in as root on the node that you want to upgrade and stop the cluster stack:

    # crm cluster stop
  2. Perform an upgrade to the desired target version of SUSE Linux Enterprise Server and SUSE Linux Enterprise High Availability. To find the details for the individual upgrade processes, see Section 28.2.1, “Supported Upgrade Paths for SLE HA and SLE HA Geo”.

  3. Start the cluster stack on the upgraded node to make the node rejoin the cluster:

    # crm cluster start
  4. Take the next node offline and repeat the procedure for that node.

  5. Check the cluster status with crm status or with Hawk2.

    The Hawk2 Status screen also shows a warning if different CRM versions are detected for your cluster nodes.
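
During the time frame in which mixed versions are allowed, you can also compare the installed cluster packages on each node to see which nodes still need to be upgraded. This is only a rough sketch; the package selection is an example:

    # rpm -q corosync pacemaker crmsh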


Besides an in-place upgrade, many customers prefer a fresh installation, even when only moving to the next service pack. The following procedure shows a scenario where a two-node cluster with the nodes alice and bob is upgraded to the next service pack (SP):

Procedure 28.4: Performing a Cluster-wide Fresh Installation of a New Service Pack
  1. Make a backup of your cluster configuration. A minimum set of files is shown in the following list; a sketch for creating and restoring such a backup is shown after this procedure:

    /etc/corosync/corosync.conf
    /etc/corosync/authkey
    /etc/sysconfig/sbd
    /etc/modules-load.d/watchdog.conf
    /etc/hosts
    /etc/chrony.conf

    Depending on your resources, you may also need the following files:

    /etc/services
    /etc/passwd
    /etc/shadow
    /etc/group
    /etc/drbd/*
    /etc/lvm/lvm.conf
    /etc/mdadm.conf
    /etc/mdadm.SID.conf
  2. Start with node alice.

    1. Put the node into standby mode. That way, resources can move off the node:

      # crm --wait node standby alice reboot

      With the option --wait, the command returns only when the cluster finishes the transition and becomes idle. The reboot option causes the node to leave standby mode automatically as soon as it is back online. Despite its name, the reboot option applies whenever the node goes offline and comes back online, not only after an actual reboot.

    2. Stop the cluster services on node alice:

      # crm cluster stop
    3. At this point, alice does not have running resources anymore. Upgrade the node alice and reboot it afterward. Cluster services are assumed not to start on boot.

    4. Copy your backup files from Step 1 to the original places.

    5. Bring node alice back into the cluster:

      # crm cluster start
    6. Check that resources are fine.

  3. Repeat Step 2 for node bob.
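
The backup from Step 1 can be created and restored with standard tools. The following is a minimal sketch: the archive path is an example, and the file list must be extended with the resource-specific files that apply to your setup.

    # tar czf /root/ha-config-backup.tar.gz /etc/corosync/corosync.conf \
      /etc/corosync/authkey /etc/sysconfig/sbd \
      /etc/modules-load.d/watchdog.conf /etc/hosts /etc/chrony.conf

Because tar stores the paths relative to the root directory, the files can be restored to their original places after the fresh installation by extracting the archive against /:

    # tar xzf /root/ha-config-backup.tar.gz -C /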

28.3 Updating Software Packages on Cluster Nodes

Warning
Warning: Active Cluster Stack

Before starting a package update for a node, either stop the cluster stack on that node or put the node into maintenance mode, depending on whether the cluster stack is affected or not. See Step 1 for details.

If the cluster resource manager on a node is active during the software update, this can lead to results such as fencing of active nodes.

  1. Before installing any package updates on a node, check the following:

    • Does the update affect any packages belonging to SUSE Linux Enterprise High Availability? (See the sketch after this list for one way to check.) If yes: Stop the cluster stack on the node before starting the software update:

      # crm cluster stop
    • Does the package update require a reboot? If yes: Stop the cluster stack on the node before starting the software update:

      # crm cluster stop
    • If none of the situations above apply, you do not need to stop the cluster stack. In that case, put the node into maintenance mode before starting the software update:

      # crm node maintenance NODE_NAME

      For more details on maintenance mode, see Section 27.2, “Different Options for Maintenance Tasks”.

  2. Install the package update using either YaST or Zypper.

  3. After the update has been successfully installed:

    • Either start the cluster stack on the respective node (if you stopped it in Step 1):

      # crm cluster start
    • or remove the maintenance flag to bring the node back to normal mode:

      # crm node ready NODE_NAME
  4. Check the cluster status with crm status or with Hawk2.
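
To find out whether pending updates affect packages belonging to SUSE Linux Enterprise High Availability (the first check in Step 1), you can list the available updates and filter for the relevant packages. This is only a rough sketch; the pattern below is an example and does not necessarily cover every package shipped with the extension:

    # zypper list-updates | grep -Ei 'pacemaker|corosync|crmsh|hawk|sbd|drbd'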

28.4 For More Information

For detailed information about any changes and new features of the product you are upgrading to, refer to its release notes. They are available from https://www.suse.com/releasenotes/.