Applies to SUSE Enterprise Storage 7.1

11 Upgrade from SUSE Enterprise Storage 7 to 7.1

This chapter describes the steps to upgrade from SUSE Enterprise Storage 7 to version 7.1.

The upgrade includes the following tasks:

  • Upgrading the underlying operating system from SUSE Linux Enterprise Server 15 SP2 to SUSE Linux Enterprise Server 15 SP3.

  • Upgrading from Ceph Octopus to Pacific.

11.1 Before upgrading

Complete the following tasks before you start the upgrade. You can do so at any time during the SUSE Enterprise Storage 7 lifetime.

11.1.1 Points to consider

Before upgrading, read through the following sections to make sure you understand all tasks that need to be executed.

  • Read the release notes. In them, you can find additional information on changes since the previous release of SUSE Enterprise Storage. Check the release notes to see whether:

    • Your hardware needs special considerations.

    • Any used software packages have changed significantly.

    • Special precautions are necessary for your installation.

    The release notes also provide information that could not be included in the manual in time, as well as notes about known issues.

    You can find the SES 7.1 release notes online at https://www.suse.com/releasenotes/.

    Additionally, after installing the release-notes-ses package from the SES 7.1 repository, you can find the release notes locally in the /usr/share/doc/release-notes directory.

  • Read Part II, “Deploying Ceph Cluster” to familiarize yourself with ceph-salt and the Ceph orchestrator, in particular the information on service specifications.

11.1.2 Backing up cluster configuration and data

We strongly recommend backing up all cluster configuration and data before starting the upgrade. For instructions on how to back up all your data, see Chapter 15, Backup and restore.

11.1.3 Verifying access to software repositories and container images

Verify that each cluster node has access to the SUSE Linux Enterprise Server 15 SP3 and SUSE Enterprise Storage 7.1 software repositories, as well as the registry of container images.

11.1.3.1 Software repositories

If all nodes are registered with SCC, you will be able to use the zypper migration command to upgrade. Refer to https://documentation.suse.com/sles/15-SP3/html/SLES-all/cha-upgrade-online.html#sec-upgrade-online-zypper for more details.
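
To check whether a node is registered, you can, for example, query its subscription status from the Salt Master; this is a quick sketch using the standard SUSEConnect client:

root@master # salt '*' cmd.shell "SUSEConnect --status-text"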

If nodes are not registered with SCC, disable all existing software repositories and add both the Pool and Updates repositories for each of the following products and modules, as sketched in the example after the list:

  • SLE-Product-SLES/15-SP3

  • SLE-Module-Basesystem/15-SP3

  • SLE-Module-Server-Applications/15-SP3

  • SUSE-Enterprise-Storage-7.1
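
As a minimal sketch of the manual approach, assuming the repositories are mirrored on a local RMT server (the rmt.example.com URLs and repository aliases below are placeholders), the switch can be driven from the Salt Master:

root@master # salt '*' cmd.shell "zypper modifyrepo --disable --all"
root@master # salt '*' cmd.shell "zypper addrepo http://rmt.example.com/SUSE-Enterprise-Storage-7.1-Pool SES-7.1-Pool"
root@master # salt '*' cmd.shell "zypper addrepo http://rmt.example.com/SUSE-Enterprise-Storage-7.1-Updates SES-7.1-Updates"

Repeat the addrepo step for the Pool and Updates repositories of the remaining products and modules listed above.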

11.1.3.2 Container images

All cluster nodes need access to the container image registry. In most cases, you will use the public SUSE registry at registry.suse.com. You need the following images (a verification example follows the list):

  • registry.suse.com/ses/7.1/ceph/ceph

  • registry.suse.com/ses/7.1/ceph/grafana

  • registry.suse.com/ses/7.1/ceph/prometheus-server

  • registry.suse.com/ses/7.1/ceph/prometheus-node-exporter

  • registry.suse.com/ses/7.1/ceph/prometheus-alertmanager
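
To confirm that every node can reach the registry, you can, for instance, pre-fetch the Ceph image on all nodes. Pulling the image both verifies registry access and warms the local image cache before the upgrade:

root@master # salt '*' cmd.shell "podman pull registry.suse.com/ses/7.1/ceph/ceph"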

Alternatively—for example, for air-gapped deployments—configure a local registry and verify that you have the correct set of container images available. Refer to Section 7.2.10, “Using the container registry” for more details about configuring a local container image registry.

Tip: Remove unused container images

Optionally, remove unused container images remaining on the system after the upgrade:

root@master # salt '*' cmd.shell "podman image prune --all --force"

11.2 Migrate SUSE Linux Enterprise Server on each cluster node to version 15 SP3

If the cluster nodes are configured to use SUSE Customer Center, you can use the zypper migration command.

If the cluster nodes have software repositories configured manually, you need to upgrade the nodes manually.

For detailed information about upgrading SUSE Linux Enterprise Server using zypper, refer to https://documentation.suse.com/sles/15-SP3/html/SLES-all/cha-upgrade-online.html#sec-upgrade-online-zypper.
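
As a sketch, run on each cluster node in turn. On a node registered with SUSE Customer Center, the interactive migration lists the available targets and lets you select SLES 15 SP3:

root # zypper migration

On a node with manually configured repositories, after switching to the 15 SP3 repositories as described in Section 11.1.3.1, perform a distribution upgrade:

root # zypper refresh
root # zypper dist-upgrade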

11.3 Update SUSE Enterprise Storage related packages on each cluster node

To update SUSE Enterprise Storage packages to the latest version, use the following command:

root@master # salt -G 'ceph-salt:member' saltutil.sync_all
cephuser@adm > ceph-salt update

For more details, refer to Section 13.6, “Updating the cluster nodes”.
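
By default, ceph-salt reboots nodes if the update requires it. Assuming your ceph-salt version supports the flag, you can skip the automatic reboots and schedule them yourself:

cephuser@adm > ceph-salt update --no-reboot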

11.4 Upgrade existing Ceph cluster services

Upgrade the whole Ceph cluster to Pacific by running the following command from the Admin Node:

cephuser@adm > ceph orch upgrade start --image registry.suse.com/ses/7.1/ceph/ceph
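
While the upgrade is running, you can follow its progress, and afterwards verify that all daemons run the new version; ceph orch upgrade status and ceph versions are the standard cephadm facilities for this:

cephuser@adm > ceph orch upgrade status
cephuser@adm > ceph versions
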
Note

For upgrading the monitoring container images, refer to Section 16.2, “Updating monitoring services” and Section 16.1, “Configuring custom or local images”. The steps are the same as when upgrading container images during a maintenance update.
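
As a sketch of that procedure, you would point the orchestrator at the 7.1 Prometheus image and redeploy the service; the mgr/cephadm/container_image_* configuration keys are the standard cephadm ones, and the redeploy step assumes the orchestrator in your release supports redeploying by service name (see the referenced sections for the exact commands):

cephuser@adm > ceph config set mgr mgr/cephadm/container_image_prometheus registry.suse.com/ses/7.1/ceph/prometheus-server
cephuser@adm > ceph orch redeploy prometheus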

11.5 Gateway service upgrade

11.5.1 Upgrading the Object Gateway

Important

SUSE Enterprise Storage 7.1 does not use the rgw_frontend_ssl_key option. Instead, both the SSL key and certificate are concatenated under the rgw_frontend_ssl_certificate option. If the Object Gateway deployment uses the rgw_frontend_ssl_key option, it will not be available after the upgrade to SUSE Enterprise Storage 7.1. In this case, the Object Gateway must be redeployed with the rgw_frontend_ssl_certificate option. Refer to Section 8.3.4.1, “Using secure SSL access” for more details.
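
A minimal sketch of such a redeployment follows; the service_id, placement host, and certificate contents are placeholders, and the SSL key and certificate are concatenated inside rgw_frontend_ssl_certificate as described above:

cephuser@adm > cat rgw-ssl.yaml
service_type: rgw
service_id: myrealm.myzone
placement:
  hosts:
  - ses-rgw-1
spec:
  ssl: true
  rgw_frontend_ssl_certificate: |
    -----BEGIN PRIVATE KEY-----
    [...]
    -----END PRIVATE KEY-----
    -----BEGIN CERTIFICATE-----
    [...]
    -----END CERTIFICATE-----
cephuser@adm > ceph orch apply -i rgw-ssl.yaml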

11.5.2 Upgrading NFS Ganesha

Important

The upgrade process disables the nfs module in the Ceph Manager daemon. You can re-enable it by executing the following command from the Admin Node:

cephuser@adm > ceph mgr module enable nfs
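
To confirm that the module is active again, you can list the enabled Ceph Manager modules, for example:

cephuser@adm > ceph mgr module ls | grep nfs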