Applies to SUSE Enterprise Storage 5.5 (SES 5 & SES 5.5)

5 Upgrading from Previous Releases

This chapter introduces steps to upgrade SUSE Enterprise Storage from the previous release(s) to version 5.5.

5.1 Read the Release Notes

In the release notes you can find additional information on changes since the previous release of SUSE Enterprise Storage. Check the release notes to see whether:

  • your hardware needs special considerations.

  • any used software packages have changed significantly.

  • special precautions are necessary for your installation.

The release notes also provide information that could not make it into the manual on time. They also contain notes about known issues.

After installing the package release-notes-ses, find the release notes locally in the directory /usr/share/doc/release-notes or online at https://www.suse.com/releasenotes/.

5.2 General Upgrade Procedure

Consider the following items before starting the upgrade procedure:

Upgrade Order

Before upgrading the Ceph cluster, you need to have both the underlying SUSE Linux Enterprise Server and SUSE Enterprise Storage correctly registered against SCC or SMT. You can upgrade daemons in your cluster while the cluster is online and in service. Certain types of daemons depend upon others. For example Ceph Object Gateways depend upon Ceph monitors and Ceph OSD daemons. We recommend upgrading in this order:

  1. Ceph Monitors

  2. Ceph Managers

  3. Ceph OSDs

  4. Metadata Servers

  5. Object Gateways

  6. iSCSI Gateways

  7. NFS Ganesha

Delete Unnecessary Operating System Snapshots

Remove unneeded file system snapshots on the operating system partitions of the nodes. This ensures that there is enough free disk space during the upgrade.

Check Cluster Health

We recommend checking the cluster health before starting the upgrade procedure.
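
For example, the following commands give a quick overview of the cluster state (their output depends on your cluster):

cephadm > ceph health detail
cephadm > ceph -s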

Upgrade One by One

We recommend upgrading all the daemons of a specific type—for example all monitor daemons or all OSD daemons—one by one to ensure that they are all on the same release. We also recommend that you upgrade all the daemons in your cluster before you try to exercise new functionality in a release.

After all the daemons of a specific type are upgraded, check their status.

Ensure each monitor has rejoined the quorum after all monitors are upgraded:

cephadm > ceph mon stat

Ensure each Ceph OSD daemon has rejoined the cluster after all OSDs are upgraded:

cephadm > ceph osd stat
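
Once the Ceph Monitors run Luminous, the ceph versions command (introduced with Luminous) additionally reports the Ceph version of every running daemon, which is a convenient way to confirm that all daemons of a type are on the same release:

cephadm > ceph versions
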
Set require-osd-release luminous Flag

When the last OSD is upgraded to SUSE Enterprise Storage 5.5, the monitor nodes will detect that all OSDs are running the 'luminous' version of Ceph and they may complain that the require-osd-release luminous osdmap flag is not set. In that case, you need to set this flag manually to acknowledge that—now that the cluster has been upgraded to 'luminous'—it cannot be downgraded back to Ceph 'jewel'. Set the flag by running the following command:

cephadm > ceph osd require-osd-release luminous

After the command completes, the warning disappears.

On fresh installs of SUSE Enterprise Storage 5.5, this flag is set automatically when the Ceph monitors create the initial osdmap, so no end user action is needed.
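
To verify that the flag is set, you can inspect the osdmap, for example:

cephadm > ceph osd dump | grep require_osd_release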

5.3 Encrypting OSDs during Upgrade

Since SUSE Enterprise Storage 5.5, OSDs are by default deployed using BlueStore instead of FileStore. Although BlueStore supports encryption, Ceph OSDs are deployed unencrypted by default. The following procedure describes steps to encrypt OSDs during the upgrade process. It assumes that both the data and WAL/DB disks to be used for OSD deployment are clean, with no partitions. If the disks were previously used, wipe them following the procedure described in Step 12.

Important
Important: One OSD at a Time

You need to deploy encrypted OSDs one by one, not simultaneously. The reason is that each OSD's data is drained and the cluster goes through several iterations of rebalancing.

  1. Determine the bluestore block db size and bluestore block wal size values for your deployment and add them to the /srv/salt/ceph/configuration/files/ceph.conf.d/global.conf file on the Salt master. The values need to be specified in bytes.

    [global]
    bluestore block db size = 48318382080
    bluestore block wal size = 2147483648

    For more information on customizing the ceph.conf file, refer to Section 1.12, “Adjusting ceph.conf with Custom Settings”.
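
    The values above correspond to 45 GiB and 2 GiB respectively. If you want to double-check such conversions, a simple shell calculation helps, for example:

    root@master # echo $((45 * 1024 * 1024 * 1024))
    48318382080
    root@master # echo $((2 * 1024 * 1024 * 1024))
    2147483648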

  2. Run DeepSea Stage 3 to distribute the changes:

    root@master # salt-run state.orch ceph.stage.3
  3. Verify that the ceph.conf file is updated on the relevant OSD nodes:

    root@minion > cat /etc/ceph/ceph.conf
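
    For example, to confirm that the new values made it into the file, filter for them:

    root@minion > grep 'bluestore block' /etc/ceph/ceph.conf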
  4. Edit the *.yml files in the /srv/pillar/ceph/proposals/profile-default/stack/default/ceph/minions directory that are relevant to the OSDs you are encrypting. Double check their path with the one defined in the /srv/pillar/ceph/proposals/policy.cfg file to ensure that you modify the correct *.yml files.

    Important
    Important: Long Disk Identifiers

    When identifying OSD disks in the /srv/pillar/ceph/proposals/profile-default/stack/default/ceph/minions/*.yml files, use long disk identifiers.
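
    To list the long identifiers available on an OSD node, you can run, for example:

    root@minion > ls -l /dev/disk/by-id/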

    An example of an OSD configuration follows. Note that because we need encryption, the db_size and wal_size options are removed:

    ceph:
     storage:
       osds:
         /dev/disk/by-id/scsi-SDELL_PERC_H730_Mini_007027b1065faa972100d34d7aa06d86:
           format: bluestore
           encryption: dmcrypt
           db: /dev/disk/by-id/nvme-INTEL_SSDPEDMD020T4D_HHHL_NVMe_2000GB_PHFT642400HV2P0EGN
           wal: /dev/disk/by-id/nvme-INTEL_SSDPEDMD020T4D_HHHL_NVMe_2000GB_PHFT642400HV2P0EGN
         /dev/disk/by-id/scsi-SDELL_PERC_H730_Mini_00d146b1065faa972100d34d7aa06d86:
           format: bluestore
           encryption: dmcrypt
           db: /dev/disk/by-id/nvme-INTEL_SSDPEDMD020T4D_HHHL_NVMe_2000GB_PHFT642400HV2P0EGN
           wal: /dev/disk/by-id/nvme-INTEL_SSDPEDMD020T4D_HHHL_NVMe_2000GB_PHFT642400HV2P0EGN
  5. Deploy the new Block Storage OSDs with encryption by running DeepSea Stages 2 and 3:

    root@master # salt-run state.orch ceph.stage.2
    root@master # salt-run state.orch ceph.stage.3

    You can watch the progress with ceph -s or ceph osd tree. It is critical that you let the cluster rebalance before repeating the process on the next OSD node.
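
    For example, to poll the cluster state every 30 seconds until all placement groups report active+clean again, you can run (press Ctrl+C to stop):

    cephadm > watch -n 30 ceph -s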

5.4 Upgrade from SUSE Enterprise Storage 4 (DeepSea Deployment) to 5

Important
Important: Software Requirements

You need to have the following software installed and updated to the latest package versions on all the Ceph nodes you want to upgrade before you can start with the upgrade procedure:

  • SUSE Linux Enterprise Server 12 SP2

  • SUSE Enterprise Storage 4

Warning
Warning: Points to Consider before the Upgrade
  • Although the cluster is fully functional during the upgrade, DeepSea sets the 'noout' flag which prevents Ceph from rebalancing data during downtime and therefore avoids unnecessary data transfers.

  • To optimize the upgrade process, DeepSea upgrades your nodes in an order based on their assigned roles, as recommended by Ceph upstream: MONs, MGRs, OSDs, MDS, RGW, IGW, and NFS Ganesha.

    Note that DeepSea cannot prevent the prescribed order from being violated if a node runs multiple services.

  • Although the Ceph cluster is operational during the upgrade, nodes may get rebooted in order to apply, for example, new kernel versions. To reduce waiting I/O operations, we recommend declining incoming requests for the duration of the upgrade process.

  • The cluster upgrade may take a very long time—approximately the time it takes to upgrade one machine multiplied by the number of cluster nodes.

  • Since Ceph Luminous, the osd crush location configuration option is no longer supported. Update your DeepSea configuration files to use crush location before upgrading.

  • There are two ways to obtain SUSE Linux Enterprise Server and SUSE Enterprise Storage 5.5 update repositories:

    • If your cluster nodes are registered with SUSEConnect and use SCC/SMT, you will use the zypper migration method and the update repositories will be assigned automatically.

    • If you are not using SCC/SMT but a Media-ISO or other package source, you will use the zypper dup method. In this case, you need to add the following repositories to all cluster nodes manually: SLE12-SP3 Base, SLE12-SP3 Update, SES5 Base, and SES5 Update. You can do so using the zypper command. First remove all existing software repositories, then add the required new ones, and finally refresh the repository sources:

      root # zypper sd {0..99}
      root # zypper ar \
       http://REPO_SERVER/repo/SUSE/Products/Storage/5/x86_64/product/ SES5-POOL
      root # zypper ar \
       http://REPO_SERVER/repo/SUSE/Updates/Storage/5/x86_64/update/ SES5-UPDATES
      root # zypper ar \
       http://REPO_SERVER/repo/SUSE/Products/SLE-SERVER/12-SP3/x86_64/product/ SLES12-SP3-POOL
      root # zypper ar \
       http://REPO_SERVER/repo/SUSE/Updates/SLE-SERVER/12-SP3/x86_64/update/ SLES12-SP3-UPDATES
      root # zypper ref

To upgrade the SUSE Enterprise Storage 4 cluster to version 5, follow these steps:

  1. Upgrade the Salt master node to SUSE Linux Enterprise Server 12 SP3 and SUSE Enterprise Storage 5.5. Depending on your upgrade method, use either zypper migration or zypper dup.

    Using rpm -q deepsea, verify that the version of the DeepSea package on the Salt master node starts with at least 0.7. For example:

    root # rpm -q deepsea
    deepsea-0.7.27+git.0.274c55d-5.1

    If the DeepSea package version number starts with 0.6, double check whether you successfully migrated the Salt master node to SUSE Linux Enterprise Server 12 SP3 and SUSE Enterprise Storage 5.5.

  2. To set the new internal object sort order, run:

    cephadm > ceph osd set sortbitwise
    Tip
    Tip

    To verify that the command was successful, we recommend running

    cephadm > ceph osd dump --format json-pretty | grep sortbitwise
     "flags": "sortbitwise,recovery_deletes,purged_snapdirs",
  3. If your cluster nodes are not registered with SUSEConnect and do not use SCC/SMT, you will use the zypper dup method. Change your Pillar data to use this strategy: edit

    /srv/pillar/ceph/stack/name_of_cluster/cluster.yml

    and add the following line:

    upgrade_init: zypper-dup
  4. Update your Pillar:

    root@master # salt target saltutil.sync_all

    See Section 4.2.2, “Targeting the Minions” for details about Salt minions targeting.

  5. Verify that you successfully wrote to the Pillar:

    root@master # salt target pillar.get upgrade_init

    The command's output should mirror the entry you added.
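
    For example, with two Salt minions whose (hypothetical) host names are node1.example.com and node2.example.com, the output would look similar to:

    node1.example.com:
        zypper-dup
    node2.example.com:
        zypper-dup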

  6. Upgrade Salt minions:

    root@master # salt target state.apply ceph.updates.salt
  7. Verify that all Salt minions are upgraded:

    root@master # salt target test.version
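
    All minions should report the same Salt version. With hypothetical host names, the output could look similar to the following (the exact version depends on your update level):

    node1.example.com:
        2016.11.4
    node2.example.com:
        2016.11.4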
  8. Include the cluster's Salt minions. Refer to Section 4.2.2, “Targeting the Minions” of Procedure 4.1, “Running Deployment Stages” for more details.

  9. Start the upgrade of SUSE Linux Enterprise Server and Ceph:

    root@master # salt-run state.orch ceph.maintenance.upgrade

    Refer to Section 5.4.2, “Details on the salt-run state.orch ceph.maintenance.upgrade Command” for more information.

    Tip
    Tip: Re-run on Reboot

    If the process results in a reboot of the Salt master, re-run the command to start the upgrade process for the Salt minions again.

  10. After the upgrade, the Ceph Managers are not installed yet. To reach a healthy cluster state, do the following:

    1. Run Stage 0 to enable the Salt REST API:

      root@master # salt-run state.orch ceph.stage.0
    2. Run Stage 1 to create the role-mgr/ subdirectory:

      root@master # salt-run state.orch ceph.stage.1
    3. Edit policy.cfg as described in Section 4.5.1, “The policy.cfg File” and add a Ceph Manager role to the nodes where Ceph Monitors are deployed. If you followed the steps of Section 5.5, “Upgrade from SUSE Enterprise Storage 4 (ceph-deploy Deployment) to 5” up to this point, uncomment the 'role-mgr' lines instead. Also, add the openATTIC role to one of the cluster nodes. Refer to Chapter 17, openATTIC for more details.

    4. Run Stage 2 to update the Pillar:

      root@master # salt-run state.orch ceph.stage.2
    5. DeepSea now uses a different approach to generate the ceph.conf configuration file; refer to Section 1.12, “Adjusting ceph.conf with Custom Settings” for more details.

    6. Apply one of the three AppArmor states to all DeepSea minions. For example, to disable them, run

       root@master # salt 'TARGET' state.apply ceph.apparmor.default-disable

      For more information, refer to Section 1.13, “Enabling AppArmor Profiles”.

    7. Run Stage 3 to deploy Ceph Managers:

      root@master # salt-run state.orch ceph.stage.3
    8. Run Stage 4 to configure openATTIC properly:

      root@master # salt-run state.orch ceph.stage.4
    Note
    Note: Ceph Key Caps Mismatch

    If ceph.stage.3 fails with "Error EINVAL: entity client.bootstrap-osd exists but caps do not match", it means the key capabilities (caps) for the existing cluster's client.bootstrap-osd key do not match the caps that DeepSea is trying to set. Above the error message, in red text, you can see a dump of the ceph auth command that failed. Look at this command to check the key ID and file being used. In the case of client.bootstrap-osd, the command will be

    cephadm > ceph auth add client.bootstrap-osd \
     -i /srv/salt/ceph/osd/cache/bootstrap.keyring

    To fix mismatched key caps, check the content of the keyring file DeepSea is trying to deploy, for example:

    cephadm > cat /srv/salt/ceph/osd/cache/bootstrap.keyring
    [client.bootstrap-osd]
         key = AQD6BpVZgqVwHBAAQerW3atANeQhia8m5xaigw==
         caps mgr = "allow r"
         caps mon = "allow profile bootstrap-osd"

    Compare this with the output of ceph auth get client.bootstrap-osd:

    cephadm > ceph auth get client.bootstrap-osd
    exported keyring for client.bootstrap-osd
    [client.bootstrap-osd]
         key = AQD6BpVZgqVwHBAAQerW3atANeQhia8m5xaigw==
         caps mon = "allow profile bootstrap-osd"

    Note how the latter key is missing caps mgr = "allow r". To fix this, run:

    cephadm > ceph auth caps client.bootstrap-osd mgr \
     "allow r" mon "allow profile bootstrap-osd"

    Running ceph.stage.3 should now succeed.

    The same issue can occur with other daemon and gateway keyrings when running ceph.stage.3 and ceph.stage.4. The same procedure as above applies: check the command that failed, the keyring file being deployed, and the capabilities of the existing key. Then run ceph auth caps to update the existing key capabilities to match what is being deployed by DeepSea. The keyring files that DeepSea tries to deploy are typically placed under the /srv/salt/ceph/DAEMON_OR_GATEWAY_NAME/cache directory.

Important
Important: Upgrade Failure

If the cluster is in 'HEALTH_ERR' state for more than 300 seconds, or one of the services for each assigned role is down for more than 900 seconds, the upgrade failed. In that case, try to find the problem, resolve it, and re-run the upgrade procedure. Note that in virtualized environments, the timeouts are shorter.

Important
Important: Rebooting OSDs

After upgrading to SUSE Enterprise Storage 5.5, FileStore OSDs need approximately five minutes longer to start as the OSD will do a one-off conversion of its on-disk files.

Tip
Tip: Check for the Version of Cluster Components/Nodes

When you need to find out the versions of individual cluster components and nodes—for example to find out if all your nodes are actually on the same patch level after the upgrade—you can run

root@master # salt-run status.report

The command goes through the connected Salt minions and scans for the version numbers of Ceph, Salt, and SUSE Linux Enterprise Server, and gives you a report displaying the version that the majority of nodes have and showing nodes whose version is different from the majority.

5.4.1 OSD Migration to BlueStore

OSD BlueStore is a new back end for the OSD daemons. It is the default option since SUSE Enterprise Storage 5.5. Compared to FileStore, which stores objects as files in an XFS file system, BlueStore can deliver increased performance because it stores objects directly on the underlying block device. BlueStore also enables other features, such as built-in compression and EC overwrites, that are unavailable with FileStore.

Specifically for BlueStore, an OSD has a 'wal' (Write Ahead Log) device and a 'db' (RocksDB database) device. The RocksDB database holds the metadata for a BlueStore OSD. By default, these two devices reside on the same device as the OSD data, but either can be placed on faster or different media.

In SES5, both FileStore and BlueStore are supported and it is possible for FileStore and BlueStore OSDs to co-exist in a single cluster. During the SUSE Enterprise Storage upgrade procedure, FileStore OSDs are not automatically converted to BlueStore. Be aware that the BlueStore-specific features will not be available on OSDs that have not been migrated to BlueStore.

Before converting to BlueStore, the OSDs need to be running SUSE Enterprise Storage 5.5. The conversion is a slow process as all data gets re-written twice. Though the migration process can take a long time to complete, there is no cluster outage and all clients can continue accessing the cluster during this period. However, do expect lower performance for the duration of the migration. This is caused by rebalancing and backfilling of cluster data.

Use the following procedure to migrate FileStore OSDs to BlueStore:

Tip
Tip: Turn Off Safety Measures

Salt commands needed for running the migration are blocked by safety measures. In order to turn these precautions off, run the following command:

root@master # salt-run disengage.safety
  1. Migrate hardware profiles:

    root@master # salt-run state.orch ceph.migrate.policy

    This runner migrates any hardware profiles currently in use by the policy.cfg file. It processes policy.cfg, finds any hardware profile using the original data structure, and converts it to the new data structure. The result is a new hardware profile named 'migrated-original_name'. policy.cfg is updated as well.

    If the original configuration had separate journals, the BlueStore configuration will use the same device for the 'wal' and 'db' for that OSD.
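
    To see which hardware profiles were renamed, you can, for example, search the updated policy.cfg for the new names:

    root@master # grep migrated /srv/pillar/ceph/proposals/policy.cfg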

  2. DeepSea migrates OSDs by setting their weight to 0, which 'vacuums' the data until the OSD is empty. You can either migrate OSDs one by one, or all OSDs at once. In either case, when the OSD is empty, the orchestration removes it and then re-creates it with the new configuration.

    Tip
    Tip: Recommended Method

    Use ceph.migrate.nodes if you have a large number of physical storage nodes or almost no data. If one node represents less than 10% of your capacity, then ceph.migrate.nodes may be marginally faster because it moves all the data from those OSDs in parallel.

    If you are not sure which method to use, or the site has few storage nodes (for example, each node holds more than 10% of the cluster data), then select ceph.migrate.osds.

    1. To migrate OSDs one at a time, run:

      root@master # salt-run state.orch ceph.migrate.osds
    2. To migrate all OSDs on each node in parallel, run:

      root@master # salt-run state.orch ceph.migrate.nodes
    Tip
    Tip

    As the orchestration gives no feedback about the migration progress, use

    cephadm > ceph osd tree

    periodically to see which OSDs have a weight of zero.
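
    For example, to poll the output once per minute, wrap the command in watch (press Ctrl+C to stop):

    cephadm > watch -n 60 ceph osd tree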

After the migration to BlueStore, the object count will remain the same and disk usage will be nearly the same.

5.4.2 Details on the salt-run state.orch ceph.maintenance.upgrade Command

During an upgrade via salt-run state.orch ceph.maintenance.upgrade, DeepSea applies all available updates/patches on all servers in the cluster in parallel without rebooting them. After these updates/patches are applied, the actual upgrade begins:

  1. The admin node is upgraded to SUSE Linux Enterprise Server 12 SP3. This also upgrades the salt-master and deepsea packages.

  2. All Salt minions are upgraded to a version that corresponds to the Salt master.

  3. The migration is performed sequentially on all cluster nodes in the recommended order (the Ceph Monitors first, see Upgrade Order) using the preferred method. As a consequence, the ceph package is upgraded.

  4. After updating all Ceph Monitors, their services are restarted but the nodes are not rebooted. This ensures that all running Ceph Monitors are on the same version.

    Important
    Important: Do Not Reboot Monitor Nodes

    If the cluster monitor nodes also host OSDs, do not reboot these nodes during this stage, because the OSDs they host will not be able to rejoin the cluster after the reboot.

  5. All the remaining cluster nodes are updated and rebooted in the recommended order.

  6. After all nodes are on the same patch-level, the following command is run:

    ceph osd require-osd-release RELEASE

If this process is interrupted, whether accidentally or intentionally by the administrator, never reboot the nodes manually: after a reboot, the OSD daemons on the first rebooted OSD node will not be able to join the cluster anymore.

5.5 Upgrade from SUSE Enterprise Storage 4 (ceph-deploy Deployment) to 5

Important
Important: Software Requirements

You need to have the following software installed and updated to the latest package versions on all the Ceph nodes you want to upgrade before you can start with the upgrade procedure:

  • SUSE Linux Enterprise Server 12 SP2

  • SUSE Enterprise Storage 4

Choose the Salt master for your cluster. If your cluster has Calamari deployed, then the Calamari node is already the Salt master. Otherwise, the admin node from which you ran the ceph-deploy command will become the Salt master.

Before starting the procedure below, you need to upgrade the Salt master node to SUSE Linux Enterprise Server 12 SP3 and SUSE Enterprise Storage 5.5 by running zypper migration (or your preferred way of upgrading).

To upgrade the SUSE Enterprise Storage 4 cluster which was deployed with ceph-deploy to version 5, follow these steps:

Procedure 5.1: Steps to Apply to All Cluster Nodes (including the Calamari Node)
  1. Install the salt package from SLE-12-SP2/SES4:

    root # zypper install salt
  2. Install the salt-minion package from SLE-12-SP2/SES4, then enable and start the related service:

    root # zypper install salt-minion
    root # systemctl enable salt-minion
    root # systemctl start salt-minion
  3. Ensure that the host name 'salt' resolves to the IP address of the Salt master node. If your Salt master is not reachable by the host name salt, edit the file /etc/salt/minion or create a new file /etc/salt/minion.d/master.conf with the following content:

    master: host_name_of_salt_master
    Tip
    Tip

    The existing Salt minions already have the master: option set in /etc/salt/minion.d/calamari.conf. The configuration file name does not matter; what matters is that the file resides in the /etc/salt/minion.d/ directory.

    If you performed any changes to the configuration files mentioned above, restart the Salt service on all Salt minions:

    root@minion > systemctl restart salt-minion.service
    1. If you registered your systems with SUSEConnect and use SCC/SMT, no further actions need to be taken.

    2. If you are not using SCC/SMT but a Media-ISO or other package source, add the following repositories manually: SLE12-SP3 Base, SLE12-SP3 Update, SES5 Base, and SES5 Update. You can do so using the zypper command. First remove all existing software repositories, then add the required new ones, and finally refresh the repository sources:

      root # zypper sd {0..99}
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Products/Storage/5/x86_64/product/ SES5-POOL
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Updates/Storage/5/x86_64/update/ SES5-UPDATES
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Products/SLE-SERVER/12-SP3/x86_64/product/ SLES12-SP3-POOL
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Updates/SLE-SERVER/12-SP3/x86_64/update/ SLES12-SP3-UPDATES
      root # zypper ref
Procedure 5.2: Steps to Apply to the Salt master Node
  1. To set the new internal object sort order, run:

    cephadm > ceph osd set sortbitwise
    Tip
    Tip

    To verify that the command was successful, we recommend running

    cephadm > ceph osd dump --format json-pretty | grep sortbitwise
     "flags": "sortbitwise,recovery_deletes,purged_snapdirs",
  2. Upgrade the Salt master node to SUSE Linux Enterprise Server 12 SP3 and SUSE Enterprise Storage 5.5. For SCC-registered systems, use zypper migration. If you provide the required software repositories manually, use zypper dup. After the upgrade, ensure that only repositories for SUSE Linux Enterprise Server 12 SP3 and SUSE Enterprise Storage 5.5 are active (and refreshed) on the Salt master node before proceeding.

  3. If not already present, install the salt-master package, then enable and start the related service:

    root@master # zypper install salt-master
    root@master # systemctl enable salt-master
    root@master # systemctl start salt-master
  4. Verify the presence of all Salt minions by listing their keys:

    root@master # salt-key -L
  5. Add all Salt minion keys to the Salt master, including the key of the minion running on the Salt master itself:

    root@master # salt-key -A -y
  6. Ensure that all Salt minions' keys were accepted:

    root@master # salt-key -L
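
    The output should list every cluster node under 'Accepted Keys', for example (host names are illustrative):

    root@master # salt-key -L
    Accepted Keys:
    master.example.com
    node1.example.com
    node2.example.com
    Denied Keys:
    Unaccepted Keys:
    Rejected Keys: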
  7. Make sure that the software on your Salt master node is up to date:

    root@master # zypper migration
  8. Install the deepsea package:

    root@master # zypper install deepsea
  9. Include the cluster's Salt minions. Refer to Section 4.2.2, “Targeting the Minions” of Procedure 4.1, “Running Deployment Stages” for more details.

  10. Import the existing cluster installed with ceph-deploy:

    root@master # salt-run populate.engulf_existing_cluster

    The command will do the following:

    • Distribute all the required Salt and DeepSea modules to all the Salt minions.

    • Inspect the running Ceph cluster and populate /srv/pillar/ceph/proposals with a layout of the cluster.

      /srv/pillar/ceph/proposals/policy.cfg will be created with roles matching all detected running Ceph services. If no ceph-mgr daemons are detected, a 'role-mgr' is added for every node with 'role-mon'. View this file to verify that each of your existing MON, OSD, RGW, and MDS nodes has the appropriate roles. OSD nodes will be imported into the profile-import/ subdirectory, so you can examine the files in /srv/pillar/ceph/proposals/profile-import/cluster/ and /srv/pillar/ceph/proposals/profile-import/stack/default/ceph/minions/ to confirm that the OSDs were correctly picked up.

      Note
      Note

      The generated policy.cfg will only apply roles for detected Ceph services 'role-mon', 'role-mds', 'role-rgw', 'role-admin', and 'role-master' for the Salt master node. Any other desired roles will need to be added to the file manually (see Section 4.5.1.2, “Role Assignment”).

    • The existing cluster's ceph.conf will be saved to /srv/salt/ceph/configuration/files/ceph.conf.import.

    • /srv/pillar/ceph/proposals/config/stack/default/ceph/cluster.yml will include the cluster's fsid, the cluster and public networks, and will also specify the configuration_init: default-import option, which makes DeepSea use the ceph.conf.import configuration file mentioned previously rather than DeepSea's default /srv/salt/ceph/configuration/files/ceph.conf.j2 template.

      Note
      Note: Custom Settings in ceph.conf

      If you need to integrate the ceph.conf file with custom changes, wait until the engulf/upgrade process successfully finishes. Then edit the /srv/pillar/ceph/proposals/config/stack/default/ceph/cluster.yml file and comment out the following line:

      configuration_init: default-import

      Save the file and follow the information in Section 1.12, “Adjusting ceph.conf with Custom Settings”.

    • The cluster's various keyrings will be saved to the following directories:

      /srv/salt/ceph/admin/cache/
      /srv/salt/ceph/mon/cache/
      /srv/salt/ceph/osd/cache/
      /srv/salt/ceph/mds/cache/
      /srv/salt/ceph/rgw/cache/

      Verify that the keyring files exist, and that there is no keyring file in the following directory (the Ceph Manager did not exist before SUSE Enterprise Storage 5.5):

      /srv/salt/ceph/mgr/cache/
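
      A quick spot check of the imported OSD profiles and the saved keyrings could, for example, look like this (the last command should not list any keyring files):

      root@master # ls /srv/pillar/ceph/proposals/profile-import/stack/default/ceph/minions/
      root@master # ls /srv/salt/ceph/admin/cache/ /srv/salt/ceph/mon/cache/ \
       /srv/salt/ceph/osd/cache/ /srv/salt/ceph/mds/cache/ /srv/salt/ceph/rgw/cache/
      root@master # ls /srv/salt/ceph/mgr/cache/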
  11. If the salt-run populate.engulf_existing_cluster command cannot detect ceph-mgr daemons, the policy.cfg file will contain a 'role-mgr' line for each node that has 'role-mon' assigned. These lines would deploy ceph-mgr daemons together with the monitor daemons in a later step. Because there are no ceph-mgr daemons running at this time, edit /srv/pillar/ceph/proposals/policy.cfg and comment out all lines starting with 'role-mgr' by prepending a '#' character.

  12. The salt-run populate.engulf_existing_cluster command does not handle importing the openATTIC configuration. You need to manually edit the policy.cfg file and add a role-openattic line. Refer to Section 4.5.1, “The policy.cfg File” for more details.

  13. The salt-run populate.engulf_existing_cluster command does not handle importing the iSCSI Gateways configurations. If your cluster includes iSCSI Gateways, import their configurations manually:

    1. On one of the iSCSI Gateway nodes, export the current lrbd.conf and copy it to the Salt master node:

      root@minion > lrbd -o >/tmp/lrbd.conf
      root@minion > scp /tmp/lrbd.conf admin:/srv/salt/ceph/igw/cache/lrbd.conf
    2. On the Salt master node, add the default iSCSI Gateway configuration to the DeepSea setup:

      root@master # mkdir -p /srv/pillar/ceph/stack/ceph/
      root@master # echo 'igw_config: default-ui' >> /srv/pillar/ceph/stack/ceph/cluster.yml
      root@master # chown salt:salt /srv/pillar/ceph/stack/ceph/cluster.yml
    3. Add the iSCSI Gateway roles to policy.cfg and save the file:

      role-igw/stack/default/ceph/minions/ses-1.ses.suse.yml
      role-igw/cluster/ses-1.ses.suse.sls
      [...]
  14. Run Stages 0 and 1 to update packages and create all possible roles:

    root@master # salt-run state.orch ceph.stage.0
    root@master # salt-run state.orch ceph.stage.1
  15. Generate required subdirectories under /srv/pillar/ceph/stack:

    root@master # salt-run push.proposal
  16. Verify that there is a working DeepSea-managed cluster with correctly assigned roles:

    root@master # salt target pillar.get roles

    Compare the output with the actual layout of the cluster.

  17. Calamari leaves a scheduled Salt job running to check the cluster status. Remove the job:

    root@master # salt target schedule.delete ceph.heartbeat
  18. From this point on, follow the procedure described in Section 5.4, “Upgrade from SUSE Enterprise Storage 4 (DeepSea Deployment) to 5”.

5.6 Upgrade from SUSE Enterprise Storage 4 (Crowbar Deployment) to 5

Important
Important: Software Requirements

You need to have the following software installed and updated to the latest package versions on all the Ceph nodes you want to upgrade before you can start with the upgrade procedure:

  • SUSE Linux Enterprise Server 12 SP2

  • SUSE Enterprise Storage 4

To upgrade SUSE Enterprise Storage 4 deployed using Crowbar to version 5, follow these steps:

  1. For each Ceph node (including the Calamari node), stop and disable all Crowbar-related services:

    root@minion > systemctl stop chef-client
    root@minion > systemctl disable chef-client
    root@minion > systemctl disable crowbar_join
    root@minion > systemctl disable crowbar_notify_shutdown
  2. For each Ceph node (including the Calamari node), verify that the software repositories point to SUSE Enterprise Storage 5.5 and SUSE Linux Enterprise Server 12 SP3 products. If repositories pointing to older product versions are still present, disable them.

  3. For each Ceph node (including the Calamari node), verify that the salt-minion package is installed. If it is not, install it:

    root@minion > zypper in salt salt-minion
  4. For the Ceph nodes that did not have the salt-minion package installed, create the file /etc/salt/minion.d/master.conf with the master option pointing to the full Calamari node host name:

    master: full_calamari_hostname
    Tip
    Tip

    The existing Salt minions already have the master: option set in /etc/salt/minion.d/calamari.conf. The configuration file name does not matter; what matters is that the file resides in the /etc/salt/minion.d/ directory.

    Enable and start the salt-minion service:

    root@minion > systemctl enable salt-minion
    root@minion > systemctl start salt-minion
  5. On the Calamari node, accept any remaining salt minion keys:

    root@master # salt-key -L
    [...]
    Unaccepted Keys:
    d52-54-00-16-45-0a.example.com
    d52-54-00-70-ac-30.example.com
    [...]
    
    root@master # salt-key -A
    The following keys are going to be accepted:
    Unaccepted Keys:
    d52-54-00-16-45-0a.example.com
    d52-54-00-70-ac-30.example.com
    Proceed? [n/Y] y
    Key for minion d52-54-00-16-45-0a.example.com accepted.
    Key for minion d52-54-00-70-ac-30.example.com accepted.
  6. If Ceph was deployed on the public network and no VLAN interface is present, add a VLAN interface on Crowbar's public network to the Calamari node.

  7. Upgrade the Calamari node to SUSE Linux Enterprise Server 12 SP3 and SUSE Enterprise Storage 5.5, either by using zypper migration or your favorite method. From here onward, the Calamari node becomes the Salt master. After the upgrade, reboot the Salt master.

  8. Install DeepSea on the Salt master:

    root@master # zypper in deepsea
  9. Specify the deepsea_minions option to include the correct group of Salt minions into deployment stages. Refer to Section 4.2.2.3, “Set the deepsea_minions Option” for more details.

  10. DeepSea expects all Ceph nodes to have an identical /etc/ceph/ceph.conf. Crowbar deploys a slightly different ceph.conf to each node, so you need to consolidate them:

    • Remove the osd crush location hook option; it was added by Calamari.

    • Remove the public addr option from the [mon] section.

    • Remove the port numbers from the mon host option.
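
    For illustration, after these changes the relevant part of the consolidated /etc/ceph/ceph.conf might look like this (IP addresses are hypothetical, and the mon host value no longer contains port numbers):

    [global]
    mon host = 192.168.100.10, 192.168.100.11, 192.168.100.12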

  11. If you were running the Object Gateway, Crowbar deployed a separate /etc/ceph/ceph.conf.radosgw file to keep the keystone secrets separated from the regular ceph.conf file. Crowbar also added a custom /etc/systemd/system/ceph-radosgw@.service file. Because DeepSea does not support it, you need to remove it:

    • Append all [client.rgw....] sections from the ceph.conf.radosgw file to /etc/ceph/ceph.conf on all nodes.

    • On the Object Gateway node, run the following:

      root@minion > rm /etc/systemd/system/ceph-radosgw@.service
      root@minion > systemctl reenable ceph-radosgw@rgw.public.$hostname
  12. Double check that ceph status works when run from the Salt master:

    root@master # ceph status
    cluster a705580c-a7ae-4fae-815c-5cb9c1ded6c2
    health HEALTH_OK
    [...]
  13. Import the existing cluster:

    root@master # salt-run populate.engulf_existing_cluster
    root@master # salt-run state.orch ceph.stage.1
    root@master # salt-run push.proposal
  14. The salt-run populate.engulf_existing_cluster command does not handle importing the iSCSI Gateways configurations. If your cluster includes iSCSI Gateways, import their configurations manually:

    1. On one of the iSCSI Gateway nodes, export the current lrbd.conf and copy it to the Salt master node:

      root@minion > lrbd -o > /tmp/lrbd.conf
      root@minion > scp /tmp/lrbd.conf admin:/srv/salt/ceph/igw/cache/lrbd.conf
    2. On the Salt master node, add the default iSCSI Gateway configuration to the DeepSea setup:

      root@master # mkdir -p /srv/pillar/ceph/stack/ceph/
      root@master # echo 'igw_config: default-ui' >> /srv/pillar/ceph/stack/ceph/cluster.yml
      root@master # chown salt:salt /srv/pillar/ceph/stack/ceph/cluster.yml
    3. Add the iSCSI Gateway roles to policy.cfg and save the file:

      role-igw/stack/default/ceph/minions/ses-1.ses.suse.yml
      role-igw/cluster/ses-1.ses.suse.sls
      [...]
    1. If you registered your systems with SUSEConnect and use SCC/SMT, no further actions need to be taken.

    2. If you are not using SCC/SMT but a Media-ISO or other package source, add the following repositories manually: SLE12-SP3 Base, SLE12-SP3 Update, SES5 Base, and SES5 Update. You can do so using the zypper command. First remove all existing software repositories, then add the required new ones, and finally refresh the repository sources:

      root # zypper sd {0..99}
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Products/Storage/5/x86_64/product/ SES5-POOL
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Updates/Storage/5/x86_64/update/ SES5-UPDATES
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Products/SLE-SERVER/12-SP3/x86_64/product/ SLES12-SP3-POOL
      root # zypper ar \
       http://172.17.2.210:82/repo/SUSE/Updates/SLE-SERVER/12-SP3/x86_64/update/ SLES12-SP3-UPDATES
      root # zypper ref

      Then change your Pillar data in order to use a different strategy. Edit

      /srv/pillar/ceph/stack/name_of_cluster/cluster.yml

      and add the following line:

      upgrade_init: zypper-dup
      Tip
      Tip

      The zypper-dup strategy requires you to manually add the latest software repositories, while the default zypper-migration relies on the repositories provided by SCC/SMT.

  15. Fix host grains to make DeepSea use short host names on the public network for the Ceph daemon instance IDs. For each node, you need to run grains.set with the new (short) host name. Before running grains.set, verify the current monitor instances by running ceph status. A before and after example follows:

    root@master # salt target grains.get host
    d52-54-00-16-45-0a.example.com:
        d52-54-00-16-45-0a
    d52-54-00-49-17-2a.example.com:
        d52-54-00-49-17-2a
    d52-54-00-76-21-bc.example.com:
        d52-54-00-76-21-bc
    d52-54-00-70-ac-30.example.com:
        d52-54-00-70-ac-30
    root@master # salt d52-54-00-16-45-0a.example.com grains.set \
     host public.d52-54-00-16-45-0a
    root@master # salt d52-54-00-49-17-2a.example.com grains.set \
     host public.d52-54-00-49-17-2a
    root@master # salt d52-54-00-76-21-bc.example.com grains.set \
     host public.d52-54-00-76-21-bc
    root@master # salt d52-54-00-70-ac-30.example.com grains.set \
     host public.d52-54-00-70-ac-30
    root@master # salt target grains.get host
    d52-54-00-76-21-bc.example.com:
        public.d52-54-00-76-21-bc
    d52-54-00-16-45-0a.example.com:
        public.d52-54-00-16-45-0a
    d52-54-00-49-17-2a.example.com:
        public.d52-54-00-49-17-2a
    d52-54-00-70-ac-30.example.com:
        public.d52-54-00-70-ac-30
  16. Run the upgrade:

    root@master # salt target state.apply ceph.updates
    root@master # salt target test.version
    root@master # salt-run state.orch ceph.maintenance.upgrade

    Every node will reboot. The cluster will come back up complaining that there is no active Ceph Manager instance. This is normal. Calamari should not be installed/running anymore at this point.

  17. Run all the required deployment stages to get the cluster to a healthy state:

    root@master # salt-run state.orch ceph.stage.0
    root@master # salt-run state.orch ceph.stage.1
    root@master # salt-run state.orch ceph.stage.2
    root@master # salt-run state.orch ceph.stage.3
  18. To deploy openATTIC (see Chapter 17, openATTIC), add an appropriate role-openattic (see Section 4.5.1.2, “Role Assignment”) line to /srv/pillar/ceph/proposals/policy.cfg, then run:

    root@master # salt-run state.orch ceph.stage.2
    root@master # salt-run state.orch ceph.stage.4
  19. During the upgrade, you may receive "Error EINVAL: entity [...] exists but caps do not match" errors. To fix them, refer to Section 5.4, “Upgrade from SUSE Enterprise Storage 4 (DeepSea Deployment) to 5”.

  20. Do the remaining cleanup:

    • Crowbar creates entries in /etc/fstab for each OSD. They are not necessary, so delete them (see the example after this list).

    • Calamari leaves a scheduled Salt job running to check the cluster status. Remove the job:

      root@master # salt target schedule.delete ceph.heartbeat
    • There are still some unnecessary packages installed, mostly Ruby gems and Chef-related packages. Removing them is not required, but you may want to delete them by running zypper rm pkg_name.
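
      For example, to review the leftover OSD entries on a node before deleting them, you can run (the exact mount paths depend on your deployment):

      root@minion > grep /var/lib/ceph/osd /etc/fstab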

5.7 Upgrade from SUSE Enterprise Storage 3 to 5

Important
Important: Software Requirements

You need to have the following software installed and updated to the latest package versions on all the Ceph nodes you want to upgrade before you can start with the upgrade procedure:

  • SUSE Linux Enterprise Server 12 SP1

  • SUSE Enterprise Storage 3

To upgrade the SUSE Enterprise Storage 3 cluster to version 5, follow the steps described in Procedure 5.1, “Steps to Apply to All Cluster Nodes (including the Calamari Node)” and then Procedure 5.2, “Steps to Apply to the Salt master Node”.
