15 Backup and restore #
This chapter explains which parts of the Ceph cluster you should back up in order to be able to restore its functionality.
15.1 Back Up Cluster Configuration and Data #
15.1.1 Back up ceph-salt configuration #
Export the cluster configuration. Find more information in Section 7.2.14, “Exporting cluster configurations”.
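As a quick sketch of the export described in that section, the configuration can be written to a JSON file; the file name cluster.json is only an example:
root@master # ceph-salt export > cluster.json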
15.1.2 Back up Ceph configuration #
Back up the /etc/ceph directory. It contains crucial cluster configuration. For example, you will need a backup of /etc/ceph when you need to replace the Admin Node.
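One straightforward way to capture this directory is to archive it; the destination path /backup is only an example:
root@master # tar czf /backup/ceph-config.tar.gz /etc/ceph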
15.1.3 Back up Salt configuration #
You need to back up the /etc/salt/ directory. It contains the Salt configuration files, for example the Salt Master key and accepted client keys.
The Salt files are not strictly required for backing up the Admin Node, but they make redeploying the Salt cluster easier. If there is no backup of these files, the Salt minions need to be registered again at the new Admin Node.
Make sure that the backup of the Salt Master private key is stored in a safe location. The Salt Master key can be used to manipulate all cluster nodes.
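A minimal sketch of such a backup, with an example destination path and with permissions tightened because the archive contains the Salt Master private key:
root@master # tar czf /backup/salt-config.tar.gz /etc/salt
root@master # chmod 600 /backup/salt-config.tar.gz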
15.1.4 Back up custom configurations #
Back up the following custom configurations and data:

Prometheus data and customization.

Grafana customization.

Manual changes to the iSCSI configuration.

Ceph keys (see the example after this list).

CRUSH Map and CRUSH rules. Save the decompiled CRUSH Map, including the CRUSH rules, into crushmap-backup.txt by running the following command:

cephuser@adm > ceph osd getcrushmap | crushtool -d - -o crushmap-backup.txt

Samba Gateway configuration. If you are using a single gateway, back up /etc/samba/smb.conf. If you are using an HA setup, also back up the CTDB and Pacemaker configuration files. Refer to Chapter 24, Export Ceph data via Samba for details on what configuration is used by Samba Gateways.

NFS Ganesha configuration. Only needed when using an HA setup. Refer to Chapter 25, NFS Ganesha for details on what configuration is used by NFS Ganesha.
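One possible way to capture the Ceph keys listed above is to export the cluster's authentication database; this is only a sketch, and the output file name keys-backup.keyring is an example:

cephuser@adm > ceph auth export > keys-backup.keyring

Store the resulting file as carefully as the Salt Master key, because it contains all cluster credentials.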
15.2 Restoring a Ceph node #
The procedure to recover a node from backup is to reinstall the node, replace its configuration files, and then re-orchestrate the cluster so that the replacement node is re-added.
If you need to redeploy the Admin Node, refer to Section 13.5, “Moving the Salt Master to a new node”.
For minions, it is usually easier to simply rebuild and redeploy.
Reinstall the node. Find more information in Chapter 5, Installing and configuring SUSE Linux Enterprise Server.
Install Salt. Find more information in Chapter 6, Deploying Salt.
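A minimal sketch of this step on the Admin Node, assuming the required SUSE repositories are already registered (package names follow the standard Salt packaging):
root@master # zypper install salt-master salt-minion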
After restoring the /etc/salt directory from a backup, enable and restart the applicable Salt services, for example:

root@master # systemctl enable salt-master
root@master # systemctl start salt-master
root@master # systemctl enable salt-minion
root@master # systemctl start salt-minion

Remove the public master key for the old Salt Master node from all the minions.
root@master # rm /etc/salt/pki/minion/minion_master.pub
root@master # systemctl restart salt-minion

Restore anything that was local to the Admin Node.
Import the cluster configuration from the previously exported JSON file. Refer to Section 7.2.14, “Exporting cluster configurations” for more details.
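Assuming the configuration was exported to a file named cluster.json (the file name is only an example), the import looks like this:
root@master # ceph-salt import cluster.json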
Apply the imported cluster configuration:
root@master # ceph-salt apply