Applies to SUSE Enterprise Storage 5.5 (SES 5 & SES 5.5)

3 Ceph Admin Node HA Setup

The Ceph admin node is the Ceph cluster node where the Salt master service runs. The admin node is a central point of the Ceph cluster because it manages the rest of the cluster nodes by querying and instructing their Salt minion services. It usually includes other services as well, for example the openATTIC Web UI with the Grafana dashboard, backed by the Prometheus monitoring toolkit.

If the Ceph admin node fails, you usually need to provide new working hardware for the node and restore the complete cluster configuration stack from a recent backup. This method is time consuming and causes a cluster outage.

To prevent Ceph cluster performance downtime caused by an admin node failure, we recommend running the Ceph admin node in a High Availability (HA) cluster.

3.1 Outline of the HA Cluster for Ceph Admin Node

The idea of an HA cluster is that if one cluster node fails, the other node automatically takes over its role, including the virtualized Ceph admin node. This way, the other Ceph cluster nodes do not notice that the Ceph admin node failed.

The minimal HA solution for the Ceph admin node requires the following hardware:

  • Two bare metal servers able to run SUSE Linux Enterprise with the High Availability extension and virtualize the Ceph admin node.

  • Two or more redundant network communication paths, for example via Network Device Bonding.

  • Shared storage to host the disk image(s) of the Ceph admin node virtual machine. The shared storage needs to be accessible from both servers. It can be, for example, an NFS export, a Samba share, or an iSCSI target.

Find more details on the cluster requirements at https://documentation.suse.com/sle-ha/12-SP5/single-html/SLE-HA-install-quick/#sec-ha-inst-quick-req.
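
For example, if the shared storage is an NFS export, both servers can mount it at the same path, and that path can later hold the VM disk image and its exported configuration. The server name and paths below are placeholders:

    root # mkdir -p /var/lib/libvirt/images/admin-vm
    root # mount -t nfs nfs-server.example.com:/export/admin-vm /var/lib/libvirt/images/admin-vm

You would typically make this mount permanent on both servers, for example via /etc/fstab.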

Figure 3.1: 2-Node HA Cluster for Ceph Admin Node

3.2 Building HA Cluster with Ceph Admin Node

The following procedure summarizes the most important steps of building the HA cluster for virtualizing the Ceph admin node. For details, refer to the indicated links.

  1. Set up a basic 2-node HA cluster with shared storage as described in https://documentation.suse.com/sle-ha/12-SP5/single-html/SLE-HA-install-quick/#art-sleha-install-quick.
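
     For example, a minimal sketch using the ha-cluster bootstrap scripts described in the linked quick start; the host name node1 is a placeholder, and the scripts prompt for the remaining settings. On the first server, run:

    root # ha-cluster-init

     Then join the second server to the cluster:

    root # ha-cluster-join -c node1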

  2. On both cluster nodes, install all packages required for running the KVM hypervisor and the libvirt toolkit as described in https://documentation.suse.com/sles/12-SP5/single-html/SLES-virtualization/#sec-vt-installation-kvm.
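
     For example, a typical invocation that installs the KVM and libvirt patterns and enables the libvirt daemon (the pattern names follow the linked virtualization guide):

    root # zypper in -t pattern kvm_server kvm_tools
    root # systemctl enable libvirtd
    root # systemctl start libvirtd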

  3. On the first cluster node, create a new KVM virtual machine (VM) making use of libvirt as described in https://documentation.suse.com/sles/12-SP5/single-html/SLES-virtualization/#sec-libvirt-inst-virt-install. Use the preconfigured shared storage to store the disk images of the VM.
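
     For example, a sketch of creating such a VM with virt-install; the VM name, memory, disk size, installation ISO, and bridge name are placeholders to adapt, and the disk path points to the shared storage:

    root # virt-install --name ses-admin --memory 8192 --vcpus 4 \
      --disk path=/var/lib/libvirt/images/admin-vm/ses-admin.qcow2,size=40 \
      --cdrom /var/lib/libvirt/images/admin-vm/SLE-12-SP5-Server-x86_64.iso \
      --network bridge=br0 --graphics vnc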

  4. After the VM setup is complete, export its configuration to an XML file on the shared storage. Use the following syntax:

    root # virsh dumpxml VM_NAME > /path/to/shared/vm_name.xml
  5. Create a resource for the Admin Node VM. Refer to https://documentation.suse.com/sle-ha/12-SP5/single-html/SLE-HA-guide/#cha-conf-hawk2 for general information on creating HA resources. Detailed information on creating a resource for a KVM virtual machine is described in http://www.linux-ha.org/wiki/VirtualDomain_%28resource_agent%29.
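
     For example, a sketch of such a resource created with the crm shell; the resource name, timeouts, and the path to the XML file exported in the previous step are examples to adapt:

    root # crm configure primitive admin-vm ocf:heartbeat:VirtualDomain \
      params config="/path/to/shared/vm_name.xml" hypervisor="qemu:///system" \
      op monitor interval="30s" timeout="90s" \
      op start timeout="120s" op stop timeout="120s"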

  6. On the newly created VM guest, deploy the Ceph admin node, including the additional services you need there. Follow the relevant steps in Section 4.3, “Cluster Deployment”. At the same time, deploy the remaining Ceph cluster nodes on the non-HA cluster servers.
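
     As a rough outline: in SES 5, the deployment described in Section 4.3 is driven by DeepSea, whose stages are run from the admin node VM, starting with:

    root # salt-run state.orch ceph.stage.0

     and continuing with ceph.stage.1 through ceph.stage.4 as described there.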
