Applies to SUSE Enterprise Storage 6

13 SUSE Enterprise Storage 6 on Top of SUSE CaaS Platform 4 Kubernetes Cluster

Warning: Technology Preview

Running a containerized Ceph cluster on SUSE CaaS Platform is a technology preview. Do not deploy it on a production Kubernetes cluster; this setup is not supported.

This chapter describes how to deploy containerized SUSE Enterprise Storage 6 on top of a SUSE CaaS Platform 4 Kubernetes cluster.

13.1 Considerations

Before you start deploying, consider the following points:

  • To run Ceph in Kubernetes, SUSE Enterprise Storage 6 uses an upstream project called Rook (https://rook.io/).

  • Depending on the configuration, Rook may consume all unused disks on all nodes in a Kubernetes cluster.

  • The setup requires privileged containers.

13.2 Prerequisites

The minimum requirements and prerequisites to deploy SUSE Enterprise Storage 6 on top of a SUSE CaaS Platform 4 Kubernetes cluster are as follows:

  • A running SUSE CaaS Platform 4 cluster. You need an account with a SUSE CaaS Platform subscription. You can activate a 60-day free evaluation at https://www.suse.com/products/caas-platform/download/MkpwEt3Ub98~/?campaign_name=Eval:_CaaSP_4.

  • At least three SUSE CaaS Platform worker nodes, each with at least one additional disk attached to serve as OSD storage. We recommend four SUSE CaaS Platform worker nodes.

  • At least one OSD per worker node, with a minimum disk size of 5 GB.

  • Access to SUSE Enterprise Storage 6. You can get a trial subscription at https://www.suse.com/products/suse-enterprise-storage/download/.

  • Access to a workstation that has access to the SUSE CaaS Platform cluster via kubectl. We recommend using the SUSE CaaS Platform master node as the workstation.

  • Ensure that the SUSE-Enterprise-Storage-6-Pool and SUSE-Enterprise-Storage-6-Updates repositories are configured on the management node so that the rook-k8s-yaml RPM package can be installed.

13.3 Get Rook Manifests

The Rook orchestrator uses configuration files in YAML format called manifests. The manifests you need are included in the rook-k8s-yaml RPM package. You can find this package in the SUSE Enterprise Storage 6 repository. Install it by running the following:

root # zypper install rook-k8s-yaml

13.4 Installation

Rook-Ceph consists of two main components: the 'operator', which runs in Kubernetes and enables the creation of Ceph clusters, and the Ceph 'cluster' itself, which is created and partially managed by the operator.

13.4.1 Configuration

13.4.1.1 Global Configuration

The manifests used in this setup install all Rook and Ceph components in the 'rook-ceph' namespace. If you need to change it, adapt all references to the namespace in the Kubernetes manifests accordingly.

Depending on which features of Rook you intend to use, alter the 'Pod Security Policy' configuration in common.yaml to limit Rook's security requirements. Follow the comments in the manifest file.

13.4.1.2 Operator Configuration

The manifest operator.yaml configures the Rook operator. Normally, you do not need to change it. Find more information in the comments in the manifest file.
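
One setting you may occasionally want to adjust is the operator's log level, which is exposed as an environment variable on the rook-ceph-operator Deployment in operator.yaml. The following is an abbreviated, hedged sketch of that excerpt; the exact surrounding structure in your copy of the manifest may differ:

# Excerpt from operator.yaml (abbreviated): raise the operator log level
# to DEBUG while troubleshooting. INFO is the usual default.
spec:
  template:
    spec:
      containers:
      - name: rook-ceph-operator
        env:
        - name: ROOK_LOG_LEVEL
          value: "DEBUG"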

13.4.1.3 Ceph Cluster Configuration

The manifest cluster.yaml configures the actual Ceph cluster that will run in Kubernetes. Find a detailed description of all available options in the upstream Rook documentation at https://rook.io/docs/rook/v1.0/ceph-cluster-crd.html.

By default, Rook is configured to use all nodes that are not tainted with node-role.kubernetes.io/master:NoSchedule, and it obeys configured placement settings (see https://rook.io/docs/rook/v1.0/ceph-cluster-crd.html#placement-configuration-settings). The following example disables this behavior and uses only the nodes explicitly listed in the nodes section:

storage:
  useAllNodes: false
  nodes:
    - name: caasp4-worker-0
    - name: caasp4-worker-1
    - name: caasp4-worker-2
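
If you also need to steer where Rook schedules its daemons, the placement section of cluster.yaml accepts standard Kubernetes node affinity and toleration rules (see the placement link above). The following is a hedged sketch that assumes a hypothetical node label and taint key role=storage-node; adapt the key and values to your cluster:

placement:
  all:
    # Schedule all Rook pods only on nodes carrying the (hypothetical) label role=storage-node.
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values:
            - storage-node
    # Optionally tolerate a matching taint on those nodes.
    tolerations:
    - key: storage-node
      operator: Exists
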
Note

By default, Rook uses all free and empty disks on each node as Ceph storage.
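
If you want Rook to consider only specific devices instead, disable this behavior and provide a device filter in the storage section of cluster.yaml. A minimal sketch, assuming (hypothetically) that the OSD disks appear as vdb, vdc, and so on; adjust the regular expression to your environment:

storage:
  useAllNodes: false
  useAllDevices: false
  # Only device names matching this regular expression are considered for OSDs.
  deviceFilter: "^vd[b-z]"
  nodes:
    - name: caasp4-worker-0
    - name: caasp4-worker-1
    - name: caasp4-worker-2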

13.4.1.4 Documentation

For details beyond the comments in the manifest files, refer to the upstream Rook documentation at https://rook.io/docs/rook/v1.0/.

13.4.2 Create the Rook Operator

Install the Rook-Ceph common components, CSI roles, and the Rook-Ceph operator by executing the following command on the SUSE CaaS Platform master node:

root # kubectl apply -f common.yaml -f operator.yaml

common.yaml creates the 'rook-ceph' namespace, the Ceph Custom Resource Definitions (CRDs) (see https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) that make Kubernetes aware of Ceph objects (for example, 'CephCluster'), and the RBAC roles and Pod Security Policies (see https://kubernetes.io/docs/concepts/policy/pod-security-policy/) that allow Rook to manage the cluster-specific resources.

Tip: hostNetwork and hostPorts Usage

If you use hostNetwork: true in the Cluster Resource Definition, the PodSecurityPolicy must allow the use of hostNetwork and hostPorts.
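
The following is a hedged sketch of the fields to check, not a complete policy: the policy shipped in common.yaml contains additional required fields, and its name and port range may differ. The ports 6789 and 6800-7300 are the Ceph defaults for MONs and OSDs.

kind: PodSecurityPolicy
apiVersion: policy/v1beta1
metadata:
  name: rook-privileged        # illustrative name; check the policy defined in common.yaml
spec:
  privileged: true
  hostNetwork: true
  # Allow Ceph daemons to bind their default ports (MON 6789, OSD 6800-7300) on the host network.
  hostPorts:
  - min: 6789
    max: 7300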

Verify the installation by running kubectl get pods -n rook-ceph on the SUSE CaaS Platform master node, for example:

root # kubectl get pods -n rook-ceph
NAME                                     READY   STATUS      RESTARTS   AGE
rook-ceph-agent-57c9j                    1/1     Running     0          22h
rook-ceph-agent-b9j4x                    1/1     Running     0          22h
rook-ceph-operator-cf6fb96-lhbj7         1/1     Running     0          22h
rook-discover-mb8gv                      1/1     Running     0          22h
rook-discover-tztz4                      1/1     Running     0          22h

13.4.3 Create the Ceph Cluster

After you modify cluster.yaml according to your needs, you can create the Ceph cluster. Run the following command on the SUSE CaaS Platform master node:

root # kubectl apply -f cluster.yaml

Watch the 'rook-ceph' namespace to see the Ceph cluster being created. You will see as many Ceph Monitors as configured in the cluster.yaml manifest (the default is three), one Ceph Manager, and as many Ceph OSDs as there are free disks.
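
The number of monitors is controlled by the mon section of cluster.yaml. A minimal sketch of the defaults (verify against your copy of the manifest):

mon:
  count: 3
  # Do not place more than one MON on the same node, otherwise losing that node
  # can break the monitor quorum.
  allowMultiplePerNode: false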

Tip: Temporary OSD Pods

While the Ceph cluster is bootstrapping, you will see pods named rook-ceph-osd-prepare-NODE-NAME run for a while and then terminate with the status 'Completed'. As their name implies, these pods provision the Ceph OSDs. They are not deleted, so that you can inspect their logs after they terminate. For example:

root # kubectl get pods --namespace rook-ceph
NAME                                         READY  STATUS     RESTARTS  AGE
rook-ceph-agent-57c9j                        1/1    Running    0         22h
rook-ceph-agent-b9j4x                        1/1    Running    0         22h
rook-ceph-mgr-a-6d48564b84-k7dft             1/1    Running    0         22h
rook-ceph-mon-a-cc44b479-5qvdb               1/1    Running    0         22h
rook-ceph-mon-b-6c6565ff48-gm9wz             1/1    Running    0         22h
rook-ceph-operator-cf6fb96-lhbj7             1/1    Running    0         22h
rook-ceph-osd-0-57bf997cbd-4wspg             1/1    Running    0         22h
rook-ceph-osd-1-54cf468bf8-z8jhp             1/1    Running    0         22h
rook-ceph-osd-prepare-caasp4-worker-0-f2tmw  0/2    Completed  0         9m35s
rook-ceph-osd-prepare-caasp4-worker-1-qsfhz  0/2    Completed  0         9m33s
rook-ceph-tools-76c7d559b6-64rkw             1/1    Running    0         22h
rook-discover-mb8gv                          1/1    Running    0         22h
rook-discover-tztz4                          1/1    Running    0         22h

13.5 Using Rook as Storage for Kubernetes Workload

Rook allows you to use three different types of storage:

Object Storage

Object storage exposes an S3 API to the storage cluster for applications to put and get data. Refer to https://rook.io/docs/rook/v1.0/ceph-object.html for a detailed description.

Shared File System

A shared file system can be mounted with read/write permission from multiple pods. This is useful for applications that are clustered using a shared file system. Refer to https://rook.io/docs/rook/v1.0/ceph-filesystem.html for a detailed description.

Block Storage

Block storage allows you to mount storage to a single pod. Refer to https://rook.io/docs/rook/v1.0/ceph-block.html for a detailed description.
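
For example, consuming block storage typically involves two manifests: a CephBlockPool that defines the replication of the underlying RADOS pool, and a Kubernetes StorageClass that lets pods request volumes from it. The following is a hedged sketch modeled on the upstream Rook v1.0 block storage documentation; verify the exact provisioner and parameters against the page linked above.

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3                       # keep three replicas of every object
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block   # flexvolume provisioner used by Rook v1.0
parameters:
  blockPool: replicapool
  clusterNamespace: rook-ceph
  fstype: xfs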

13.6 Uninstalling Rook

To uninstall Rook, follow these steps:

  1. Delete any Kubernetes applications that are consuming Rook storage.

  2. Delete all object, file, and/or block storage artifacts that you created by following Section 13.5, “Using Rook as Storage for Kubernetes Workload”.

  3. Delete the Ceph cluster, operator, and related resources:

    root # kubectl delete -f cluster.yaml
    root # kubectl delete -f operator.yaml
    root # kubectl delete -f common.yaml
  4. Delete the Rook data on the hosts:

    root # rm -rf /var/lib/rook
  5. If necessary, wipe the disks that were used by Rook. Refer to https://rook.io/docs/rook/master/ceph-teardown.html for more details.