Applies to SUSE Enterprise Storage 7.1

1 Quick start

SUSE Enterprise Storage is a distributed storage system based on Ceph technology and designed for scalability, reliability, and performance. The traditional way to run a Ceph cluster is to set up a dedicated cluster that provides block, file, and object storage to a variety of clients.

Rook manages Ceph as a containerized application on Kubernetes and allows a hyper-converged setup, in which a single Kubernetes cluster runs applications and storage together. The primary purpose of SUSE Enterprise Storage deployed with Rook is to provide storage to other applications running in the Kubernetes cluster. This can be block, file, or object storage.

This chapter describes how to quickly deploy containerized SUSE Enterprise Storage 7.1 on top of a SUSE CaaS Platform 4.5 Kubernetes cluster.

1.1 Recommended hardware specifications

For SUSE Enterprise Storage deployed with Rook, the minimum configuration below is preliminary and will be updated as real-world customer requirements become clearer.

For the purpose of this document, consider the following minimum configuration:

  • A highly available Kubernetes cluster with 3 master nodes

  • Four physical Kubernetes worker nodes, each with two OSD disks and 5 GB of RAM per OSD disk

  • Allow an additional 4 GB of RAM per additional daemon deployed on a node

  • Dual 10 Gb Ethernet as a bonded network

  • If you are running a hyper-converged infrastructure (HCI), ensure you add any additional requirements for your workloads.

1.2 Prerequisites

Ensure the following prerequisites are met before continuing with this quickstart guide:

  • Installation of SUSE CaaS Platform 4.5. See the SUSE CaaS Platform documentation for more details on how to install: https://documentation.suse.com/en-us/suse-caasp/4.5/.

  • Ensure ceph-csi (and required sidecars) are running in your Kubernetes cluster.

  • Installation of the LVM2 package on the hosts where the OSDs will run (see the verification sketch after this list).

  • Ensure you have one of the following storage options to configure Ceph properly:

    • Raw devices (no partitions or formatted file systems)

    • Raw partitions (no formatted file system)

  • Ensure the SUSE CaaS Platform 4.5 repository is enabled for the installation of Helm 3.
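
The exact commands depend on your environment, but a quick spot check of these prerequisites might look like the following (illustrative only; device and pod names will differ per cluster):

    # rpm -q lvm2
    # lsblk -f
    kubectl@adm > kubectl get pods --all-namespaces | grep csi

Raw devices and raw partitions must show no file system in the lsblk -f output.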

1.3 Getting started with Rook

Note

The following instructions are designed for a quick start deployment only. For more information on installing Helm, see https://documentation.suse.com/en-us/suse-caasp/4.5/.

  1. Install Helm v3:

    # zypper in helm
  2. On a node with access to the Kubernetes cluster, execute the following:

    > export HELM_EXPERIMENTAL_OCI=1
  3. Pull a local copy of the Helm chart into your local registry cache:

    > helm chart pull registry.suse.com/ses/7.1/charts/rook-ceph:latest

    If you are using Helm >= 3.7, the chart subcommand has been removed; specify the oci:// protocol explicitly instead. The version defaults to latest:

    > helm pull oci://registry.suse.com/ses/7.1/charts/rook-ceph
  4. Export the Helm chart to a rook-ceph subdirectory under your current working directory:

    > helm chart export registry.suse.com/ses/7.1/charts/rook-ceph:latest

    For Helm >= 3.7, extract the pulled chart tarball instead:

    > tar -xzf rook-ceph-1.8.6.tar.gz
  5. Create a file named myvalues.yaml based on the rook-ceph/values.yaml file.

  6. Set local parameters in myvalues.yaml. A minimal example of such a file is shown after this procedure.

  7. Create the namespace:

    kubectl@adm > kubectl create namespace rook-ceph
  8. Install the Helm chart:

    > helm install -n rook-ceph rook-ceph ./rook-ceph/ -f myvalues.yaml
  9. Verify the rook-operator is running:

    kubectl@adm > kubectl -n rook-ceph get pod -l app=rook-ceph-operator
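
A minimal sketch of what the myvalues.yaml override from step 6 might contain is shown below. The image keys mirror the block used in Section 1.6; the csi keys come from the upstream rook-ceph chart and should be checked against the values.yaml shipped with the SUSE chart before use:

    image:
      prefix: rook
      repository: registry.suse.com/ses/7.1/rook/ceph
      tag: LATEST_TAG                 # replace with the desired image tag
      pullPolicy: IfNotPresent
    csi:
      enableRbdDriver: true           # assumption: enable the RBD CSI driver
      enableCephfsDriver: true        # assumption: enable the CephFS CSI driver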

1.4 Deploying Ceph with Rook

  1. You need to apply labels to your Kubernetes nodes before deploying your Ceph cluster. The key node-role.rook-ceph/cluster accepts one of the following values:

    • any

    • mon

    • mon-mgr

    • mon-mgr-osd

    Run the following to get the names of your cluster's nodes:

    kubectl@adm > kubectl get nodes
  2. On the Master node, run the following:

    kubectl@adm > kubectl label nodes node-name label-key=label-value

    For example:

    kubectl@adm > kubectl label node k8s-worker-node-1 node-role.rook-ceph/cluster=any
  3. Verify the application of the label by re-running the following command:

    kubectl@adm > kubectl get nodes --show-labels

    You can also use the describe command to get the full list of labels given to the node. For example:

    kubectl@adm > kubectl describe node node-name
  4. Next, you need to apply a Ceph cluster manifest file, for example, cluster.yaml, to your Kubernetes cluster. You can apply the example cluster.yaml as is without any additional services or requirements from the Rook Helm chart.

    To apply the example Ceph cluster manifest to your Kubernetes cluster, run the following command:

    kubectl@adm > kubectl create -f rook-ceph/examples/cluster.yaml
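
The cluster takes several minutes to come up. Assuming the default names from the example manifest, progress and the reported health can be watched with:

    kubectl@adm > kubectl -n rook-ceph get pods --watch
    kubectl@adm > kubectl -n rook-ceph get cephcluster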

1.5 Configuring the Ceph cluster

There are two ways to integrate with your SUSE Enterprise Storage cluster: CephFS or the RADOS Block Device (RBD).

Before you start the SUSE CaaS Platform and SUSE Enterprise Storage integration, ensure you have met the following prerequisites:

  • The SUSE CaaS Platform cluster must have ceph-common and xfsprogs installed on all nodes. You can check this with the rpm -q command, as shown below.

  • The SUSE Enterprise Storage cluster must have a pool with an RBD device or CephFS enabled.
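
For example, on each node:

    # rpm -q ceph-common xfsprogs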

1.5.1 Configure CephFS

For steps on configuring CephFS, see the SUSE CaaS Platform documentation at https://documentation.suse.com/en-us/suse-caasp/4.5/. That documentation also provides the procedure for attaching a pod to either a CephFS static or dynamic volume.
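
For orientation only, a minimal CephFilesystem manifest modeled on the upstream Rook examples might look like the following; the name myfs and the replica sizes are placeholders, and the linked documentation remains authoritative:

    apiVersion: ceph.rook.io/v1
    kind: CephFilesystem
    metadata:
      name: myfs                # placeholder name
      namespace: rook-ceph
    spec:
      metadataPool:
        replicated:
          size: 3
      dataPools:
        - replicated:
            size: 3
      metadataServer:
        activeCount: 1
        activeStandby: true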

1.5.2 Configure RADOS block device

For instructions on configuring the RADOS Block Device (RBD) in a pod, see the SUSE CaaS Platform documentation at https://documentation.suse.com/en-us/suse-caasp/4.5/. That documentation also provides the procedure for attaching a pod to either an RBD static or dynamic volume.
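
Likewise for orientation only, a minimal CephBlockPool with a matching StorageClass, modeled on the upstream Rook examples, might look like this (pool and class names are placeholders; the secret parameters shown are the Rook defaults):

    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replicapool          # placeholder name
      namespace: rook-ceph
    spec:
      failureDomain: host
      replicated:
        size: 3
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-ceph-block      # placeholder name
    provisioner: rook-ceph.rbd.csi.ceph.com
    parameters:
      clusterID: rook-ceph
      pool: replicapool
      imageFormat: "2"
      imageFeatures: layering
      csi.storage.k8s.io/fstype: ext4
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
    reclaimPolicy: Delete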

1.6 Updating local images

  1. To update your local image to the latest tag, apply the new parameters in myvalues.yaml:

    image:
      prefix: rook
      repository: registry.suse.com/ses/7.1/rook/ceph
      tag: LATEST_TAG
      pullPolicy: IfNotPresent
  2. Pull the updated Helm chart into your local registry cache:

    > helm3 chart pull REGISTRY_URL
  3. Export the Helm chart to a rook-ceph subdirectory under your current working directory:

    > helm3 chart export REGISTRY_URL
  4. Upgrade the Helm charts:

    > helm3 upgrade -n rook-ceph rook-ceph ./rook-ceph/ -f myvalues.yaml
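
To confirm which image the operator is running after the upgrade, assuming the default deployment name rook-ceph-operator, a check might be:

    kubectl@adm > kubectl -n rook-ceph get deployment rook-ceph-operator -o jsonpath='{.spec.template.spec.containers[0].image}'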

1.7 Uninstalling

  1. Delete any Kubernetes applications that are consuming Rook storage.

  2. Delete all object, file, and block storage artifacts.

  3. Remove the CephCluster:

    kubectl@adm > kubectl delete -f cluster.yaml
  4. Uninstall the operator. With Helm 3, helm uninstall takes the release name rather than a registry URL, so the command is the same regardless of Helm version:

    > helm uninstall -n rook-ceph rook-ceph
  5. Delete any remaining Rook data on each host:

    # rm -rf /var/lib/rook
  6. Wipe the OSD disks if necessary (a sketch follows this procedure).

  7. Delete the namespace:

    kubectl@adm > kubectl delete namespace rook-ceph
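
If the OSD disks need to be wiped (step 6), one common approach, assuming /dev/sdX is an OSD disk whose contents are expendable, is to zap the partition table and overwrite the start of the device:

    # sgdisk --zap-all /dev/sdX                                        # /dev/sdX is a placeholder device
    # dd if=/dev/zero of=/dev/sdX bs=1M count=100 oflag=direct,dsync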