Applies to SUSE Enterprise Storage 7

1 Quick start

SUSE Enterprise Storage is a distributed storage system based on Ceph, designed for scalability, reliability, and performance. The traditional way to run a Ceph cluster is to set up a dedicated cluster that provides block, file, and object storage to a variety of clients.

Rook manages Ceph as a containerized application on Kubernetes and allows a hyper-converged setup, in which a single Kubernetes cluster runs applications and storage together. The primary purpose of SUSE Enterprise Storage deployed with Rook is to provide storage to other applications running in the Kubernetes cluster. This can be block, file, or object storage.

This chapter describes how to quickly deploy containerized SUSE Enterprise Storage 7 on top of a SUSE CaaS Platform 4.5 Kubernetes cluster.

1.1 Recommended hardware specifications

For SUSE Enterprise Storage deployed with Rook, the minimum configuration described here is preliminary and will be updated based on real customer needs.

For the purpose of this document, consider the following minimum configuration:

  • A highly available Kubernetes cluster with three master nodes

  • Four physical Kubernetes worker nodes, each with two OSD disks and 5 GB of RAM per OSD disk

  • Allow an additional 4 GB of RAM per additional daemon deployed on a node

  • Dual 10 Gb Ethernet as a bonded network

  • If you are running a hyper-converged infrastructure (HCI), ensure you add any additional requirements for your workloads.
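As a worked example of the sizing rule above (the per-OSD and per-daemon figures come from the list; the node layout itself is hypothetical):

```shell
# Sizing rule from the list above: 5 GB of RAM per OSD disk,
# plus 4 GB per additional daemon deployed on the node.
osd_disks=2        # two OSD disks per worker node
extra_daemons=1    # hypothetical: one additional daemon (e.g. a MON) on this node
ram_gb=$((5 * osd_disks + 4 * extra_daemons))
echo "Reserve at least ${ram_gb} GB of RAM for Ceph on this node"
```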

1.2 Prerequisites

Ensure the following prerequisites are met before continuing with this quickstart guide:

  • Installation of SUSE CaaS Platform 4.5. See the SUSE CaaS Platform documentation for more details on how to install: https://documentation.suse.com/en-us/suse-caasp/4.5/single-html/caasp-deployment/.

  • Ensure ceph-csi (and required sidecars) are running in your Kubernetes cluster.

  • Installation of the LVM2 package on the host where the OSDs are running.

  • Ensure you have one of the following storage options to configure Ceph properly:

    • Raw devices (no partitions or formatted file systems)

    • Raw partitions (no formatted file system)

  • Ensure the SUSE CaaS Platform 4.5 repository is enabled for the installation of Helm 3.
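A quick way to verify some of these prerequisites on a node is sketched below; the package name matches the list above, and each check is skipped on systems where the corresponding tool is unavailable:

```shell
# Check that the LVM2 package is installed on the OSD host:
missing=""
if command -v rpm >/dev/null 2>&1; then
  rpm -q lvm2 >/dev/null 2>&1 || missing="lvm2"
fi
echo "missing packages: ${missing:-none}"

# Raw devices and raw partitions show an empty FSTYPE column here:
if command -v lsblk >/dev/null 2>&1; then lsblk -f; fi
```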

1.3 Getting started with Rook

Note

The following instructions are designed for a quick start deployment only. For more information on installing Helm, see https://documentation.suse.com/en-us/suse-caasp/4.5/single-html/caasp-admin/#helm-tiller-install.

  1. Install Helm v3:

    root # zypper in helm3
  2. On a node with access to the Kubernetes cluster, execute the following:

    tux > export HELM_EXPERIMENTAL_OCI=1
  3. Create a local copy of the Helm chart to your local registry:

    tux > helm3 chart pull registry.suse.com/ses/7/charts/rook-ceph:latest
  4. Export the Helm charts to a Rook-Ceph sub-directory under your current working directory:

    tux > helm3 chart export registry.suse.com/ses/7/charts/rook-ceph:latest
  5. Create a file named myvalues.yaml based on the rook-ceph/values.yaml file.

  6. Set local parameters in myvalues.yaml.

  7. Create the namespace:

    kubectl@adm > kubectl create namespace rook-ceph
  8. Install the Helm charts:

    tux > helm3 install -n rook-ceph rook-ceph ./rook-ceph/ -f myvalues.yaml
  9. Verify the rook-operator is running:

    kubectl@adm > kubectl -n rook-ceph get pod -l app=rook-ceph-operator
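For step 5 above, a minimal myvalues.yaml might look like the sketch below. The image keys follow the layout shown later in this chapter for updating local images; all other chart values are left at the defaults from rook-ceph/values.yaml, and the tag is an assumption to adjust to the release you pulled.

```yaml
# Minimal myvalues.yaml sketch; only the image section is overridden.
image:
  prefix: rook
  repository: registry.suse.com/ses/7/rook/ceph
  tag: latest          # assumption: use the tag you pulled from the registry
  pullPolicy: IfNotPresent
```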

1.4 Deploying Ceph with Rook

  1. You need to apply labels to your Kubernetes nodes before deploying your Ceph cluster. The key node-role.rook-ceph/cluster accepts one of the following values:

    • any

    • mon

    • mon-mgr

    • mon-mgr-osd

    Run the following to get the names of your cluster's nodes:

    kubectl@adm > kubectl get nodes
  2. On the node you want to label, run the following:

    kubectl@adm > kubectl label nodes node-name label-key=label-value

    For example:

    kubectl@adm > kubectl label node k8s-worker-node-1 node-role.rook-ceph/cluster=any
  3. Verify that the label was applied by running the following command:

    kubectl@adm > kubectl get nodes --show-labels

    You can also use the describe command to get the full list of labels given to the node. For example:

    kubectl@adm > kubectl describe node node-name
  4. Next, you need to apply the cluster.yaml to your Kubernetes cluster. The default cluster.yaml can be applied as is without any additional services or requirements from a Helm chart.

    To apply the default Helm chart to your Kubernetes cluster, run the following command:

    tux > helm3 -n rook-ceph install rook-ceph ./rook-ceph/ -f myvalues.yaml
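As a reference for the cluster.yaml mentioned in step 4, a minimal CephCluster manifest following the upstream Rook CRD shape is sketched below; the container image path and the storage selectors are assumptions to adapt to your environment.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # assumed SES 7 Ceph container image; adjust to your registry
    image: registry.suse.com/ses/7/ceph/ceph
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  storage:
    useAllNodes: true
    useAllDevices: true   # consumes every raw device found on the nodes
```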

1.5 Configuring the Ceph cluster

There are two types of integration with your SUSE Enterprise Storage cluster: CephFS or the RADOS Block Device (RBD).

Before you start the SUSE CaaS Platform and SUSE Enterprise Storage integration, ensure you have met the following prerequisites:

  • The SUSE CaaS Platform cluster must have ceph-common and xfsprogs installed on all nodes. You can check this by running the rpm -q ceph-common command or the rpm -q xfsprogs command.

  • The SUSE Enterprise Storage cluster must have a pool with an RBD device or CephFS enabled.

1.5.1 Configure CephFS

For steps on configuring CephFS, see https://documentation.suse.com/suse-caasp/4.5/single-html/caasp-admin/#_using_cephfs_in_a_pod. That section also provides the procedure for attaching a pod to either a static or a dynamic CephFS volume.
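As an illustration, a pod typically consumes a dynamically provisioned CephFS volume through a PersistentVolumeClaim like the sketch below; the StorageClass name rook-cephfs is an assumption and must match the one defined in your cluster.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany          # CephFS supports shared read-write access
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs   # assumed name; use your CephFS StorageClass
```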

1.5.2 Configure RADOS block device

For instructions on configuring the RADOS Block Device (RBD) in a pod, see https://documentation.suse.com/suse-caasp/4.5/single-html/caasp-admin/#_using_rbd_in_a_pod. That section also provides the procedure for attaching a pod to either a static or a dynamic RBD volume.
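Similarly, an RBD-backed volume is usually requested through a PersistentVolumeClaim such as the sketch below; the StorageClass name rook-ceph-block is an assumption and must match the block StorageClass defined in your cluster.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce          # RBD images are attached to a single node at a time
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block   # assumed name; use your RBD StorageClass
```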

1.6 Updating local images

  1. To update your local image to the latest tag, apply the new parameters in myvalues.yaml:

    image:
      prefix: rook
      repository: registry.suse.com/ses/7/rook/ceph
      tag: LATEST_TAG
      pullPolicy: IfNotPresent
  2. Re-pull a new local copy of the Helm chart to your local registry:

    tux > helm3 chart pull REGISTRY_URL
  3. Export the Helm charts to a Rook-Ceph sub-directory under your current working directory:

    tux > helm3 chart export REGISTRY_URL
  4. Upgrade the Helm charts:

    tux > helm3 upgrade -n rook-ceph rook-ceph ./rook-ceph/ -f myvalues.yaml

1.7 Uninstalling

  1. Delete any Kubernetes applications that are consuming Rook storage.

  2. Delete all object, file, and block storage artifacts.

  3. Remove the CephCluster:

    kubectl@adm > kubectl delete -f cluster.yaml
  4. Uninstall the operator:

    tux > helm3 uninstall -n rook-ceph rook-ceph
  5. Delete any data on the hosts:

    root # rm -rf /var/lib/rook
  6. Wipe the disks if necessary.

  7. Delete the namespace:

    kubectl@adm > kubectl delete namespace rook-ceph
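For step 6, one way to wipe a disk that previously held an OSD is sketched below; /dev/sdX is a placeholder for the real device, and sgdisk ships in the gptfdisk package. This is destructive, so double-check the device name before running it.

```shell
DISK="/dev/sdX"   # placeholder: substitute the OSD disk to wipe
if [ -b "$DISK" ]; then
  # Remove the GPT and MBR partition tables:
  sgdisk --zap-all "$DISK"
  # Zero the start of the disk to clear leftover Ceph metadata:
  dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct
  wiped=yes
else
  wiped=no
  echo "Skipping: $DISK is not a block device"
fi
```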