Applies to SUSE Enterprise Storage 7

9 Ceph examples

9.1 Ceph examples

Rook and Ceph can be configured in multiple ways to provide block devices, shared file system volumes, or object storage in a Kubernetes namespace. We have provided several examples to simplify storage setup, but remember that there are many tunables; you will need to decide which settings work for your use case and environment.

9.1.1 Creating common resources

The first step to deploy Rook is to create the common resources. The configuration for these resources will be the same for most deployments. The common.yaml file sets up these resources.

kubectl@adm > kubectl create -f common.yaml

The examples all assume the operator and all Ceph daemons will be started in the same namespace. If you want to deploy the operator in a separate namespace, see the comments throughout common.yaml.
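
To confirm that the common resources exist, you can list the Ceph-related custom resource definitions; the exact set of CRDs may vary between Rook versions.

kubectl@adm > kubectl get crd | grep ceph.rook.io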

9.1.2 Creating the operator

After the common resources are created, the next step is to create the Operator deployment.

  • operator.yaml: The most common settings for production deployments

    kubectl@adm > kubectl create -f operator.yaml
  • operator-openshift.yaml: Includes all of the operator settings for running a basic Rook cluster in an OpenShift environment.

    kubectl@adm > oc create -f operator-openshift.yaml

Settings for the operator are configured through environment variables on the operator deployment. The individual settings are documented in operator.yaml.
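
For example, the operator log level can be raised for troubleshooting by editing the environment variables of the operator deployment. The excerpt below is a minimal sketch, assuming the rook-ceph namespace used throughout these examples; ROOK_LOG_LEVEL is one of the variables documented in operator.yaml.

# Excerpt from operator.yaml (sketch): the rook-ceph-operator Deployment
# is configured through environment variables on its container.
spec:
  template:
    spec:
      containers:
      - name: rook-ceph-operator
        env:
        # Raise the log level for troubleshooting; INFO is the usual default.
        - name: ROOK_LOG_LEVEL
          value: "DEBUG"

Once the operator is deployed, you can verify that it is running before proceeding:

kubectl@adm > kubectl -n rook-ceph get pod -l app=rook-ceph-operator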

9.1.3 Creating the cluster CRD

Now that your operator is running, create your Ceph storage cluster. This CR contains the most critical settings that will influence how the operator configures the storage. It is important to understand the various ways to configure the cluster. These examples represent a very small set of the different ways to configure the storage.

  • cluster.yaml: This file contains common settings for a production storage cluster. Requires at least three nodes.

  • cluster-test.yaml: Settings for a test cluster where redundancy is not configured. Requires only a single node.

  • cluster-on-pvc.yaml: This file contains common settings for backing the Ceph MONs and OSDs by PVs. Useful when running in cloud environments or where local PVs have been created for Ceph to consume.

  • cluster-with-drive-groups.yaml: This file contains example configurations for creating advanced OSD layouts on nodes using Ceph Drive Groups.

  • cluster-external: Connect to an external Ceph cluster with minimal access to monitor the health of the cluster and connect to the storage.

  • cluster-external-management: Connect to an external Ceph cluster with the admin key of the external cluster to enable remote creation of pools and configuration of services such as an object store or a shared file system.
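
Whichever example you choose, the cluster is created from the manifest with kubectl. The commands below are a minimal sketch, assuming cluster.yaml and the rook-ceph namespace used throughout these examples; the second command shows the cluster resource so you can watch its phase while the operator brings the daemons up.

kubectl@adm > kubectl create -f cluster.yaml
kubectl@adm > kubectl -n rook-ceph get cephcluster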

9.1.4 Setting up consumable storage

Now we are ready to set up block, shared file system, or object storage in the Rook Ceph cluster. These kinds of storage are referred to as CephBlockPool, CephFilesystem, and CephObjectStore in the spec files, respectively.

9.1.4.1 Provisioning block devices

Ceph can provide raw block device volumes to pods. Each example below sets up a storage class which can then be used to provision a block device in Kubernetes pods.

  • storageclass.yaml: This example illustrates replication of three for production scenarios and requires at least three nodes. Your data is replicated on three different Kubernetes worker nodes, and intermittent or long-lasting single-node failures will not result in data unavailability or loss.

  • storageclass-ec.yaml: Configures erasure coding for data durability rather than replication.

  • storageclass-test.yaml: Replication of one for test scenarios; requires only a single node. Do not use this for applications that store valuable data or have high-availability storage requirements, since a single node failure can result in data loss.

The storage classes are found in different sub-directories depending on the driver:

  • csi/rbd: The CSI driver for block devices.
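
For example, after the pool and storage class from csi/rbd/storageclass.yaml are created, applications request block volumes through ordinary PersistentVolumeClaims. The claim below is a minimal sketch; the class name rook-ceph-block is the one used in the example manifest, so adjust it if you renamed the class.

kubectl@adm > kubectl create -f csi/rbd/storageclass.yaml

# pvc.yaml (sketch): dynamically provisions an RBD-backed block volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block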

9.1.4.2 Shared file system

The Ceph file system (CephFS) allows the user to mount a shared POSIX-compliant folder into one or more hosts (pods in the container world). This storage is similar to NFS shared storage or CIFS shared folders.

File storage contains multiple pools that can be configured for different scenarios:

  • filesystem.yaml: Replication of three for production scenarios. Requires at least three nodes.

  • filesystem-ec.yaml: Erasure coding for production scenarios. Requires at least three nodes.

  • filesystem-test.yaml: Replication of one for test scenarios. Requires only a single node.

Dynamic provisioning is possible with the CSI driver. The storage class for shared file systems is found in the csi/cephfs directory.
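
As a sketch, assuming filesystem.yaml and the example storage class in csi/cephfs (commonly named rook-cephfs), a shared volume can be claimed with ReadWriteMany access and mounted by several pods at once:

kubectl@adm > kubectl create -f filesystem.yaml
kubectl@adm > kubectl create -f csi/cephfs/storageclass.yaml

# cephfs-pvc.yaml (sketch): a shared volume for multiple pods
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-cephfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs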

9.1.4.3 Object Storage

Ceph supports storing blobs of data called objects that support HTTP[S]-type get/put/post and delete semantics.

Object Storage contains multiple pools that can be configured for different scenarios:

  • object.yaml: Replication of three for production scenarios. Requires at least three nodes.

  • object-openshift.yaml: Replication of three with Object Gateway in a port range valid for OpenShift. Requires at least three nodes.

  • object-ec.yaml: Erasure coding rather than replication for production scenarios. Requires at least three nodes.

  • object-test.yaml: Replication of one for test scenarios. Requires only a single node.
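
For example, the production object store can be created and its Object Gateway pods checked as follows; the label and namespace are the ones used in the Rook examples and may differ in a customized deployment.

kubectl@adm > kubectl create -f object.yaml
kubectl@adm > kubectl -n rook-ceph get pod -l app=rook-ceph-rgw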

9.1.4.4 Object Storage user

  • object-user.yaml: Creates a simple object storage user and generates credentials for the S3 API.
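
Creating the user results in a Kubernetes secret that holds the S3 access key and secret key. The commands below are a sketch, assuming the default names from the example (store my-store, user my-user); the secret name follows the pattern rook-ceph-object-user-<store>-<user>.

kubectl@adm > kubectl create -f object-user.yaml
kubectl@adm > kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user \
 -o jsonpath='{.data.AccessKey}' | base64 --decode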

9.1.4.5 Object Storage buckets

The Ceph operator also runs an object store bucket provisioner which can grant access to existing buckets or dynamically provision new buckets.

  • object-bucket-claim-retain.yaml: Creates a request for a new bucket by referencing a StorageClass which saves the bucket when the initiating OBC is deleted.

  • object-bucket-claim-delete.yaml: Creates a request for a new bucket by referencing a StorageClass which deletes the bucket when the initiating OBC is deleted.

  • storageclass-bucket-retain.yaml: Creates a new StorageClass which defines the Ceph Object Store, a region, and retains the bucket after the initiating OBC is deleted.

  • storageclass-bucket-delete.yaml: Creates a new StorageClass which defines the Ceph Object Store, a region, and deletes the bucket after the initiating OBC is deleted.
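
An application requests a bucket by creating an ObjectBucketClaim that references one of the bucket storage classes above. The manifest below is a minimal sketch, assuming the storage class is named rook-ceph-bucket as in the examples; once the claim is bound, a ConfigMap and a Secret with the same name as the claim provide the endpoint and S3 credentials to the application.

# obc.yaml (sketch): request a dynamically provisioned bucket
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-bucket-claim
spec:
  # Prefix for the generated bucket name; a random suffix is appended.
  generateBucketName: example-bucket
  # Bucket StorageClass created from one of the manifests above (assumed name).
  storageClassName: rook-ceph-bucket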