Applies to SUSE CaaS Platform 4.2.4

9 Integration

Note

Integration with external systems might require you to install additional packages to the base OS. Please refer to Section 3.1, “Software Installation”.

9.1 SUSE Enterprise Storage Integration

SUSE CaaS Platform offers SUSE Enterprise Storage as a storage solution for its containers. This chapter describes the steps required for successful integration.

9.1.1 Prerequisites

Before you start with integrating SUSE Enterprise Storage, you need to ensure the following:

  • The SUSE CaaS Platform cluster must have ceph-common and xfsprogs installed on all nodes. You can check this by running rpm -q ceph-common and rpm -q xfsprogs; if either package is missing, see the installation sketch after this list.

  • The SUSE CaaS Platform cluster can communicate with all of the following SUSE Enterprise Storage nodes: the master node, the monitor nodes, the OSD nodes, and the metadata server (in case you need a shared file system). For more details refer to the SUSE Enterprise Storage documentation: https://documentation.suse.com/ses/6/.

  • The SUSE Enterprise Storage cluster has a pool with RADOS Block Device (RBD) enabled.
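
If either package is missing, it can typically be installed from the standard SUSE repositories. The following is a minimal sketch, assuming the modules described in Section 3.1, “Software Installation” are available; run it on every affected node:

sudo zypper install ceph-common xfsprogs
rpm -q ceph-common xfsprogs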

9.1.2 Procedures According to Type of Integration

The steps will differ in small details depending on whether you are using RBD or CephFS and dynamic or static persistent volumes.

9.1.2.1 Using RBD in a Pod

RBD, also known as the Ceph Block Device or RADOS Block Device, is software that facilitates the storage of block-based data in the open source Ceph distributed storage system. The procedure below describes steps to take when you need to use a RADOS Block Device in a pod.

  1. Retrieve the Ceph admin secret. You can get the key value using the following command:

    ceph auth get-key client.admin

    or directly from /etc/ceph/ceph.client.admin.keyring.

  2. Apply the configuration that includes the Ceph secret by running kubectl apply. Replace <CEPH_SECRET> with your own Ceph secret and run the following:

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    type: "kubernetes.io/rbd"
    data:
      key: "$(echo -n <CEPH_SECRET> | base64)"
    EOF
  3. Create an image in the SES cluster. To do that, run the following command on the master node, replacing <SIZE> with the size of the image, for example 2G, and <YOUR_VOLUME> with the name of the image.

    rbd create -s <SIZE> <YOUR_VOLUME>
  4. Create a pod that uses the image by executing the command below. This example shows the minimal configuration for using a RADOS Block Device. Fill in the IP addresses and ports of your monitor nodes under <MONITOR_IP> and <MONITOR_PORT>. The default port number is 6789. Replace <POD_NAME> and <CONTAINER_NAME> with a Kubernetes pod and container name of your choice. <IMAGE_NAME> is the container image to run, for example "opensuse/leap". <RBD_POOL> is the name of the RBD pool; refer to the RBD documentation for instructions on how to create the pool: https://docs.ceph.com/docs/mimic/rbd/rados-rbd-cmds/#create-a-block-device-pool

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: <POD_NAME>
    spec:
      containers:
      - name: <CONTAINER_NAME>
        image: <IMAGE_NAME>
        volumeMounts:
        - mountPath: /mnt/rbdvol
          name: rbdvol
      volumes:
      - name: rbdvol
        rbd:
          monitors:
          - '<MONITOR1_IP:MONITOR1_PORT>'
          - '<MONITOR2_IP:MONITOR2_PORT>'
          - '<MONITOR3_IP:MONITOR3_PORT>'
          pool: <RBD_POOL>
          image: <YOUR_VOLUME>
          user: admin
          secretRef:
            name: ceph-secret
          fsType: ext4
          readOnly: false
    EOF
  5. Verify that the pod exists and check its status:

    kubectl get pod
  6. Once the pod is running, check the mounted volume:

    kubectl exec -it <POD_NAME> -- df -k
    Filesystem             1K-blocks   Used    Available Use%    Mounted on
    /dev/rbd1              999320      1284    929224    0%      /mnt/rbdvol
    ...

In case you need to delete the pod, run the following command:

kubectl delete pod <POD_NAME>
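
If the pod does not reach the Running state, it is worth double-checking the objects referenced above. The commands below are an optional sketch using the placeholder names from this procedure; the kubectl commands run against the SUSE CaaS Platform cluster and the rbd commands run on the SES cluster:

kubectl get secret ceph-secret
kubectl describe pod <POD_NAME>
rbd info <RBD_POOL>/<YOUR_VOLUME>

Kernel RBD clients do not support every image feature. If the pod events report a feature mismatch, the unsupported features can be disabled on the image, for example:

rbd feature disable <RBD_POOL>/<YOUR_VOLUME> object-map fast-diff deep-flatten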

9.1.2.2 Using RBD with Static Persistent Volumes

The following procedure describes how to attach a pod to an RBD static persistent volume:

  1. Retrieve the Ceph admin secret. You can get the key value using the following command:

    ceph auth get-key client.admin

    or directly from /etc/ceph/ceph.client.admin.keyring.

  2. Apply the configuration that includes the Ceph secret by using kubectl apply. Replace <CEPH_SECRET> with your Ceph secret.

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    type: "kubernetes.io/rbd"
    data:
      key: "$(echo -n <CEPH_SECRET> | base64)"
    EOF
  3. Create an image in the SES cluster. On the master node, run the following command:

    rbd create -s <SIZE> <YOUR_VOLUME>

    Replace <SIZE> with the size of the image, for example 2G (2 gigabytes), and <YOUR_VOLUME> with the name of the image.

  4. Create the persistent volume:

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: <PV_NAME>
    spec:
      capacity:
        storage: <SIZE>
      accessModes:
        - ReadWriteOnce
      rbd:
        monitors:
        - '<MONITOR1_IP:MONITOR1_PORT>'
        - '<MONITOR2_IP:MONITOR2_PORT>'
        - '<MONITOR3_IP:MONITOR3_PORT>'
        pool: <RBD_POOL>
        image: <YOUR_VOLUME>
        user: admin
        secretRef:
          name: ceph-secret
        fsType: ext4
        readOnly: false
    EOF

    Replace <SIZE> with the desired size of the volume. Use the gibibyte notation, for example 2Gi.

  5. Create a persistent volume claim:

    kubectl apply -f - << EOF
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: <PVC_NAME>
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: <SIZE>
    EOF

    Replace <SIZE> with the desired size of the volume. Use the gibibyte notation, for example 2Gi.

    Note: Listing Volumes

    This persistent volume claim does not explicitly list the volume. Persistent volume claims work by picking any volume that meets the criteria from a pool. In this case we requested any volume with a size of 2Gi or larger. When the claim is removed, the reclaim policy of the volume is followed.

  6. Create a pod that uses the persistent volume claim:

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: <POD_NAME>
    spec:
      containers:
      - name: <CONTAINER_NAME>
        image: <IMAGE_NAME>
        volumeMounts:
        - mountPath: /mnt/rbdvol
          name: rbdvol
      volumes:
      - name: rbdvol
        persistentVolumeClaim:
          claimName: <PVC_NAME>
    EOF
  7. Verify that the pod exists and check its status:

    kubectl get pod
  8. Once the pod is running, check the volume:

    kubectl exec -it <POD_NAME> -- df -k
    ...
    /dev/rbd3               999320      1284    929224   0% /mnt/rbdvol
    ...

In case you need to delete the pod, run the following command:

kubectl delete pod <POD_NAME>
Note: Deleting A Pod

When you delete the pod, the persistent volume claim is deleted as well. The RBD is not deleted.
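
To confirm that the claim from this procedure bound to the persistent volume, the standard kubectl listings can be used (placeholder names as above); both objects should report the status Bound:

kubectl get pv <PV_NAME>
kubectl get pvc <PVC_NAME>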

9.1.2.3 Using RBD with Dynamic Persistent Volumes

The following procedure describes how to attach a pod to an RBD dynamic persistent volume.

  1. Retrieve the Ceph admin secret. You can get the key value using the following command:

    ceph auth get-key client.admin

    or directly from /etc/ceph/ceph.client.admin.keyring.

  2. Apply the configuration that includes the Ceph admin secret by using kubectl apply. Replace <CEPH_SECRET> with the admin key retrieved in the previous step.

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret-admin
    type: "kubernetes.io/rbd"
    data:
      key: "$(echo -n <CEPH_SECRET> | base64)"
    EOF
  3. Create a Ceph user on the SES cluster:

    ceph auth get-or-create client.user mon "allow r" \
      osd "allow class-read object_prefix rbd_children, allow rwx pool=<RBD_POOL>" \
      -o ceph.client.user.keyring

    Replace <RBD_POOL> with the RBD pool name.

  4. For a dynamic persistent volume, you will also need a user key. Retrieve the Ceph user secret by running:

    ceph auth get-key client.user

    or directly from /etc/ceph/ceph.client.user.keyring.

  5. Apply the configuration that includes the Ceph user secret by running the kubectl apply command, replacing <CEPH_SECRET> with the user key retrieved in the previous step.

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret-user
    type: "kubernetes.io/rbd"
    data:
      key: "$(echo -n <CEPH_SECRET> | base64)"
    EOF
  6. Create the storage class:

    kubectl apply -f - << EOF
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: <SC_NAME>
      annotations:
        storageclass.beta.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: <MONITOR1_IP:MONITOR1_PORT>, <MONITOR2_IP:MONITOR2_PORT>, <MONITOR3_IP:MONITOR3_PORT>
      adminId: admin
      adminSecretName: ceph-secret-admin
      adminSecretNamespace: default
      pool: <RBD_POOL>
      userId: user
      userSecretName: ceph-secret-user
    EOF
  7. Create the persistent volume claim:

    kubectl apply -f - << EOF
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: <PVC_NAME>
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: <SIZE>
    EOF

    Replace <SIZE> with the desired size of the volume. Use the gibibyte notation, for example 2Gi.

  8. Create a pod that uses the persistent volume claim.

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: <POD_NAME>
    spec:
      containers:
      - name: <CONTAINER_NAME>
        image: <IMAGE_NAME>
        volumeMounts:
        - name: rbdvol
          mountPath: /mnt/rbdvol
          readOnly: false
      volumes:
      - name: rbdvol
        persistentVolumeClaim:
          claimName: <PVC_NAME>
    EOF
  9. Verify that the pod exists and check its status.

    kubectl get pod
  10. Once the pod is running, check the volume:

    kubectl exec -it <POD_NAME> -- df -k
    ...
    /dev/rbd3               999320      1284    929224   0% /mnt/rbdvol
    ...

In case you need to delete the pod, run the following command:

kubectl delete pod <POD_NAME>
Note: Deleting A Pod

When you delete the pod, the persistent volume claim is deleted as well. The RBD is not deleted.
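
With dynamic provisioning, the persistent volume itself is created automatically when the claim is bound, and its name is generated by Kubernetes. The following optional checks use the placeholder names from this procedure; the ceph command runs on the SES cluster:

kubectl get storageclass             # <SC_NAME> should be listed and marked as the default class
kubectl get pvc <PVC_NAME>           # should report the status Bound
kubectl get pv                       # shows the automatically provisioned volume backing the claim
ceph auth get client.user            # on the SES cluster: shows the capabilities of client.user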

9.1.2.4 Using CephFS in a Pod

The procedure below describes the steps to take when you need to use CephFS in a pod.

Procedure: Using CephFS In A Pod
  1. Retrieve the Ceph admin secret. You can get the key value using the following command:

    ceph auth get-key client.admin

    or directly from /etc/ceph/ceph.client.admin.keyring.

  2. Apply the configuration that includes the Ceph secret by running kubectl apply. Replace <CEPH_SECRET> with your own Ceph secret and run the following:

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    type: "kubernetes.io/rbd"
    data:
      key: "$(echo -n <CEPH_SECRET> | base64)"
    EOF
  3. Create a pod that uses the CephFS filesystem by executing the following command. This example shows the minimal configuration for a CephFS volume. Fill in the IP addresses and ports of your monitor nodes. The default port number is 6789.

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: <POD_NAME>
    spec:
      containers:
      - name: <CONTAINER_NAME>
        image: <IMAGE_NAME>
        volumeMounts:
        - mountPath: /mnt/cephfsvol
          name: ceph-vol
      volumes:
      - name: ceph-vol
        cephfs:
          monitors:
          - '<MONITOR1_IP:MONITOR1_PORT>'
          - '<MONITOR2_IP:MONITOR2_PORT>'
          - '<MONITOR3_IP:MONITOR3_PORT>'
          user: admin
          secretRef:
            name: ceph-secret
          readOnly: false
    EOF
  4. Verify that the pod exists and check its status:

    kubectl get pod
  5. Once the pod is running, check the mounted volume:

    kubectl exec -it <POD_NAME> -- df -k
    ...
    172.28.0.6:6789,172.28.0.14:6789,172.28.0.7:6789:/  59572224       0  59572224   0% /mnt/cephfsvol
    ...

In case you need to delete the pod, run the following command:

kubectl delete pod <POD_NAME>
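
The cephfs volume type requires an existing CephFS file system and an active metadata server on the SES cluster. If the volume does not mount, the following checks on the SES cluster can confirm that CephFS is available:

ceph fs ls        # lists the CephFS file systems and their data and metadata pools
ceph mds stat     # shows the state of the metadata servers
ceph -s           # overall cluster health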

9.1.2.5 Using CephFS with Static Persistent Volumes

The following procedure describes how to attach a CephFS static persistent volume to a pod:

  1. Retrieve the Ceph admin secret. You can get the key value using the following command:

    ceph auth get-key client.admin

    or directly from /etc/ceph/ceph.client.admin.keyring.

  2. Apply the configuration that includes the Ceph secret by running kubectl apply. Replace <CEPH_SECRET> with your own Ceph secret and run the following:

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    type: "kubernetes.io/rbd"
    data:
      key: "$(echo -n <CEPH_SECRET> | base64)"
    EOF
  3. Create the persistent volume:

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: <PV_NAME>
    spec:
      capacity:
        storage: <SIZE>
      accessModes:
        - ReadWriteOnce
      cephfs:
        monitors:
        - '<MONITOR1_IP:MONITOR1_PORT>'
        - '<MONITOR2_IP:MONITOR2_PORT>'
        - '<MONITOR3_IP:MONITOR3_PORT>'
        user: admin
        secretRef:
          name: ceph-secret
        readOnly: false
    EOF

    Replace <SIZE> with the desired size of the volume. Use the gibibyte notation, for example 2Gi.

  4. Create a persistent volume claim:

    kubectl apply -f - << EOF
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: <PVC_NAME>
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: <SIZE>
    EOF

    Replace <SIZE> with the desired size of the volume. Use the gibibyte notation, for example 2Gi.

  5. Create a pod that uses the persistent volume claim.

    kubectl apply -f - << EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: <POD_NAME>
    spec:
      containers:
      - name: <CONTAINER_NAME>
        image: <IMAGE_NAME>
        volumeMounts:
        - mountPath: /mnt/cephfsvol
          name: cephfsvol
      volumes:
      - name: cephfsvol
        persistentVolumeClaim:
          claimName: <PVC_NAME>
    
    EOF
  6. Verify that the pod exists and check its status.

    kubectl get pod
  7. Once the pod is running, check the volume by running:

    kubectl exec -it <POD_NAME> -- df -k
    ...
    172.28.0.25:6789,172.28.0.21:6789,172.28.0.6:6789:/  76107776       0  76107776   0% /mnt/cephfsvol
    ...

In case you need to delete the pod, run the following command:

kubectl delete pod <POD_NAME>
Note: Deleting A Pod

When you delete the pod, the persistent volume claim is deleted as well. The CephFS volume is not deleted.
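
By default the cephfs volume mounts the root of the CephFS tree. If only a subdirectory should be exposed to the pod, the cephfs section of the persistent volume above can additionally carry a path field. A minimal sketch of the changed stanza, assuming the directory <SUBDIRECTORY> already exists in CephFS:

  cephfs:
    monitors:
    - '<MONITOR1_IP:MONITOR1_PORT>'
    - '<MONITOR2_IP:MONITOR2_PORT>'
    - '<MONITOR3_IP:MONITOR3_PORT>'
    path: /<SUBDIRECTORY>
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false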

9.2 SUSE Cloud Application Platform Integration

For integration with SUSE Cloud Application Platform, refer to: Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.
