Applies to SUSE CaaS Platform 4.5.2

10 Integration

Note

Integration with external systems might require you to install additional packages on the base OS. Please refer to Section 3.1, “Software Installation”.

10.1 SUSE Enterprise Storage Integration

SUSE CaaS Platform offers SUSE Enterprise Storage as a storage solution for its containers. This chapter describes the steps required for successful integration.

10.1.1 Prerequisites

Before you start with integrating SUSE Enterprise Storage, you need to ensure the following:

  • The SUSE CaaS Platform cluster must have ceph-common and xfsprogs installed on all nodes. You can check this by running rpm -q ceph-common and rpm -q xfsprogs.

  • The SUSE CaaS Platform cluster must be able to communicate with all of the following SUSE Enterprise Storage nodes: the master, the monitor nodes, the OSD nodes, and the metadata server (if you need a shared file system). For more details refer to the SUSE Enterprise Storage documentation: https://documentation.suse.com/ses/6/.
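
If ceph-common or xfsprogs is missing on a node, both packages can usually be installed from the SUSE repositories; refer to Section 3.1, “Software Installation” for the supported procedure. A minimal sketch, assuming the node has access to the required repositories:

    # install the Ceph client tools and the XFS utilities
    sudo zypper install ceph-common xfsprogs
    # verify that both packages are present
    rpm -q ceph-common xfsprogs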

10.1.2 Procedures According to Type of Integration

The steps will differ in small details depending on whether you are using RBD or CephFS.

10.1.2.1 Using RBD in Pods

RBD, also known as the Ceph Block Device or RADOS Block Device, facilitates the storage of block-based data in the Ceph distributed storage system. The procedure below describes steps to take when you need to use a RADOS Block Device in a Kubernetes Pod.

  1. Create a Ceph Pool:

    ceph osd pool create myPool 64 64
  2. Initialize the Block Device Pool:

    rbd pool init myPool
  3. Create a Block Device Image:

    rbd create -s 2G myPool/image
  4. Create a Block Device User, and record the key:

    ceph auth get-or-create-key client.myPoolUser mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=myPool" | tr -d '\n' | base64
  5. Create the Secret containing client.myPoolUser key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-user
      namespace: default
    type: kubernetes.io/rbd
    data:
      key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1Jreplace== 1

    1

    The block device user key from the Ceph cluster.
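
    As an alternative to writing the manifest by hand, the Secret can also be created directly with kubectl, which base64-encodes the key for you. A minimal sketch, assuming the command runs on a host with access to the Ceph admin keyring (the shell variable is illustrative only):

    KEY=$(ceph auth get-key client.myPoolUser)
    kubectl create secret generic ceph-user --namespace default \
      --type=kubernetes.io/rbd --from-literal=key="$KEY"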

  6. Create the Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-rbd-inline
    spec:
      containers:
      - name: ceph-rbd-inline
        image: opensuse/leap
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /mnt/ceph_rbd 1
          name: volume
      volumes:
      - name: volume
        rbd:
          monitors:
          - 10.244.2.136:6789 2
          - 10.244.3.123:6789
          - 10.244.4.7:6789
          pool: myPool 3
          image: image 4
          user: myPoolUser 5
          secretRef:
            name: ceph-user 6
          fsType: ext4
          readOnly: false

    1

    The volume mount path inside the Pod.

    2

    A list of Ceph monitor node IPs and ports. The default port is 6789.

    3

    The Ceph pool name.

    4

    The Ceph volume image.

    5

    The Ceph pool user.

    6

    The name of the Kubernetes Secret that contains the Ceph pool user key.

  7. Once the pod is running, check the volume is mounted:

    kubectl exec -it pod/ceph-rbd-inline -- df -k | grep rbd
      Filesystem     1K-blocks    Used Available Use% Mounted on
      /dev/rbd0        1998672    6144   1976144   1% /mnt/ceph_rbd
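
    To confirm that the volume is also writable, you can create a test file on it (the file name below is arbitrary):

    kubectl exec pod/ceph-rbd-inline -- sh -c 'echo test > /mnt/ceph_rbd/hello.txt'
    kubectl exec pod/ceph-rbd-inline -- cat /mnt/ceph_rbd/hello.txt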

10.1.2.2 Using RBD in Persistent Volumes

The following procedure describes how to use RBD in a Persistent Volume:

  1. Create a Ceph pool:

    ceph osd pool create myPool 64 64
  2. Initialize the Block Device Pool:

    rbd pool init myPool
  3. Create a Block Device Image:

    rbd create -s 2G myPool/image
  4. Create a Block Device User, and record the key:

    ceph auth get-or-create-key client.myPoolUser mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=myPool" | tr -d '\n' | base64
  5. Create the Secret containing client.myPoolUser key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-user
      namespace: default
    type: kubernetes.io/rbd
    data:
      key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1Jreplace== 1

    1

    The block device user key from the Ceph cluster.

  6. Create the Persistent Volume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: ceph-rbd-pv
    spec:
      capacity:
        storage: 2Gi 1
      accessModes:
        - ReadWriteOnce
      rbd:
        monitors:
        - 172.28.0.25:6789 2
        - 172.28.0.21:6789
        - 172.28.0.6:6789
        pool: myPool  3
        image: image 4
        user: myPoolUser  5
        secretRef:
          name: ceph-user 6
        fsType: ext4
        readOnly: false

    1

    The size of the volume image. Refer to Setting requests and limits for local ephemeral storage for the supported size suffixes.

    2

    A list of Ceph monitor node IPs and ports. The default port is 6789.

    3

    The Ceph pool name.

    4

    The Ceph volume image name.

    5

    The Ceph pool user.

    6

    The name of the Kubernetes Secret that contains the Ceph pool user key.
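
    After applying the manifest, the new volume should be listed with the status Available until it is claimed. A minimal sketch, assuming the manifest above was saved as ceph-rbd-pv.yaml:

    kubectl apply -f ceph-rbd-pv.yaml
    kubectl get pv ceph-rbd-pv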

  7. Create the Persistent Volume Claim:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-rbd-pv
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
      volumeName: ceph-rbd-pv
    Note

    Deleting the Persistent Volume Claim does not remove the RBD volume in the Ceph cluster.
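
    If the image is no longer needed, remove it manually on the Ceph side, for example:

    rbd rm myPool/image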

  8. Create the Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-rbd-pv
    spec:
      containers:
      - name: ceph-rbd-pv
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /mnt/ceph_rbd 1
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: ceph-rbd-pv 2

    1

    The volume mount path inside the Pod.

    2

    The Persistent Volume Claim name.

  9. Once the pod is running, check the volume is mounted:

    kubectl exec -it pod/ceph-rbd-pv -- df -k | grep rbd
      Filesystem     1K-blocks    Used Available Use% Mounted on
      /dev/rbd0        1998672    6144   1976144   1% /mnt/ceph_rbd

10.1.2.3 Using RBD in Storage Classes

The following procedure describes how to use RBD in a Storage Class:

  1. Create a Ceph pool:

    ceph osd pool create myPool 64 64
  2. Create a Block Device User to use as pool admin and record the key:

    ceph auth get-or-create-key client.myPoolAdmin mds 'allow *' mgr 'allow *' mon 'allow *' osd 'allow * pool=myPool'  | tr -d '\n' | base64
  3. Create a Block Device User to use as pool user and record the key:

    ceph auth get-or-create-key client.myPoolUser mon "allow r" osd "allow class-read object_prefix rbd_children, allow rwx pool=myPool" | tr -d '\n' | base64
  4. Create the Secret containing the block device pool admin key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-admin
    type: kubernetes.io/rbd
    data:
      key: QVFCa0ZJVmZBQUFBQUJBQUp2VzdLbnNIOU1yYll1R0p6T2Zreplace== 1

    1

    The block device pool admin key from the Ceph cluster.

  5. Create the Secret containing the block device pool user key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-user
    type: kubernetes.io/rbd
    data:
      key: QVFCa0ZJVmZBQUFBQUJBQUp2VzdLbnNIOU1yYll1R0p6T2Zreplace== 1

    1

    The block device pool user key from the Ceph cluster.

  6. Create the Storage Class:

    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: ceph-rbd-sc
      annotations:
        storageclass.beta.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 172.28.0.19:6789, 172.28.0.5:6789, 172.28.0.18:6789 1
      adminId: myPoolAdmin 2
      adminSecretName: ceph-admin 3
      adminSecretNamespace: default
      pool: myPool 4
      userId: myPoolUser 5
      userSecretName: ceph-user 6

    1

    A comma-separated list of Ceph monitor node IPs and ports. The default port is 6789.

    2

    The Ceph pool admin user name.

    3

    The name of the Kubernetes Secret that contains the Ceph pool admin key.

    4

    The Ceph pool name.

    5

    The Ceph pool user name.

    6

    The name of the Kubernetes Secret that contains the Ceph pool user key.
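
    After applying the manifest, you can verify that the Storage Class is registered and marked as the default:

    kubectl get storageclass ceph-rbd-sc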

  7. Create the Persistent Volume Claim:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-rbd-sc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi 1

    1

    The requested volume size. Refer to Setting requests and limits for local ephemeral storage for the supported size suffixes.

    Note

    Deleting the Persistent Volume Claim does not remove the RBD volume in the Ceph cluster.
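
    Because the claim uses the default Storage Class, a Persistent Volume is provisioned automatically as soon as the claim is created. You can check that the claim becomes Bound and that a matching volume appears:

    kubectl get pvc ceph-rbd-sc
    kubectl get pv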

  8. Create the Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-rbd-sc
    spec:
      containers:
      - name:  ceph-rbd-sc
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /mnt/ceph_rbd 1
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: ceph-rbd-sc 2

    1

    The volume mount path inside the Pod.

    2

    The Persistent Volume Claim name.

  9. Once the pod is running, check the volume is mounted:

    kubectl exec -it pod/ceph-rbd-sc -- df -k | grep rbd
      Filesystem     1K-blocks    Used Available Use% Mounted on
      /dev/rbd0        1998672    6144   1976144   1% /mnt/ceph_rbd

10.1.2.4 Using CephFS in Pods

The procedure below describes how to use CephFS in a Pod.

Procedure: Using CephFS In Pods
  1. Create a Ceph client user for CephFS and record the key:

    ceph auth get-or-create-key client.myCephFSUser mds 'allow *' mgr 'allow *' mon 'allow r' osd 'allow rw pool=cephfs_metadata,allow rwx pool=cephfs_data'  | tr -d '\n' | base64
    Note

    The cephfs_data and cephfs_metadata pools should already exist from the SES deployment. If they do not, you can create and initialize them with:

    ceph osd pool create cephfs_data 256 256
    ceph osd pool create cephfs_metadata 64 64
    ceph fs new cephfs cephfs_metadata cephfs_data
    Note

    Multiple Filesystems Within a Ceph Cluster is still an experimental feature and is disabled by default. Setting up more than one file system requires this feature to be enabled. See Create a Ceph File System for how to create additional file systems.

    Note

    Refer to CephFS Client Capabilities to see how to further restrict user permissions.
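
    For example, a client restricted to a single directory of the file system can be created with ceph fs authorize; the user name client.myRestrictedUser and the path /kube below are illustrative only:

    ceph fs authorize cephfs client.myRestrictedUser /kube rw
    ceph auth get-key client.myRestrictedUser | base64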

  2. Create the Secret containing the CephFS user key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-user
    data:
      key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1J4ZStreplace== 1

    1

    The CephFS user key from the Ceph cluster.

  3. Create the Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: cephfs-inline
    spec:
      containers:
      - name: cephfs-inline
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /mnt/cephfs 1
          name: volume
      volumes:
      - name: volume
        cephfs:
          monitors:
          - 172.28.0.19:6789 2
          - 172.28.0.5:6789
          - 172.28.0.18:6789
          user: myCephFSUser 3
          secretRef:
            name: ceph-user 4
          readOnly: false

    1

    The volume mount path inside the Pod.

    2

    A list of Ceph monitor node IPs and ports. The default port is 6789.

    3

    The CephFS user name.

    4

    The name of the Kubernetes Secret that contains the CephFS user key.

  4. Once the pod is running, check the volume is mounted:

    kubectl exec -it pod/cephfs-inline -- df -k | grep cephfs
      Filesystem   1K-blocks    Used Available Use% Mounted on
      172.28.0.19:6789,172.28.0.5:6789,172.28.0.18:6789:/
                    79245312       0  79245312   0% /mnt/cephfs
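
    On the Ceph side, you can additionally confirm that the file system is healthy and serving clients with:

    ceph fs status cephfs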

10.1.2.5 Using CephFS in Persistent Volumes

The following procedure describes how to attach a CephFS static persistent volume to a pod:

  1. Create a Ceph client user for CephFS and record the key:

    ceph auth get-or-create-key client.myCephFSUser mds 'allow *' mgr 'allow *' mon 'allow r' osd 'allow rw pool=cephfs_metadata,allow rwx pool=cephfs_data'  | tr -d '\n' | base64
    Note

    The cephfs_data and cephfs_metadata pools should already exist from the SES deployment. If they do not, you can create and initialize them with:

    ceph osd pool create cephfs_data 256 256
    ceph osd pool create cephfs_metadata 64 64
    ceph fs new cephfs cephfs_metadata cephfs_data
    Note

    Multiple Filesystems Within a Ceph Cluster is still an experimental feature and is disabled by default. Setting up more than one file system requires this feature to be enabled. See Create a Ceph File System for how to create additional file systems.

    Note

    Refer to CephFS Client Capabilities to see how to further restrict user permissions.

  2. Create the Secret containing the CephFS user key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-user
    data:
      key: QVFESE1rbGRBQUFBQUJBQWxnSmpZalBEeGlXYS9Qb1J4ZStreplace== 1

    1

    The CephFS user key from the Ceph cluster.

  3. Create the Persistent Volume:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cephfs-pv
    spec:
      capacity:
        storage: 2Gi 1
      accessModes:
        - ReadWriteOnce
      cephfs:
        monitors:
          - 172.28.0.19:6789 2
          - 172.28.0.5:6789
          - 172.28.0.18:6789
        user: myCephFSUser 3
        secretRef:
          name: ceph-user 4
        readOnly: false

    1

    The desired volume size. Refer to Setting requests and limits for local ephemeral storage for the supported size suffixes.

    2

    A list of Ceph monitor node IPs and ports. The default port is 6789.

    3

    The CephFS user name.

    4

    The name of the Kubernetes Secret that contains the CephFS user key.
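
    The cephfs volume source also accepts an optional path field to mount a subdirectory of the file system instead of its root. A minimal sketch of the relevant fragment, assuming a directory /volumes/kube already exists on the CephFS file system:

      cephfs:
        monitors:
          - 172.28.0.19:6789
        path: /volumes/kube
        user: myCephFSUser
        secretRef:
          name: ceph-user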

  4. Create the Persistent Volume Claim:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: cephfs-pv
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi 1

    1

    The requested volume size.

    Note

    Deleting the Persistent Volume Claim does not remove the CephFS volume in the Ceph cluster.

  5. Create the Pod:

    apiVersion: v1
    kind: Pod
    metadata:
      name: cephfs-pv
    spec:
      containers:
      - name: cephfs-pv
        image: busybox
        command: ["sleep", "infinity"]
        volumeMounts:
        - mountPath: /mnt/cephfs 1
          name: volume
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: cephfs-pv 2

    1

    The volume mount path inside the Pod.

    2

    The Persistent Volume Claim name.

  6. Once the pod is running, check that the CephFS volume is mounted:

    kubectl exec -it pod/cephfs-pv -- df -k | grep cephfs
      Filesystem   1K-blocks    Used Available Use% Mounted on
      172.28.0.19:6789,172.28.0.5:6789,172.28.0.18:6789:/
                    79245312       0  79245312   0% /mnt/cephfs

10.2 SUSE Cloud Application Platform Integration

For integration with SUSE Cloud Application Platform, refer to: Deploying SUSE Cloud Application Platform on SUSE CaaS Platform.
