Applies to SUSE Cloud Application Platform 2.1.1

4 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

Important

Before you start deploying SUSE Cloud Application Platform, review the following documents:

SUSE Cloud Application Platform supports deployment on SUSE CaaS Platform. SUSE CaaS Platform is an enterprise-class container management solution that enables IT and DevOps professionals to more easily deploy, manage, and scale container-based applications and services. It includes Kubernetes to automate lifecycle management of modern applications, and surrounding technologies that enrich Kubernetes and make the platform itself easy to operate. As a result, enterprises that use SUSE CaaS Platform can reduce application delivery cycle times and improve business agility. This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on SUSE CaaS Platform. See https://documentation.suse.com/suse-caasp/4.5/ for more information on SUSE CaaS Platform.

4.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on SUSE CaaS Platform:

4.2 Creating a SUSE CaaS Platform Cluster

When creating a SUSE CaaS Platform cluster, take note of the following general guidelines to ensure there are sufficient resources available to run a SUSE Cloud Application Platform deployment:

  • Minimum 2.3 GHz processor

  • 2 vCPU per physical core

  • 4 GB RAM per vCPU

  • Worker nodes need a minimum of 4 vCPU and 16 GB RAM

As a minimum, a SUSE Cloud Application Platform deployment with a basic workload will require:

  • 1 master node

    • vCPU: 2

    • RAM: 8 GB

    • Storage: 60 GB (SSD)

  • 2 worker nodes. Each node configured with:

    • (v)CPU: 4

    • RAM: 16 GB

    • Storage: 100 GB

  • Persistent storage: 40 GB

For steps to deploy a SUSE CaaS Platform cluster, refer to the SUSE CaaS Platform Deployment Guide at https://documentation.suse.com/suse-caasp/4.5/single-html/caasp-deployment/

Before proceeding with the deployment, take note of the following to ensure the SUSE CaaS Platform cluster is suitable for a deployment of SUSE Cloud Application Platform:

  • Additional changes need to be applied to increase the maximum number of processes allowed in a container. If the maximum is too low, applications will fail to start on SUSE Cloud Application Platform clusters with multiple applications deployed.

    Operators should be aware there are potential security concerns when raising the PIDs limit/maximum (fork bombs for example). As a best practice, these should be kept as low as possible. The example values are for guidance purposes only. Operators are encouraged to identify the typical PIDs usage for their workloads and adjust the modifications accordingly. If problems persist, these can be raised to a maximum of 32768 provided SUSE Cloud Application Platform is the only workload on the SUSE CaaS Platform cluster.

    For SUSE CaaS Platform 4.5 clusters, apply the following changes directly to each node in the cluster.

    • Prior to rebooting/bootstrapping, modify /etc/crio/crio.conf.d/00-default.conf to increase the PIDs limit:

      tux > sudo sed -i -e 's|pids_limit = 1024|pids_limit = 3072|g' /etc/crio/crio.conf.d/00-default.conf

    For SUSE CaaS Platform 4.2 clusters, apply the following changes directly to each node in the cluster.

    • Prior to rebooting/bootstrapping, modify /etc/crio/crio.conf to increase the PIDs limit:

      tux > sudo sed -i -e 's|pids_limit = 1024|pids_limit = 3072|g' /etc/crio/crio.conf
    • After rebooting/bootstrapping, modify /sys/fs/cgroup/pids/kubepods/pids.max to increase the PIDs maximum:

      tux > sudo bash -c "echo '3072' > /sys/fs/cgroup/pids/kubepods/pids.max"

    Note that these modifications are not persistent and will need to be reapplied in the event of a SUSE CaaS Platform node restart or update.
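    To check whether the settings are still in effect, for example after a node restart or update, inspect both values directly on the node. The configuration path shown assumes a SUSE CaaS Platform 4.5 cluster; on 4.2 clusters, check /etc/crio/crio.conf instead.

      tux > sudo grep pids_limit /etc/crio/crio.conf.d/00-default.conf

      tux > cat /sys/fs/cgroup/pids/kubepods/pids.max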

  • At the cluster initialization step, do not use the --strict-capability-defaults option when running

    tux > skuba cluster init

    This ensures the presence of extra CRI-O capabilities compatible with Docker containers. For more details, refer to https://documentation.suse.com/suse-caasp/4.5/single-html/caasp-deployment/#_transitioning_from_docker_to_cri_o

4.3 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

Warning

Make sure that you are installing and using Helm 3 and not Helm 2.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running

tux > sudo zypper install helm3
tux > sudo update-alternatives --set helm /usr/bin/helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.
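Regardless of the installation method, you can confirm that Helm 3 is the active client by checking the reported version. The output should begin with v3.

tux > helm version --short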

4.4 Storage Class

Some SUSE Cloud Application Platform instance groups, such as bits, database, and singleton-blobstore, require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/. Examples of provisioners include:

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/ for instructions.
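As a brief sketch of what those instructions involve, an existing storage class (here the placeholder name my-storage-class) can be marked as the default by annotating it:

tux > kubectl patch storageclass my-storage-class \
--patch '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'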

In some cases, the default and predefined storage classes may not be suitable for certain workloads. If this is the case, operators can define their own custom StorageClass resource according to the specification at https://kubernetes.io/docs/concepts/storage/storage-classes/#the-storageclass-resource.
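The following is a minimal illustration of such a StorageClass definition, saved as my-storage-class.yaml to match the command below. The provisioner and its parameters are placeholders and depend entirely on your storage backend.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
# Replace with the provisioner appropriate for your storage backend.
provisioner: example.com/my-provisioner
reclaimPolicy: Delete
allowVolumeExpansion: true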

With the storage class defined, run:

tux > kubectl create --filename my-storage-class.yaml

Then verify the storage class is available by running

tux > kubectl get storageclass

If operators do not want to use the default storage class or one does not exist, a storage class must be specified by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file to the name of the storage class as seen in this example.

kube:
  storage_class: my-storage-class

4.5 Deployment Configuration

SUSE Cloud Application Platform is configured using Helm values (see https://helm.sh/docs/chart_template_guide/values_files/). Helm values can be set as either command line parameters or using a values.yaml file. The following values.yaml file, called kubecf-config-values.yaml in this guide, provides an example of a SUSE Cloud Application Platform configuration.

Warning: kubecf-config-values.yaml changes

The format of the kubecf-config-values.yaml file has been restructured completely in Cloud Application Platform 2.x. Do not re-use the Cloud Application Platform 1.x version of the file. Instead, see the default file in the appendix in Section A.1, “Complete suse/kubecf values.yaml File” and pick parameters according to your needs.

Ensure system_domain maps to the load balancer configured for your SUSE CaaS Platform cluster (see https://documentation.suse.com/suse-caasp/4.5/single-html/caasp-deployment/#loadbalancer).

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects system_domain to be either a subdomain or a root domain. Setting system_domain to a top-level domain, such as suse, is not supported.

### Example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

### This block is required due to the log-cache issue described below
properties:
  log-cache:
    log-cache:
      memory_limit_percent: 3

### This block is required due to the log-cache issue described below
###
### The value for key may need to be replaced depending on
### how nodes in your cluster are labeled
###
### The value(s) listed under values may need to be
### replaced depending on how nodes in your cluster are labeled
operations:
  inline:
  - type: replace
    path: /instance_groups/name=log-cache/env?/bosh/agent/settings/affinity
    value:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - LABEL_VALUE_OF_NODE

4.5.1 Log-cache Memory Allocation

The log-cache component currently has a memory allocation issue: it sees the total memory available on the node rather than the memory assigned to its container by cgroups, and allocates memory based on that value. This can cause a range of issues, including OOMKills and performance degradation. To address the issue, use node affinity to tie log-cache to nodes of a uniform size, and declare the cache percentage based on that size. A limit of 3% has been identified as sufficient.

In the node affinity configuration, the values for key and values may need to be changed depending on how the nodes in your cluster are labeled. For more information on labels, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.
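To see how the nodes in your cluster are labeled, and which value to substitute for LABEL_VALUE_OF_NODE in the example configuration above, list the node labels:

tux > kubectl get nodes --show-labels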

4.5.2 Diego Cell Affinities and Tainted Nodes

Note that the diego-cell pods used by the Diego standard scheduler

  • are privileged,

  • use large local emptyDir volumes (that is, they require node disk storage),

  • and set kernel parameters on the node.

For these reasons, diego-cell pods should not run next to other Kubernetes workloads. Where possible, place them on their own dedicated nodes.

This can be done by setting affinities and tolerations, as explained in the associated tutorial at https://kubecf.io/docs/deployment/affinities-and-tolerations/.
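As a sketch of the node preparation described in that tutorial, a node can be reserved for diego-cell pods by labeling it and applying a matching taint. The label and taint keys below are examples only and must correspond to the affinity and toleration settings you configure.

tux > kubectl label nodes NODE_NAME capworkload=diego-cell

tux > kubectl taint nodes NODE_NAME capworkload=diego-cell:NoSchedule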

4.6 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component. Providing certificates for the router traffic is optional. In a default deployment, without operator-provided certificates, generated certificates will be used.

4.6.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.
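To verify that a certificate meets these requirements, in particular that its SAN entries include the wildcard for your system_domain, inspect it with openssl. The file name router-cert.pem is a placeholder.

tux > openssl x509 -in router-cert.pem -noout -text | grep -A 1 "Subject Alternative Name"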

4.6.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----

4.7 Using an Ingress Controller

This section describes how to use an ingress controller (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster. Using an ingress controller is optional. In a default deployment, load balancers are used instead.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controllers may work, but their compatibility with Cloud Application Platform is not supported.

4.7.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file with the section below. The file is called nginx-ingress.yaml in this example. When using Eirini instead of Diego, replace the first port mapping, 2222: "kubecf/scheduler:2222", with 2222: "kubecf/eirinix-ssh-proxy:2222".

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map it to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM-formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml.

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
            [...]
            xC8x/+zB7XlvcRJRio6kk670+25ABP==
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
            [...]
            to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
            -----END RSA PRIVATE KEY-----

4.8 Affinity and Anti-affinity

Important

This feature requires SUSE Cloud Application Platform 2.0.1 or newer.

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration has the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

Note that to ensure an optimal spread of the pods across worker nodes, we recommend running five or more worker nodes so that both of the default anti-affinity constraints can be satisfied. An operator can also specify custom affinity rules via the sizing.INSTANCE_GROUP.affinity Helm parameter; any affinity rules specified there overwrite the default rule rather than merging with it.

4.8.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
   asactors:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0
   asapi:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0

Example 2, pod anti-affinity.

sizing:
  api:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  database:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.

4.9 High Availability

4.9.1 Configuring Cloud Application Platform for High Availability

High availability mode is optional. In a default deployment, SUSE Cloud Application Platform is deployed in single availability mode.

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

4.9.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml file for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm show to read the sizing: section in the Helm chart:

tux > helm show values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

4.9.1.2 Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

4.9.1.3 Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to the default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  adapter:
    instances: 2
  api:
    instances: 2
  asactors:
    instances: 2
  asapi:
    instances: 2
  asmetrics:
    instances: 2
  asnozzle:
    instances: 2
  auctioneer:
    instances: 2
  bits:
    instances: 2
  cc_worker:
    instances: 2
  credhub:
    instances: 2
  database:
    instances: 1
  diego_api:
    instances: 2
  diego_cell:
    instances: 2
  doppler:
    instances: 2
  eirini:
    instances: 3
  log_api:
    instances: 2
  nats:
    instances: 2
  router:
    instances: 2
  routing_api:
    instances: 2
  scheduler:
    instances: 2
  uaa:
    instances: 2
  tcp_router:
    instances: 2

4.10 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment. Using an external blobstore is optional. In a default deployment, an internal blobstore is used.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

4.10.1 Configuration

Currently SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore.

  1. Using the Amazon S3 service, create four buckets: one each for app packages, buildpacks, droplets, and resources. For instructions on how to create Amazon S3 buckets, see https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html. A scripted alternative using the AWS CLI is sketched after this procedure.

  2. To grant proper access to the created buckets, configure an additional IAM role as described in the first step of https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.

  3. Set the following in your kubecf-config-values.yaml file and replace the example values.

    features:
      blobstore:
        provider: s3
        s3:
          aws_region: "us-east-1"
          blobstore_access_key_id:  AWS-ACCESS-KEY-ID
          blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
          # User provided value for the blobstore admin password.
          blobstore_admin_users_password: PASSWORD
          # The following values are used as S3 bucket names. The buckets are automatically created if not present.
          app_package_directory_key: APP-BUCKET-NAME
          buildpack_directory_key: BUILDPACK-BUCKET-NAME
          droplet_directory_key: DROPLET-BUCKET-NAME
          resource_directory_key: RESOURCE-BUCKET-NAME
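If you prefer to script the bucket creation from the first step, the following AWS CLI sketch creates the four buckets. The bucket names and region are placeholders and must match the values used in kubecf-config-values.yaml; for regions other than us-east-1, aws s3api create-bucket additionally requires --create-bucket-configuration LocationConstraint=REGION.

tux > for bucket in APP-BUCKET-NAME BUILDPACK-BUCKET-NAME DROPLET-BUCKET-NAME RESOURCE-BUCKET-NAME; do
aws s3api create-bucket --bucket "$bucket" --region us-east-1
done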

4.11 External Database

SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server. In a default deployment, an internal single availability database is used.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

4.11.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the config snippet below need to exist before installing KubeCF. One way of doing that is manually running CREATE DATABASE IF NOT EXISTS database-name for each database.
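As a minimal sketch, assuming the mysql client on your workstation can reach the external database host and that you keep the database names from the configuration snippet below, the databases could be created as follows:

tux > for db in uaa cloud_controller diego routing-api network_policy \
network_connectivity locket credhub; do
mysql --host hostname --user root --password \
--execute "CREATE DATABASE IF NOT EXISTS \`${db}\`;"
done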

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      bbs:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket: 
        name: locket
        password: root
        username: root
      credhub:        
        name: credhub
        password: root
        username: root

4.12 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                7.2.1+0.gaeb6ef3    2.1.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.4.1                2.1.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.7.13                2.1.1          A Helm chart for KubeCF
suse/metrics                    1.3.0                2.1.1          A Helm chart for Stratos Metrics
suse/minibroker                 1.2.0                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

4.13 Deploying SUSE Cloud Application Platform

Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.4, “Releases and Associated Versions”.

4.13.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.singleNamespace.name indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator itself. In this example, KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.singleNamespace.name=kubecf" \
    --version 7.2.1+0.gaeb6ef3
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

4.13.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.7.13
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    
    tux > kubectl get service --namespace kubecf tcp-router-public
    
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS A records for the public services.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment. See Section 3.2, “Status of Pods during Deployment” for more information.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme

4.14 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. Integrating SUSE Cloud Application Platform with other identity providers is optional. In a default deployment, a built-in UAA server (https://docs.cloudfoundry.org/uaa/uaa-overview.html) is used to manage user accounts and authentication.

The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

4.14.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.

4.14.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com
  2. Authenticate to the uaa server as admin using the uaa_admin_client_secret set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret PASSWORD
  3. List the current identity providers.

    tux > uaac curl /identity-providers --insecure
  4. From the output, locate the default ldap entry and take note of its id. The entry will be similar to the following.

    {
      "type": "ldap",
      "config": "{\"emailDomain\":null,\"additionalConfiguration\":null,\"providerDescription\":null,\"externalGroupsWhitelist\":[],\"attributeMappings\":{},\"addShadowUserOnLogin\":true,\"storeCustomAttributes\":true,\"ldapProfileFile\":\"ldap/ldap-search-and-bind.xml\",\"baseUrl\":\"ldap://localhost:389/\",\"referral\":null,\"skipSSLVerification\":false,\"userDNPattern\":null,\"userDNPatternDelimiter\":null,\"bindUserDn\":\"cn=admin,dc=test,dc=com\",\"userSearchBase\":\"dc=test,dc=com\",\"userSearchFilter\":\"cn={0}\",\"passwordAttributeName\":null,\"passwordEncoder\":null,\"localPasswordCompare\":null,\"mailAttributeName\":\"mail\",\"mailSubstitute\":null,\"mailSubstituteOverridesLdap\":false,\"ldapGroupFile\":null,\"groupSearchBase\":null,\"groupSearchFilter\":null,\"groupsIgnorePartialResults\":null,\"autoAddGroups\":true,\"groupSearchSubTree\":true,\"maxGroupSearchDepth\":10,\"groupRoleAttribute\":null,\"tlsConfiguration\":\"none\"}",
      "id": "53gc6671-2996-407k-b085-2346e216a1p0",
      "originKey": "ldap",
      "name": "UAA LDAP Provider",
      "version": 3,
      "created": 946684800000,
      "last_modified": 1602208214000,
      "active": false,
      "identityZoneId": "uaa"
    },
  5. Delete the default ldap identity provider. If the default entry is not removed, adding another identity provider of type ldap will result in a 409 Conflict response. Replace the example id with one found in the previous step.

    tux > uaac curl /identity-providers/53gc6671-2996-407k-b085-2346e216a1p0 \
        --request DELETE \
        --insecure
  6. Create your own LDAP identity provider. A 201 Created response will be returned when the identity provider is successfully created. See the UAA API Reference and the Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  7. Verify the LDAP identity provider has been created. The output should now contain an entry for the ldap type you created.

    tux > uaac curl /identity-providers --insecure
  8. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  9. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  10. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  11. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  12. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

4.15 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 4, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform and have a running Cloud Application Platform deployment on SUSE® CaaS Platform.

  1. Add additional nodes to your SUSE® CaaS Platform cluster as described in https://documentation.suse.com/suse-caasp/4.5/html/caasp-admin/#adding_nodes.

  2. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  3. Add or update the following in your kubecf-config-values.yaml file to increase the number of diego-cell instances in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        instances: 5
  4. Perform a helm upgrade to apply the change.

    tux > helm upgrade kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.7.13
  5. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace kubecf'