Applies to SUSE Cloud Application Platform 2.0.1

5 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, deployed with the default Azure Standard SKU load balancer (see https://docs.microsoft.com/en-us/azure/aks/load-balancer-standard).

In Kubernetes terminology a node used to be a minion, which was the name for a worker node. Now the correct term is simply node (see https://kubernetes.io/docs/concepts/architecture/nodes/). This can be confusing, as computing nodes have traditionally been defined as any device in a network that has an IP address. In Azure they are called agent nodes. In this chapter we call them agent nodes or Kubernetes nodes.

5.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on AKS:


The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating some of the following optional features described in this chapter and in the Administration Guide at Part III, “SUSE Cloud Application Platform Administration”.

5.2 Create Resource Group and AKS Instance

Log in to your Azure account, which should have the Contributor role.

tux > az login

You can set up an AKS cluster with an automatically generated service principal. Note that to be able to create a service principal your user account must have permissions to register an application with your Azure Active Directory tenant, and to assign the application to a role in your subscription. For details, see https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal#automatically-create-and-use-a-service-principal.

Alternatively, you can specify an existing service principal, but it must have sufficient rights to create resources at the appropriate level, for example the resource group or subscription. For more details, see the Microsoft Azure documentation on service principals for AKS.
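
Later commands in this chapter refer to the resource group, the AKS cluster, and the node pool through the environment variables RG_NAME, AKS_NAME, and NODEPOOL_NAME. As a convenience, you can export them now and, if needed, create the resource group. The example values below match the az aks create command that follows; the location is only an example and should be adjusted for your environment.

tux > export RG_NAME=my-resource-group
tux > export AKS_NAME=cap-aks
tux > export NODEPOOL_NAME=mypool

tux > az group create --name $RG_NAME --location eastus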

Specify the following additional parameters for creating the cluster: node count, a username for SSH access to the nodes, SSH key, VM type, VM disk size, and, optionally, the Kubernetes version and a node pool name.

tux > az aks create --resource-group my-resource-group --name cap-aks \
 --node-count 3 --admin-username cap-user \
 --ssh-key-value /path/to/some_key.pub --node-vm-size Standard_DS4_v2 \
 --node-osdisk-size 100 --nodepool-name mypool

For more az aks create options see https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create.

This takes a few minutes. When it has completed, fetch your kubectl credentials. The default behavior for az aks get-credentials is to merge the new credentials with the existing default configuration, and to set the new credentials as the current Kubernetes context. The context name is your AKS_NAME value. You should first back up your current configuration, or move it to a different location.
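
For example, a minimal backup of the default kubeconfig (the path shown in the output below) can be made with cp:

tux > cp ~/.kube/config ~/.kube/config.bak

Then fetch the new credentials: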

tux > az aks get-credentials --resource-group $RG_NAME --name $AKS_NAME
 Merged "cap-aks" as current context in /home/tux/.kube/config

Verify that you can connect to your cluster:

tux > kubectl get nodes

When all nodes are in a ready state and all pods are running, proceed to the next steps.
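
To check that all pods are running, list the pods in all namespaces with a standard kubectl query:

tux > kubectl get pods --all-namespaces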

5.3 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running

tux > sudo zypper install helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.
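
After installation, verify that the installed client is Helm 3:

tux > helm version --short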

5.4 Storage Class

In SUSE Cloud Application Platform, some instance groups, such as bits, database, diego-cell, and singleton-blobstore, require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/.

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/.
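
For example, the commands below (taken from the referenced Kubernetes documentation) list the available storage classes and mark one of them as the default. The class name default used here is only an example; substitute the name of the storage class in your cluster.

tux > kubectl get storageclass

tux > kubectl patch storageclass default --patch \
 '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'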

A storage class can be chosen by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file, as seen in this example. Note that if there is no storage class designated as the default, this value must be set.

kube:
  storage_class: my-storage-class

5.5 Deployment Configuration

The following file, kubecf-config-values.yaml, provides a complete example deployment configuration.

The format of the kubecf-config-values.yaml file has been restructured completely. Do not re-use the previous version of the file. Instead, source the default file from the appendix in Section A.1, “Complete suse/kubecf values.yaml File”.

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

5.6 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component.

5.6.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.
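
These characteristics can be checked with openssl. This sketch assumes the certificate is stored in a file named mycert.pem:

tux > openssl x509 -in mycert.pem -noout -issuer -subject

tux > openssl x509 -in mycert.pem -noout -text | grep -A 1 'Subject Alternative Name'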

5.6.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.


Note the use of the "|" character, which indicates the use of a literal scalar. See http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        -----END RSA PRIVATE KEY-----

5.7 Using an Ingress Controller

By default, a SUSE Cloud Application Platform cluster is exposed through its Kubernetes services. This section describes how to use an ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controller alternatives may work, but compatibility with Cloud Application Platform is not supported.

5.7.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file with the content below. The file is called nginx-ingress.yaml in this example.

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map it to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            -----END RSA PRIVATE KEY-----

5.8 Affinity and Anti-affinity

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration will have the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

Note that to ensure an optimal spread of the pods across worker nodes, we recommend running 5 or more worker nodes to satisfy both of the default anti-affinity constraints. An operator can also specify custom affinity rules via the sizing.INSTANCE_GROUP.affinity Helm parameter; any affinity rules specified there will replace the default rule rather than merge with it.
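
After deployment, the effect of these rules can be observed by listing the KubeCF pods together with the nodes they were scheduled to:

tux > kubectl get pods --namespace kubecf --output wide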

5.8.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
  asactors:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - 0
  asapi:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
              - 0

Example 2, pod anti-affinity.

Using this configuration, the Kubernetes scheduler would prefer not to place a pod of the configured instance groups onto a node (topologyKey kubernetes.io/hostname) that is already running a pod labeled quarks.cloudfoundry.org/quarks-statefulset-name: sample_group.

sizing:
  asactors:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  asapi:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.
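
For example, a custom label can be attached to a node with kubectl; the node name and the label key and value below are placeholders:

tux > kubectl label nodes NODE_NAME disktype=ssd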

5.9 High Availability

5.9.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm chart:

tux > helm show values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to the default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. Each instance group to be scaled is listed under sizing: with an explicit instances: count. The instance groups shown here are only a subset; the example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  database:
    instances: 2
  diego-cell:
    instances: 2
  router:
    instances: 2
  scheduler:
    instances: 2
  tcp-router:
    instances: 2
  uaa:
    instances: 2

5.10 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

5.10.1 Configuration

Currently, SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore. In order to configure Amazon S3 as an external blobstore, set the following in your kubecf-config-values.yaml file and replace the example values.

features:
  blobstore:
    provider: s3
    s3:
      aws_region: "us-east-1"
      blobstore_access_key_id: AWS-ACCESS-KEY-ID
      blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
      # User provided value for the blobstore admin password.
      blobstore_admin_users_password: PASSWORD
      # The following values are used as S3 bucket names. The buckets are automatically created if not present.
      app_package_directory_key: APP-BUCKET-NAME
      buildpack_directory_key: BUILDPACK-BUCKET-NAME
      droplet_directory_key: DROPLET-BUCKET-NAME
      resource_directory_key: RESOURCE-BUCKET-NAME
Warning: us-east-1 as Only Valid Region

Currently, there is a limitation where only us-east-1 can be chosen as the aws_region. For more information about this issue, see https://github.com/cloudfoundry-incubator/kubecf/issues/656.

Ensure the supplied AWS credentials have appropriate permissions as described at https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.

5.11 External Database

By default, SUSE Cloud Application Platform includes a single-availability database provided by the Percona XtraDB Cluster (PXC). SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

5.11.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the configuration snippet below need to exist before installing KubeCF. One way of doing that is to manually run CREATE DATABASE IF NOT EXISTS database-name for each database, as sketched below.
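
For example, using the mysql command line client and the database names from the configuration snippet below, the databases could be created as follows. The host, port, and credentials are placeholders; adjust them to match your external database.

tux > mysql --host hostname --port 3306 --user root --password --execute \
 'CREATE DATABASE IF NOT EXISTS uaa;
  CREATE DATABASE IF NOT EXISTS cloud_controller;
  CREATE DATABASE IF NOT EXISTS diego;
  CREATE DATABASE IF NOT EXISTS `routing-api`;
  CREATE DATABASE IF NOT EXISTS network_policy;
  CREATE DATABASE IF NOT EXISTS network_connectivity;
  CREATE DATABASE IF NOT EXISTS locket;
  CREATE DATABASE IF NOT EXISTS credhub;'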

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      diego:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket:
        name: locket
        password: root
        username: root
      credhub:
        name: credhub
        password: root
        username: root

5.12 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...

5.13 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform with an Azure Standard SKU load balancer.

Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

5.13.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.operator.watchNamespace indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator. In this example, this means KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

5.13.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    tux > kubectl get service --namespace kubecf tcp-router-public
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS A records for the public services.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme

5.14 Configuring and Testing the Native Microsoft AKS Service Broker

Microsoft Azure Kubernetes Service provides a service broker called the Open Service Broker for Azure (see https://github.com/Azure/open-service-broker-azure). This section describes how to use it with your SUSE Cloud Application Platform deployment.

Usage of the broker requires a cluster running Kubernetes 1.15 or earlier.
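
The Kubernetes version of your AKS cluster can be checked with the az CLI, using the environment variables from Section 5.2, “Create Resource Group and AKS Instance”:

tux > az aks show --resource-group $RG_NAME --name $AKS_NAME \
 --query kubernetesVersion --output tsv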

Start by extracting and setting a batch of environment variables:

tux > SBRG_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)-service-broker

tux > REGION=eastus

tux > export SUBSCRIPTION_ID=$(az account show | jq -r '.id')

tux > az group create --name ${SBRG_NAME} --location ${REGION}

tux > SERVICE_PRINCIPAL_INFO=$(az ad sp create-for-rbac --name ${SBRG_NAME})

tux > TENANT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.tenant')

tux > CLIENT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.appId')

tux > CLIENT_SECRET=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.password')

tux > echo SBRG_NAME=${SBRG_NAME}

tux > echo REGION=${REGION}


Add and install the catalog Helm chart. The CPU and memory requests and limits must be increased, otherwise the installation fails due to an OOMKilled state. This example increases them to double the defaults:

tux > helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

tux > helm repo update

tux > kubectl create namespace catalog

tux > helm install catalog svc-cat/catalog \
 --namespace catalog \
 --set controllerManager.healthcheck.enabled=false \
 --set apiserver.healthcheck.enabled=false \
 --set controllerManager.resources.requests.cpu=200m \
 --set controllerManager.resources.requests.memory=40Mi \
 --set controllerManager.resources.limits.cpu=200m \
 --set controllerManager.resources.limits.memory=40Mi

tux > kubectl get apiservice

tux > helm repo add azure https://kubernetescharts.blob.core.windows.net/azure

tux > helm repo update

Set up the service broker with your variables:

tux > kubectl create namespace osba

tux > helm install osba azure/open-service-broker-azure \
--namespace osba \
--set azure.subscriptionId=${SUBSCRIPTION_ID} \
--set azure.tenantId=${TENANT_ID} \
--set azure.clientId=${CLIENT_ID} \
--set azure.clientSecret=${CLIENT_SECRET} \
--set azure.defaultLocation=${REGION} \
--set redis.persistence.storageClass=default \
--set basicAuth.username=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set basicAuth.password=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set tls.enabled=false

Monitor the progress:

tux > watch --color 'kubectl get pods --namespace osba'

When all pods are running, create the service broker in KubeCF using the cf CLI:

tux > cf login

tux > cf create-service-broker azure $(kubectl get deployment osba-open-service-broker-azure \
--namespace osba --output jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "BASIC_AUTH_USERNAME")].value}') $(kubectl get secret --namespace osba osba-open-service-broker-azure --output jsonpath='{.data.basic-auth-password}' | base64 --decode) http://osba-open-service-broker-azure.osba

List the available service plans. For more information about the supported services, see https://github.com/Azure/open-service-broker-azure#supported-services:

tux > cf service-access -b azure

Use cf enable-service-access to enable access to a service plan. This example enables all basic plans:

tux > cf service-access -b azure | \
awk '($2 ~ /basic/) { system("cf enable-service-access " $1 " -p " $2)}'

Test your new service broker with an example PHP application. First create an organization and space to deploy your test application to:

tux > cf create-org testorg

tux > cf create-space kubecftest -o testorg

tux > cf target -o "testorg" -s "kubecftest"

tux > cf create-service azure-mysql-5-7 basic question2answer-db \
-c "{ \"location\": \"${REGION}\", \"resourceGroup\": \"${SBRG_NAME}\", \"firewallRules\": [{\"name\": \
\"AllowAll\", \"startIPAddress\":\"\",\"endIPAddress\":\"\"}]}"

tux > cf service question2answer-db | grep status

Find your new service and optionally disable TLS. You should not disable TLS on a production deployment, but it simplifies testing. If TLS remains enabled, the mysql2 gem must be configured to use TLS; see brianmario/mysql2 SSL options on GitHub:

tux > az mysql server list --resource-group $SBRG_NAME

tux > az mysql server update --resource-group $SBRG_NAME \
--name kubecftest --ssl-enforcement Disabled

Look in your Azure portal to find your database --name.

Build and push the example PHP application:

tux > git clone https://github.com/scf-samples/question2answer

tux > cd question2answer

tux > cf push

tux > cf service question2answer-db # => bound apps

When the application has finished deploying, use your browser and navigate to the URL specified in the routes field displayed at the end of the staging logs. For example, the application route could be question2answer.example.com.

Press the button to prepare the database. When the database is ready, further verify by creating an initial user and posting some test questions.

5.15 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

5.15.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.

5.15.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: uaa' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
  4. Verify the LDAP identity provider has been created in the kubecf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: uaa"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    Email> admin
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    Email> username
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:uaa"

5.16 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 5, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS) and have a running Cloud Application Platform deployment on Microsoft AKS. The instructions below will use environment variables defined in Section 5.2, “Create Resource Group and AKS Instance”.

  1. Get the current number of Kubernetes nodes in the cluster.

    tux > export OLD_NODE_COUNT=$(kubectl get nodes --output json | jq '.items | length')
  2. Set the number of Kubernetes nodes the cluster will be expanded to. Replace the example value with the number of nodes required for your workload.

    tux > export NEW_NODE_COUNT=5
  3. Increase the Kubernetes node count in the cluster.

    tux > az aks scale --resource-group $RG_NAME --name $AKS_NAME \
    --node-count $NEW_NODE_COUNT \
    --nodepool-name $NODEPOOL_NAME
  4. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  5. Add or update the following in your kubecf-config-values.yaml file to increase the number of diego-cell in your Cloud Application Platform deployment. Replace the example value with the number required by your workflow.

    sizing:
      diego-cell:
        instances: 5
  6. Perform a helm upgrade to apply the change.

    tux > helm upgrade kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  7. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace kubecf'