Applies to SUSE Cloud Application Platform 1.5.2

9 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, with a basic Azure load balancer. Note that you will not create any DNS records until after uaa is deployed. (See Azure Kubernetes Service (AKS) for more information.)

In Kubernetes terminology a node used to be a minion, which was the name for a worker node. Now the correct term is simply node (see https://kubernetes.io/docs/concepts/architecture/nodes/). This can be confusing, as computing nodes have traditionally been defined as any device in a network that has an IP address. In Azure they are called agent nodes. In this chapter we call them agent nodes or Kubernetes nodes.

9.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on AKS:

Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating the following features:

Stratos Web Console

For details, see Chapter 6, Installing the Stratos Web Console.

High Availability

For details, see Chapter 7, SUSE Cloud Application Platform High Availability.

LDAP Integration

For details, see Chapter 8, LDAP Integration.

External Log Server Integration

For details, see Chapter 24, Logging.

Managing Certificates

For details, see Chapter 25, Managing Certificates.

Other Features

Refer to the Administration Guide at Part III, “SUSE Cloud Application Platform Administration” for additional features.

Log in to your Azure Account:

tux > az login

Your Azure user needs the User Access Administrator role. Check your assigned roles with the az command:

tux > az role assignment list --assignee login-name
[...]
"roleDefinitionName": "User Access Administrator",

If you do not have this role, then you must request it from your Azure administrator.
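
For reference, an administrator with sufficient privileges can grant the role with az role assignment create; a minimal sketch using the same login-name placeholder as above:

tux > az role assignment create --assignee login-name \
 --role "User Access Administrator"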

You need your Azure subscription ID. Extract it with az:

tux > az account show --query "{ subscription_id: id }"
{
"subscription_id": "a900cdi2-5983-0376-s7je-d4jdmsif84ca"
}

Replace the example subscription-id in the next command with your subscription-id. Then export it as an environment variable and set it as the current subscription:

tux > export SUBSCRIPTION_ID="a900cdi2-5983-0376-s7je-d4jdmsif84ca"

tux > az account set --subscription $SUBSCRIPTION_ID

Verify that the Microsoft.Network, Microsoft.Storage, Microsoft.Compute, and Microsoft.ContainerService providers are enabled:

tux > az provider list | egrep --word-regexp 'Microsoft.Network|Microsoft.Storage|Microsoft.Compute|Microsoft.ContainerService'

If any of these are missing, enable them with the az provider register --name provider command.
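
For example, to register the Microsoft.ContainerService provider (current Azure CLI releases name this parameter --namespace):

tux > az provider register --namespace Microsoft.ContainerService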

9.2 Create Resource Group and AKS Instance

Now you can create a new Azure resource group and AKS instance. Define the required variables as environment variables in a file called env.sh. This helps to speed up the setup and to reduce errors. Verify your environment variables at any time by sourcing the file and then running echo $VARNAME, for example:

tux > source ./env.sh

tux > echo $RG_NAME
cap-aks

This is especially useful when you run long compound commands to extract and set environment variables.

Tip: Use Different Names

Ensure that each of your AKS clusters uses unique resource group and managed cluster names, and do not copy the examples, especially when your Azure subscription supports multiple users. Azure has no tools for sorting resources by user, so creating unique names and putting everything for your deployment in a single resource group helps you keep track, and lets you delete the whole deployment by deleting the resource group.
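
When a deployment is no longer needed, deleting its resource group removes everything in it. A sketch of this cleanup, using the example resource group name (destructive, so double-check the name first):

tux > az group delete --name cap-aks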

In env.sh, define the environment variables below. Replace the example values with your own.

  • Set a resource group name.

    tux > export RG_NAME="cap-aks"
  • Set an AKS managed cluster name. Azure's default is to use the resource group name, then prepend it with MC and append the location, for example MC_cap-aks_cap-aks_eastus. This example uses the creator's initials for the AKS_NAME environment variable, which will be mapped to the az command's --name option. The --name option is for creating arbitrary names for your AKS resources. This example will create a managed cluster named MC_cap-aks_cjs_eastus:

    tux > export AKS_NAME=cjs
  • Set the Azure location. See Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS) for supported locations. Run az account list-locations to verify the correct way to spell your location name, for example East US is eastus in your az commands:

    tux > export REGION="eastus"
  • Set the Kubernetes agent node count. (Cloud Application Platform requires a minimum of 3.)

    tux > export NODE_COUNT="3"
  • Set the virtual machine size (see General purpose virtual machine sizes). A virtual machine size of at least Standard_DS4_v2 using premium storage (see Built in storage classes) is recommended. Note that managed-premium has been specified in the example scf-config-values.yaml used (see Section 9.9, “Deploying SUSE Cloud Application Platform”):

    tux > export NODE_VM_SIZE="Standard_DS4_v2"
  • Set the public SSH key (or the path to a public key file) to install on the cluster nodes for SSH access:

    tux > export SSH_KEY_VALUE="~/.ssh/id_rsa.pub"
  • Set a new admin username:

    tux > export ADMIN_USERNAME="scf-admin"
  • Create a unique nodepool name. The default is aks-nodepool followed by an auto-generated number, for example aks-nodepool1-39318075-2. You have the option to change nodepool1 and create your own unique identifier. For example, mypool results in aks-mypool-39318075-2. Note that uppercase characters are considered invalid in a nodepool name and should not be used.

    tux > export NODEPOOL_NAME="mypool"

Below is an example env.sh file after all the environment variables have been defined.

### example environment variable definition file
### env.sh

export RG_NAME="cap-aks"
export AKS_NAME="cjs"
export REGION="eastus"
export NODE_COUNT="3"
export NODE_VM_SIZE="Standard_DS4_v2"
export SSH_KEY_VALUE="~/.ssh/id_rsa.pub"
export ADMIN_USERNAME="scf-admin"
export NODEPOOL_NAME="mypool"

Now that your environment variables are in place, load the file:

tux > source ./env.sh

Create a new resource group:

tux > az group create --name $RG_NAME --location $REGION

List the Kubernetes versions currently supported by AKS (see https://docs.microsoft.com/en-us/azure/aks/supported-kubernetes-versions for more information on the AKS version support policy):

tux > az aks get-versions --location $REGION --output table

Create a new AKS managed cluster, and specify the Kubernetes version for consistent deployments:

tux > az aks create --resource-group $RG_NAME --name $AKS_NAME \
 --node-count $NODE_COUNT --admin-username $ADMIN_USERNAME \
 --ssh-key-value $SSH_KEY_VALUE --node-vm-size $NODE_VM_SIZE \
 --node-osdisk-size=80 --nodepool-name $NODEPOOL_NAME \
 --vm-set-type AvailabilitySet
Note

An OS disk size of at least 80 GB must be specified using the --node-osdisk-size flag.
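
The az aks create example above does not pin a Kubernetes version. To keep deployments consistent, you can add the --kubernetes-version flag with one of the versions reported by az aks get-versions; a sketch with a placeholder version value:

tux > export K8S_VERSION="x.y.z"   # placeholder; use a version listed by az aks get-versions

tux > az aks create --resource-group $RG_NAME --name $AKS_NAME \
 --node-count $NODE_COUNT --admin-username $ADMIN_USERNAME \
 --ssh-key-value $SSH_KEY_VALUE --node-vm-size $NODE_VM_SIZE \
 --node-osdisk-size=80 --nodepool-name $NODEPOOL_NAME \
 --vm-set-type AvailabilitySet \
 --kubernetes-version $K8S_VERSION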

This takes a few minutes. When it has completed, fetch your kubectl credentials. The default behavior of az aks get-credentials is to merge the new credentials with the existing default configuration and to set the new credentials as the current Kubernetes context. The context name is your AKS_NAME value. You should first back up your current configuration, or move it to a different location, then fetch the new credentials.
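
A minimal way to preserve the existing configuration, assuming the default kubeconfig location:

tux > cp ~/.kube/config ~/.kube/config.backup

Then fetch the credentials: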

tux > az aks get-credentials --resource-group $RG_NAME --name $AKS_NAME
 Merged "cap-aks" as current context in /home/tux/.kube/config

Verify that you can connect to your cluster:

tux > kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
aks-mypool-47788232-0   Ready     agent     5m        v1.11.9
aks-mypool-47788232-1   Ready     agent     6m        v1.11.9
aks-mypool-47788232-2   Ready     agent     6m        v1.11.9

tux > kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY  STATUS    RESTARTS   AGE
kube-system   azureproxy-79c5db744-fwqcx          1/1    Running   2          6m
kube-system   heapster-55f855b47-c4mf9            2/2    Running   0          5m
kube-system   kube-dns-v20-7c556f89c5-spgbf       3/3    Running   0          6m
kube-system   kube-dns-v20-7c556f89c5-z2g7b       3/3    Running   0          6m
kube-system   kube-proxy-g9zpk                    1/1    Running   0          6m
kube-system   kube-proxy-kph4v                    1/1    Running   0          6m
kube-system   kube-proxy-xfngh                    1/1    Running   0          6m
kube-system   kube-svc-redirect-2knsj             1/1    Running   0          6m
kube-system   kube-svc-redirect-5nz2p             1/1    Running   0          6m
kube-system   kube-svc-redirect-hlh22             1/1    Running   0          6m
kube-system   kubernetes-dashboard-546686-mr9hz   1/1    Running   1          6m
kube-system   tunnelfront-595565bc78-j8msn        1/1    Running   0          6m

When all nodes are in a ready state and all pods are running, proceed to the next steps.

9.3 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm, can be installed on your remote administration computer by referring to the documentation at https://v2.helm.sh/docs/using_helm/#installing-helm. Cloud Application Platform is compatible with both Helm 2 and Helm 3. Examples in this guide are based on Helm 2. To use Helm 3, refer to the Helm documentation at https://helm.sh/docs/.

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://v2.helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account and ensure your installation is appropriately secured according to your requirements as described in https://v2.helm.sh/docs/using_helm/#securing-your-helm-installation.
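
A minimal, unhardened sketch of that Tiller setup, binding Tiller to the cluster-admin role (restrict this according to the securing guide as needed):

tux > kubectl create serviceaccount tiller --namespace kube-system

tux > kubectl create clusterrolebinding tiller \
 --clusterrole=cluster-admin \
 --serviceaccount=kube-system:tiller

tux > helm init --service-account tiller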

9.4 Pod Security Policies

Role-based access control (RBAC) is enabled by default on AKS. SUSE Cloud Application Platform 1.3.1 and later do not require any manual Pod Security Policy (PSP) configuration. Older Cloud Application Platform releases require manual PSP configuration; see Section A.1, “Manual Configuration of Pod Security Policies” for instructions.

9.5 Default Storage Class

This example creates a managed-premium (see https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv) storage class for your cluster using the manifest defined in storage-class.yaml below:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
  name: persistent
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
allowVolumeExpansion: true

Then apply the new storage class configuration with this command:

tux > kubectl create --filename storage-class.yaml
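
Verify that the new storage class exists and is marked as the default:

tux > kubectl get storageclass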

Specify the newly created storage class, called persistent, as the value for kube.storage_class.persistent in your deployment configuration file, like this example:

kube:
  storage_class:
    persistent: "persistent"
    shared: "persistent"

See Section 9.7, “Deployment Configuration” for a complete example deployment configuration file, scf-config-values.yaml.

9.6 DNS Configuration

This section provides an overview of the domain and sub-domains that require A records. The process is described in more detail in the deployment section.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

Domains              Services
uaa.example.com      uaa-uaa-public
*.uaa.example.com    uaa-uaa-public
example.com          router-gorouter-public
*.example.com        router-gorouter-public
tcp.example.com      tcp-router-tcp-router-public
ssh.example.com      diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions           Kubernetes service names
User Account and Authentication (uaa)     uaa-uaa-public
Cloud Foundry (CF) TCP routing service    tcp-router-tcp-router-public
CF application SSH access                 diego-ssh-ssh-proxy-public
CF router                                 router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.

9.7 Deployment Configuration

It is not necessary to create any DNS records before deploying uaa. Instead, after uaa is running you will find the load balancer IP address that was automatically created during deployment, and then create the necessary records.

The following file, scf-config-values.yaml, provides a complete example deployment configuration.

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"

kube:
  storage_class:
    persistent: "persistent"
    shared: "persistent"

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

services:
  loadbalanced: true

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is the default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important
Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.
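
One way to generate strong values for these secrets is the same tr pattern used later in this chapter for the service broker credentials, for example:

tux > tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 32; echo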

9.8 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.20.3          1.5.2           A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    3.1.0           1.5.2           A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog          1.0.1           8.39.0          Log Agent for forwarding logs of K8s control pl...
suse/metrics                    1.1.2           1.5.2           A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.20.3          1.5.2           A Helm chart for SUSE UAA

9.9 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform with a basic AKS load balancer and how to configure your DNS records.

9.9.1 Deploy uaa

Note
Note: Embedded uaa in scf

The User Account and Authentication (uaa) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf.

It is important to note that:

  • This feature should only be used when uaa is not shared with other projects.

  • You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml.

enable:
  uaa: true

When deploying or upgrading scf, run helm install or helm upgrade and note that:

  • Installing or upgrading uaa separately, using helm install suse/uaa ... or helm upgrade, is no longer required.

  • It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa.

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going on to the next steps. You can monitor the deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

After the deployment completes, a Kubernetes service for uaa will be exposed on an Azure load balancer that is automatically set up by AKS (named kubernetes in the resource group that hosts the worker node VMs).

List the services that have been exposed on the load balancer public IP. The names of these services end in -public. For example, the uaa service is exposed on 40.85.188.67 and port 2793.

tux > kubectl get services --namespace uaa | grep public
uaa-uaa-public    LoadBalancer   10.0.67.56     40.85.188.67   2793:32034/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the uaa service, map the following domains:

    uaa.DOMAIN

    Using the example values, an A record for uaa.example.com that points to 40.85.188.67 would be created.

    *.uaa.DOMAIN

    Using the example values, an A record for *.uaa.example.com that points to 40.85.188.67 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation to learn more.
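
If you do use Azure DNS, a sketch of the equivalent uaa records, assuming a DNS zone for example.com already exists in a resource group named MY_DNS_RG (a placeholder):

tux > az network dns record-set a add-record --resource-group MY_DNS_RG \
 --zone-name example.com --record-set-name uaa --ipv4-address 40.85.188.67

tux > az network dns record-set a add-record --resource-group MY_DNS_RG \
 --zone-name example.com --record-set-name '*.uaa' --ipv4-address 40.85.188.67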

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.example.com:2793/oauth/token",
"authorization_endpoint":"https://uaa.example.com:2793
/oauth/authorize","token_endpoint":"https://uaa.example.com:2793/oauth/token"

9.9.2 Deploy scf

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, use Helm to deploy scf:

Note
Note: Setting UAA_CA_CERT

Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

If you need to set UAA_CA_CERT:

  1. Obtain your UAA secret and certificate:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml

Wait until you have a successful scf deployment before going on to the next steps. You can monitor the deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

After the deployment completes, a number of public services will be set up using a load balancer that has been configured with the corresponding load balancing rules and probes, and with the correct ports opened in the Network Security Group.

List the services that have been exposed on the load balancer public IP. The names of these services end in -public. For example, the gorouter service is exposed on 23.96.32.205:

tux > kubectl get services --namespace scf | grep public
diego-ssh-ssh-proxy-public                  LoadBalancer   10.0.44.118    40.71.187.83   2222:32412/TCP                                                                                                                                    1d
router-gorouter-public                      LoadBalancer   10.0.116.78    23.96.32.205   80:32136/TCP,443:32527/TCP,4443:31541/TCP                                                                                                         1d
tcp-router-tcp-router-public                LoadBalancer   10.0.132.203   23.96.46.98    20000:30337/TCP,20001:31530/TCP,20002:32118/TCP,20003:30750/TCP,20004:31014/TCP,20005:32678/TCP,20006:31528/TCP,20007:31325/TCP,20008:30060/TCP   1d

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the gorouter service, map the following domains:

    DOMAIN

    Using the example values, an A record for example.com that points to 23.96.32.205 would be created.

    *.DOMAIN

    Using the example values, an A record for *.example.com that points to 23.96.32.205 would be created.

  • For the diego-ssh service, map the following domain:

    ssh.DOMAIN

    Using the example values, an A record for ssh.example.com that points to 40.71.187.83 would be created.

  • For the tcp-router service, map the following domain:

    tcp.DOMAIN

    Using the example values, an A record for tcp.example.com that points to 23.96.46.98 would be created.

If you wish to use the DNS service provided by Azure, see the Azure DNS Documentation to learn more.
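
As with the uaa records, a sketch of the gorouter records in Azure DNS, using the same placeholder resource group ('@' denotes the zone apex):

tux > az network dns record-set a add-record --resource-group MY_DNS_RG \
 --zone-name example.com --record-set-name '@' --ipv4-address 23.96.32.205

tux > az network dns record-set a add-record --resource-group MY_DNS_RG \
 --zone-name example.com --record-set-name '*' --ipv4-address 23.96.32.205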

Your load balanced deployment of Cloud Application Platform is now complete. Verify you can access the API endpoint:

tux > cf api --skip-ssl-validation https://api.example.com

9.10 Configuring and Testing the Native Microsoft AKS Service Broker

Microsoft Azure Kubernetes Service provides a service broker called the Open Service Broker for Azure (see https://github.com/Azure/open-service-broker-azure). This section describes how to use it with your SUSE Cloud Application Platform deployment.

Start by extracting and setting a batch of environment variables:

tux > SBRG_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)-service-broker

tux > REGION=eastus

tux > export SUBSCRIPTION_ID=$(az account show | jq -r '.id')

tux > az group create --name ${SBRG_NAME} --location ${REGION}

tux > SERVICE_PRINCIPAL_INFO=$(az ad sp create-for-rbac --name ${SBRG_NAME})

tux > TENANT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.tenant')

tux > CLIENT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.appId')

tux > CLIENT_SECRET=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.password')

tux > echo SBRG_NAME=${SBRG_NAME}

tux > echo REGION=${REGION}

tux > echo SUBSCRIPTION_ID=${SUBSCRIPTION_ID}\; TENANT_ID=${TENANT_ID}\; CLIENT_ID=${CLIENT_ID}\; CLIENT_SECRET=${CLIENT_SECRET}

Add the necessary Helm repositories and download the charts:

tux > helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

tux > helm repo update

tux > helm install svc-cat/catalog --name catalog \
 --namespace catalog \
 --set controllerManager.healthcheck.enabled=false \
 --set apiserver.healthcheck.enabled=false

tux > kubectl get apiservice

tux > helm repo add azure https://kubernetescharts.blob.core.windows.net/azure

tux > helm repo update

Set up the service broker with your variables:

tux > helm install azure/open-service-broker-azure \
--name osba \
--namespace osba \
--set azure.subscriptionId=${SUBSCRIPTION_ID} \
--set azure.tenantId=${TENANT_ID} \
--set azure.clientId=${CLIENT_ID} \
--set azure.clientSecret=${CLIENT_SECRET} \
--set azure.defaultLocation=${REGION} \
--set redis.persistence.storageClass=default \
--set basicAuth.username=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set basicAuth.password=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set tls.enabled=false

Monitor the progress:

tux > watch --color 'kubectl get pods --namespace osba'

When all pods are running, create the service broker in SUSE Cloud Foundry using the cf CLI:

tux > cf login

tux > cf create-service-broker azure $(kubectl get deployment osba-open-service-broker-azure \
--namespace osba --output jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "BASIC_AUTH_USERNAME")].value}') $(kubectl get secret --namespace osba osba-open-service-broker-azure --output jsonpath='{.data.basic-auth-password}' | base64 --decode) http://osba-open-service-broker-azure.osba

List the available service plans. For more information about the services supported see https://github.com/Azure/open-service-broker-azure#supported-services:

tux > cf service-access -b azure

Use cf enable-service-access to enable access to a service plan. This example enables all basic plans:

tux > cf service-access -b azure | \
awk '($2 ~ /basic/) { system("cf enable-service-access " $1 " -p " $2)}'

Test your new service broker with an example PHP application. First create an organization and space to deploy your test application to:

tux > cf create-org testorg

tux > cf create-space scftest -o testorg

tux > cf target -o "testorg" -s "scftest"

tux > cf create-service azure-mysql-5-7 basic question2answer-db \
-c "{ \"location\": \"${REGION}\", \"resourceGroup\": \"${SBRG_NAME}\", \"firewallRules\": [{\"name\": \
\"AllowAll\", \"startIPAddress\":\"0.0.0.0\",\"endIPAddress\":\"255.255.255.255\"}]}"

tux > cf service question2answer-db | grep status

Find your new service and optionally disable TLS. You should not disable TLS on a production deployment, but it simplifies testing. If you keep TLS enabled, the mysql2 gem must be configured to use TLS; see brianmario/mysql2/SSL options on GitHub:

tux > az mysql server list --resource-group $SBRG_NAME

tux > az mysql server update --resource-group $SBRG_NAME \
--name scftest --ssl-enforcement Disabled

Look in your Azure portal, or in the az mysql server list output, to find the value for your database server's --name.

Build and push the example PHP application:

tux > git clone https://github.com/scf-samples/question2answer

tux > cd question2answer

tux > cf push

tux > cf service question2answer-db # => bound apps

When the application has finished deploying, use your browser and navigate to the URL specified in the routes field displayed at the end of the staging logs. For example, the application route could be question2answer.example.com.
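
If you did not note the route from the staging logs, it is also shown by cf app (assuming the application name in the manifest is question2answer):

tux > cf app question2answer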

Press the button to prepare the database. When the database is ready, further verify by creating an initial user and posting some test questions.

9.11 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes how to resize a persistent volume in your Cloud Application Platform deployment by modifying its persistent volume claim (PVC) object.

Note that PVs can only be expanded; they cannot be shrunk.

9.11.1 Prerequisites

The following are required in order to use the process below to resize a PV.

9.11.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa's mysql as an example.

  1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0.

    tux > kubectl get persistentvolume
  2. Verify whether the storage class has allowVolumeExpansion set to true.

    tux > kubectl get storageclass persistent --output json

    If it does not, run the following command to update the storage class.

    tux > kubectl patch storageclass persistent \
    --patch '{"allowVolumeExpansion": true}'
  3. Cordon all nodes in your cluster.

    1. tux > export VM_NODES=$(kubectl get nodes -o name)
    2. tux > for i in $VM_NODES
       do
         kubectl cordon `echo "${i//node\/}"`
       done
  4. Increase the storage size of the PVC object associated with the PV being expanded.

    tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
    --patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'
  5. List all pods that use the PVC, in any namespace.

    tux > kubectl get pods --all-namespaces --output=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec |  select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
  6. Restart all pods that use the PVC.

    tux > kubectl delete pod mysql-0 --namespace uaa
  7. Monitor the status.conditions field of the PVC (visible with kubectl describe persistentvolumeclaim or in the JSON output below).

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

    When the following is observed, press Ctrl+C to exit the watch command and proceed to the next step.

    • status.conditions.message is

      message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
    • status.conditions.type is

      type: FileSystemResizePending
  8. Uncordon all nodes in your cluster.

    tux > for i in $VM_NODES
     do
       kubectl uncordon `echo "${i//node\/}"`
     done
  9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'
  10. Also verify the storage size in the pod itself is updated.

    tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable

9.12 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 9, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS) and have a running Cloud Application Platform deployment on Microsoft AKS. The instructions below will use environment variables defined in Section 9.2, “Create Resource Group and AKS Instance”.

  1. Get the current number of Kubernetes nodes in the cluster.

    tux > export OLD_NODE_COUNT=$(kubectl get nodes --output json | jq '.items | length')
  2. Set the number of Kubernetes nodes the cluster will be expanded to. Replace the example value with the number of nodes required for your workload.

    tux > export NEW_NODE_COUNT=4
  3. Increase the Kubernetes node count in the cluster.

    tux > az aks scale --resource-group $RG_NAME --name $AKS_NAME \
    --node-count $NEW_NODE_COUNT \
    --nodepool-name $NODEPOOL_NAME
  4. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  5. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell instances in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        count: 4
  6. Perform a helm upgrade to apply the change.

    Note
    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --version 2.20.3
  7. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace scf'