Applies to SUSE Cloud Application Platform 1.5.2

11 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Google Kubernetes Engine (GKE). This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on GKE using its integrated network load balancers. See https://cloud.google.com/kubernetes-engine/ for more information on GKE.

11.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on GKE:

Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating the following features:

Stratos Web Console

For details, see Chapter 6, Installing the Stratos Web Console.

High Availability

For details, see Chapter 7, SUSE Cloud Application Platform High Availability.

LDAP Integration

For details, see Chapter 8, LDAP Integration.

External Log Server Integration

For details, see Chapter 24, Logging.

Managing Certificates

For details, see Chapter 25, Managing Certificates.

Other Features

Refer to the Administration Guide at Part III, “SUSE Cloud Application Platform Administration” for additional features.

11.2 Creating a GKE cluster

In order to deploy SUSE Cloud Application Platform, create a cluster that:

  • Is a Zonal, Regional, or Private type. Do not use an Alpha cluster.

  • Uses Ubuntu as the host operating system. If using the gcloud CLI, include --image-type=UBUNTU during the cluster creation.

  • Allows access to all Cloud APIs (in order for storage to work correctly).

  • Has at least 3 nodes of machine type n1-standard-4. If using the gcloud CLI, include --machine-type=n1-standard-4 and --num-nodes=3 during the cluster creation. For details, see https://cloud.google.com/compute/docs/machine-types#standard_machine_types.

  • Uses a minimum kernel version of 3.19 on its nodes.

  • Has at least 80 GB local storage per node.

  • (Optional) Uses preemptible nodes to keep costs low. For details, see https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms.

  1. Set a name for your cluster:

    tux > export CLUSTER_NAME="cap"
  2. Set the zone for your cluster:

    tux > export CLUSTER_ZONE="us-west1-a"
  3. Set the number of nodes for your cluster:

    tux > export NODE_COUNT=3
  4. Create the cluster:

    tux > gcloud container clusters create ${CLUSTER_NAME} \
    --image-type=UBUNTU \
    --machine-type=n1-standard-4 \
    --zone ${CLUSTER_ZONE} \
    --num-nodes=$NODE_COUNT \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --no-enable-autoupgrade \
    --metadata disable-legacy-endpoints=true
    • Specify the --no-enable-basic-auth and --no-issue-client-certificate flags so that kubectl does not use basic or client certificate authentication, but uses OAuth Bearer Tokens instead. Configure the flags to suit your desired authentication mechanism.

    • Specify --no-enable-autoupgrade to disable automatic upgrades.

    • Disable legacy metadata server endpoints using --metadata disable-legacy-endpoints=true as a best practice as indicated in https://cloud.google.com/compute/docs/storing-retrieving-metadata#default.
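
After the command completes, you can optionally confirm that the cluster was created and is running. A quick check using the gcloud CLI and the variables set above:

tux > gcloud container clusters list --filter="name=${CLUSTER_NAME}"
tux > gcloud container clusters describe ${CLUSTER_NAME} \
--zone ${CLUSTER_ZONE} \
--format="value(status)"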

11.3 Get kubeconfig File

Get the kubeconfig file for your cluster.

tux > gcloud container clusters get-credentials --zone ${CLUSTER_ZONE:?required} ${CLUSTER_NAME:?required}
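
Optionally, confirm that kubectl now points at the new cluster and can reach its nodes:

tux > kubectl config current-context
tux > kubectl get nodes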

11.4 Install Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm, can be installed on your remote administration computer by referring to the documentation at https://v2.helm.sh/docs/using_helm/#installing-helm. Cloud Application Platform is compatible with both Helm 2 and Helm 3. Examples in this guide are based on Helm 2. To use Helm 3, refer to the Helm documentation at https://helm.sh/docs/.

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. Follow the instructions at https://v2.helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account and ensure your installation is appropriately secured according to your requirements as described in https://v2.helm.sh/docs/using_helm/#securing-your-helm-installation.
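
The following is a minimal sketch of installing Tiller with a dedicated service account, based on the Helm 2 documentation linked above. The cluster-admin binding shown here is the simplest option and should be narrowed to match your security requirements:

# a broad cluster-admin binding; restrict it according to your security policy
tux > kubectl create serviceaccount tiller --namespace kube-system
tux > kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
tux > helm init --service-account tiller

# verify that both the client and Tiller respond
tux > helm version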

11.5 Default Storage Class

This example creates a pd-ssd storage class for your cluster. Create a file named storage-class.yaml with the following:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: persistent
parameters:
  type: pd-ssd
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
allowVolumeExpansion: true

Create the new storage class using the manifest defined above:

tux > kubectl create --filename storage-class.yaml
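
Optionally, confirm that the storage class exists and is marked as the default:

tux > kubectl get storageclass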

Specify the newly created storage class, called persistent, as the value for kube.storage_class.persistent in your deployment configuration file, like this example:

kube:
  storage_class:
    persistent: "persistent"

See Section 11.7, “Deployment Configuration” for a complete example deployment configuration file, scf-config-values.yaml.

11.6 DNS Configuration

This section provides an overview of the domain and sub-domains that require A records. The process is described in more detail in the deployment section.

The following table lists the required domain and sub-domains, using example.com as the example domain:

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

Domains               Services
uaa.example.com       uaa-uaa-public
*.uaa.example.com     uaa-uaa-public
example.com           router-gorouter-public
*.example.com         router-gorouter-public
tcp.example.com       tcp-router-tcp-router-public
ssh.example.com       diego-ssh-ssh-proxy-public

A SUSE Cloud Application Platform cluster exposes these four services:

Kubernetes service descriptions            Kubernetes service names
User Account and Authentication (uaa)      uaa-uaa-public
Cloud Foundry (CF) TCP routing service     tcp-router-tcp-router-public
CF application SSH access                  diego-ssh-ssh-proxy-public
CF router                                  router-gorouter-public

uaa-uaa-public is in the uaa namespace, and the rest are in the scf namespace.

11.7 Deployment Configuration

It is not necessary to create any DNS records before deploying uaa. Instead, after uaa is running you will find the load balancer IP address that was automatically created during deployment, and then create the necessary records.

The following file, scf-config-values.yaml, provides a complete example deployment configuration.

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "overlay-xfs"

kube:
  storage_class:
    persistent: "persistent"
  auth: rbac

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

services:
  loadbalanced: true

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.
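
For example, strong random values for CLUSTER_ADMIN_PASSWORD and UAA_ADMIN_CLIENT_SECRET can be generated with openssl, assuming it is available on your administration computer:

# generate one value per secret and store both securely
tux > openssl rand -base64 32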

11.8 Add the Kubernetes charts repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/
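
Optionally, refresh the local chart cache so that the listing below reflects the latest published charts:

tux > helm repo update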

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.20.3          1.5.2           A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    3.1.0           1.5.2           A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog      	1.0.1        	8.39.0     	Log Agent for forwarding logs of K8s control pl...
suse/metrics                    1.1.2           1.5.2           A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.20.3          1.5.2           A Helm chart for SUSE UAA

11.9 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform on Google GKE, and how to configure your DNS records.

11.9.1 Deploy uaa

Note: Embedded uaa in scf

The User Account and Authentication (uaa) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process as a separate installation and/or upgrade of uaa is no longer a prerequisite to installing and/or upgrading scf.

It is important to note that:

  • This feature should only be used when uaa is not shared with other projects.

  • You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml.

enable:
  uaa: true

When deploying and/or upgrading scf, run helm install and/or helm upgrade and note that:

  • Installing and/or upgrading uaa using helm install suse/uaa ... and/or helm upgrade is no longer required.

  • It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa.

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Once the uaa deployment completes, a uaa service will be exposed on a load balancer public IP. The name of the service ends with -public. In the following example, the uaa-uaa-public service is exposed on 35.197.11.229 and port 2793.

tux > kubectl get services --namespace uaa | grep public
uaa-uaa-public    LoadBalancer   10.0.67.56     35.197.11.229  2793:30206/TCP

Use the DNS service of your choice to set up DNS A records for the service from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the uaa-uaa-public service, map the following domains:

    uaa.DOMAIN

    Using the example values, an A record for uaa.example.com that points to 35.197.11.229

    *.uaa.DOMAIN

    Using the example values, an A record for *.uaa.example.com that points to 35.197.11.229
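
If your domain is managed by Google Cloud DNS, the records can, for example, be created with the gcloud CLI. The following is a sketch using the example IP address; EXAMPLE_ZONE is a placeholder for a managed zone you have already created for your domain:

# EXAMPLE_ZONE is a placeholder for your Cloud DNS managed zone
tux > gcloud dns record-sets transaction start --zone=EXAMPLE_ZONE
tux > gcloud dns record-sets transaction add 35.197.11.229 \
--name=uaa.example.com. --ttl=300 --type=A --zone=EXAMPLE_ZONE
tux > gcloud dns record-sets transaction add 35.197.11.229 \
--name="*.uaa.example.com." --ttl=300 --type=A --zone=EXAMPLE_ZONE
tux > gcloud dns record-sets transaction execute --zone=EXAMPLE_ZONE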

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.example.com:2793/oauth/token",
"authorization_endpoint":"https://uaa.example.com:2793
/oauth/authorize","token_endpoint":"https://uaa.example.com:2793/oauth/token"

11.9.2 Deploy scf

Before deploying scf, ensure the DNS records for the uaa domains have been set up as specified in the previous section. Next, use Helm to deploy scf:

Note: Setting UAA_CA_CERT

Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

If you need to set UAA_CA_CERT:

  1. Obtain your UAA secret and certificate:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Once the deployment completes, a number of public services will be set up using load balancers that have been configured with the corresponding load balancing rules, probes, and firewall ports.

List the services that have been exposed on the load balancer public IP. The names of these services end in -public:

tux > kubectl get services --namespace scf | grep public
diego-ssh-ssh-proxy-public                  LoadBalancer   10.23.249.196  35.197.32.244  2222:31626/TCP                                                                                                                                    1d
router-gorouter-public                      LoadBalancer   10.23.248.85   35.197.18.22   80:31213/TCP,443:30823/TCP,4443:32200/TCP                                                                                                         1d
tcp-router-tcp-router-public                LoadBalancer   10.23.241.17   35.197.53.74   20000:30307/TCP,20001:30630/TCP,20002:32524/TCP,20003:32344/TCP,20004:31514/TCP,20005:30917/TCP,20006:31568/TCP,20007:30701/TCP,20008:31810/TCP   1d

Use the DNS service of your choice to set up DNS A records for the services from the previous step. Use the public load balancer IP associated with the service to create domain mappings:

  • For the router-gorouter-public service, map the following domains:

    DOMAIN

    Using the example values, an A record for example.com that points to 35.197.18.22 would be created.

    *.DOMAIN

    Using the example values, an A record for *.example.com that points to 35.197.18.22 would be created.

  • For the diego-ssh-ssh-proxy-public service, map the following domain:

    ssh.DOMAIN

    Using the example values, an A record for ssh.example.com that points to 35.197.32.244 would be created.

  • For the tcp-router-tcp-router-public service, map the following domain:

    tcp.DOMAIN

    Using the example values, an A record for tcp.example.com that points to 35.197.53.74 would be created.
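
Before proceeding, you can optionally verify that the new records resolve to the load balancer IP addresses, for example with dig:

tux > dig +short example.com
tux > dig +short ssh.example.com
tux > dig +short tcp.example.com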

Your load balanced deployment of Cloud Application Platform is now complete. Verify you can access the API endpoint:

tux > cf api --skip-ssl-validation https://api.example.com

11.10 Deploying and Using the Google Cloud Platform Service Broker

The Google Cloud Platform (GCP) Service Broker is designed for use with Cloud Foundry and Kubernetes. It is compliant with v2.13 of the Open Service Broker API (see https://www.openservicebrokerapi.org/) and provides support for the services listed at https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform.

This section describes how to deploy and use the GCP Service Broker as a SUSE Cloud Foundry application on SUSE Cloud Application Platform.

11.10.1 Enable APIs

  1. From the GCP console, click the Navigation menu.

  2. Click APIs & Services and then Library.

  3. Enable the following:

  4. Additionally, enable the APIs for the services that will be used. Refer to https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform to see the services available and the corresponding APIs that will need to be enabled. The examples in this section will require enabling the following APIs:

11.10.2 Create a Service Account

A service account allows non-human users to authenticate with and be authorized to interact with Google APIs. To learn more about service accounts, see https://cloud.google.com/iam/docs/understanding-service-accounts. The service account created here will be used by the GCP Service Broker application so that it can interact with the APIs to provision resources.

  1. From the GCP console, click the Navigation menu.

  2. Go to IAM & admin and click Service accounts.

  3. Click Create Service Account.

  4. In the Service account name field, enter a name.

  5. Click Create.

  6. In the Service account permissions section, add the following roles:

    • Project > Editor

    • Cloud SQL > Cloud SQL Admin

    • Compute Engine > Compute Admin

    • Service Accounts > Service Account User

    • Cloud Services > Service Broker Admin

    • IAM > Security Admin

  7. Click Continue.

  8. In the Create key section, click Create Key.

  9. In the Key type field, select JSON and click Create. Save the file to a secure location. This will be required when deploying the GCP Service Broker application.

  10. Click Done to finish creating the service account.

11.10.3 Create a Database for the GCP Service Broker

The GCP Service Broker requires a database to store information about the resources it provisions. Any database that adheres to the MySQL protocol may be used, but it is recommended to use a GCP Cloud SQL instance, as outlined in the following steps.

  1. From the GCP console, click the Navigation menu.

  2. Under the Storage section, click SQL.

  3. Click Create Instance.

  4. Click Choose MySQL to select MySQL as the database engine.

  5. In the Instance ID field, enter an identifier for the MySQL instance.

  6. In the Root password field, set a password for the root user.

  7. Click Show configuration options to see additional configuration options.

  8. Under the Set connectivity section, click Add network to add an authorized network.

  9. In the Network field, enter 0.0.0.0/0 and click Done.

  10. Optionally, create SSL certificates for the database and store them in a secure location.

  11. Click Create and wait for the MySQL instance to finish creating.

  12. After the MySQL instance is finished creating, connect to it using either the Cloud Shell or the mysql command line client.

    • To connect using Cloud Shell:

      1. Click on the instance ID of the MySQL instance.

      2. In the Connect to this instance section of the Overview tab, click Connect using Cloud Shell.

      3. After the shell is opened, the gcloud sql connect command is displayed. Press Enter to connect to the MySQL instance as the root user.

      4. When prompted, enter the password for the root user set in an earlier step.

    • To connect using the mysql command line client:

      1. Click on the instance ID of the MySQL instance.

      2. In the Connect to this instance section of the Overview tab, take note of the IP address. For example, 11.22.33.44.

      3. Using the mysql command line client, run the following command.

        tux > mysql -h 11.22.33.44 -u root -p
      4. When prompted, enter the password for the root user set in an earlier step.

  13. After connecting to the MySQL instance, run the following commands to create an initial user. The service broker will use this user to connect to the service broker database.

    CREATE DATABASE servicebroker;
    CREATE USER 'gcpdbuser'@'%' IDENTIFIED BY 'gcpdbpassword';
    GRANT ALL PRIVILEGES ON servicebroker.* TO 'gcpdbuser'@'%' WITH GRANT OPTION;

    Where:

    gcpdbuser

    The username the service broker uses to connect to the service broker database. Replace gcpdbuser with a username of your choosing.

    gcpdbpassword

    The password the service broker uses to connect to the service broker database. Replace gcpdbpassword with a secure password of your choosing.

11.10.4 Deploy the Service Broker

The GCP Service Broker can be deployed as a Cloud Foundry application onto your deployment of SUSE Cloud Application Platform.

  1. Get the GCP Service Broker application from GitHub and change to the GCP Service Broker application directory.

    tux > git clone https://github.com/GoogleCloudPlatform/gcp-service-broker
    tux > cd gcp-service-broker
  2. Update the manifest.yml file and add the environment variables below and their associated values to the env section:

    ROOT_SERVICE_ACCOUNT_JSON

    The contents, as a string, of the JSON key file created for the service account created earlier (see Section 11.10.2, “Create a Service Account”).

    SECURITY_USER_NAME

    The username to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokeruser.

    SECURITY_USER_PASSWORD

    The password to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokerpassword.

    DB_HOST

    The host for the service broker database created earlier (see Section 11.10.3, “Create a Database for the GCP Service Broker”). This can be found in the GCP console by clicking on the name of the database instance and examining the Connect to this instance section of the Overview tab. In the examples, this is 11.22.33.44.

    DB_USERNAME

    The username used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 11.10.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbuser.

    DB_PASSWORD

    The password of the user used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 11.10.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbpassword.

    The manifest.yml should look similar to the example below.

    ### example manifest.yml for the GCP Service Broker
    ---
    applications:
    - name: gcp-service-broker
      memory: 1G
      buildpacks:
      - go_buildpack
      env:
        GOPACKAGENAME: github.com/GoogleCloudPlatform/gcp-service-broker
        GOVERSION: go1.12
        ROOT_SERVICE_ACCOUNT_JSON: '{ ... }'
        SECURITY_USER_NAME: cfgcpbrokeruser
        SECURITY_USER_PASSWORD: cfgcpbrokerpassword
        DB_HOST: 11.22.33.44
        DB_USERNAME: gcpdbuser
        DB_PASSWORD: gcpdbpassword
  3. After updating the manifest.yml file, deploy the service broker as an application to your Cloud Application Platform deployment. Specify a health check type of none.

    tux > cf push --health-check-type none
  4. After the service broker application is deployed, take note of the URL displayed in the route field. Alternatively, run cf app gcp-service-broker to find the URL in the route field. On a browser, go to the route (for example, https://gcp-service-broker.example.com). You should see the documentation for the GCP Service Broker.

  5. Create the service broker in SUSE Cloud Foundry using the cf CLI.

    tux > cf create-service-broker gcp-service-broker cfgcpbrokeruser cfgcpbrokerpassword https://gcp-service-broker.example.com

    Where https://gcp-service-broker.example.com is replaced by the URL of the GCP Service Broker application deployed to SUSE Cloud Application Platform. Find the URL using cf app gcp-service-broker and examining the routes field.

  6. Verify the service broker has been successfully registered.

    tux > cf service-brokers
  7. List the available services and their associated plans for the GCP Service Broker. For more information about the services, see https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform.

    tux > cf service-access -b gcp-service-broker
  8. Enable access to a service. This example enables access to the Google CloudSQL MySQL service (see https://cloud.google.com/sql/).

    tux > cf enable-service-access google-cloudsql-mysql
  9. Create an instance of the Google CloudSQL MySQL service. This example uses the mysql-db-f1-micro plan. Use the -c flag to pass optional parameters when provisioning a service. See https://github.com/GoogleCloudPlatform/gcp-service-broker/blob/master/docs/use.md for the parameters that can be set for each service.

    tux > cf create-service google-cloudsql-mysql mysql-db-f1-micro mydb-instance

    Wait for the service to finish provisioning. Check the status using the GCP console or with the following command.

    tux > cf service mydb-instance | grep status

    The service can now be bound to applications and used.
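
    For example, an application can consume the new instance by binding it and restaging. MYAPP below is a placeholder for one of your deployed applications:

    # MYAPP is a placeholder application name
    tux > cf bind-service MYAPP mydb-instance
    tux > cf restage MYAPP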

11.11 Resizing Persistent Volumes

Depending on your workloads, the default persistent volume (PV) sizes of your Cloud Application Platform deployment may be insufficient. This section describes the process to resize a persistent volume in your Cloud Application Platform deployment by modifying the persistent volume claim (PVC) object.

Note that PVs can only be expanded; they cannot be shrunk.

11.11.1 Prerequisites

The following are required in order to use the process below to resize a PV.

11.11.2 Example Procedure

The following describes the process required to resize a PV, using the PV and PVC associated with uaa's mysql as an example.

  1. Find the storage class and PVC associated with the PV being expanded. In this example, the storage class is called persistent and the PVC is called mysql-data-mysql-0.

    tux > kubectl get persistentvolume
  2. Verify whether the storage class has allowVolumeExpansion set to true.

    tux > kubectl get storageclass persistent --output json

    If it does not, run the following command to update the storage class.

    tux > kubectl patch storageclass persistent \
    --patch '{"allowVolumeExpansion": true}'
  3. Cordon all nodes in your cluster.

    1. tux > export VM_NODES=$(kubectl get nodes -o name)
    2. tux > for i in $VM_NODES
       do
        kubectl cordon `echo "${i//node\/}"`
      done
  4. Increase the storage size of the PVC object associated with the PV being expanded.

    tux > kubectl patch persistentvolumeclaim --namespace uaa mysql-data-mysql-0 \
    --patch '{"spec": {"resources": {"requests": {"storage": "25Gi"}}}}'
  5. List all pods that use the PVC, in any namespace.

    tux > kubectl get pods --all-namespaces --output=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec |  select( has ("volumes") ).volumes[] | select( has ("persistentVolumeClaim") ).persistentVolumeClaim.claimName }'
  6. Restart all pods that use the PVC.

    tux > kubectl delete pod mysql-0 --namespace uaa
  7. Run kubectl describe persistentvolumeclaim and monitor the status.conditions field.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'

    When the following is observed, press Ctrl+C to exit the watch command and proceed to the next step.

    • status.conditions.message is

      message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
    • status.conditions.type is

      type: FileSystemResizePending
  8. Uncordon all nodes in your cluster.

    tux > for i in $VM_NODES
     do
      kubectl uncordon `echo "${i//node\/}"`
    done
  9. Wait for the resize to finish. Verify the storage size values match for status.capacity.storage and spec.resources.requests.storage.

    tux > watch 'kubectl get persistentvolumeclaim --namespace uaa mysql-data-mysql-0 --output json'
  10. Also verify the storage size in the pod itself is updated.

    tux > kubectl --namespace uaa exec mysql-0 -- df --human-readable

11.12 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 11, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE) and have a running Cloud Application Platform deployment on Google GKE. The instructions below will use environment variables defined in Section 11.2, “Creating a GKE cluster”.

  1. Get the most recently created node in the cluster.

    tux > RECENT_VM_NODE=$(gcloud compute instances list --filter=name~${CLUSTER_NAME:?required} --format json | jq --raw-output '[sort_by(.creationTimestamp) | .[].creationTimestamp ] | last | .[0:19] | strptime("%Y-%m-%dT%H:%M:%S") | mktime')
  2. Increase the Kubernetes node count in the cluster. Replace the example value with the number of nodes required for your workload.

    tux > gcloud container clusters resize $CLUSTER_NAME \
    --num-nodes 4
  3. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  4. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell instances in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        count: 4
  5. Perform a helm upgrade to apply the change.

    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --version 2.20.3
  6. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace scf'