Applies to SUSE Cloud Application Platform 2.0.1

7 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

Important
Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Google Kubernetes Engine (GKE). This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on GKE using its integrated network load balancers. See https://cloud.google.com/kubernetes-engine/ for more information on GKE.

7.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on GKE:

Important
Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating some of the optional features described in this chapter and in the Administration Guide at Part III, “SUSE Cloud Application Platform Administration”.

7.2 Creating a GKE cluster

In order to deploy SUSE Cloud Application Platform, create a GKE cluster as follows:

  1. Set a name for your cluster:

    tux > export CLUSTER_NAME="cap"
  2. Set the zone for your cluster:

    tux > export CLUSTER_ZONE="us-west1-a"
  3. Set the number of nodes for your cluster:

    tux > export NODE_COUNT=3
  4. Create the cluster:

    tux > gcloud container clusters create ${CLUSTER_NAME} \
    --image-type=UBUNTU \
    --machine-type=n1-standard-4 \
    --zone ${CLUSTER_ZONE} \
    --num-nodes=$NODE_COUNT \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --no-enable-autoupgrade
    • Specify the --no-enable-basic-auth and --no-issue-client-certificate flags so that kubectl does not use basic or client certificate authentication, but uses OAuth Bearer Tokens instead. Configure the flags to suit your desired authentication mechanism.

    • Specify --no-enable-autoupgrade to disable automatic upgrades.

    • Disable legacy metadata server endpoints by adding --metadata disable-legacy-endpoints=true, a best practice described at https://cloud.google.com/compute/docs/storing-retrieving-metadata#default. This flag is not included in the command above; an example that includes it follows.
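
    For reference, a cluster creation command that also disables the legacy metadata endpoints might look like the following sketch. All flags shown are taken from the steps above; adjust them to your own requirements.

    tux > gcloud container clusters create ${CLUSTER_NAME} \
    --image-type=UBUNTU \
    --machine-type=n1-standard-4 \
    --zone ${CLUSTER_ZONE} \
    --num-nodes=$NODE_COUNT \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --no-enable-autoupgrade \
    --metadata disable-legacy-endpoints=true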

7.3 Get kubeconfig File

Get the kubeconfig file for your cluster.

tux > gcloud container clusters get-credentials --zone ${CLUSTER_ZONE:?required} ${CLUSTER_NAME:?required} --project example-project
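
To confirm that kubectl is now configured to use the new cluster, you can, for example, check the current context and list the nodes.

tux > kubectl config current-context

tux > kubectl get nodes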

7.4 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running:

tux > sudo zypper install helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.
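
You can verify the installation and confirm that the installed client is Helm 3 by checking its version.

tux > helm version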

7.5 Storage Class

In SUSE Cloud Application Platform some instance groups, such as bits, database, diego-cell, and singleton-blobstore require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/.

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

A storage class can be chosen by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file as seen in this example. Note that if there is no storage class designated as the default, this value must be set.

kube:
  storage_class: my-storage-class
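
To see which storage classes are available in your GKE cluster, and which one, if any, is marked as the default, you can list them with kubectl.

tux > kubectl get storageclass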

7.6 Deployment Configuration

The following file, kubecf-config-values.yaml, provides a minimal example deployment configuration.

The format of the kubecf-config-values.yaml file has been restructured completely. Do not re-use the previous version of the file. Instead, source the default file from the appendix in Section A.1, “Complete suse/kubecf values.yaml File”.

Warning
Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

7.7 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component.

7.7.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.
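
As a quick check of these characteristics, a PEM-encoded certificate can be inspected with openssl. The file name mycert.pem below is only a placeholder for your own certificate file.

tux > openssl x509 -in mycert.pem -noout -issuer -enddate

tux > openssl x509 -in mycert.pem -noout -text | grep -A1 'Subject Alternative Name'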

7.7.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.

Note
Note

Note the use of the "|" character, which indicates the use of a literal block scalar. See http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----

7.8 Using an Ingress Controller

By default, a SUSE Cloud Application Platform cluster is exposed through its Kubernetes services. This section describes how to use an ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other ingress controllers may work, but their compatibility with Cloud Application Platform is not supported.

7.8.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file with the section below. The file is called nginx-ingress.yaml in this example.

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map it to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM-formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml.

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
            [...]
            xC8x/+zB7XlvcRJRio6kk670+25ABP==
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
            [...]
            to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
            -----END RSA PRIVATE KEY-----

7.9 Affinity and Anti-affinity

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration has the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

Note that to ensure an optimal spread of the pods across worker nodes, we recommend running five or more worker nodes to satisfy both of the default anti-affinity constraints. An operator can also specify custom affinity rules via the sizing.instance-group.affinity Helm parameter; any affinity rules specified there overwrite the default rule rather than merging with it.

7.9.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
   asactors:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0
   asapi:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0

Example 2, pod anti-affinity.

sizing:
  api:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  database:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.
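
As a minimal illustration of the node labeling referenced above, a custom label can be attached to a worker node with kubectl and then used in a node affinity rule. The node name and label below are placeholders only.

tux > kubectl get nodes

tux > kubectl label nodes NODE_NAME worker-type=database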

7.10 High Availability

7.10.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

7.10.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm chart:

tux > helm show values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

7.10.1.2 Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

7.10.1.3 Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to the default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  adapter:
    instances: 2
  api:
    instances: 2
  asactors:
    instances: 2
  asapi:
    instances: 2
  asmetrics:
    instances: 2
  asnozzle:
    instances: 2
  auctioneer:
    instances: 2
  bits:
    instances: 2
  cc_worker:
    instances: 2
  credhub:
    instances: 2
  database:
    instances: 2
  diego_api:
    instances: 2
  diego_cell:
    instances: 2
  doppler:
    instances: 2
  eirini:
    instances: 3
  log_api:
    instances: 2
  nats:
    instances: 2
  router:
    instances: 2
  routing_api:
    instances: 2
  scheduler:
    instances: 2
  uaa:
    instances: 2
  tcp_router:
    instances: 2

7.11 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

7.11.1 Configuration

Currently SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore. To configure Amazon S3 as an external blobstore, set the following in your kubecf-config-values.yaml file and replace the example values.

features:
  blobstore:
    provider: s3
    s3:
      aws_region: "us-east-1"
      blobstore_access_key_id:  AWS-ACCESS-KEY-ID
      blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
      # User provided value for the blobstore admin password.
      blobstore_admin_users_password: PASSWORD
      # The following values are used as S3 bucket names. The buckets are automatically created if not present.
      app_package_directory_key: APP-BUCKET-NAME
      buildpack_directory_key: BUILDPACK-BUCKET-NAME
      droplet_directory_key: DROPLET-BUCKET-NAME
      resource_directory_key: RESOURCE-BUCKET-NAME
Warning
Warning: us-east-1 as Only Valid Region

Currently, there is a limitation where only us-east-1 can be chosen as the aws_region. For more information about this issue, see https://github.com/cloudfoundry-incubator/kubecf/issues/656.

Ensure the supplied AWS credentials have appropriate permissions as described at https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.

7.12 External Database

By default, SUSE Cloud Application Platform includes a single-availability database provided by the Percona XtraDB Cluster (PXC). SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

7.12.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml file. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important
Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the config snippet below need to exist before installing KubeCF. One way of doing that is manually running CREATE DATABASE IF NOT EXISTS database-name for each database.

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      bbs:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket: 
        name: locket
        password: root
        username: root
      credhub:        
        name: credhub
        password: root
        username: root
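
As an example, the databases referenced in the snippet above could be created manually on the external MySQL server before installing KubeCF. The statements below are only a sketch; adapt the database names to match your own configuration.

CREATE DATABASE IF NOT EXISTS uaa;
CREATE DATABASE IF NOT EXISTS cloud_controller;
CREATE DATABASE IF NOT EXISTS diego;
CREATE DATABASE IF NOT EXISTS `routing-api`;
CREATE DATABASE IF NOT EXISTS network_policy;
CREATE DATABASE IF NOT EXISTS network_connectivity;
CREATE DATABASE IF NOT EXISTS locket;
CREATE DATABASE IF NOT EXISTS credhub;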

7.13 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

7.14 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform on Google GKE, and how to configure your DNS records.

Warning
Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

7.14.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.operator.watchNamespace indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator. In this example, this means KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

7.14.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    
    tux > kubectl get service --namespace kubecf tcp-router-public
    
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS A records for the public services. If your domain is managed by Google Cloud DNS, see the example gcloud commands after this procedure.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme
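
If your domain is managed by Google Cloud DNS, the A records from Step 4 can be created with gcloud. The following is only a sketch: it assumes a managed zone named cap-zone, example.com as the system_domain, and 35.233.191.177 as the EXTERNAL-IP of the router-public service. Substitute your own values and repeat the transaction add command for the tcp.<system_domain> and ssh.<system_domain> records.

tux > gcloud dns record-sets transaction start --zone=cap-zone

tux > gcloud dns record-sets transaction add 35.233.191.177 \
--name=example.com. --ttl=300 --type=A --zone=cap-zone

tux > gcloud dns record-sets transaction add 35.233.191.177 \
--name="*.example.com." --ttl=300 --type=A --zone=cap-zone

tux > gcloud dns record-sets transaction execute --zone=cap-zone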

7.15 Deploying and Using the Google Cloud Platform Service Broker

The Google Cloud Platform (GCP) Service Broker is designed for use with Cloud Foundry and Kubernetes. It is compliant with v2.13 of the Open Service Broker API (see https://www.openservicebrokerapi.org/) and provides support for the services listed at https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform.

This section describes how to deploy and use the GCP Service Broker, as a KubeCF application, on SUSE Cloud Application Platform.

7.15.1 Enable APIs

  1. From the GCP console, click the Navigation menu.

  2. Click APIs & Services and then Library.

  3. Enable the following:

  4. Additionally, enable the APIs for the services that will be used. Refer to https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform to see the services available and the corresponding APIs that will need to be enabled. The examples in this section will require enabling the following APIs:

7.15.2 Create a Service Account

A service account allows non-human users to authenticate with and be authorized to interact with Google APIs. To learn more about service accounts, see https://cloud.google.com/iam/docs/understanding-service-accounts. The service account created here will be used by the GCP Service Broker application so that it can interact with the APIs to provision resources.

  1. From the GCP console, click the Navigation menu.

  2. Go to IAM & admin and click Service accounts.

  3. Click Create Service Account.

  4. In the Service account name field, enter a name.

  5. Click Create.

  6. In the Service account permissions section, add the following roles:

    • Project > Editor

    • Cloud SQL > Cloud SQL Admin

    • Compute Engine > Compute Admin

    • Service Accounts > Service Account User

    • Cloud Services > Service Broker Admin

    • IAM > Security Admin

  7. Click Continue.

  8. In the Create key section, click Create Key.

  9. In the Key type field, select JSON and click Create. Save the file to a secure location. This will be required when deploying the GCP Service Broker application.

  10. Click Done to finish creating the service account.

7.15.3 Create a Database for the GCP Service Broker

The GCP Service Broker requires a database to store information about the resources it provisions. Any database that adheres to the MySQL protocol may be used, but it is recommended to use a GCP Cloud SQL instance, as outlined in the following steps.

  1. From the GCP console, click the Navigation menu.

  2. Under the Storage section, click SQL.

  3. Click Create Instance.

  4. Click Choose MySQL to select MySQL as the database engine.

  5. In the Instance ID field, enter an identifier for the MySQL instance.

  6. In the Root password field, set a password for the root user.

  7. Click Show configuration options to see additional configuration options.

  8. Under the Set connectivity section, click Add network to add an authorized network.

  9. In the Network field, enter 0.0.0.0/0 and click Done.

  10. Optionally, create SSL certificates for the database and store them in a secure location.

  11. Click Create and wait for the MySQL instance to finish creating.

  12. After the MySQL instance is finished creating, connect to it using either the Cloud Shell or the mysql command line client.

    • To connect using Cloud Shell:

      1. Click on the instance ID of the MySQL instance.

      2. In the Connect to this instance section of the Overview tab, click Connect using Cloud Shell.

      3. After the shell is opened, the gcloud sql connect command is displayed. Press Enter to connect to the MySQL instance as the root user.

      4. When prompted, enter the password for the root user set in an earlier step.

    • To connect using the mysql command line client:

      1. Click on the instance ID of the MySQL instance.

      2. In the Connect to this instance section of the Overview tab, take note of the IP address. For example, 11.22.33.44.

      3. Using the mysql command line client, run the following command.

        tux > mysql -h 11.22.33.44 -u root -p
      4. When prompted, enter the password for the root user set in an earlier step.

  13. After connecting to the MySQL instance, run the following commands to create an initial user. The service broker will use this user to connect to the service broker database.

    CREATE DATABASE servicebroker;
    CREATE USER 'gcpdbuser'@'%' IDENTIFIED BY 'gcpdbpassword';
    GRANT ALL PRIVILEGES ON servicebroker.* TO 'gcpdbuser'@'%' WITH GRANT OPTION;

    Where:

    gcpdbuser

    Is the username of the user the service broker will connect to the service broker database with. Replace gcpdbuser with a username of your choosing.

    gcpdbpassword

    Is the password of the user the service broker will connect to the service broker database with. Replace gcpdbpassword with a secure password of your choosing.

7.15.4 Deploy the Service Broker

The GCP Service Broker can be deployed as a Cloud Foundry application onto your deployment of SUSE Cloud Application Platform.

  1. Get the GCP Service Broker application from Github and change to the GCP Service Broker application directory.

    tux > git clone https://github.com/GoogleCloudPlatform/gcp-service-broker
    tux > cd gcp-service-broker
  2. Update the manifest.yml file and add the environment variables below and their associated values to the env section:

    ROOT_SERVICE_ACCOUNT_JSON

    The contents, as a string, of the JSON key file created for the service account (see Section 7.15.2, “Create a Service Account”).

    SECURITY_USER_NAME

    The username to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokeruser.

    SECURITY_USER_PASSWORD

    The password to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokerpassword.

    DB_HOST

    The host for the service broker database created earlier (see Section 7.15.3, “Create a Database for the GCP Service Broker”). This can be found in the GCP console by clicking on the name of the database instance and examining the Connect to this instance section of the Overview tab. In the examples, this is 11.22.33.44.

    DB_USERNAME

    The username used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 7.15.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbuser.

    DB_PASSWORD

    The password of the user used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 7.15.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbpassword.

    The manifest.yml should look similar to the example below.

    ### example manifest.yml for the GCP Service Broker
    ---
    applications:
    - name: gcp-service-broker
      memory: 1G
      buildpacks:
      - go_buildpack
      env:
        GOPACKAGENAME: github.com/GoogleCloudPlatform/gcp-service-broker
        GOVERSION: go1.12
        ROOT_SERVICE_ACCOUNT_JSON: '{ ... }'
        SECURITY_USER_NAME: cfgcpbrokeruser
        SECURITY_USER_PASSWORD: cfgcpbrokerpassword
        DB_HOST: 11.22.33.44
        DB_USERNAME: gcpdbuser
        DB_PASSWORD: gcpdbpassword
  3. After updating the manifest.yml file, deploy the service broker as an application to your Cloud Application Platform deployment. Specify a health check type of none.

    tux > cf push --health-check-type none
  4. After the service broker application is deployed, take note of the URL displayed in the route field. Alternatively, run cf app gcp-service-broker to find the URL in the route field. On a browser, go to the route (for example, https://gcp-service-broker.example.com). You should see the documentation for the GCP Service Broker.

  5. Create the service broker in KubeCF using the cf CLI.

    tux > cf create-service-broker gcp-service-broker cfgcpbrokeruser cfgcpbrokerpassword https://gcp-service-broker.example.com

    Where https://gcp-service-broker.example.com is replaced by the URL of the GCP Service Broker application deployed to SUSE Cloud Application Platform. Find the URL using cf app gcp-service-broker and examining the routes field.

  6. Verify the service broker has been successfully registered.

    tux > cf service-brokers
  7. List the available services and their associated plans for the GCP Service Broker. For more information about the services, see https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform.

    tux > cf service-access -b gcp-service-broker
  8. Enable access to a service. This example enables access to the Google CloudSQL MySQL service (see https://cloud.google.com/sql/).

    tux > cf enable-service-access google-cloudsql-mysql
  9. Create an instance of the Google CloudSQL MySQL service. This example uses the mysql-db-f1-micro plan. Use the -c flag to pass optional parameters when provisioning a service. See https://github.com/GoogleCloudPlatform/gcp-service-broker/blob/master/docs/use.md for the parameters that can be set for each service.

    tux > cf create-service google-cloudsql-mysql mysql-db-f1-micro mydb-instance

    Wait for the service to finish provisioning. Check the status using the GCP console or with the following command.

    tux > cf service mydb-instance | grep status

    The service can now be bound to applications and used.
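
    As an example of using the provisioned service, the instance can be bound to an application and the binding applied with a restage. The application name my-app below is only a placeholder.

    tux > cf bind-service my-app mydb-instance

    tux > cf restage my-app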

7.16 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

7.16.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem (an example gem installation follows this list).

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.
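
A minimal sketch of installing the UAAC client as a Ruby gem, assuming RubyGems is available on the management workstation:

tux > sudo gem install cf-uaac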

7.16.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: uaa' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  4. Verify the LDAP identity provider has been created in the kubecf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: uaa"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json" \
        --header "X-Identity-Zone-Id:uaa"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:uaa"

7.17 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 7, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE) and have a running Cloud Application Platform deployment on Google GKE. The instructions below will use environment variables defined in Section 7.2, “Creating a GKE cluster”.

  1. Get the most recently created node in the cluster.

    tux > RECENT_VM_NODE=$(gcloud compute instances list --filter=name~${CLUSTER_NAME:?required} --format json | jq --raw-output '[sort_by(.creationTimestamp) | .[].creationTimestamp ] | last | .[0:19] | strptime("%Y-%m-%dT%H:%M:%S") | mktime')
  2. Increase the Kubernetes node count in the cluster. Replace the example value with the number of nodes required for your workload.

    tux > gcloud container clusters resize $CLUSTER_NAME \
    --num-nodes 5
  3. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  4. Add or update the following in your kubecf-config-values.yaml file to increase the number of diego-cell instances in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        instances: 5
  5. Perform a helm upgrade to apply the change.

    tux > helm upgrade kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  6. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace kubecf'