Applies to SUSE Cloud Application Platform 2.0.1

6 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)

Important: README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

This chapter describes how to deploy SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS), using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.

6.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on EKS:

Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating some of the optional features described in this chapter and in the Administration Guide at Part III, “SUSE Cloud Application Platform Administration”.

6.2 Create an EKS Cluster

You can now create an EKS cluster using eksctl. Be sure to keep in mind the following minimum requirements for the cluster.

  • Node sizes are at least t3.xlarge.

  • The NodeVolumeSize must be a minimum of 100 GB.

  • The Kubernetes version is at least 1.14.

As a minimal example, the following command will create an EKS cluster. To see additional configuration parameters, see eksctl create cluster --help.

tux > eksctl create cluster --name kubecf --version 1.14 \
--nodegroup-name standard-workers --node-type t3.xlarge \
--nodes 3 --node-volume-size 100 \
--region us-east-2 --managed \
--ssh-access --ssh-public-key /path/to/some_key.pub
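
Once eksctl reports that the cluster is ready, you can optionally confirm that your kubeconfig was updated and that all worker nodes joined the cluster. This is a minimal check, assuming kubectl is installed on your management workstation; all three nodes should report a STATUS of Ready.

tux > kubectl get nodes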

6.3 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running

tux > sudo zypper install helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.
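
Either way, you can verify that the installed client is Helm 3 before continuing:

tux > helm version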

6.4 Storage Class

In SUSE Cloud Application Platform, some instance groups, such as bits, database, diego-cell, and singleton-blobstore, require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/.

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/.
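
For example, on EKS the gp2 storage class backed by Amazon EBS is typically created along with the cluster. Assuming that class exists and should act as the default, it can be annotated as follows (the class name gp2 is an assumption; adjust it to match your cluster):

tux > kubectl patch storageclass gp2 \
--patch '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'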

A storage class can be chosen by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file, as seen in this example. Note that if there is no storage class designated as the default, this value must be set.

kube:
  storage_class: my-storage-class

6.5 Deployment Configuration

Use this example kubecf-config-values.yaml as a template for your configuration.

The format of the kubecf-config-values.yaml file has been restructured completely. Do not re-use the previous version of the file. Instead, source the default file from the appendix in Section A.1, “Complete suse/kubecf values.yaml File”.

Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

6.6 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component.

6.6.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.
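
To check whether an existing certificate meets these requirements, you can inspect it with openssl. This is a quick sketch; mycert.pem is a placeholder for your PEM-encoded certificate file:

tux > openssl x509 -in mycert.pem -noout -text | grep -A 1 "Subject Alternative Name"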

6.6.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.

Note

Note the use of the "|" character, which indicates the use of a literal block scalar. See http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----

6.7 Using an Ingress Controller

By default, a SUSE Cloud Application Platform cluster is exposed through its Kubernetes services. This section describes how to use an ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other ingress controllers may work, but compatibility with Cloud Application Platform is not supported.

6.7.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file containing the section below. The file is called nginx-ingress.yaml in this example.

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the ingress controller service will be exposed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map it to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM-formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml.

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
            [...]
            xC8x/+zB7XlvcRJRio6kk670+25ABP==
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
            [...]
            to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
            -----END RSA PRIVATE KEY-----

6.8 Affinity and Anti-affinity

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration will have the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

Note that to ensure an optimal spread of the pods across worker nodes, we recommend running five or more worker nodes to satisfy both of the default anti-affinity constraints. An operator can also specify custom affinity rules via the sizing.INSTANCE_GROUP.affinity Helm parameter; any affinity rules specified there will overwrite the default rule, not merge with it.

6.8.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
   asactors:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0
   asapi:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0

Example 2, pod anti-affinity.

sizing:
  api:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  database:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.
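
That process attaches labels to nodes, which can then be referenced in nodeAffinity rules. As a hedged illustration only, the following attaches a hypothetical label to one of the worker nodes; the node name and the label key/value are placeholders:

tux > kubectl label nodes NODE_NAME zone=custom-zone-0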

6.9 High Availability

6.9.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

6.9.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm show values to read the sizing: section in the Helm chart:

tux > helm show values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

6.9.1.2 Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

6.9.1.3 Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to the default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  adapter:
    instances: 2
  api:
    instances: 2
  asactors:
    instances: 2
  asapi:
    instances: 2
  asmetrics:
    instances: 2
  asnozzle:
    instances: 2
  auctioneer:
    instances: 2
  bits:
    instances: 2
  cc_worker:
    instances: 2
  credhub:
    instances: 2
  database:
    instances: 2
  diego_api:
    instances: 2
  diego_cell:
    instances: 2
  doppler:
    instances: 2
  eirini:
    instances: 3
  log_api:
    instances: 2
  nats:
    instances: 2
  router:
    instances: 2
  routing_api:
    instances: 2
  scheduler:
    instances: 2
  uaa:
    instances: 2
  tcp_router:
    instances: 2

6.10 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

6.10.1 Configuration

Currently, SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore. To configure Amazon S3 as an external blobstore, set the following in your kubecf-config-values.yaml file and replace the example values.

features:
  blobstore:
    provider: s3
    s3:
      aws_region: "us-east-1"
      blobstore_access_key_id: AWS-ACCESS-KEY-ID
      blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
      # User provided value for the blobstore admin password.
      blobstore_admin_users_password: PASSWORD
      # The following values are used as S3 bucket names. The buckets are automatically created if not present.
      app_package_directory_key: APP-BUCKET-NAME
      buildpack_directory_key: BUILDPACK-BUCKET-NAME
      droplet_directory_key: DROPLET-BUCKET-NAME
      resource_directory_key: RESOURCE-BUCKET-NAME
Warning: us-east-1 as Only Valid Region

Currently, there is a limitation where only us-east-1 can be chosen as the aws_region. For more information about this issue, see https://github.com/cloudfoundry-incubator/kubecf/issues/656.

Ensure the supplied AWS credentials have appropriate permissions as described at https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.
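
As a rough sketch only, an inline S3 policy could be attached to the IAM user whose access key is supplied above. The user name, policy name, and the broad s3:* action set are assumptions for illustration; consult the linked Cloud Foundry documentation for the authoritative set of required actions:

# Hypothetical IAM user and policy names; see the linked documentation for the exact required actions.
tux > aws iam put-user-policy --user-name cap-blobstore-user \
--policy-name cap-blobstore-s3 \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": [
      "arn:aws:s3:::APP-BUCKET-NAME", "arn:aws:s3:::APP-BUCKET-NAME/*",
      "arn:aws:s3:::BUILDPACK-BUCKET-NAME", "arn:aws:s3:::BUILDPACK-BUCKET-NAME/*",
      "arn:aws:s3:::DROPLET-BUCKET-NAME", "arn:aws:s3:::DROPLET-BUCKET-NAME/*",
      "arn:aws:s3:::RESOURCE-BUCKET-NAME", "arn:aws:s3:::RESOURCE-BUCKET-NAME/*"
    ]
  }]
}'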

6.11 External Database

By default, SUSE Cloud Application Platform includes a single-availability database provided by the Percona XtraDB Cluster (PXC). SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

6.11.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the config snippet below need to exist before installing KubeCF. One way of doing that is manually running CREATE DATABASE IF NOT EXISTS database-name for each database.
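
For example, assuming a MySQL-compatible server and the database names used in the configuration snippet below, the statements could look like the following (run them with the client and administrative account of your choice; note the backticks required for the hyphenated routing-api name):

CREATE DATABASE IF NOT EXISTS uaa;
CREATE DATABASE IF NOT EXISTS cloud_controller;
CREATE DATABASE IF NOT EXISTS diego;
CREATE DATABASE IF NOT EXISTS `routing-api`;
CREATE DATABASE IF NOT EXISTS network_policy;
CREATE DATABASE IF NOT EXISTS network_connectivity;
CREATE DATABASE IF NOT EXISTS locket;
CREATE DATABASE IF NOT EXISTS credhub;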

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      bbs:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket:
        name: locket
        password: root
        username: root
      credhub:
        name: credhub
        password: root
        username: root

6.12 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME            URL
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts
suse            https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

6.13 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform on Amazon EKS.

Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure the combination has been confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

6.13.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.operator.watchNamespace indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator. In this example, this means KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

6.13.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    
    tux > kubectl get service --namespace kubecf tcp-router-public
    
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS CNAME records for the public services. For an example of creating these records with Amazon Route 53, see the sketch following this procedure.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme
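
As an illustration of Step 4 above, a wildcard CNAME record can be created with the AWS CLI and Amazon Route 53. This is a sketch under assumptions: the hosted zone ID and the load balancer hostname (the EXTERNAL-IP value of the router-public service) are placeholders, and your DNS may be managed by a different provider entirely:

tux > aws route53 change-resource-record-sets --hosted-zone-id HOSTED_ZONE_ID \
--change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "*.example.com",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "ROUTER-PUBLIC-ELB-HOSTNAME"}]
    }
  }]
}'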

6.14 Deploying and Using the AWS Service Broker

The AWS Service Broker provides integration of native AWS services with SUSE Cloud Application Platform.

6.14.1 Prerequisites

Deploying and using the AWS Service Broker requires the following:

6.14.2 Setup

  1. Create the required DynamoDB table where the AWS service broker will store its data. This example creates a table named awssb:

    tux > aws dynamodb create-table \
    		--attribute-definitions \
    			AttributeName=id,AttributeType=S \
    			AttributeName=userid,AttributeType=S \
    			AttributeName=type,AttributeType=S \
    		--key-schema \
    			AttributeName=id,KeyType=HASH \
    			AttributeName=userid,KeyType=RANGE \
    		--global-secondary-indexes \
    			'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \
    		--provisioned-throughput \
    			ReadCapacityUnits=5,WriteCapacityUnits=5 \
    		--region ${AWS_REGION} --table-name awssb
  2. Wait until the table has been created. When it is ready, the TableStatus will change to ACTIVE. Check the status using the describe-table command:

    tux > aws dynamodb describe-table --table-name awssb

    (For more information about the describe-table command, see https://docs.aws.amazon.com/cli/latest/reference/dynamodb/describe-table.html.)

  3. Set a name for the Kubernetes namespace you will install the service broker to. This name will also be used in the service broker URL:

    tux > BROKER_NAMESPACE=aws-sb
  4. Create a server certificate for the service broker:

    1. Create and use a separate directory to avoid conflicts with other CA files:

      tux > mkdir /tmp/aws-service-broker-certificates && cd $_
    2. Get the CA certificate:

      tux > kubectl get secret --namespace kubecf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
    3. Get the CA private key:

      tux > kubectl get secret --namespace kubecf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
    4. Create a signing request. Replace BROKER_NAMESPACE with the namespace assigned in Step 3:

      tux > openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 \
        -passout pass:1234 \
        -subj '/CN=aws-servicebroker-aws-servicebroker.'${BROKER_NAMESPACE}'.svc.cluster.local' -batch \
        </dev/null
    5. Decrypt the generated broker private key:

      tux > openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
    6. Sign the request with the CA certificate:

      tux > openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem
  5. Install the AWS service broker as documented at https://github.com/awslabs/aws-servicebroker/blob/master/docs/getting-started-k8s.md. Skip the installation of the Kubernetes Service Catalog. While installing the AWS Service Broker, make sure to update the Helm chart version (the version as of this writing is 1.0.1). For the broker install, pass in a value indicating the Cluster Service Broker should not be installed (for example --set deployClusterServiceBroker=false). Ensure an account and role with adequate IAM rights are chosen (see Section 6.14.1, “Prerequisites”):

    tux > kubectl create namespace $BROKER_NAMESPACE
    
    tux > helm install aws-servicebroker aws-sb/aws-servicebroker \
    --namespace $BROKER_NAMESPACE \
    --version 1.0.1 \
    --set aws.secretkey=$AWS_ACCESS_KEY \
    --set aws.accesskeyid=$AWS_KEY_ID \
    --set deployClusterServiceBroker=false \
    --set tls.cert="$(base64 -w0 tls.pem)" \
    --set tls.key="$(base64 -w0 tls.key)" \
    --set-string aws.targetaccountid=$AWS_TARGET_ACCOUNT_ID \
    --set aws.targetrolename=$AWS_TARGET_ROLE_NAME \
    --set aws.tablename=awssb \
    --set aws.vpcid=$VPC_ID \
    --set aws.region=$AWS_REGION \
    --set authenticate=false

    To find the values of aws.targetaccountid, aws.targetrolename, and aws.vpcid, run the following command.

    tux > aws eks describe-cluster --name $CLUSTER_NAME

    For aws.targetaccountid and aws.targetrolename, examine the cluster.roleArn field. For aws.vpcid, refer to the cluster.resourcesVpcConfig.vpcId field.

  6. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space
    tux > cf target -o org -s space
  7. Create a service broker in kubecf. Note that the name of the service broker should be the same as the one specified in the helm install step (for example, aws-servicebroker). Note that the username and password parameters are only used as dummy values to pass to the cf command:

    tux > cf create-service-broker aws-servicebroker username password https://aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local
  8. Verify the service broker has been registered:

    tux > cf service-brokers
  9. List the available service plans:

    tux > cf service-access
  10. Enable access to a service. This example uses the -p flag to enable access to a specific service plan. See https://github.com/awslabs/aws-servicebroker/blob/master/templates/rdsmysql/template.yaml for information about all available services and their associated plans:

    tux > cf enable-service-access rdsmysql -p custom
  11. Create a service instance. As an example, a custom MySQL instance can be created as:

    tux > cf create-service rdsmysql custom mysql-instance-name -c '{
      "AccessCidr": "192.0.2.24/32",
      "BackupRetentionPeriod": 0,
      "MasterUsername": "master",
      "DBInstanceClass": "db.t2.micro",
      "EngineVersion": "5.7.17",
      "PubliclyAccessible": "true",
      "region": "$AWS_REGION",
      "StorageEncrypted": "false",
      "VpcId": "$VPC_ID",
      "target_account_id": "$AWS_TARGET_ACCOUNT_ID",
      "target_role_name": "$AWS_TARGET_ROLE_NAME"
    }'

6.14.3 Cleanup

When the AWS Service Broker and its services are no longer required, perform the following steps:

  1. Unbind any applications using any service instances, then delete the service instances:

    tux > cf unbind-service my_app mysql-instance-name
    tux > cf delete-service mysql-instance-name
  2. Delete the service broker in kubecf:

    tux > cf delete-service-broker aws-servicebroker
  3. Delete the deployed Helm chart and the namespace:

    tux > helm delete aws-servicebroker
    tux > kubectl delete namespace ${BROKER_NAMESPACE}
  4. The manually created DynamoDB table will need to be deleted as well:

    tux > aws dynamodb delete-table --table-name awssb --region ${AWS_REGION}

6.15 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

6.15.1 Prerequisites

The following are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem (see the example following this list).

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.
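
Assuming Ruby and RubyGems are available on the management workstation, the cf-uaac gem mentioned above can then be installed as follows:

tux > sudo gem install cf-uaac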

6.15.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: uaa' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  4. Verify the LDAP identity provider has been created in the kubecf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: uaa"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space, and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json" \
        --header"X-Identity-Zone-Id:uaa"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:uaa"