SUSE Cloud Application Platform 2.0.1

Deployment, Administration, and User Guides

Introducing SUSE Cloud Application Platform, a software platform for cloud-native application deployment based on KubeCF and Kubernetes.

Authors: Carla Schroder, Billy Tat, Claudia-Amelia Marin, and Lukas Kucharczyk
Publication Date: September 23, 2020
About This Guide
Required Background
Available Documentation
Feedback
Documentation Conventions
Support Statement for SUSE Cloud Application Platform
About the Making of This Documentation
I Overview of SUSE Cloud Application Platform
1 About SUSE Cloud Application Platform
1.1 New in Version 2.0.1
1.2 SUSE Cloud Application Platform Overview
1.3 SUSE Cloud Application Platform Architecture
2 Other Kubernetes Systems
2.1 Kubernetes Requirements
II Deploying SUSE Cloud Application Platform
3 Deployment and Administration Notes
3.1 README First
3.2 Important Changes
3.3 Status of Pods during Deployment
3.4 Length of Release Names
3.5 Releases and Associated Versions
4 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform
4.1 Prerequisites
4.2 Creating a SUSE CaaS Platform Cluster
4.3 Install the Helm Client
4.4 Storage Class
4.5 Deployment Configuration
4.6 Certificates
4.7 Using an Ingress Controller
4.8 Affinity and Anti-affinity
4.9 High Availability
4.10 External Blobstore
4.11 External Database
4.12 Add the Kubernetes Charts Repository
4.13 Deploying SUSE Cloud Application Platform
4.14 LDAP Integration
4.15 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform
5 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)
5.1 Prerequisites
5.2 Create Resource Group and AKS Instance
5.3 Install the Helm Client
5.4 Storage Class
5.5 Deployment Configuration
5.6 Certificates
5.7 Using an Ingress Controller
5.8 Affinity and Anti-affinity
5.9 High Availability
5.10 External Blobstore
5.11 External Database
5.12 Add the Kubernetes Charts Repository
5.13 Deploying SUSE Cloud Application Platform
5.14 Configuring and Testing the Native Microsoft AKS Service Broker
5.15 LDAP Integration
5.16 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS
6 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)
6.1 Prerequisites
6.2 Create an EKS Cluster
6.3 Install the Helm Client
6.4 Storage Class
6.5 Deployment Configuration
6.6 Certificates
6.7 Using an Ingress Controller
6.8 Affinity and Anti-affinity
6.9 High Availability
6.10 External Blobstore
6.11 External Database
6.12 Add the Kubernetes Charts Repository
6.13 Deploying SUSE Cloud Application Platform
6.14 Deploying and Using the AWS Service Broker
6.15 LDAP Integration
7 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)
7.1 Prerequisites
7.2 Creating a GKE cluster
7.3 Get kubeconfig File
7.4 Install the Helm Client
7.5 Storage Class
7.6 Deployment Configuration
7.7 Certificates
7.8 Using an Ingress Controller
7.9 Affinity and Anti-affinity
7.10 High Availability
7.11 External Blobstore
7.12 External Database
7.13 Add the Kubernetes charts repository
7.14 Deploying SUSE Cloud Application Platform
7.15 Deploying and Using the Google Cloud Platform Service Broker
7.16 LDAP Integration
7.17 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE
8 Installing the Stratos Web Console
8.1 Deploy Stratos on SUSE® CaaS Platform
8.2 Deploy Stratos on Amazon EKS
8.3 Deploy Stratos on Microsoft AKS
8.4 Deploy Stratos on Google GKE
8.5 Upgrading Stratos
8.6 Stratos Metrics
9 Eirini
9.1 Enabling Eirini
III SUSE Cloud Application Platform Administration
10 Upgrading SUSE Cloud Application Platform
10.1 Important Considerations
10.2 Upgrading SUSE Cloud Application Platform
11 Configuration Changes
11.1 Configuration Change Example
11.2 Other Examples
12 Creating Admin Users
12.1 Prerequisites
12.2 Creating an Example Cloud Application Platform Cluster Administrator
13 Managing Passwords
13.1 Password Management with the Cloud Foundry Client
13.2 Changing User Passwords with Stratos
14 Accessing the UAA User Interface
14.1 Prerequisites
14.2 Procedure
15 Cloud Controller Database Secret Rotation
15.1 Tables with Encrypted Information
16 Rotating Automatically Generated Secrets
16.1 Finding Secrets
16.2 Rotating Specific Secrets
17 Backup and Restore
17.1 Backup and Restore Using cf-plugin-backup
17.2 Disaster Recovery through Raw Data Backup and Restore
18 Service Brokers
18.1 Provisioning Services with Minibroker
19 App-AutoScaler
19.1 Prerequisites
19.2 Enabling and Disabling the App-AutoScaler Service
19.3 Using the App-AutoScaler Service
19.4 Policies
20 Integrating CredHub with SUSE Cloud Application Platform
20.1 Installing the CredHub Client
20.2 Enabling and Disabling CredHub
20.3 Connecting to the CredHub Service
21 Buildpacks
21.1 System Buildpacks
21.2 Using Buildpacks
21.3 Adding Buildpacks
21.4 Updating Buildpacks
21.5 Offline Buildpacks
IV SUSE Cloud Application Platform User Guide
22 Deploying and Managing Applications with the Cloud Foundry Client
22.1 Using the cf CLI with SUSE Cloud Application Platform
V Troubleshooting
23 Troubleshooting
23.1 Logging
23.2 Using Supportconfig
23.3 Deployment Is Taking Too Long
23.4 Deleting and Rebuilding a Deployment
23.5 Querying with Kubectl
23.6 Admission webhook denied
23.7 Namespace does not exist
A Appendix
A.1 Complete suse/kubecf values.yaml File
A.2 Complete suse/cf-operator values.yaml File
B GNU Licenses
B.1 GNU Free Documentation License

Copyright © 2006–2020 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

SUSE Cloud Application Platform is a software platform for cloud-native applications based on Cloud Foundry Application Runtime (cf-operator, KubeCF, and Stratos) with additional supporting components.

Cloud Application Platform is designed to run on any Kubernetes cluster. This guide describes how to deploy it on SUSE CaaS Platform, Microsoft Azure AKS, Amazon EKS, and Google GKE.

1 Required Background

To keep the scope of these guidelines manageable, certain technical assumptions have been made:

  • You have some computer experience and are familiar with common technical terms.

  • You are familiar with the documentation for your system and the network on which it runs.

  • You have a basic understanding of Linux systems.

2 Available Documentation

We provide HTML and PDF versions of our books in different languages. Documentation for our products is available at http://documentation.suse.com/, where you can also find the latest updates and browse or download the documentation in various formats.

The following documentation is available for this product:

Deployment, Administration, and User Guides

The SUSE Cloud Application Platform guide is a comprehensive manual providing deployment, administration, and user guides, along with the architecture and minimum system requirements.

3 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://documentation.suse.com/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

4 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, AltF1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • AMD/Intel This paragraph is only relevant for the AMD64/Intel 64 architecture. The arrows mark the beginning and the end of the text block.

    IBM Z, POWER This paragraph is only relevant for the architectures z Systems and POWER. The arrows mark the beginning and the end of the text block.

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

5 Support Statement for SUSE Cloud Application Platform

To receive support, you need an appropriate subscription with SUSE. For more information, see https://www.suse.com/support/?id=SUSE_Cloud_Application_Platform.

The following definitions apply:

5.1 Version Support

Technical Support and Troubleshooting (L1 - L2)

Current and previous major versions (n-1). For example, SUSE will provide technical support and troubleshooting for versions 1.0, 1.1, 1.2, 1.3 (and all 2.x point releases) until the release of 3.0.

Patches and updates (L3)

On the latest or last minor release of each major release. For example, SUSE will provide patches and updates for 1.3 (and 2.latest) until the release of 3.0.

SUSE Cloud Application Platform closely follows upstream Cloud Foundry releases, which may implement fixes and changes that are not backward compatible with previous releases. SUSE will backport patches for critical bugs and security issues on a best-effort basis.

5.2 Platform Support

SUSE Cloud Application Platform is fully supported on Amazon EKS, Microsoft Azure AKS, and Google GKE. Each release is tested by SUSE Cloud Application Platform QA on these platforms.

SUSE Cloud Application Platform is fully supported on SUSE CaaS Platform, wherever SUSE CaaS Platform is installed. If SUSE CaaS Platform is supported on a particular cloud service provider (CSP), the customer can get support for SUSE Cloud Application Platform in that context.

SUSE can provide support for SUSE Cloud Application Platform on third-party/generic Kubernetes on a case-by-case basis, provided:

  1. The Kubernetes cluster satisfies the requirements listed at https://documentation.suse.com/suse-cap/2.0.1/html/cap-guides/cha-cap-depl-kube-requirements.html#sec-cap-changes-kube-reqs.

  2. The kube-ready-state-check.sh script has been run on the target Kubernetes cluster and does not show any configuration problems.

  3. A SUSE Services or Sales Engineer has verified that SUSE Cloud Application Platform works correctly on the target Kubernetes cluster.

5.3 Technology Previews

Technology previews are packages, stacks, or features delivered by SUSE to provide glimpses into upcoming innovations. The previews are included for your convenience to give you the chance to test new technologies within your environment. We would appreciate your feedback! If you test a technology preview, please contact your SUSE representative and let them know about your experience and use cases. Your input is helpful for future development.

However, technology previews come with the following limitations:

  • Technology previews are still in development. Therefore, they may be functionally incomplete, unstable, or in other ways not suitable for production use.

  • Technology previews are not supported.

  • Details and functionality of technology previews are subject to change. As a result, upgrading to subsequent releases of a technology preview may be impossible and require a fresh installation.

  • Technology previews can be dropped at any time, for example, if SUSE discovers that a preview does not meet the customer or market needs, or does not comply with enterprise standards. SUSE does not commit to providing a supported version of such technologies in the future.

For an overview of technology previews shipped with your product, see the release notes at https://www.suse.com/releasenotes/.

6 About the Making of This Documentation

This documentation is written in GeekoDoc, a subset of DocBook 5. The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from Apache Software Foundation. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.

The XML source code of this documentation can be found at https://github.com/SUSE/doc-cap.

Part I Overview of SUSE Cloud Application Platform

1 About SUSE Cloud Application Platform

KubeCF has been updated to 2.2.3.

2 Other Kubernetes Systems

SUSE Cloud Application Platform is designed to run on any Kubernetes system that meets the following requirements:

1 About SUSE Cloud Application Platform

1.1 New in Version 2.0.1

See all product manuals for SUSE Cloud Application Platform 2.x at https://documentation.suse.com/suse-cap/2/.

Tip
Tip: Read the Release Notes

Make sure to review the release notes for SUSE Cloud Application Platform published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/2.0/.

1.2 SUSE Cloud Application Platform Overview

SUSE Cloud Application Platform is a software platform for cloud-native applications based on Cloud Foundry Application Runtime (cf-operator, KubeCF, and Stratos) with additional supporting components.

SUSE Cloud Application Platform describes the complete software stack, including the operating system, Kubernetes, and KubeCF.

SUSE Cloud Application Platform comprises cf-operator (cf-operator), KubeCF (kubecf), the Stratos Web user interface, and Stratos Metrics.

The Cloud Foundry code base provides the basic functionality. KubeCF differentiates itself from other Cloud Foundry distributions by running in Linux containers managed by Kubernetes, rather than virtual machines managed with BOSH, for greater fault tolerance and lower memory use.

All Docker images for the SUSE Linux Enterprise builds are hosted on registry.suse.com. These are the commercially-supported images. (Community-supported images for openSUSE are hosted on Docker Hub.) Product manuals on https://documentation.suse.com/suse-cap/2/ refer to the commercially-supported SUSE Linux Enterprise version.

Cloud Application Platform is designed to run on any Kubernetes cluster as described in Section 5.2, “Platform Support”. This guide describes how to deploy it on SUSE CaaS Platform, Microsoft Azure AKS, Amazon EKS, and Google GKE.

SUSE Cloud Application Platform serves different but complementary purposes for operators and application developers.

For operators, the platform:

  • Is easy to install, manage, and maintain

  • Is secure by design

  • Is fault tolerant and self-healing

  • Offers high availability for critical components

  • Uses industry-standard components

  • Avoids single-vendor lock-in

For developers, the platform:

  • Allocates computing resources on demand via API or Web interface

  • Offers users a choice of language and Web framework

  • Gives access to databases and other data services

  • Emits and aggregates application log streams

  • Tracks resource usage for users and groups

  • Makes the software development workflow more efficient

The principal interface and API for deploying applications to SUSE Cloud Application Platform is KubeCF. Most Cloud Foundry distributions run on virtual machines managed by BOSH. KubeCF runs in SUSE Linux Enterprise containers managed by Kubernetes. Containerizing the components of the platform itself has these advantages:

  • Improves fault tolerance. Kubernetes monitors the health of all containers, and automatically restarts faulty containers faster than virtual machines can be restarted or replaced.

  • Reduces physical memory overhead. KubeCF components deployed in containers consume substantially less memory, as host-level operations are shared between containers by Kubernetes.

SUSE Cloud Application Platform uses cf-operator, a Kubernetes Operator deployed via a Helm chart, to install custom resource definitions that convert BOSH releases into Kubernetes resources, such as Pod, Deployment, and StatefulSet. This is made possible by leveraging KubeCF, a version of Cloud Foundry deployed as a Helm chart.

1.3 SUSE Cloud Application Platform Architecture

The following figures illustrate the main structural concepts of SUSE Cloud Application Platform. Figure 1.1, “Cloud Platform Comparisons” shows a comparison of the basic cloud platforms:

  • Infrastructure as a Service (IaaS)

  • Container as a Service (CaaS)

  • Platform as a Service (PaaS)

  • Software as a Service (SaaS)

SUSE CaaS Platform is a Container as a Service platform, and SUSE Cloud Application Platform is a PaaS.

Comparison of cloud platforms.
Figure 1.1: Cloud Platform Comparisons

Figure 1.2, “Containerized Platforms” illustrates how SUSE Cloud Application Platform containerizes the platform itself on top of a cloud provider.

SUSE CaaS Platform and SUSE Cloud Application Platform containerize the platform itself.
Figure 1.2: Containerized Platforms

Figure 1.3, “SUSE Cloud Application Platform Stack” shows the relationships of the major components of the software stack. SUSE Cloud Application Platform runs on Kubernetes, which in turn runs on multiple platforms, from bare metal to various cloud stacks. Your applications run on SUSE Cloud Application Platform and provide services.

Relationships of the main Cloud Application Platform components.
Figure 1.3: SUSE Cloud Application Platform Stack

1.3.1 KubeCF Components

KubeCF comprises developer and administrator clients, trusted download sites, transient and long-running components, APIs, and authentication:

  • Clients for developers and administrators to interact with KubeCF: the cf CLI (which provides the cf command), the Stratos Web interface, and IDE plugins.

  • Docker Trusted Registry owned by SUSE.

  • SUSE Helm chart repository.

  • Helm, the Kubernetes package manager, and the helm command line client.

  • kubectl, the command line client for Kubernetes.

  • cf-operator, a Kubernetes Operator that converts BOSH releases to Kubernetes resources.

  • KubeCF, a version of Cloud Foundry deployed via cf-operator.

  • Long-running KubeCF components.

  • KubeCF post-deployment components: Transient KubeCF components that start after all KubeCF components are started, perform their tasks, and then exit.

  • KubeCF Linux cell, an elastic runtime component that runs Linux applications.

  • uaa, a Cloud Application Platform service for authentication and authorization.

  • The Kubernetes API.

1.3.2 KubeCF Containers

Figure 1.4, “KubeCF Containers, Grouped by Function” provides a look at KubeCF's containers.

Figure 1.4: KubeCF Containers, Grouped by Function
List of KubeCF Containers
adapter

Part of the logging system, manages connections to user application syslog drains.

api

Contains the KubeCF Cloud Controller, which implements the CF API. It is exposed via the router.

cc-worker

Sidekick to the Cloud Controller, processes background tasks.

database

A PXC database to store persistent data for various Cloud Application Platform components, such as the Cloud Controller and UAA.

diego-api

API for the Diego scheduler.

diego-cell (privileged)

The elastic layer of KubeCF, where applications live.

eirini

An alternative to the Diego scheduler.

eirini-persi

Enables persistent storage for applications when using the Eirini scheduler.

eirini-ssh

Provides SSH access to user applications when using the Eirini scheduler.

doppler

Routes log messages from applications and components.

log-api

Part of the logging system; exposes log streams to users using web sockets and proxies user application log messages to syslog drains. Exposed using the router.

nats

A pub-sub messaging queue for the routing system.

router

Routes application and API traffic. Exposed using a Kubernetes service.

routing-api

API for the routing system.

scheduler

Service used to create, schedule, and interact with jobs that execute on Cloud Foundry.

singleton-blobstore

A WebDAV blobstore for storing application bits, buildpacks, and stacks.

tcp-router

Routes TCP traffic for your applications.

uaa

User account and authentication.

1.3.3 KubeCF Service Diagram

This simple service diagram illustrates how KubeCF components communicate with each other (Figure 1.5, “Simple Services Diagram”). See Figure 1.6, “Detailed Services Diagram” for a more detailed view.

Figure 1.5: Simple Services Diagram

This table describes how these services operate.

Each numbered interface below is described by its network name and protocol, the requestor and request, the request credentials and authorization, the listener, the response and response credentials, and a description of the operation.

1

External (HTTPS)

Requestor: Helm Client

Request: Deploy Cloud Application Platform

Request Credentials: OAuth2 Bearer token

Request Authorization: Deployment of Cloud Application Platform Services on Kubernetes

Listener: Helm/Kubernetes API

Response: Operation ack and handle

Response Credentials: TLS certificate on external endpoint

Operator deploys Cloud Application Platform on Kubernetes

2

External (HTTPS)

Requestor: Internal Kubernetes components

Request: Download Docker Images

Request Credentials: Refer to registry.suse.com

Request Authorization: Refer to registry.suse.com

Listener: registry.suse.com

Response: Docker images

Response Credentials: None

Docker images that make up Cloud Application Platform are downloaded

3

Tenant (HTTPS)

Requestor: Cloud Application Platform components

Request: Get tokens

Request Credentials: OAuth2 client secret

Request Authorization: Varies, based on configured OAuth2 client scopes

Listener: uaa

Response: An OAuth2 refresh token used to interact with other services

Response Credentials: TLS certificate

KubeCF components ask uaa for tokens so they can talk to each other

4

External (HTTPS)

Requestor: KubeCF clients

Request: KubeCF API Requests

Request Credentials: OAuth2 Bearer token

Request Authorization: KubeCF application management

Listener: Cloud Application Platform components

Response: JSON object and HTTP Status code

Response Credentials: TLS certificate on external endpoint

Cloud Application Platform clients interact with the KubeCF API (for example, users deploying apps)

5

External (WSS)

Requestor: KubeCF clients

Request: Log streaming

Request Credentials: OAuth2 Bearer token

Request Authorization: KubeCF application management

Listener: Cloud Application Platform components

Response: A stream of KubeCF logs

Response Credentials: TLS certificate on external endpoint

KubeCF clients ask for logs (for example, a user looking at application logs or an administrator viewing system logs)

6

External (SSH)

Requestor: KubeCF clients, SSH clients

Request: SSH Access to Application

Request Credentials: OAuth2 bearer token

Request Authorization: KubeCF application management

Listener: Cloud Application Platform components

Response: A duplex connection is created allowing the user to interact with a shell

Response Credentials: RSA SSH Key on external endpoint

KubeCF clients open an SSH connection to an application's container (for example, users debugging their applications)

7

External (HTTPS)

Requestor: Helm

Request: Download charts

Request Credentials: Refer to kubernetes-charts.suse.com

Request Authorization: Refer to kubernetes-charts.suse.com

Listener: kubernetes-charts.suse.com

Response: Helm charts

Response Credentials: None

Helm charts for Cloud Application Platform are downloaded

1.3.4 Detailed Services Diagram

Figure 1.6, “Detailed Services Diagram” presents a more detailed view of KubeCF services and how they interact with each other. Services labeled in red are unencrypted, while services labeled in green run over HTTPS.

Figure 1.6: Detailed Services Diagram

2 Other Kubernetes Systems

2.1 Kubernetes Requirements

SUSE Cloud Application Platform is designed to run on any Kubernetes system that meets the following requirements:

  • Kubernetes API version of at least 1.14

  • Nodes use a minimum kernel version of 3.19, and the kernel parameter max_user_namespaces is set to a value greater than 0.

  • The container runtime storage driver should not be aufs.

  • Presence of a storage class for SUSE Cloud Application Platform to use

  • kubectl can authenticate with the apiserver

  • kube-dns or core-dns should be running and ready

  • ntp, systemd-timesyncd, or chrony must be installed and active

  • The container runtime must be configured to allow privileged containers

  • Privileged containers must be enabled in kube-apiserver. See the kube-apiserver documentation.

  • For Kubernetes deployments prior to version 1.15, privileged containers must be enabled in kubelet

  • The TasksMax property of the containerd service definition must be set to infinity
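
The following is a minimal verification sketch for some of these requirements, run on a cluster node. The commands shown are standard kubectl and systemd tools; adjust paths and service names to your distribution and container runtime.

tux > kubectl version --short
tux > cat /proc/sys/user/max_user_namespaces
tux > systemctl show containerd --property=TasksMax

If TasksMax is not already infinity, one common way to set it is a systemd drop-in file, for example:

tux > sudo mkdir -p /etc/systemd/system/containerd.service.d
tux > sudo tee /etc/systemd/system/containerd.service.d/override.conf <<EOF
[Service]
TasksMax=infinity
EOF
tux > sudo systemctl daemon-reload
tux > sudo systemctl restart containerd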

Part II Deploying SUSE Cloud Application Platform

3 Deployment and Administration Notes

Important things to know before deploying SUSE Cloud Application Platform.

4 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

Before you start deploying SUSE Cloud Application Platform, review the following documents:

5 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

Before you start deploying SUSE Cloud Application Platform, review the following documents:

6 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)

Before you start deploying SUSE Cloud Application Platform, review the following documents:

7 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

Before you start deploying SUSE Cloud Application Platform, review the following documents:

8 Installing the Stratos Web Console

The Stratos user interface (UI) is a modern web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. Install Stratos with Helm after all of the kubecf pods are running.

9 Eirini

Eirini, an alternative to Diego, is a scheduler for the Cloud Foundry Application Runtime (CFAR) that runs Cloud Foundry user applications in Kubernetes. For details about Eirini, see https://www.cloudfoundry.org/project-eirini/ and http://eirini.cf

3 Deployment and Administration Notes

Important things to know before deploying SUSE Cloud Application Platform.

3.1 README First

README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

3.2 Important Changes

Different schedulers, such as Diego and Eirini, and different stacks, such as cflinuxfs3 or sle15, have different memory requirements for applications. Not every combination is tested, so there is no universal memory setting for Cloud Application Platform. Because memory use depends on the application deployed, it is up to the user to adjust the setting based on their application.

3.3 Status of Pods during Deployment

During deployment, pods are spawned over time, starting with a single pod whose name starts with ig-. This pod will eventually disappear and will be replaced by other pods whose progress can then be followed as usual.

The whole process can take around 20 to 30 minutes to finish.

The initial stage may look like this:

tux > kubectl get pods --namespace kubecf
ig-kubecf-f9085246244fbe70-jvg4z   1/21    Running             0          8m28s

Later the progress may look like this:

NAME                        READY   STATUS       RESTARTS   AGE
adapter-0                   4/4     Running      0          6m45s
api-0                       0/15    Init:30/63   0          6m38s
bits-0                      0/6     Init:8/15    0          6m34s
bosh-dns-7787b4bb88-2wg9s   1/1     Running      0          7m7s
bosh-dns-7787b4bb88-t42mh   1/1     Running      0          7m7s
cc-worker-0                 0/4     Init:5/9     0          6m36s
credhub-0                   0/5     Init:6/11    0          6m33s
database-0                  2/2     Running      0          6m36s
diego-api-0                 6/6     Running      2          6m38s
doppler-0                   0/9     Init:7/16    0          6m40s
eirini-0                    9/9     Running      0          6m37s
log-api-0                   0/7     Init:6/13    0          6m35s
nats-0                      4/4     Running      0          6m39s
router-0                    0/5     Init:5/11    0          6m33s
routing-api-0               0/4     Init:5/10    0          6m42s
scheduler-0                 0/8     Init:8/17    0          6m35s
singleton-blobstore-0       0/6     Init:6/11    0          6m46s
tcp-router-0                0/5     Init:5/11    0          6m37s
uaa-0                       0/6     Init:8/13    0          6m36s

3.4 Length of Release Names

Release names (for example, when you run helm install RELEASE_NAME) have a maximum length of 36 characters.
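
As a quick illustrative check, you can count the characters of a candidate release name before installing; the name below is only an example.

tux > echo -n "kubecf-production-europe-west" | wc -c
29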

3.5 Releases and Associated Versions

Warning
Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release, as well as other version-related information.

CAP Release: 2.0.1 (current release)
  cf-operator Helm Chart Version: 4.5.13+0.gd4738712
  KubeCF Helm Chart Version: 2.2.3
  Stratos Helm Chart Version: 4.0.1
  Stratos Metrics Helm Chart Version: 1.2.1
  Minimum Kubernetes Version Required: 1.14
  CF API Implemented: 2.144.0
  Known Compatible CF CLI Version: 6.49.0
  CF CLI URL: https://github.com/cloudfoundry/cli/releases/tag/v6.49.0

CAP Release: 2.0
  cf-operator Helm Chart Version: 4.5.6+0.gffc6f942
  KubeCF Helm Chart Version: 2.2.2
  Stratos Helm Chart Version: 3.2.1
  Stratos Metrics Helm Chart Version: 1.2.1
  Minimum Kubernetes Version Required: 1.14
  CF API Implemented: 2.144.0
  Known Compatible CF CLI Version: 6.49.0
  CF CLI URL: https://github.com/cloudfoundry/cli/releases/tag/v6.49.0

4 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on SUSE CaaS Platform. SUSE CaaS Platform is an enterprise-class container management solution that enables IT and DevOps professionals to more easily deploy, manage, and scale container-based applications and services. It includes Kubernetes to automate lifecycle management of modern applications, and surrounding technologies that enrich Kubernetes and make the platform itself easy to operate. As a result, enterprises that use SUSE CaaS Platform can reduce application delivery cycle times and improve business agility. This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on SUSE CaaS Platform. See https://documentation.suse.com/suse-caasp/4.2/ for more information on SUSE CaaS Platform.

4.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on SUSE CaaS Platform:

Important
Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating some of the following optional features in this chapter and the Administration Guide at Part III, “SUSE Cloud Application Platform Administration”.

4.2 Creating a SUSE CaaS Platform Cluster

When creating a SUSE CaaS Platform cluster, take note of the following general guidelines to ensure there are sufficient resources available to run a SUSE Cloud Application Platform deployment:

  • Minimum 2.3 GHz processor

  • 2 vCPU per physical core

  • 4 GB RAM per vCPU

  • Worker nodes need a minimum of 4 vCPU and 16 GB RAM

As a minimum, a SUSE Cloud Application Platform deployment with a basic workload will require:

  • 1 master node

    • vCPU: 2

    • RAM: 8 GB

    • Storage: 60 GB (SSD)

  • 2 worker nodes. Each node configured with:

    • (v)CPU: 4

    • RAM: 16 GB

    • Storage: 100 GB

  • Persistent storage: 40 GB

For steps to deploy a SUSE CaaS Platform cluster, refer to the SUSE CaaS Platform Deployment Guide at https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-deployment/

When proceeding through the instructions, take note of the following to ensure the SUSE CaaS Platform cluster is suitable for a deployment of SUSE Cloud Application Platform:

4.3 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running

tux > sudo zypper install helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.
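
After installation, you can confirm that the client is Helm 3, for example:

tux > helm version --short

The reported version should start with v3.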

4.4 Storage Class

In SUSE Cloud Application Platform, some instance groups, such as bits, database, and singleton-blobstore, require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/. Examples of provisioners include:

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

A storage class can be chosen by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file, as seen in this example. Note that if there is no storage class designated as the default, this value must be set.

kube:
  storage_class: my-storage-class
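
To see which storage classes are available in your cluster, and which one, if any, is the default, you can run, for example:

tux > kubectl get storageclass

The default storage class is marked with (default) next to its name.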

4.5 Deployment Configuration

SUSE Cloud Application Platform is configured using Helm values (see https://helm.sh/docs/chart_template_guide/values_files/). Helm values can be set as either command line parameters or using a values.yaml file. The following values.yaml file, called kubecf-config-values.yaml in this guide, provides an example of a SUSE Cloud Application Platform configuration.

The format of the kubecf-config-values.yaml file has been restructured completely. Do not re-use the previous version of the file. Instead, source the default file from the appendix in Section A.1, “Complete suse/kubecf values.yaml File”.

Ensure system_domain maps to the load balancer configured for your SUSE CaaS Platform cluster (see https://documentation.suse.com/suse-caasp/4.2/single-html/caasp-deployment/#loadbalancer).

Warning
Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

4.6 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component.

4.6.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.
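
You can verify these characteristics with the openssl command line tool. For example, assuming the certificate is stored in a file named router.crt:

tux > openssl x509 -in router.crt -noout -text | grep -A 1 "Subject Alternative Name"

The output should list DNS:*.example.com, with example.com replaced by your system_domain.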

4.6.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.

Note
Note

Note the use of the "|" character, which indicates the use of a literal scalar. See http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----

4.7 Using an Ingress Controller

By default, a SUSE Cloud Application Platform cluster is exposed through its Kubernetes services. This section describes how to use an ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controller alternatives may work, but compatibility with Cloud Application Platform is not supported.

4.7.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file with the section below. The file is called nginx-ingress.yaml in this example.

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map it to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM-formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml.

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
            [...]
            xC8x/+zB7XlvcRJRio6kk670+25ABP==
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
            [...]
            to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
            -----END RSA PRIVATE KEY-----
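
To confirm that the DNS records set up earlier resolve to the Ingress controller, you can query a name under your system_domain with a standard DNS lookup tool such as host; example.com below stands in for your system_domain.

tux > host api.example.com

The reported address should match the external IP or hostname of the nginx-ingress-controller service found above.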

4.8 Affinity and Anti-affinity

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration will have the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

Note that to ensure an optimal spread of the pods across worker nodes, we recommend running five or more worker nodes to satisfy both of the default anti-affinity constraints. An operator can also specify custom affinity rules via the sizing.INSTANCE_GROUP.affinity Helm parameter; any affinity rules specified there will overwrite the default rule, not merge with it.

4.8.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
   asactors:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0
   asapi:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0

Example 2, pod anti-affinity.

sizing:
  api:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  database:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.
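
As a brief sketch of attaching a custom label to a node for use in node affinity rules, you can use kubectl label; NODE_NAME and the disktype=ssd label below are placeholders.

tux > kubectl label nodes NODE_NAME disktype=ssd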

4.9 High Availability

4.9.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

4.9.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm chart:

tux > helm inspect values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

4.9.1.2 Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

4.9.1.3 Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to the default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  adapter:
    instances: 2
  api:
    instances: 2
  asactors:
    instances: 2
  asapi:
    instances: 2
  asmetrics:
    instances: 2
  asnozzle:
    instances: 2
  auctioneer:
    instances: 2
  bits:
    instances: 2
  cc_worker:
    instances: 2
  credhub:
    instances: 2
  database:
    instances: 2
  diego_api:
    instances: 2
  diego_cell:
    instances: 2
  doppler:
    instances: 2
  eirini:
    instances: 3
  log_api:
    instances: 2
  nats:
    instances: 2
  router:
    instances: 2
  routing_api:
    instances: 2
  scheduler:
    instances: 2
  uaa:
    instances: 2
  tcp_router:
    instances: 2

4.10 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

4.10.1 Configuration

Currently SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore. To configure Amazon S3 as an external blobstore, set the following in your kubecf-config-values.yaml file and replace the example values.

features:
  blobstore:
    provider: s3
    s3:
      aws_region: "us-east-1"
      blobstore_access_key_id:  AWS-ACCESS-KEY-ID
      blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
      # User provided value for the blobstore admin password.
      blobstore_admin_users_password: PASSWORD
      # The following values are used as S3 bucket names. The buckets are automatically created if not present.
      app_package_directory_key: APP-BUCKET-NAME
      buildpack_directory_key: BUILDPACK-BUCKET-NAME
      droplet_directory_key: DROPLET-BUCKET-NAME
      resource_directory_key: RESOURCE-BUCKET-NAME
Warning
Warning: us-east-1 as Only Valid Region

Currently, there is a limitation where only us-east-1 can be chosen as the aws_region. For more information about this issue, see https://github.com/cloudfoundry-incubator/kubecf/issues/656.

Ensure the supplied AWS credentials have appropriate permissions as described at https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.
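
Optionally, before deploying, you can confirm that the supplied AWS credentials are usable, for example with the AWS CLI, assuming it is installed and configured with the same access key:

tux > aws sts get-caller-identity
tux > aws s3api list-buckets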

4.11 External Database

By default, SUSE Cloud Application Platform includes a single-availability database provided by the Percona XtraDB Cluster (PXC). SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

4.11.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important
Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the config snippet below need to exist before installing KubeCF. One way of doing that is manually running CREATE DATABASE IF NOT EXISTS database-name for each database.

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      bbs:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket: 
        name: locket
        password: root
        username: root
      credhub:        
        name: credhub
        password: root
        username: root
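
As noted above, all of these databases must exist before KubeCF is installed. The following is a minimal sketch that creates them on an external MySQL server; it assumes the server is reachable at hostname and uses the database names from the example configuration above.

tux > mysql --host hostname --user root --password <<'EOF'
CREATE DATABASE IF NOT EXISTS uaa;
CREATE DATABASE IF NOT EXISTS cloud_controller;
CREATE DATABASE IF NOT EXISTS diego;
CREATE DATABASE IF NOT EXISTS `routing-api`;
CREATE DATABASE IF NOT EXISTS network_policy;
CREATE DATABASE IF NOT EXISTS network_connectivity;
CREATE DATABASE IF NOT EXISTS locket;
CREATE DATABASE IF NOT EXISTS credhub;
EOF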

4.12 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

4.13 Deploying SUSE Cloud Application Platform

Warning
Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

4.13.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.operator.watchNamespace indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator. In this example, this means KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

4.13.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    
    tux > kubectl get service --namespace kubecf tcp-router-public
    
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS A records for the public services.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme
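
The EXTERNAL-IP values from step 3 can also be read non-interactively, which is convenient when scripting the DNS updates in step 4. The following is a minimal sketch using a jsonpath query; the field path assumes the service is of type LoadBalancer and exposes an IP address rather than a hostname:

tux > kubectl get service --namespace kubecf router-public \
 --output jsonpath='{.status.loadBalancer.ingress[0].ip}'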

4.14 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

4.14.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.

4.14.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: uaa' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  4. Verify the LDAP identity provider has been created in the kubecf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: uaa"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json" \
        --header"X-Identity-Zone-Id:uaa"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:uaa"

4.15 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 4, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform and have a running Cloud Application Platform deployment on SUSE® CaaS Platform.

  1. Add additional nodes to your SUSE® CaaS Platform cluster as described in https://documentation.suse.com/suse-caasp/4.2/html/caasp-admin/_cluster_management.html#adding_nodes.

  2. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  3. Add or update the following in your kubecf-config-values.yaml file to increase the number of diego-cell in your Cloud Application Platform deployment. Replace the example value with the number required by your workflow.

    sizing:
      diego_cell:
        instances: 5
  4. Perform a helm upgrade to apply the change.

    tux > helm upgrade kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  5. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace kubecf'

5 Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Microsoft Azure Kubernetes Service (AKS), Microsoft's managed Kubernetes service. This chapter describes the steps for preparing Azure for a SUSE Cloud Application Platform deployment, deployed with the default Azure Standard SKU load balancer (see https://docs.microsoft.com/en-us/azure/aks/load-balancer-standard).

In Kubernetes terminology a node used to be a minion, which was the name for a worker node. Now the correct term is simply node (see https://kubernetes.io/docs/concepts/architecture/nodes/). This can be confusing, as computing nodes have traditionally been defined as any device in a network that has an IP address. In Azure they are called agent nodes. In this chapter we call them agent nodes or Kubernetes nodes.

5.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on AKS:

Important
Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating some of the optional features described in this chapter and in the Administration Guide at Part III, “SUSE Cloud Application Platform Administration”.

5.2 Create Resource Group and AKS Instance

Log in to your Azure account, which should have the Contributor role.

tux > az login

You can set up an AKS cluster with an automatically generated service principal. Note that to be able to create a service principal, your user account must have permissions to register an application with your Azure Active Directory tenant, and to assign the application to a role in your subscription. For details, see https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal#automatically-create-and-use-a-service-principal.

Alternatively, you can specify an existing service principal, but the service principal must have sufficient rights to create resources at the appropriate level, for example the resource group or subscription. For more details, see https://docs.microsoft.com/en-us/azure/aks/kubernetes-service-principal.

Specify the following additional parameters for creating the cluster: node count, a username for SSH access to the nodes, SSH key, VM type, VM disk size and optionally, the Kubernetes version and a nodepool name.

tux > az aks create --resource-group my-resource-group --name cap-aks \
 --node-count 3 --admin-username cap-user \
 --ssh-key-value /path/to/some_key.pub --node-vm-size Standard_DS4_v2 \
 --node-osdisk-size 100 --nodepool-name mypool

For more az aks create options see https://docs.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create.

This takes a few minutes. When it is completed, fetch your kubectl credentials. The default behavior for az aks get-credentials is to merge the new credentials with the existing default configuration, and to set the new credentials as the current Kubernetes context. The context name is your AKS_NAME value. You should first back up your current configuration, or move it to a different location, then fetch the new credentials:

tux > az aks get-credentials --resource-group $RG_NAME --name $AKS_NAME
 Merged "cap-aks" as current context in /home/tux/.kube/config

Verify that you can connect to your cluster:

tux > kubectl get nodes

When all nodes are in a ready state and all pods are running, proceed to the next steps.

5.3 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running

tux > sudo zypper install helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.

5.4 Storage Class

In SUSE Cloud Application Platform some instance groups, such as bits, database, diego-cell, and singleton-blobstore require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/.

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

A storage class can be chosen by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file as seen in this example. Note that if there is no storage class designated as the default this value must be set.

kube:
  storage_class: my-storage-class
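
To see which storage classes exist in your cluster, and which one is marked as the default, list them with kubectl:

tux > kubectl get storageclass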

5.5 Deployment Configuration

The following file, kubecf-config-values.yaml, provides a complete example deployment configuration.

The format of the kubecf-config-values.yaml file has been restructured completely. Do not re-use the previous version of the file. Instead, source the default file from the appendix in Section A.1, “Complete suse/kubecf values.yaml File”.

Warning
Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

5.6 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component.

5.6.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.
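
To check these characteristics, openssl can print the certificate details, including its Subject Alternative Names. A minimal sketch, assuming the certificate is saved as router.crt (a placeholder file name):

tux > openssl x509 -in router.crt -noout -text | grep -A 1 'Subject Alternative Name'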

5.6.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.

Note
Note

Note the use of the "|" character which indicates the use of a literal scalar. See the http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----

5.7 Using an Ingress Controller

By default, a SUSE Cloud Application Platform cluster is exposed through its Kubernetes services. This section describes how to use an ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controller alternatives may work, but compatibility with Cloud Application Platform is not supported.

5.7.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file with the section below. The file is called nginx-ingress.yaml in this example.

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map it to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
            [...]
            xC8x/+zB7XlvcRJRio6kk670+25ABP==
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
            [...]
            to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
            -----END RSA PRIVATE KEY-----

5.8 Affinity and Anti-affinity

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration will have the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

Note that to ensure an optimal spread of the pods across worker nodes, we recommend running 5 or more worker nodes to satisfy both of the default anti-affinity constraints. An operator can also specify custom affinity rules via the sizing.INSTANCE_GROUP.affinity Helm parameter; any affinity rules specified there overwrite the default rule rather than merging with it.

5.8.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
   asactors:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0
   asapi:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0

Example 2, pod anti-affinity.

sizing:
  api:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  database:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.

5.9 High Availability

5.9.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

5.9.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml file for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm show values to read the sizing: section of the Helm chart:

tux > helm show values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

5.9.1.2 Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

5.9.1.3 Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  adapter:
    instances: 2
  api:
    instances: 2
  asactors:
    instances: 2
  asapi:
    instances: 2
  asmetrics:
    instances: 2
  asnozzle:
    instances: 2
  auctioneer:
    instances: 2
  bits:
    instances: 2
  cc_worker:
    instances: 2
  credhub:
    instances: 2
  database:
    instances: 2
  diego_api:
    instances: 2
  diego_cell:
    instances: 2
  doppler:
    instances: 2
  eirini:
    instances: 3
  log_api:
    instances: 2
  nats:
    instances: 2
  router:
    instances: 2
  routing_api:
    instances: 2
  scheduler:
    instances: 2
  uaa:
    instances: 2
  tcp_router:
    instances: 2

5.10 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

5.10.1 Configuration

Currently, SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore. To configure Amazon S3 as an external blobstore, set the following in your kubecf-config-values.yaml file and replace the example values.

features:
  blobstore:
    provider: s3
    s3:
      aws_region: "us-east-1"
      blobstore_access_key_id:  AWS-ACCESS-KEY-ID
      blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
      # User provided value for the blobstore admin password.
      blobstore_admin_users_password: PASSWORD
      # The following values are used as S3 bucket names. The buckets are automatically created if not present.
      app_package_directory_key: APP-BUCKET-NAME
      buildpack_directory_key: BUILDPACK-BUCKET-NAME
      droplet_directory_key: DROPLET-BUCKET-NAME
      resource_directory_key: RESOURCE-BUCKET-NAME
Warning
Warning: us-east-1 as Only Valid Region

Currently, there is a limitation where only us-east-1 can be chosen as the aws_region. For more information about this issue, see https://github.com/cloudfoundry-incubator/kubecf/issues/656.

Ensure the supplied AWS credentials have appropriate permissions as described at https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.

5.11 External Database

By default, SUSE Cloud Application Platform includes a single-availability database provided by the Percona XtraDB Cluster (PXC). SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

5.11.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important
Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the config snippet below need to exist before installing KubeCF. One way of doing that is manually running CREATE DATABASE IF NOT EXISTS database-name for each database.
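
The following is a minimal sketch of creating them up front, assuming the mysql command line client can reach the external server as an administrative user, and using the database names from the example configuration below:

tux > mysql --host hostname --port 3306 --user root --password --execute \
 'CREATE DATABASE IF NOT EXISTS uaa;
  CREATE DATABASE IF NOT EXISTS cloud_controller;
  CREATE DATABASE IF NOT EXISTS diego;
  CREATE DATABASE IF NOT EXISTS `routing-api`;
  CREATE DATABASE IF NOT EXISTS network_policy;
  CREATE DATABASE IF NOT EXISTS network_connectivity;
  CREATE DATABASE IF NOT EXISTS locket;
  CREATE DATABASE IF NOT EXISTS credhub;'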

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      bbs:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket: 
        name: locket
        password: root
        username: root
      credhub:        
        name: credhub
        password: root
        username: root

5.12 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

5.13 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform with an Azure Standard SKU load balancer.

Warning
Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

5.13.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.operator.watchNamespace indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator. In this example, this means KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

5.13.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    
    tux > kubectl get service --namespace kubecf tcp-router-public
    
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS A records for the public services.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme

5.14 Configuring and Testing the Native Microsoft AKS Service Broker

Microsoft Azure Kubernetes Service provides a service broker called the Open Service Broker for Azure (see https://github.com/Azure/open-service-broker-azure). This section describes how to use it with your SUSE Cloud Application Platform deployment.

Usage of the broker requires a cluster running Kubernetes 1.15 or earlier.
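
You can confirm the Kubernetes version of your cluster before proceeding; kubectl reports both the client and server versions:

tux > kubectl version --short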

Start by extracting and setting a batch of environment variables:

tux > SBRG_NAME=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 8)-service-broker

tux > REGION=eastus

tux > export SUBSCRIPTION_ID=$(az account show | jq -r '.id')

tux > az group create --name ${SBRG_NAME} --location ${REGION}

tux > SERVICE_PRINCIPAL_INFO=$(az ad sp create-for-rbac --name ${SBRG_NAME})

tux > TENANT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.tenant')

tux > CLIENT_ID=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.appId')

tux > CLIENT_SECRET=$(echo ${SERVICE_PRINCIPAL_INFO} | jq -r '.password')

tux > echo SBRG_NAME=${SBRG_NAME}

tux > echo REGION=${REGION}

tux > echo SUBSCRIPTION_ID=${SUBSCRIPTION_ID} \; TENANT_ID=${TENANT_ID}\; CLIENT_ID=${CLIENT_ID}\; CLIENT_SECRET=${CLIENT_SECRET}

Add and install the catalog Helm chart. The CPU and memory requests and limits must be increased, otherwise the installation fails due to an OOMKilled state. This example increases these to double the default:

tux > helm repo add svc-cat https://svc-catalog-charts.storage.googleapis.com

tux > helm repo update

tux > kubectl create namespace catalog

tux > helm install catalog svc-cat/catalog \
 --namespace catalog \
 --set controllerManager.healthcheck.enabled=false \
 --set apiserver.healthcheck.enabled=false \
 --set controllerManager.resources.requests.cpu=200m \
 --set controllerManager.resources.requests.memory=40Mi \
 --set controllerManager.resources.limits.cpu=200m \
 --set controllerManager.resources.limits.memory=40Mi

tux > kubectl get apiservice

tux > helm repo add azure https://kubernetescharts.blob.core.windows.net/azure

tux > helm repo update

Set up the service broker with your variables:

tux > kubectl create namespace osba

tux > helm install osba azure/open-service-broker-azure \
--namespace osba \
--set azure.subscriptionId=${SUBSCRIPTION_ID} \
--set azure.tenantId=${TENANT_ID} \
--set azure.clientId=${CLIENT_ID} \
--set azure.clientSecret=${CLIENT_SECRET} \
--set azure.defaultLocation=${REGION} \
--set redis.persistence.storageClass=default \
--set basicAuth.username=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set basicAuth.password=$(tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 16) \
--set tls.enabled=false

Monitor the progress:

tux > watch --color 'kubectl get pods --namespace osba'

When all pods are running, create the service broker in KubeCF using the cf CLI:

tux > cf login

tux > cf create-service-broker azure $(kubectl get deployment osba-open-service-broker-azure \
--namespace osba --output jsonpath='{.spec.template.spec.containers[0].env[?(@.name == "BASIC_AUTH_USERNAME")].value}') $(kubectl get secret --namespace osba osba-open-service-broker-azure --output jsonpath='{.data.basic-auth-password}' | base64 --decode) http://osba-open-service-broker-azure.osba
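
To confirm the broker has been registered, list the service brokers known to the Cloud Controller:

tux > cf service-brokers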

List the available service plans. For more information about the services supported see https://github.com/Azure/open-service-broker-azure#supported-services:

tux > cf service-access -b azure

Use cf enable-service-access to enable access to a service plan. This example enables all basic plans:

tux > cf service-access -b azure | \
awk '($2 ~ /basic/) { system("cf enable-service-access " $1 " -p " $2)}'

Test your new service broker with an example PHP application. First create an organization and space to deploy your test application to:

tux > cf create-org testorg

tux > cf create-space kubecftest -o testorg

tux > cf target -o "testorg" -s "kubecftest"

tux > cf create-service azure-mysql-5-7 basic question2answer-db \
-c "{ \"location\": \"${REGION}\", \"resourceGroup\": \"${SBRG_NAME}\", \"firewallRules\": [{\"name\": \
\"AllowAll\", \"startIPAddress\":\"0.0.0.0\",\"endIPAddress\":\"255.255.255.255\"}]}"

tux > cf service question2answer-db | grep status

Find your new service and optionally disable TLS. You should not disable TLS on a production deployment, but it simplifies testing. If you keep TLS enabled, the mysql2 gem must be configured to use it; see brianmario/mysql2 SSL options on GitHub:

tux > az mysql server list --resource-group $SBRG_NAME

tux > az mysql server update --resource-group $SBRG_NAME \
--name kubecftest --ssl-enforcement Disabled

Look in your Azure portal to find the value to use for --name for your database server.

Build and push the example PHP application:

tux > git clone https://github.com/scf-samples/question2answer

tux > cd question2answer

tux > cf push

tux > cf service question2answer-db # => bound apps

When the application has finished deploying, use your browser and navigate to the URL specified in the routes field displayed at the end of the staging logs. For example, the application route could be question2answer.example.com.

Press the button to prepare the database. When the database is ready, further verify by creating an initial user and posting some test questions.

5.15 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

5.15.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.

5.15.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: uaa' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  4. Verify the LDAP identity provider has been created in the kubecf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: uaa"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json" \
        --header"X-Identity-Zone-Id:uaa"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:uaa"

5.16 Expanding Capacity of a Cloud Application Platform Deployment on Microsoft AKS

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 5, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS) and have a running Cloud Application Platform deployment on Microsoft AKS. The instructions below will use environment variables defined in Section 5.2, “Create Resource Group and AKS Instance”.

  1. Get the current number of Kubernetes nodes in the cluster.

    tux > export OLD_NODE_COUNT=$(kubectl get nodes --output json | jq '.items | length')
  2. Set the number of Kubernetes nodes the cluster will be expanded to. Replace the example value with the number of nodes required for your workload.

    tux > export NEW_NODE_COUNT=5
  3. Increase the Kubernetes node count in the cluster.

    tux > az aks scale --resource-group $RG_NAME --name $AKS_NAME \
    --node-count $NEW_NODE_COUNT \
    --nodepool-name $NODEPOOL_NAME
  4. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  5. Add or update the following in your kubecf-config-values.yaml file to increase the number of diego-cell in your Cloud Application Platform deployment. Replace the example value with the number required by your workflow.

    sizing:
      diego_cell:
        instances: 5
  6. Perform a helm upgrade to apply the change.

    tux > helm upgrade kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  7. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace kubecf'

6 Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

This chapter describes how to deploy SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS), using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.

6.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on EKS:

Important
Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating some of the optional features described in this chapter and in the Administration Guide at Part III, “SUSE Cloud Application Platform Administration”.

6.2 Create an EKS Cluster

Now you can create an EKS cluster using eksctl. Be sure to keep in mind the following minimum requirements of the cluster.

  • Node sizes are at least t3.xlarge.

  • The NodeVolumeSize must be a minimum of 100 GB.

  • The Kubernetes version is at least 1.14.

As a minimal example, the following command will create an EKS cluster. To see additional configuration parameters, see eksctl create cluster --help.

tux > eksctl create cluster --name kubecf --version 1.14 \
--nodegroup-name standard-workers --node-type t3.xlarge \
--nodes 3 --node-volume-size 100 \
--region us-east-2 --managed \
--ssh-access --ssh-public-key /path/to/some_key.pub
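
Cluster creation takes some time. When it completes, eksctl normally adds the access configuration for the new cluster to your kubeconfig, so you can verify connectivity directly:

tux > kubectl get nodes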

6.3 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running

tux > sudo zypper install helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.

6.4 Storage Class

In SUSE Cloud Application Platform some instance groups, such as bits, database, diego-cell, and singleton-blobstore require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/.

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/

A storage class can be chosen by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file as seen in this example. Note that if there is no storage class designated as the default this value must be set.

kube:
  storage_class: my-storage-class
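
Alternatively, a storage class can be marked as the cluster default so that kube.storage_class does not need to be set. The Kubernetes documentation linked above describes this; as a sketch, the annotation can be applied with a patch similar to the following, where my-storage-class is a placeholder:

tux > kubectl patch storageclass my-storage-class \
 --patch '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'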

6.5 Deployment Configuration

Use this example kubecf-config-values.yaml as a template for your configuration.

The format of the kubecf-config-values.yaml file has been restructured completely. Do not re-use the previous version of the file. Instead, source the default file from the appendix in Section A.1, “Complete suse/kubecf values.yaml File”.

Warning
Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

6.6 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component.

6.6.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.

6.6.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.

Note
Note

Note the use of the "|" character, which indicates the use of a literal scalar. See http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----
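
Before deploying, it can be worth confirming that the certificate and private key belong together. Comparing their public key modulus digests is one common check; a sketch, assuming the PEM data is saved as router.crt and router.key (placeholder file names):

tux > openssl x509 -noout -modulus -in router.crt | openssl md5

tux > openssl rsa -noout -modulus -in router.key | openssl md5

If the two digests match, the key corresponds to the certificate.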

6.7 Using an Ingress Controller

By default, a SUSE Cloud Application Platform cluster is exposed through its Kubernetes services. This section describes how to use an ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controller alternatives may work, but compatibility with Cloud Application Platform is not supported.

6.7.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file with the section below. The file is called nginx-ingress.yaml in this example.

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map it to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
            [...]
            xC8x/+zB7XlvcRJRio6kk670+25ABP==
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
            [...]
            to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
            -----END RSA PRIVATE KEY-----

6.8 Affinity and Anti-affinity

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration has the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

To ensure an optimal spread of the pods across worker nodes, we recommend running five or more worker nodes so that both of the default anti-affinity constraints can be satisfied. An operator can also specify custom affinity rules via the sizing.INSTANCE_GROUP.affinity Helm parameter; any affinity rules specified there overwrite the default rule rather than merging with it.

6.8.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
   asactors:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0
   asapi:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0

Example 2, pod anti-affinity.

sizing:
  api:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  database:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.
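
As a brief sketch of preparing a custom node label before referencing it in a nodeAffinity rule (NODE_NAME and the example.com/pool=cap label are placeholders, not values used elsewhere in this guide):

tux > kubectl label nodes NODE_NAME example.com/pool=cap

tux > kubectl get nodes --show-labels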

6.9 High Availability

6.9.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

6.9.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm chart:

tux > helm show values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

6.9.1.2 Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

6.9.1.3 Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to the default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  adapter:
    instances: 2
  api:
    instances: 2
  asactors:
    instances: 2
  asapi:
    instances: 2
  asmetrics:
    instances: 2
  asnozzle:
    instances: 2
  auctioneer:
    instances: 2
  bits:
    instances: 2
  cc_worker:
    instances: 2
  credhub:
    instances: 2
  database:
    instances: 2
  diego_api:
    instances: 2
  diego_cell:
    instances: 2
  doppler:
    instances: 2
  eirini:
    instances: 3
  log_api:
    instances: 2
  nats:
    instances: 2
  router:
    instances: 2
  routing_api:
    instances: 2
  scheduler:
    instances: 2
  uaa:
    instances: 2
  tcp_router:
    instances: 2

6.10 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

6.10.1 Configuration

Currently, SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore. To configure Amazon S3 as an external blobstore, set the following in your kubecf-config-values.yaml file, replacing the example values.

features:
  blobstore:
    provider: s3
    s3:
      aws_region: "us-east-1"
      blobstore_access_key_id:  AWS-ACCESS-KEY-ID
      blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
      # User provided value for the blobstore admin password.
      blobstore_admin_users_password: PASSWORD
      # The following values are used as S3 bucket names. The buckets are automatically created if not present.
      app_package_directory_key: APP-BUCKET-NAME
      buildpack_directory_key: BUILDPACK-BUCKET-NAME
      droplet_directory_key: DROPLET-BUCKET-NAME
      resource_directory_key: RESOURCE-BUCKET-NAME
Warning
Warning: us-east-1 as Only Valid Region

Currently, there is a limitation where only us-east-1 can be chosen as the aws_region. For more information about this issue, see https://github.com/cloudfoundry-incubator/kubecf/issues/656.

Ensure the supplied AWS credentials have appropriate permissions as described at https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.
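
As a quick, optional way to confirm which AWS identity your CLI credentials resolve to (this does not verify the specific S3 permissions themselves):

tux > aws sts get-caller-identity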

6.11 External Database

By default, SUSE Cloud Application Platform includes a single-availability database provided by the Percona XtraDB Cluster (PXC). SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

6.11.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important
Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the configuration snippet below need to exist before installing KubeCF. One way of doing so is to manually run CREATE DATABASE IF NOT EXISTS database-name for each database, as shown in the example following the snippet.

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      bbs:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket: 
        name: locket
        password: root
        username: root
      credhub:        
        name: credhub
        password: root
        username: root
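
For example, assuming the database names from the snippet above and a root account that is allowed to create databases (hostname and the credentials are placeholders from the example configuration), the databases could be created with the mysql command line client:

# Prompts for the root password once per database; adjust host, port, and credentials to your environment.
tux > for db in uaa cloud_controller diego routing-api network_policy network_connectivity locket credhub; do
        mysql -h hostname -P 3306 -u root -p -e "CREATE DATABASE IF NOT EXISTS \`${db}\`;"
      done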

6.12 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME            URL
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts
suse            https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

6.13 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform on Amazon EKS.

Warning
Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

6.13.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.operator.watchNamespace indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator. In this example, this means KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

6.13.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    
    tux > kubectl get service --namespace kubecf tcp-router-public
    
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS CNAME records for the public services.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme
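
    If the authentication succeeds, you can optionally confirm the API target and list the orgs visible to the admin user as an extra check.

    tux > cf target

    tux > cf orgs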

6.14 Deploying and Using the AWS Service Broker

The AWS Service Broker provides integration of native AWS services with SUSE Cloud Application Platform.

6.14.1 Prerequisites

Deploying and using the AWS Service Broker requires the following:

6.14.2 Setup

  1. Create the required DynamoDB table where the AWS service broker will store its data. This example creates a table named awssb:

    tux > aws dynamodb create-table \
    		--attribute-definitions \
    			AttributeName=id,AttributeType=S \
    			AttributeName=userid,AttributeType=S \
    			AttributeName=type,AttributeType=S \
    		--key-schema \
    			AttributeName=id,KeyType=HASH \
    			AttributeName=userid,KeyType=RANGE \
    		--global-secondary-indexes \
    			'IndexName=type-userid-index,KeySchema=[{AttributeName=type,KeyType=HASH},{AttributeName=userid,KeyType=RANGE}],Projection={ProjectionType=INCLUDE,NonKeyAttributes=[id,userid,type,locked]},ProvisionedThroughput={ReadCapacityUnits=5,WriteCapacityUnits=5}' \
    		--provisioned-throughput \
    			ReadCapacityUnits=5,WriteCapacityUnits=5 \
    		--region ${AWS_REGION} --table-name awssb
  2. Wait until the table has been created. When it is ready, the TableStatus will change to ACTIVE. Check the status using the describe-table command:

    tux > aws dynamodb describe-table --table-name awssb

    (For more information about the describe-table command, see https://docs.aws.amazon.com/cli/latest/reference/dynamodb/describe-table.html.)

  3. Set a name for the Kubernetes namespace you will install the service broker to. This name will also be used in the service broker URL:

    tux > BROKER_NAMESPACE=aws-sb
  4. Create a server certificate for the service broker:

    1. Create and use a separate directory to avoid conflicts with other CA files:

      tux > mkdir /tmp/aws-service-broker-certificates && cd $_
    2. Get the CA certificate:

      tux > kubectl get secret --namespace kubecf --output jsonpath='{.items[*].data.internal-ca-cert}' | base64 -di > ca.pem
    3. Get the CA private key:

      tux > kubectl get secret --namespace kubecf --output jsonpath='{.items[*].data.internal-ca-cert-key}' | base64 -di > ca.key
    4. Create a signing request. Replace BROKER_NAMESPACE with the namespace assigned in Step 3:

      tux > openssl req -newkey rsa:4096 -keyout tls.key.encrypted -out tls.req -days 365 \
        -passout pass:1234 \
        -subj '/CN=aws-servicebroker.'${BROKER_NAMESPACE} -batch \
        -subj '/CN=aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local' -batch \
        </dev/null
    5. Decrypt the generated broker private key:

      tux > openssl rsa -in tls.key.encrypted -passin pass:1234 -out tls.key
    6. Sign the request with the CA certificate:

      tux > openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in tls.req -out tls.pem
  5. Install the AWS service broker as documented at https://github.com/awslabs/aws-servicebroker/blob/master/docs/getting-started-k8s.md. Skip the installation of the Kubernetes Service Catalog. While installing the AWS Service Broker, make sure to update the Helm chart version (the version as of this writing is 1.0.1). For the broker install, pass in a value indicating the Cluster Service Broker should not be installed (for example --set deployClusterServiceBroker=false). Ensure an account and role with adequate IAM rights are chosen (see Section 6.14.1, “Prerequisites”):

    tux > kubectl create namespace $BROKER_NAMESPACE
    
    tux > helm install aws-servicebroker aws-sb/aws-servicebroker \
    --namespace $BROKER_NAMESPACE \
    --version 1.0.1 \
    --set aws.secretkey=$AWS_ACCESS_KEY \
    --set aws.accesskeyid=$AWS_KEY_ID \
    --set deployClusterServiceBroker=false \
    --set tls.cert="$(base64 -w0 tls.pem)" \
    --set tls.key="$(base64 -w0 tls.key)" \
    --set-string aws.targetaccountid=$AWS_TARGET_ACCOUNT_ID \
    --set aws.targetrolename=$AWS_TARGET_ROLE_NAME \
    --set aws.tablename=awssb \
    --set aws.vpcid=$VPC_ID \
    --set aws.region=$AWS_REGION \
    --set authenticate=false

    To find the values of aws.targetaccountid, aws.targetrolename, and aws.vpcid, run the following command.

    tux > aws eks describe-cluster --name $CLUSTER_NAME

    For aws.targetaccountid and aws.targetrolename, examine the cluster.roleArn field. For aws.vpcid, refer to the cluster.resourcesVpcConfig.vpcId field.

  6. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space
    tux > cf target -o org -s space
  7. Create a service broker in kubecf. Note the name of the service broker should be the same as the one specified for the helm install step (for example, aws-servicebroker). Note that the username and password parameters are only used as dummy values to pass to the cf command:

    tux > cf create-service-broker aws-servicebroker username password https://aws-servicebroker-aws-servicebroker.aws-sb.svc.cluster.local
  8. Verify the service broker has been registered:

    tux > cf service-brokers
  9. List the available service plans:

    tux > cf service-access
  10. Enable access to a service. This example uses the -p flag to enable access to a specific service plan. See https://github.com/awslabs/aws-servicebroker/blob/master/templates/rdsmysql/template.yaml for information about all available services and their associated plans:

    tux > cf enable-service-access rdsmysql -p custom
  11. Create a service instance. As an example, a custom MySQL instance can be created as:

    tux > cf create-service rdsmysql custom mysql-instance-name -c '{
      "AccessCidr": "192.0.2.24/32",
      "BackupRetentionPeriod": 0,
      "MasterUsername": "master",
      "DBInstanceClass": "db.t2.micro",
      "EngineVersion": "5.7.17",
      "PubliclyAccessible": "true",
      "region": "$AWS_REGION",
      "StorageEncrypted": "false",
      "VpcId": "$VPC_ID",
      "target_account_id": "$AWS_TARGET_ACCOUNT_ID",
      "target_role_name": "$AWS_TARGET_ROLE_NAME"
    }'
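
    Once the service instance has finished provisioning, it can be bound to an application and the application restaged so the binding takes effect. This is a brief sketch; my_app is a placeholder application name, matching the placeholder used in the cleanup steps below.

    tux > cf bind-service my_app mysql-instance-name

    tux > cf restage my_app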

6.14.3 Cleanup

When the AWS Service Broker and its services are no longer required, perform the following steps:

  1. Unbind any applications using any service instances, then delete the service instance:

    tux > cf unbind-service my_app mysql-instance-name
    tux > cf delete-service mysql-instance-name
  2. Delete the service broker in kubecf:

    tux > cf delete-service-broker aws-servicebroker
  3. Delete the deployed Helm chart and the namespace:

    tux > helm delete aws-servicebroker
    tux > kubectl delete namespace ${BROKER_NAMESPACE}
  4. The manually created DynamoDB table will need to be deleted as well:

    tux > aws dynamodb delete-table --table-name awssb --region ${AWS_REGION}

6.15 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

6.15.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.

6.15.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: uaa' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  4. Verify the LDAP identity provider has been created in the kubecf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: uaa"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json" \
        --header "X-Identity-Zone-Id:uaa"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:uaa"

7 Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)

Important
Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on Google Kubernetes Engine (GKE). This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on GKE using its integrated network load balancers. See https://cloud.google.com/kubernetes-engine/ for more information on GKE.

7.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on GKE:

Important
Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating some of the optional features described in this chapter and in the Administration Guide at Part III, “SUSE Cloud Application Platform Administration”.

7.2 Creating a GKE cluster

To deploy SUSE Cloud Application Platform, create a GKE cluster as follows:

  1. Set a name for your cluster:

    tux > export CLUSTER_NAME="cap"
  2. Set the zone for your cluster:

    tux > export CLUSTER_ZONE="us-west1-a"
  3. Set the number of nodes for your cluster:

    tux > export NODE_COUNT=3
  4. Create the cluster:

    tux > gcloud container clusters create ${CLUSTER_NAME} \
    --image-type=UBUNTU \
    --machine-type=n1-standard-4 \
    --zone ${CLUSTER_ZONE} \
    --num-nodes=$NODE_COUNT \
    --no-enable-basic-auth \
    --no-issue-client-certificate \
    --no-enable-autoupgrade
    • Specify the --no-enable-basic-auth and --no-issue-client-certificate flags so that kubectl does not use basic or client certificate authentication, but uses OAuth Bearer Tokens instead. Configure the flags to suit your desired authentication mechanism.

    • Specify --no-enable-autoupgrade to disable automatic upgrades.

    • Disable legacy metadata server endpoints using --metadata disable-legacy-endpoints=true as a best practice as indicated in https://cloud.google.com/compute/docs/storing-retrieving-metadata#default.

7.3 Get kubeconfig File

Get the kubeconfig file for your cluster.

tux > gcloud container clusters get-credentials --zone ${CLUSTER_ZONE:?required} ${CLUSTER_NAME:?required} --project example-project

7.4 Install the Helm Client

Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform. This requires installing the Helm client, helm, on your remote management workstation. Cloud Application Platform requires Helm 3. For more information regarding Helm, refer to the documentation at https://helm.sh/docs/.

If your remote management workstation has the SUSE CaaS Platform package repository, install helm by running

tux > sudo zypper install helm3

Otherwise, helm can be installed by referring to the documentation at https://helm.sh/docs/intro/install/.

7.5 Storage Class

In SUSE Cloud Application Platform, some instance groups, such as bits, database, diego-cell, and singleton-blobstore, require a storage class. To learn more about storage classes, see https://kubernetes.io/docs/concepts/storage/storage-classes/.

By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/.

A storage class can be chosen by setting the kube.storage_class value in your kubecf-config-values.yaml configuration file, as seen in the following example. Note that if no storage class is designated as the default, this value must be set.

kube:
  storage_class: my-storage-class
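
To check which storage classes exist on your cluster and whether one is marked as the default, list them with kubectl. Output varies by cluster; on GKE a default storage class is typically created automatically.

tux > kubectl get storageclass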

7.6 Deployment Configuration

The following file, kubecf-config-values.yaml, provides a complete example deployment configuration.

The format of the kubecf-config-values.yaml file has been restructured completely. Do not re-use the previous version of the file. Instead, source the default file from the appendix in Section A.1, “Complete suse/kubecf values.yaml File”.

Warning
Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### kubecf-config-values.yaml

system_domain: example.com

credentials:
  cf_admin_password: changeme
  uaa_admin_client_secret: alsochangeme

7.7 Certificates

This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component.

7.7.1 Certificate Characteristics

Ensure the certificates you use have the following characteristics:

  • The certificate is encoded in the PEM format.

  • The certificate is signed by an external Certificate Authority (CA).

  • The certificate's Subject Alternative Names (SAN) include the domain *.example.com, where example.com is replaced with the system_domain in your kubecf-config-values.yaml.

7.7.2 Deployment Configuration

The certificate used to secure your deployment is passed through the kubecf-config-values.yaml configuration file. To specify a certificate, set the value of the certificate and its corresponding private key using the router.tls.crt and router.tls.key Helm values in the settings: section.

Note
Note

Note the use of the "|" character, which indicates a literal block scalar. See http://yaml.org/spec/1.2/spec.html#id2795688 for more information.

settings:
  router:
    tls:
      crt: |
        -----BEGIN CERTIFICATE-----
        MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
        QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
        CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
        ...
        xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
        M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
        1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
        T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
        G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
        ...
        GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
        M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
        MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
        -----END RSA PRIVATE KEY-----

7.8 Using an Ingress Controller

By default, a SUSE Cloud Application Platform cluster is exposed through its Kubernetes services. This section describes how to use an ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster.

Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controllers may work, but they are not supported for use with Cloud Application Platform.

7.8.1 Install and Configure the NGINX Ingress Controller

  1. Create a configuration file with the section below. The file is called nginx-ingress.yaml in this example.

    tcp:
      2222: "kubecf/scheduler:2222"
      20000: "kubecf/tcp-router:20000"
      20001: "kubecf/tcp-router:20001"
      20002: "kubecf/tcp-router:20002"
      20003: "kubecf/tcp-router:20003"
      20004: "kubecf/tcp-router:20004"
      20005: "kubecf/tcp-router:20005"
      20006: "kubecf/tcp-router:20006"
      20007: "kubecf/tcp-router:20007"
      20008: "kubecf/tcp-router:20008"
  2. Create the namespace.

    tux > kubectl create namespace nginx-ingress
  3. Install the NGINX Ingress Controller.

    tux > helm install nginx-ingress suse/nginx-ingress \
    --namespace nginx-ingress \
    --values nginx-ingress.yaml
  4. Monitor the progress of the deployment:

    tux > watch --color 'kubectl get pods --namespace nginx-ingress'
  5. After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.

    Find the external IP or hostname.

    tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress

    You will get output similar to the following.

    NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
    nginx-ingress-controller   LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
  6. Set up DNS records corresponding to the controller service IP or hostname and map them to the system_domain defined in your kubecf-config-values.yaml.

  7. Obtain a PEM-formatted certificate that is associated with the system_domain defined in your kubecf-config-values.yaml.

  8. In your kubecf-config-values.yaml configuration file, enable the ingress feature and set the tls.crt and tls.key for the certificate from the previous step.

    features:
      ingress:
        enabled: true
        tls:
          crt: |
            -----BEGIN CERTIFICATE-----
            MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
            [...]
            xC8x/+zB7XlvcRJRio6kk670+25ABP==
            -----END CERTIFICATE-----
          key: |
            -----BEGIN RSA PRIVATE KEY-----
            MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
            [...]
            to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
            -----END RSA PRIVATE KEY-----
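
After KubeCF itself has been deployed with the ingress feature enabled (see Section 7.14, “Deploying SUSE Cloud Application Platform”), you can optionally check whether Ingress resources were created in the kubecf namespace. This is only a quick sanity check; the exact resources and their names depend on the KubeCF release.

tux > kubectl get ingress --namespace kubecf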

7.9 Affinity and Anti-affinity

Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).

In SUSE Cloud Application Platform, a default configuration has the following affinity/anti-affinity rules already in place:

  • Instance groups have anti-affinity against themselves. This applies to all instance groups, including database, but not to the bits, eirini, and eirini-extensions subcharts.

  • The diego-cell and router instance groups have anti-affinity against each other.

To ensure an optimal spread of the pods across worker nodes, we recommend running five or more worker nodes so that both of the default anti-affinity constraints can be satisfied. An operator can also specify custom affinity rules via the sizing.INSTANCE_GROUP.affinity Helm parameter; any affinity rules specified there overwrite the default rule rather than merging with it.

7.9.1 Configuring Rules

To add or override affinity/anti-affinity settings, add a sizing.INSTANCE_GROUP.affinity block to your kubecf-config-values.yaml. Repeat as necessary for each instance group where affinity/anti-affinity settings need to be applied. For information on the available fields and valid values within the affinity: block, see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.

Example 1, node affinity.

Using this configuration, the Kubernetes scheduler would place both the asactors and asapi instance groups on a node with a label where the key is topology.kubernetes.io/zone and the value is 0.

sizing:
   asactors:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0
   asapi:
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: topology.kubernetes.io/zone
               operator: In
               values:
               - 0

Example 2, pod anti-affinity.

sizing:
  api:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname
  database:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: quarks.cloudfoundry.org/quarks-statefulset-name
                operator: In
                values:
                - sample_group
            topologyKey: kubernetes.io/hostname

Example 1 above uses topology.kubernetes.io/zone as its label, which is one of the standard labels that get attached to nodes by default. The list of standard labels can be found at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.

In addition to the standard labels, custom labels can be specified, as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.
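
As a brief sketch of preparing a custom node label before referencing it in a nodeAffinity rule (NODE_NAME and the example.com/pool=cap label are placeholders, not values used elsewhere in this guide):

tux > kubectl label nodes NODE_NAME example.com/pool=cap

tux > kubectl get nodes --show-labels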

7.10 High Availability

7.10.1 Configuring Cloud Application Platform for High Availability

There are two ways to make your SUSE Cloud Application Platform deployment highly available. The first method is to set the high_availability parameter in your deployment configuration file to true. The second method is to create custom configuration files with your own sizing values.

7.10.1.1 Finding Default and Allowable Sizing Values

The sizing: section in the Helm values.yaml files for the kubecf chart describes which roles can be scaled, and the scaling options for each role. You may use helm inspect to read the sizing: section in the Helm chart:

tux > helm show values suse/kubecf | less +/sizing:

Another way is to use Perl to extract the information for each role from the sizing: section.

tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'

The default values.yaml files are also included in this guide at Section A.1, “Complete suse/kubecf values.yaml File”.

7.10.1.2 Using the high_availability Helm Property

One way to make your SUSE Cloud Application Platform deployment highly available is to use the high_availability Helm property. In your kubecf-config-values.yaml, set this property to true. This changes the size of all roles to the minimum required for a highly available deployment. Your configuration file, kubecf-config-values.yaml, should include the following.

high_availability: true
Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

7.10.1.3 Using Custom Sizing Configurations

Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.

Important
Important: Sizing Priority

When sizing values are specified, they take precedence over the high_availability property.

To see the full list of configurable instance groups, refer to the default KubeCF values.yaml file in the appendix at Section A.1, “Complete suse/kubecf values.yaml File”.

The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.

sizing:
  adapter:
    instances: 2
  api:
    instances: 2
  asactors:
    instances: 2
  asapi:
    instances: 2
  asmetrics:
    instances: 2
  asnozzle:
    instances: 2
  auctioneer:
    instances: 2
  bits:
    instances: 2
  cc_worker:
    instances: 2
  credhub:
    instances: 2
  database:
    instances: 2
  diego_api:
    instances: 2
  diego_cell:
    instances: 2
  doppler:
    instances: 2
  eirini:
    instances: 3
  log_api:
    instances: 2
  nats:
    instances: 2
  router:
    instances: 2
  routing_api:
    instances: 2
  scheduler:
    instances: 2
  uaa:
    instances: 2
  tcp_router:
    instances: 2

7.11 External Blobstore

Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment.

SUSE Cloud Application Platform relies on ops files (see https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md) provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment) releases for external blobstore configurations. The default configuration for the blobstore is singleton.

7.11.1 Configuration

Currently, SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore. To configure Amazon S3 as an external blobstore, set the following in your kubecf-config-values.yaml file, replacing the example values.

features:
  blobstore:
    provider: s3
    s3:
      aws_region: "us-east-1"
      blobstore_access_key_id:  AWS-ACCESS-KEY-ID
      blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
      # User provided value for the blobstore admin password.
      blobstore_admin_users_password: PASSWORD
      # The following values are used as S3 bucket names. The buckets are automatically created if not present.
      app_package_directory_key: APP-BUCKET-NAME
      buildpack_directory_key: BUILDPACK-BUCKET-NAME
      droplet_directory_key: DROPLET-BUCKET-NAME
      resource_directory_key: RESOURCE-BUCKET-NAME
Warning
Warning: us-east-1 as Only Valid Region

Currently, there is a limitation where only us-east-1 can be chosen as the aws_region. For more information about this issue, see https://github.com/cloudfoundry-incubator/kubecf/issues/656.

Ensure the supplied AWS credentials have appropriate permissions as described at https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.
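
As a quick, optional way to confirm which AWS identity your CLI credentials resolve to (this does not verify the specific S3 permissions themselves):

tux > aws sts get-caller-identity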

7.12 External Database

By default, SUSE Cloud Application Platform includes a single-availability database provided by the Percona XtraDB Cluster (PXC). SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server.

To configure your deployment to use an external database, please follow the instructions below.

The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:

  • MySQL 5.7

7.12.1 Configuration

This section describes how to enable and configure your deployment to connect to an external database. The configuration options are specified through Helm values inside the kubecf-config-values.yaml. The deployment and configuration of the external database itself is the responsibility of the operator and beyond the scope of this documentation. It is assumed the external database has been deployed and is accessible.

Important
Important: Configuration during Initial Install Only

Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.

All the databases listed in the configuration snippet below need to exist before installing KubeCF. One way of doing so is to manually run CREATE DATABASE IF NOT EXISTS database-name for each database, as shown in the example following the snippet.

The following snippet of the kubecf-config-values.yaml contains an example of an external database configuration.

features:
  embedded_database:
    enabled: false
  external_database:
    enabled: true
    require_ssl: false
    ca_cert: ~
    type: mysql
    host: hostname
    port: 3306
    databases:
      uaa:
        name: uaa
        password: root
        username: root
      cc:
        name: cloud_controller
        password: root
        username: root
      bbs:
        name: diego
        password: root
        username: root
      routing_api:
        name: routing-api
        password: root
        username: root
      policy_server:
        name: network_policy
        password: root
        username: root
      silk_controller:
        name: network_connectivity
        password: root
        username: root
      locket: 
        name: locket
        password: root
        username: root
      credhub:        
        name: credhub
        password: root
        username: root
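
For example, assuming the database names from the snippet above and a root account that is allowed to create databases (hostname and the credentials are placeholders from the example configuration), the databases could be created with the mysql command line client:

# Prompts for the root password once per database; adjust host, port, and credentials to your environment.
tux > for db in uaa cloud_controller diego routing-api network_policy network_connectivity locket credhub; do
        mysql -h hostname -P 3306 -u root -p -e "CREATE DATABASE IF NOT EXISTS \`${db}\`;"
      done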

7.13 Add the Kubernetes charts repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

7.14 Deploying SUSE Cloud Application Platform

This section describes how to deploy SUSE Cloud Application Platform on Google GKE, and how to configure your DNS records.

Warning
Warning: KubeCF and cf-operator versions

KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.5, “Releases and Associated Versions”.

7.14.1 Deploy the Operator

  1. First, create the namespace for the operator.

    tux > kubectl create namespace cf-operator
  2. Install the operator.

    The value of global.operator.watchNamespace indicates the namespace the operator will monitor for a KubeCF deployment. This namespace should be separate from the namespace used by the operator. In this example, this means KubeCF will be deployed into a namespace called kubecf.

    tux > helm install cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  3. Wait until cf-operator is successfully deployed before proceeding. Monitor the status of your cf-operator deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'

7.14.2 Deploy KubeCF

  1. Use Helm to deploy KubeCF.

    Note that you do not need to manually create the namespace for KubeCF.

    tux > helm install kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  2. Monitor the status of your KubeCF deployment using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'
  3. Find the value of EXTERNAL-IP for each of the public services.

    tux > kubectl get service --namespace kubecf router-public
    
    tux > kubectl get service --namespace kubecf tcp-router-public
    
    tux > kubectl get service --namespace kubecf ssh-proxy-public
  4. Create DNS A records for the public services.

    1. For the router-public service, create a record mapping the EXTERNAL-IP value to <system_domain>.

    2. For the router-public service, create a record mapping the EXTERNAL-IP value to *.<system_domain>.

    3. For the tcp-router-public service, create a record mapping the EXTERNAL-IP value to tcp.<system_domain>.

    4. For the ssh-proxy-public service, create a record mapping the EXTERNAL-IP value to ssh.<system_domain>.

  5. When all pods are fully ready, verify your deployment.

    Connect and authenticate to the cluster.

    tux > cf api --skip-ssl-validation "https://api.<system_domain>"
    
    # Use the cf_admin_password set in kubecf-config-values.yaml
    tux > cf auth admin changeme
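
    If the authentication succeeds, you can optionally confirm the API target and list the orgs visible to the admin user as an extra check.

    tux > cf target

    tux > cf orgs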

7.15 Deploying and Using the Google Cloud Platform Service Broker

The Google Cloud Platform (GCP) Service Broker is designed for use with Cloud Foundry and Kubernetes. It is compliant with v2.13 of the Open Service Broker API (see https://www.openservicebrokerapi.org/) and provides support for the services listed at https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform.

This section describes how to deploy and use the GCP Service Broker, as a KubeCF application, on SUSE Cloud Application Platform.

7.15.1 Enable APIs

  1. From the GCP console, click the Navigation menu.

  2. Click APIs & Services and then Library.

  3. Enable the following:

  4. Additionally, enable the APIs for the services that will be used. Refer to https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform to see the services available and the corresponding APIs that will need to be enabled. The examples in this section will require enabling the following APIs:

7.15.2 Create a Service Account

A service account allows non-human users to authenticate with and be authorized to interact with Google APIs. To learn more about service accounts, see https://cloud.google.com/iam/docs/understanding-service-accounts. The service account created here will be used by the GCP Service Broker application so that it can interact with the APIs to provision resources.

  1. From the GCP console, click the Navigation menu.

  2. Go to IAM & admin and click Service accounts.

  3. Click Create Service Account.

  4. In the Service account name field, enter a name.

  5. Click Create.

  6. In the Service account permissions section, add the following roles:

    • Project > Editor

    • Cloud SQL > Cloud SQL Admin

    • Compute Engine > Compute Admin

    • Service Accounts > Service Account User

    • Cloud Services > Service Broker Admin

    • IAM > Security Admin

  7. Click Continue.

  8. In the Create key section, click Create Key.

  9. In the Key type field, select JSON and click Create. Save the file to a secure location. This will be required when deploying the GCP Service Broker application.

  10. Click Done to finish creating the service account.

7.15.3 Create a Database for the GCP Service Broker

The GCP Service Broker requires a database to store information about the resources it provisions. Any database that adheres to the MySQL protocol may be used, but it is recommended to use a GCP Cloud SQL instance, as outlined in the following steps.

  1. From the GCP console, click the Navigation menu.

  2. Under the Storage section, click SQL.

  3. Click Create Instance.

  4. Click Choose MySQL to select MySQL as the database engine.

  5. In the Instance ID field, enter an identifier for the MySQL instance.

  6. In the Root password field, set a password for the root user.

  7. Click Show configuration options to see additional configuration options.

  8. Under the Set connectivity section, click Add network to add an authorized network.

  9. In the Network field, enter 0.0.0.0/0 and click Done.

  10. Optionally, create SSL certificates for the database and store them in a secure location.

  11. Click Create and wait for the MySQL instance to finish creating.

  12. After the MySQL instance is finished creating, connect to it using either the Cloud Shell or the mysql command line client.

    • To connect using Cloud Shell:

      1. Click on the instance ID of the MySQL instance.

      2. In the Connect to this instance section of the Overview tab, click Connect using Cloud Shell.

      3. After the shell is opened, the gcloud sql connect command is displayed. Press Enter to connect to the MySQL instance as the root user.

      4. When prompted, enter the password for the root user set in an earlier step.

    • To connect using the mysql command line client:

      1. Click on the instance ID of the MySQL instance.

      2. In the Connect to this instance section of the Overview tab, take note of the IP address. For example, 11.22.33.44.

      3. Using the mysql command line client, run the following command.

        tux > mysql -h 11.22.33.44 -u root -p
      4. When prompted, enter the password for the root user set in an earlier step.

  13. After connecting to the MySQL instance, run the following commands to create an initial user. The service broker will use this user to connect to the service broker database.

    CREATE DATABASE servicebroker;
    CREATE USER 'gcpdbuser'@'%' IDENTIFIED BY 'gcpdbpassword';
    GRANT ALL PRIVILEGES ON servicebroker.* TO 'gcpdbuser'@'%' WITH GRANT OPTION;

    Where:

    gcpdbuser

    The username the service broker uses to connect to the service broker database. Replace gcpdbuser with a username of your choosing.

    gcpdbpassword

    The password the service broker uses to connect to the service broker database. Replace gcpdbpassword with a secure password of your choosing.
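
Optionally, verify that the new user can connect before deploying the broker. A quick check from your workstation using the mysql client and the example values above (substitute your own host, username, and database name):

# host, username, and database name are the example values used in this section
tux > mysql -h 11.22.33.44 -u gcpdbuser -p servicebroker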

7.15.4 Deploy the Service Broker

The GCP Service Broker can be deployed as a Cloud Foundry application onto your deployment of SUSE Cloud Application Platform.

  1. Get the GCP Service Broker application from Github and change to the GCP Service Broker application directory.

    tux > git clone https://github.com/GoogleCloudPlatform/gcp-service-broker
    tux > cd gcp-service-broker
  2. Update the manifest.yml file and add the environment variables below and their associated values to the env section:

    ROOT_SERVICE_ACCOUNT_JSON

    The contents, as a string, of the JSON key file created for the service account created earlier (see Section 7.15.2, “Create a Service Account”).

    SECURITY_USER_NAME

    The username to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokeruser.

    SECURITY_USER_PASSWORD

    The password to authenticate broker requests. This will be the same one used in the cf create-service-broker command. In the examples, this is cfgcpbrokerpassword.

    DB_HOST

    The host for the service broker database created earlier (see Section 7.15.3, “Create a Database for the GCP Service Broker”). This can be found in the GCP console by clicking on the name of the database instance and examining the Connect to this instance section of the Overview tab. In the examples, this is 11.22.33.44.

    DB_USERNAME

    The username used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 7.15.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbuser.

    DB_PASSWORD

    The password of the user used to connect to the service broker database. This was created by the mysql commands earlier while connected to the service broker database instance (see Section 7.15.3, “Create a Database for the GCP Service Broker”). In the examples, this is gcpdbpassword.

    The manifest.yml should look similar to the example below.

    ### example manifest.yml for the GCP Service Broker
    ---
    applications:
    - name: gcp-service-broker
      memory: 1G
      buildpacks:
      - go_buildpack
      env:
        GOPACKAGENAME: github.com/GoogleCloudPlatform/gcp-service-broker
        GOVERSION: go1.12
        ROOT_SERVICE_ACCOUNT_JSON: '{ ... }'
        SECURITY_USER_NAME: cfgcpbrokeruser
        SECURITY_USER_PASSWORD: cfgcpbrokerpassword
        DB_HOST: 11.22.33.44
        DB_USERNAME: gcpdbuser
        DB_PASSWORD: gcpdbpassword
  3. After updating the manifest.yml file, deploy the service broker as an application to your Cloud Application Platform deployment. Specify a health check type of none.

    tux > cf push --health-check-type none
  4. After the service broker application is deployed, take note of the URL displayed in the route field. Alternatively, run cf app gcp-service-broker to find the URL in the route field. On a browser, go to the route (for example, https://gcp-service-broker.example.com). You should see the documentation for the GCP Service Broker.

  5. Create the service broker in KubeCF using the cf CLI.

    tux > cf create-service-broker gcp-service-broker cfgcpbrokeruser cfgcpbrokerpassword https://gcp-service-broker.example.com

    Where https://gcp-service-broker.example.com is replaced by the URL of the GCP Service Broker application deployed to SUSE Cloud Application Platform. Find the URL using cf app gcp-service-broker and examining the routes field.

  6. Verify the service broker has been successfully registered.

    tux > cf service-brokers
  7. List the available services and their associated plans for the GCP Service Broker. For more information about the services, see https://github.com/GoogleCloudPlatform/gcp-service-broker/tree/master#open-service-broker-for-google-cloud-platform.

    tux > cf service-access -b gcp-service-broker
  8. Enable access to a service. This example enables access to the Google CloudSQL MySQL service (see https://cloud.google.com/sql/).

    tux > cf enable-service-access google-cloudsql-mysql
  9. Create an instance of the Google CloudSQL MySQL service. This example uses the mysql-db-f1-micro plan. Use the -c flag to pass optional parameters when provisioning a service. See https://github.com/GoogleCloudPlatform/gcp-service-broker/blob/master/docs/use.md for the parameters that can be set for each service.

    tux > cf create-service google-cloudsql-mysql mysql-db-f1-micro mydb-instance

    Wait for the service to finish provisioning. Check the status using the GCP console or with the following command.

    tux > cf service mydb-instance | grep status

    The service can now be bound to applications and used.
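
As an example of using the new instance from an application, bind it and restage the application so the binding takes effect. This is a minimal sketch only; it assumes an application named myapp has already been pushed.

# myapp is a placeholder application name
tux > cf bind-service myapp mydb-instance
tux > cf restage myapp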

7.16 LDAP Integration

SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.

7.16.1 Prerequisites

The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++
  • An LDAP server and the credentials for a user/service account with permissions to search the directory.

7.16.2 Example LDAP Integration

Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create the LDAP identity provider. A 201 response will be returned when the identity provider is successfully created. See the UAA API Reference and Cloud Foundry UAA-LDAP Documentation for information regarding the request parameters and additional options available to configure your identity provider.

    The following is an example of a uaac curl command and its request parameters used to create an identity provider. Specify the parameters according to your LDAP server's credentials and directory structure. Ensure the user specified in the bindUserDn has permissions to search the directory.

    tux > uaac curl /identity-providers?rawConfig=true \
        --request POST \
        --insecure \
        --header 'Content-Type: application/json' \
        --header 'X-Identity-Zone-Subdomain: uaa' \
        --data '{
      "type" : "ldap",
      "config" : {
        "ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
        "baseUrl" : "ldap://ldap.example.com:389",
        "bindUserDn" : "cn=admin,dc=example,dc=com",
        "bindPassword" : "password",
        "userSearchBase" : "dc=example,dc=com",
        "userSearchFilter" : "uid={0}",
        "ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
        "groupSearchBase" : "dc=example,dc=com",
        "groupSearchFilter" : "member={0}"
      },
      "originKey" : "ldap",
      "name" : "My LDAP Server",
      "active" : true
      }'
  4. Verify the LDAP identity provider has been created in the kubecf zone. The output should now contain an entry for the ldap type.

    tux > uaac curl /identity-providers --insecure --header "X-Identity-Zone-Id: uaa"
  5. Use the cf CLI to target your SUSE Cloud Application Platform deployment.

    tux > cf api --skip-ssl-validation https://api.example.com
  6. Log in as an administrator.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> admin
    
    Password>
    Authenticating...
    OK
  7. Create users associated with your LDAP identity provider.

    tux > cf create-user username --origin ldap
    Creating user username...
    OK
    
    TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
  8. Assign the user a role. Roles define the permissions a user has for a given org or space and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.

    tux > cf set-space-role username Org Space SpaceDeveloper
    Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
    OK
    tux > cf set-org-role username Org OrgManager
    Assigning role OrgManager to user username in org Org as admin...
    OK
  9. Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.

    tux > cf login
    API endpoint: https://api.example.com
    
    Email> username
    
    Password>
    Authenticating...
    OK
    
    
    
    API endpoint:   https://api.example.com (API version: 2.115.0)
    User:           username@ldap.example.com

If the LDAP identity provider is no longer needed, it can be removed with the following steps.

  1. Obtain the ID of your identity provider.

    tux > uaac curl /identity-providers \
        --insecure \
        --header "Content-Type:application/json" \
        --header "Accept:application/json" \
        --header "X-Identity-Zone-Id:uaa"
  2. Delete the identity provider.

    tux > uaac curl /identity-providers/IDENTITY_PROVIDER_ID \
        --request DELETE \
        --insecure \
        --header "X-Identity-Zone-Id:uaa"

7.17 Expanding Capacity of a Cloud Application Platform Deployment on Google GKE

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 7, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE) and have a running Cloud Application Platform deployment on Google GKE. The instructions below will use environment variables defined in Section 7.2, “Creating a GKE cluster”.

  1. Get the creation timestamp of the most recently created node in the cluster.

    tux > RECENT_VM_NODE=$(gcloud compute instances list --filter=name~${CLUSTER_NAME:?required} --format json | jq --raw-output '[sort_by(.creationTimestamp) | .[].creationTimestamp ] | last | .[0:19] | strptime("%Y-%m-%dT%H:%M:%S") | mktime')
  2. Increase the Kubernetes node count in the cluster. Replace the example value with the number of nodes required for your workload.

    tux > gcloud container clusters resize $CLUSTER_NAME \
    --num-nodes 5
  3. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  4. Add or update the following in your kubecf-config-values.yaml file to increase the number of diego-cell instances in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        instances: 5
  5. Perform a helm upgrade to apply the change.

    tux > helm upgrade kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  6. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace kubecf'

8 Installing the Stratos Web Console

The Stratos user interface (UI) is a modern web-based management application for Cloud Foundry. It provides a graphical management console for both developers and system administrators. Install Stratos with Helm after all of the kubecf pods are running.
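
To confirm that kubecf is ready, list its pods and verify that each shows a Running status (or Completed, for job pods). This quick check assumes kubecf was deployed to the kubecf namespace, as in the preceding chapters:

tux > kubectl get pods --namespace kubecf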

8.1 Deploy Stratos on SUSE® CaaS Platform

The steps in this section describe how to install Stratos on SUSE® CaaS Platform without an external load balancer, instead mapping a worker node to your SUSE Cloud Application Platform domain as described in Section 4.5, “Deployment Configuration”. These instructions assume you have followed the procedure in Chapter 4, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform, have deployed kubecf successfully, and have created a default storage class.

If you are using SUSE Enterprise Storage as your storage back-end, copy the secret into the Stratos namespace:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "stratos"/' | kubectl create --filename -

You should already have the Stratos charts when you downloaded the SUSE charts repository (see Section 4.12, “Add the Kubernetes Charts Repository”). Search your Helm repository to verify that you have the suse/console chart:

tux > helm search repo suse
NAME                            CHART VERSION        APP VERSION    DESCRIPTION
suse/cf-operator                4.5.13+0.gd4738712    2.0.1          A Helm chart for cf-operator, the k8s operator ....
suse/console                    4.0.1                2.0.1          A Helm chart for deploying SUSE Stratos Console
suse/kubecf                     2.2.3                2.0.1          A Helm chart for KubeCF
suse/metrics                    1.2.1                2.0.1          A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                               A minibroker for your minikube
suse/nginx-ingress              0.28.4               0.15.0         An nginx Ingress controller that uses ConfigMap to store ...
...

Use Helm to install Stratos, using the same kubecf-config-values.yaml configuration file you used to deploy kubecf:

Note
Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md.

To enable technology preview features, set the console.techPreview Helm value to true. For example, when running helm install, add --set console.techPreview=true.

tux > kubectl create namespace stratos

tux > helm install susecf-console suse/console \
--namespace stratos \
--values kubecf-config-values.yaml

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

When the stratos deployment completes, query with Helm to view your release information:

tux > helm status susecf-console
LAST DEPLOYED: Wed Mar 27 06:51:36 2019
NAMESPACE: stratos
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                           TYPE    DATA  AGE
susecf-console-secret          Opaque  2     3h
susecf-console-mariadb-secret  Opaque  2     3h

==> v1/PersistentVolumeClaim
NAME                                  STATUS  VOLUME                                    CAPACITY  ACCESSMODES  STORAGECLASS  AGE
susecf-console-upgrade-volume         Bound   pvc-711380d4-5097-11e9-89eb-fa163e15acf0  20Mi      RWO          persistent    3h
susecf-console-encryption-key-volume  Bound   pvc-711b5275-5097-11e9-89eb-fa163e15acf0  20Mi      RWO          persistent    3h
console-mariadb                       Bound   pvc-7122200c-5097-11e9-89eb-fa163e15acf0  1Gi       RWO          persistent    3h

==> v1/Service
NAME                    CLUSTER-IP      EXTERNAL-IP                                                PORT(S)   AGE
susecf-console-mariadb  172.24.137.195  <none>                                                     3306/TCP  3h
susecf-console-ui-ext   172.24.80.22    10.86.101.115,172.28.0.31,172.28.0.36,172.28.0.7,172.28.0.22  8443/TCP  3h

==> v1beta1/Deployment
NAME        DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
stratos-db  1        1        1           1          3h

==> v1beta1/StatefulSet
NAME     DESIRED  CURRENT  AGE
stratos  1        1        3h

Find the external IP address with kubectl get service susecf-console-ui-ext --namespace stratos to access your new Stratos Web console, for example https://10.86.101.115:8443, or use the domain you created for it, and its port, for example https://example.com:8443. Wade through the nag screens about the self-signed certificates and log in as admin with the password you created in kubecf-config-values.yaml.

Stratos UI Cloud Foundry Console
Figure 8.1: Stratos UI Cloud Foundry Console

8.1.1 Connecting SUSE® CaaS Platform to Stratos

Stratos can show information from your SUSE® CaaS Platform environment.

To enable this, you must register and connect your SUSE® CaaS Platform environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the SUSE CaaS Platform button.

  3. Enter a memorable name for your SUSE® CaaS Platform environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Provide a valid kubeconfig file for your SUSE® CaaS Platform environment.

  9. Click Connect.

  10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your SUSE® CaaS Platform environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 8.2: Kubernetes Environment Information on Stratos

8.2 Deploy Stratos on Amazon EKS

Before deploying Stratos, ensure kubecf has been successfully deployed on Amazon EKS (see Chapter 6, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS)).

Configure a scoped storage class for your Stratos deployment. Create a configuration file, called scoped-storage-class.yaml in this example, using the following as a template. Specify the region you are using as the zone and be sure to include the letter identifier (for example, the a in us-west-2a) that indicates the Availability Zone used:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2scoped
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: "us-west-2a"
reclaimPolicy: Retain
mountOptions:
  - debug

Create the storage class using the scoped-storage-class.yaml configuration file:

tux > kubectl create --filename scoped-storage-class.yaml

Verify the storage class has been created:

tux > kubectl get storageclass
NAME            PROVISIONER             AGE
gp2 (default)   kubernetes.io/aws-ebs   1d
gp2scoped       kubernetes.io/aws-ebs   1d

Use Helm to install Stratos:

Note
Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md.

To enable technology preview features, set the console.techPreview Helm value to true. For example, when running helm install, add --set console.techPreview=true.

tux > kubectl create namespace stratos

tux > helm install susecf-console suse/console \
--namespace stratos \
--values kubecf-config-values.yaml \
--set kube.storage_class.persistent=gp2scoped \
--set services.loadbalanced=true

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Obtain the host name of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this host name to create a CNAME record.
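
After the CNAME record has been created and has propagated, you can confirm it resolves to the load balancer host name. A quick check with dig, using console.example.com purely as a placeholder for the name you chose:

# console.example.com is a placeholder for your Stratos console host name
tux > dig +short console.example.com CNAME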

Stratos UI Cloud Foundry Console
Figure 8.3: Stratos UI Cloud Foundry Console

8.2.1 Connecting Amazon EKS to Stratos

Stratos can show information from your Amazon EKS environment.

To enable this, you must register and connect your Amazon EKS environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the Amazon EKS button.

  3. Enter a memorable name for your Amazon EKS environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Enter the name of your Amazon EKS cluster in the Cluster field.

  9. Enter your AWS Access Key ID in the Access Key ID field.

  10. Enter your AWS Secret Access Key in the Secret Access Key field.

  11. Click Connect.

  12. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Amazon EKS environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 8.4: Kubernetes Environment Information on Stratos

8.3 Deploy Stratos on Microsoft AKS

Before deploying Stratos, ensure kubecf has been successfully deployed on Microsoft AKS (see Chapter 5, Deploying SUSE Cloud Application Platform on Microsoft Azure Kubernetes Service (AKS)).

Note
Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md.

To enable technology preview features, set the console.techPreview Helm value to true. For example, when running helm install, add --set console.techPreview=true.

Use Helm to install Stratos:

tux > kubectl create namespace stratos
	  
tux > helm install susecf-console suse/console \
--namespace stratos \
--values kubecf-config-values.yaml \
--set services.loadbalanced=true

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Obtain the IP address of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this IP address to create an A record.

Stratos UI Cloud Foundry Console
Figure 8.5: Stratos UI Cloud Foundry Console

8.3.1 Connecting Microsoft AKS to Stratos

Stratos can show information from your Microsoft AKS environment.

To enable this, you must register and connect your Microsoft AKS environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the Azure AKS button.

  3. Enter a memorable name for your Microsoft AKS environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Provide a valid kubeconfig file for your Microsoft AKS environment.

  9. Click Connect.

  10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Microsoft AKS environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 8.6: Kubernetes Environment Information on Stratos

8.4 Deploy Stratos on Google GKE

Before deploying Stratos, ensure kubecf has been successfully deployed on Google GKE (see Chapter 7, Deploying SUSE Cloud Application Platform on Google Kubernetes Engine (GKE)).

Use Helm to install Stratos:

Note
Note: Technology Preview Features

Some Stratos releases may include features as part of a technology preview. Technology preview features are for evaluation purposes only and not supported for production use. To see the technology preview features available for a given release, refer to https://github.com/SUSE/stratos/blob/master/CHANGELOG.md.

To enable technology preview features, set the console.techPreview Helm value to true. For example, when running helm install, add --set console.techPreview=true.

tux > kubectl create namespace stratos

tux > helm install susecf-console suse/console \
--namespace stratos \
--values kubecf-config-values.yaml \
--set services.loadbalanced=true

You can monitor the status of your stratos deployment with the watch command:

tux > watch --color 'kubectl get pods --namespace stratos'

When stratos is successfully deployed, the following is observed:

  • For the volume-migration pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Obtain the IP address of the service exposed through the public load balancer:

tux > kubectl get service susecf-console-ui-ext --namespace stratos

Use this IP address to create an A record.

Stratos UI Cloud Foundry Console
Figure 8.7: Stratos UI Cloud Foundry Console

8.4.1 Connecting Google GKE to Stratos

Stratos can show information from your Google GKE environment.

To enable this, you must register and connect your Google GKE environment with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view.

  1. In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view.

  2. On the Register a new Endpoint view, click the Google Kubernetes Engine button.

  3. Enter a memorable name for your Google GKE environment in the Name field. For example, my-endpoint.

  4. Enter the URL of the API server for your Kubernetes environment in the Endpoint Address field. Run kubectl cluster-info and use the value of Kubernetes master as the URL.

    tux > kubectl cluster-info
  5. Activate the Skip SSL validation for the endpoint check box if using self-signed certificates.

  6. Click Register.

  7. Activate the Connect to my-endpoint now (optional) check box.

  8. Provide a valid Application Default Credentials file for your Google GKE environment. Generate the file using the command below. The command saves the credentials to a file named application_default_credentials.json and outputs the path of the file.

    tux > gcloud auth application-default login
  9. Click Connect.

  10. In the Stratos UI, go to Kubernetes in the left-hand side navigation. Information for your Google GKE environment should now be displayed.

Kubernetes Environment Information on Stratos
Figure 8.8: Kubernetes Environment Information on Stratos

8.5 Upgrading Stratos

For instructions to upgrade Stratos, follow the process described in Chapter 10, Upgrading SUSE Cloud Application Platform. Take note that kubecf is upgraded prior to upgrading Stratos.

8.6 Stratos Metrics

Stratos Metrics provides a Helm chart for deploying Prometheus (see https://prometheus.io/) and the following metrics exporters to Kubernetes:

  • Cloud Foundry Firehose Exporter (enabled by default)

  • Cloud Foundry Exporter (disabled by default)

  • Kubernetes State Metrics Exporter (disabled by default)

The Stratos Metrics Helm chart deploys a Prometheus server and the configured Exporters and fronts the Prometheus server with an nginx server to provide authenticated access to Prometheus (currently basic authentication over HTTPS).

When required by configuration, it also contains an initialization script that sets up users in the UAA with the correct scopes/permissions to read data from the Cloud Foundry Firehose and/or API.

Lastly, the Helm chart generates a small metadata file in the root of the nginx server that is used by Stratos to determine which Cloud Foundry and Kubernetes clusters the Prometheus server is providing Metrics for.

To learn more about Stratos Metrics and its full list of configuration options, see https://github.com/SUSE/stratos-metrics.

8.6.1 Exporter Configuration

8.6.1.1 Firehose Exporter

This exporter can be enabled/disabled via the Helm value firehoseExporter.enabled. By default this exporter is enabled.

You must provide the following Helm chart values for this Exporter to work correctly:

  • cloudFoundry.apiEndpoint - API Endpoint of the Cloud Foundry API Server

  • cloudFoundry.uaaAdminClient - Admin client of the UAA used by the Cloud Foundry server

  • cloudFoundry.uaaAdminClientSecret - Admin client secret of the UAA used by the Cloud Foundry server

  • cloudFoundry.skipSslVerification - Whether to skip SSL verification when communicating with Cloud Foundry and the UAA APIs

You can scale the firehose nozzle in Stratos Metrics by specifying the following override:

firehoseExporter:
  instances: 1

Please note, the number of firehose nozzles should be proportional to the number of Traffic Controllers in your Cloud Foundry (see docs at https://docs.cloudfoundry.org/loggregator/log-ops-guide.html). Otherwise, Loggregator will not split the firehose between the nozzles.

8.6.1.2 Cloud Foundry Exporter

This exporter can be enabled/disabled via the Helm value cfExporter.enabled. By default this exporter is disabled.

You must provide the following Helm chart values for this Exporter to work correctly:

  • cloudFoundry.apiEndpoint - API Endpoint of the Cloud Foundry API Server

  • cloudFoundry.uaaAdminClient - Admin client of the UAA used by the Cloud Foundry server

  • cloudFoundry.uaaAdminClientSecret - Admin client secret of the UAA used by the Cloud Foundry server

  • cloudFoundry.skipSslVerification - Whether to skip SSL verification when communicating with Cloud Foundry and the UAA APIs

8.6.1.3 Kubernetes Monitoring

This exporter can be enabled/disabled via the Helm value prometheus.kubeStateMetrics.enabled. By default this exporter is disabled.

You must provide the following Helm chart values for this Exporter to work correctly:

  • kubernetes.apiEndpoint - The API Endpoint of the Kubernetes API Server

8.6.2 Install Stratos Metrics with Helm

In order to display metrics data with Stratos, you need to deploy the stratos-metrics Helm chart. As with deploying Stratos, you should deploy the metrics Helm chart using the same kubecf-config-values.yaml file that was used for deploying kubecf.

Additionally, create a new YAML file, named stratos-metrics-values.yaml in this example, for configuration options specific to Stratos Metrics.

The following is an example stratos-metrics-values.yaml file.

cloudFoundry:
  apiEndpoint: https://api.example.com
  uaaAdminClient: admin
  uaaAdminClientSecret: password
  skipSslVerification: "true"
env:
  DOPPLER_PORT: 443
kubernetes:
  apiEndpoint: kube_server_address.example.com
metrics:
  username: username
  password: password
prometheus:
  kubeStateMetrics:
    enabled: true
  server:
    storageClass: "persistent"
services:
  loadbalanced: true

where:

  • kubernetes.apiEndpoint is the same URL that you used when registering your Kubernetes environment with Stratos (the Kubernetes API Server URL).

  • prometheus.server.storageClass is the storage class to be used by Stratos Metrics. If a storage class is not assigned, the default storage class will be used. If a storage class is not specified and there is no default storage class, the prometheus pod will fail to start (see the check after this list).

  • metrics.username is the username used to authenticate with the nginx server that fronts Prometheus. This username is also used during the process described in Section 8.6.3, “Connecting Stratos Metrics”.

  • metrics.password is the password used to authenticate with the nginx server that fronts Prometheus. This password is also used during the process described in Section 8.6.3, “Connecting Stratos Metrics”. Ensure a secure password is chosen.

  • services.loadbalanced is set to true if your Kubernetes deployment supports automatic configuration of a load balancer (for example, AKS, EKS, and GKE).
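
To see which storage classes exist in your cluster, and whether one is marked as the default, list them with kubectl. This is a quick check only; the class names shown will differ by platform.

tux > kubectl get storageclass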

If you are using SUSE Enterprise Storage, you must copy the Ceph admin secret to the metrics namespace:

tux > kubectl get secret ceph-secret-admin --output json --namespace default | \
sed 's/"namespace": "default"/"namespace": "metrics"/' | kubectl create --filename -

Install Metrics with:

tux > kubectl create namespace metrics

tux > helm install susecf-metrics suse/metrics \
--namespace metrics \
--values kubecf-config-values.yaml \
--values stratos-metrics-values.yaml

Monitor progress:

tux > watch --color 'kubectl get pods --namespace metrics'

When all statuses show Ready, press Ctrl+C to exit and view your release information.

8.6.3 Connecting Stratos Metrics

When Stratos Metrics is connected to Stratos, additional views are enabled that show metrics metadata that has been ingested into the Stratos Metrics Prometheus server.

To enable this, you must register and connect your Stratos Metrics instance with Stratos.

In the Stratos UI, go to Endpoints in the left-hand side navigation and click on the + icon in the top-right of the view - you should be shown the "Register new Endpoint" view. Next:

  1. Select Metrics from the Endpoint Type dropdown.

  2. Enter a memorable name for your environment in the Name field.

  3. Enter the Endpoint Address. Use the following to find the endpoint value.

    tux > kubectl get service susecf-metrics-metrics-nginx --namespace metrics
    • For Microsoft AKS, Amazon EKS, and Google GKE deployments which use a load balancer, the output will be similar to the following:

      NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)         AGE
      susecf-metrics-metrics-nginx   LoadBalancer   10.0.202.180   52.170.253.229   443:30263/TCP   21h

      Prepend https:// to the public IP of the load balancer, and enter it into the Endpoint Address field. Using the values from the example above, https://52.170.253.229 is entered as the endpoint address.

    • For SUSE CaaS Platform deployments which do not use a load balancer, the output will be similar to the following:

      NAME                           TYPE       CLUSTER-IP       EXTERNAL-IP               PORT(S)         AGE
      susecf-metrics-metrics-nginx   NodePort   172.28.107.209   10.86.101.115,172.28.0.31 443:30685/TCP   21h

      Prepend https:// to the external IP of your node, followed by the nodePort, and enter it into the Endpoint Address field. Using the values from the example above, https://10.86.101.115:30685 is entered as the endpoint address.

  4. Check the Skip SSL validation for the endpoint checkbox if using self-signed certificates.

  5. Click Finish.

The view will refresh to show the new endpoint in the disconnected state. Next you will need to connect to this endpoint.

In the table of endpoints, click the overflow menu icon alongside the endpoint that you added above, then:

  1. Click on Connect in the dropdown menu.

  2. Enter the username for your Stratos Metrics instance. This will be the metrics.username defined in your stratos-metrics-values.yaml file.

  3. Enter the password for your Stratos Metrics instance. This will be the metrics.password defined in your stratos-metrics-values.yaml file.

  4. Click Connect.

Once connected, you should see that the name of your Metrics endpoint is a hyperlink and clicking on it should show basic metadata about the Stratos Metrics endpoint.

Metrics data and views should now be available in the Stratos UI, for example:

  • On the Instances tab for an Application, the table should show an additional Cell column to indicate which Diego Cell the instance is running on. This should be clickable to navigate to a Cell view showing Cell information and metrics.

    Cell Column on Application Instance Tab after Connecting Stratos Metrics
    Figure 8.9: Cell Column on Application Instance Tab after Connecting Stratos Metrics
  • On the view for an Application there should be a new Metrics tab that shows Application metrics.

    Application Metrics Tab after Connecting Stratos Metrics
    Figure 8.10: Application Metrics Tab after Connecting Stratos Metrics
  • On the Kubernetes views, views such as the Node view should show an additional Metrics tab with metric information.

    Node Metrics on the Stratos Kubernetes View
    Figure 8.11: Node Metrics on the Stratos Kubernetes View

9 Eirini

Eirini, an alternative to Diego, is a scheduler for the Cloud Foundry Application Runtime (CFAR) that runs Cloud Foundry user applications in Kubernetes. For details about Eirini, see https://www.cloudfoundry.org/project-eirini/ and http://eirini.cf.

Different schedulers and stacks have different memory requirements for applications. Not every combination is tested, so there is no universal memory setting for Cloud Application Platform; because the requirement depends on the application deployed, it is up to the user to adjust the setting based on their application.

Warning
Warning: Technology Preview

Eirini is currently included in SUSE Cloud Application Platform as a technology preview to allow users to evaluate it. It is not supported for use in production deployments.

As a technology preview, Eirini contains certain limitations to its functionality.

9.1 Enabling Eirini

  1. To enable Eirini, and disable Diego, add the following to your kubecf-config-values.yaml file.

    features:
      eirini:
        enabled: true

    When Eirini is enabled, both features.suse_default_stack and features.suse_buildpacks must be enabled as well. A cflinuxfs3 Eirini image is currently not available, and the SUSE stack must be used. By default, both the SUSE stack and buildpacks are enabled.

    Note
    Note
    • After enabling Eirini, you will still see the diego-api pod. This is normal behavior because the Diego pod has a component required by Eirini.

    • Eirini will only work on a cluster that has the parameter --cluster-domain set to cluster.local.

  2. Deploy kubecf.

    Refer to Section 4.13, Section 5.13, Section 6.13, or Section 7.14, “Deploying SUSE Cloud Application Platform”, for platform-specific instructions.

  3. Depending on your cluster configuration, Metrics Server may need to be deployed. Use Helm to install the latest stable Metrics Server.

    Note that --kubelet-insecure-tls is not recommended for production usage, but can be useful in test clusters with self-signed Kubelet serving certificates. For production, use --tls-private-key-file.

    tux > helm install metrics-server stable/metrics-server --set args[0]="--kubelet-preferred-address-types=InternalIP" --set args[1]="--kubelet-insecure-tls"
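
Once the Metrics Server pod is running, a quick way to confirm the metrics API is serving data is to query node metrics; the output columns and values will vary by cluster:

tux > kubectl top nodes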

Part III SUSE Cloud Application Platform Administration

10 Upgrading SUSE Cloud Application Platform

SUSE Cloud Application Platform upgrades are delivered as container images from the SUSE registry and applied with Helm.

11 Configuration Changes

After the initial deployment of Cloud Application Platform, any changes made to your Helm chart values, whether through your kubecf-config-values.yaml file or directly using Helm's --set flag, are applied using the helm upgrade command.

12 Creating Admin Users

This chapter provides an overview on how to create additional administrators for your Cloud Application Platform cluster.

13 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create…

14 Accessing the UAA User Interface

After UAA is deployed successfully, users will not be able to log in to the UAA user interface (UI) with the admin user and the UAA_ADMIN_CLIENT_SECRET credentials. This user is only an OAuth client that is authorized to call UAA REST APIs and will need to create a separate user in the UAA server by…

15 Cloud Controller Database Secret Rotation

The Cloud Controller Database (CCDB) encrypts sensitive information like passwords. The encryption key is generated when KubeCF is deployed. If it is compromised or needs to be rotated for any other reason, new keys can be added. Note that existing encrypted information will not be updated. The encr…

16 Rotating Automatically Generated Secrets

Cloud Application Platform uses a number of automatically generated secrets (passwords and certificates) for use internally provided by cf-operator. This removes the burden from human operators while allowing for secure communication. From time to time, operators may wish to change such secrets, eit…

17 Backup and Restore

cf-plugin-backup backs up and restores your Cloud Controller Database (CCDB), using the Cloud Foundry command line interface (cf CLI). (See Section 22.1, “Using the cf CLI with SUSE Cloud Application Platform”.)

18 Service Brokers

The Open Service Broker API (OSBAPI) provides your SUSE Cloud Application Platform applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are create…

19 App-AutoScaler

The App-AutoScaler service is used for automatically managing an application's instance count when deployed on KubeCF. The scaling behavior is determined by a set of criteria defined in a policy (See Section 19.4, “Policies”).

20 Integrating CredHub with SUSE Cloud Application Platform

SUSE Cloud Application Platform supports CredHub integration. You should already have a working CredHub instance, a CredHub service on your cluster, then apply the steps in this chapter to connect SUSE Cloud Application Platform.

21 Buildpacks

Buildpacks are used to construct the environment needed to run your applications, including any required runtimes or frameworks as well as other dependencies. When you deploy an application, a buildpack can be specified or automatically detected by cycling through all available buildpacks to find on…

10 Upgrading SUSE Cloud Application Platform

SUSE Cloud Application Platform upgrades are delivered as container images from the SUSE registry and applied with Helm.

For additional upgrade information, always review the release notes published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/2/.

10.1 Important Considerations

Before performing an upgrade, be sure to take note of the following:

Perform Upgrades in Sequence

Cloud Application Platform only supports upgrading releases in sequential order. If there are any intermediate releases between your current release and your target release, they must be installed. Skipping releases is not supported.

Preserve Helm Value Changes during Upgrades

During a helm upgrade, always ensure your kubecf-config-values.yaml file is passed. This will preserve any previously set Helm values while allowing additional Helm value changes to be made.

helm rollback Is Not Supported

helm rollback is not supported in SUSE Cloud Application Platform or in upstream Cloud Foundry, and may break your cluster completely, because database migrations only run forward and cannot be reversed. Database schema can change over time. During upgrades, pods of both the current and the next release may run concurrently, so the schema must stay compatible with the immediately previous release. But there is no way to guarantee such compatibility for future upgrades. One way to address this is to perform a full raw data backup and restore. (See Section 17.2, “Disaster Recovery through Raw Data Backup and Restore”)

10.2 Upgrading SUSE Cloud Application Platform

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release:

CAP Release                            2.0.1 (current release)     2.0
cf-operator Helm Chart Version         4.5.13+0.gd4738712          4.5.6+0.gffc6f942
KubeCF Helm Chart Version              2.2.3                       2.2.2
Stratos Helm Chart Version             4.0.1                       3.2.1
Stratos Metrics Helm Chart Version     1.2.1                       1.2.1
Minimum Kubernetes Version Required    1.14                        1.14
CF API Implemented                     2.144.0                     2.144.0
Known Compatible CF CLI Version        6.49.0                      6.49.0
CF CLI URL                             https://github.com/cloudfoundry/cli/releases/tag/v6.49.0 (both releases)

Use helm list to see the version of your installed release. Verify the latest release is the next sequential release from your installed release. If it is, proceed with the commands below to perform the upgrade.
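
For example, with Helm 3 the installed releases and their chart versions across all namespaces can be listed as follows:

tux > helm list --all-namespaces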

The following procedure will upgrade SUSE Cloud Application Platform 2.0 to SUSE Cloud Application Platform 2.0.1.

  1. Begin by upgrading cf-operator.

    tux > helm upgrade cf-operator suse/cf-operator \
    --namespace cf-operator \
    --set "global.operator.watchNamespace=kubecf" \
    --version 4.5.13+0.gd4738712
  2. Wait until cf-operator is successfully upgraded before proceeding. Monitor the status of your cf-operator upgrade using the watch command.

    tux > watch --color 'kubectl get pods --namespace cf-operator'
  3. When the cf-operator upgrade is completed, upgrade KubeCF.

    tux > helm upgrade kubecf suse/kubecf \
    --namespace kubecf \
    --values kubecf-config-values.yaml \
    --version 2.2.3
  4. Monitor the status of your KubeCF upgrade using the watch command.

    tux > watch --color 'kubectl get pods --namespace kubecf'

11 Configuration Changes

After the initial deployment of Cloud Application Platform, any changes made to your Helm chart values, whether through your kubecf-config-values.yaml file or directly using Helm's --set flag, are applied using the helm upgrade command.

Warning
Warning: Do Not Make Changes to Pod Counts During a Version Upgrade

The helm upgrade command can be used to apply configuration changes as well as perform version upgrades to Cloud Application Platform. A change to the pod count configuration should not be applied simultaneously with a version upgrade. Sizing changes should be made separately, either before or after, from a version upgrade.

11.1 Configuration Change Example

Consider an example where you want to enable the App-AutoScaler.

The entry below is added to your kubecf-config-values.yaml file, with enabled set to true.

features:
  autoscaler:
    enabled: true

The change is then applied with the helm upgrade command. This example assumes the suse/kubecf Helm chart deployed was named kubecf.

tux > helm upgrade kubecf suse/kubecf \
--namespace kubecf \
--values kubecf-config-values.yaml \
--version 2.2.3

When all pods are in a READY state, the configuration change will also be reflected. Assuming the chart was deployed to the kubecf namespace, progress can be monitored with:

tux > watch --color 'kubectl get pods --namespace kubecf'

11.2 Other Examples

The following are other examples of using helm upgrade to make configuration changes:

12 Creating Admin Users

This chapter provides an overview on how to create additional administrators for your Cloud Application Platform cluster.

12.1 Prerequisites

The following prerequisites are required in order to create additional Cloud Application Platform cluster administrators:

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++

12.2 Creating an Example Cloud Application Platform Cluster Administrator

The following example demonstrates the steps required to create a new administrator user for your Cloud Application Platform cluster. Note that creating administrator accounts must be done using the UAAC and cannot be done using the cf CLI.

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create a new user:

    tux > uaac user add new-admin --password password --emails new-admin@example.com --zone kubecf
  4. Add the new user to the following groups to grant administrator privileges to the cluster (see https://docs.cloudfoundry.org/concepts/architecture/uaa.html#uaa-scopes for information on privileges provided by each group):

    tux > uaac member add scim.write new-admin --zone kubecf
    
    tux > uaac member add scim.read new-admin --zone kubecf
    
    tux > uaac member add cloud_controller.admin new-admin --zone kubecf
    
    tux > uaac member add clients.read new-admin --zone kubecf
    
    tux > uaac member add clients.write new-admin --zone kubecf
    
    tux > uaac member add doppler.firehose new-admin --zone kubecf
    
    tux > uaac member add routing.router_groups.read new-admin --zone kubecf
    
    tux > uaac member add routing.router_groups.write new-admin --zone kubecf
  5. Log into your Cloud Application Platform deployment as the newly created administrator:

    tux > cf api --skip-ssl-validation https://api.example.com
    
    tux > cf login -u new-admin
  6. The following commands can be used to verify the new administrator account has sufficient permissions:

    tux > cf create-shared-domain test-domain.com
    
    tux > cf set-org-role new-admin org OrgManager
    
    tux > cf create-buildpack test_buildpack /tmp/ruby_buildpack-cached-sle15-v1.7.30.1.zip 1

    If the account has sufficient permissions, you should not receive an authorization error similar to the following:

    FAILED
    Server error, status code: 403, error code: 10003, message: You are not authorized to perform the requested action

    See https://docs.cloudfoundry.org/cf-cli/cf-help.html for other administrator-specific commands that can be run to confirm sufficient permissions are provided.

13 Managing Passwords

The various components of SUSE Cloud Application Platform authenticate to each other using passwords that are automatically managed by the Cloud Application Platform secrets-generator. The only passwords managed by the cluster administrator are passwords for human users. The administrator may create and remove user logins, but cannot change user passwords.

  • The cluster administrator password is initially defined in the deployment's values.yaml file with CLUSTER_ADMIN_PASSWORD

  • The Stratos Web UI provides a form for users, including the administrator, to change their own passwords

  • User logins are created (and removed) with the Cloud Foundry Client, cf CLI

13.1 Password Management with the Cloud Foundry Client

The administrator cannot change other users' passwords. Only users may change their own passwords, and password changes require the current password:

tux > cf passwd
Current Password>
New Password>
Verify Password>
Changing password...
OK
Please log in again

The administrator can create a new user:

tux > cf create-user username password

and delete a user:

tux > cf delete-user username

Use the cf CLI to assign space and org roles. Run cf help -a for a complete command listing, or see Creating and Managing Users with the cf CLI.

13.2 Changing User Passwords with Stratos

The Stratos Web UI provides a form for changing passwords on your profile page. Click the overflow menu button on the top right to access your profile, then click the edit button on your profile page. You can manage your password and username on this page.

Stratos Profile Page
Figure 13.1: Stratos Profile Page
Stratos Edit Profile Page
Figure 13.2: Stratos Edit Profile Page

14 Accessing the UAA User Interface

After UAA is deployed successfully, users will not be able to log in to the UAA user interface (UI) with the admin user and the UAA_ADMIN_CLIENT_SECRET credentials. This user is only an OAuth client that is authorized to call UAA REST APIs and will need to create a separate user in the UAA server by using the UAAC utility.

14.1 Prerequisites

The following prerequisites are required in order to access the UAA UI.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • uaac, the Cloud Foundry uaa command line client (UAAC). See https://docs.cloudfoundry.org/uaa/uaa-user-management.html for more information and installation instructions.

    On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++ packages have been installed before installing the cf-uaac gem.

    tux > sudo zypper install ruby-devel gcc-c++
  • UAA has been successfully deployed.

14.2 Procedure

  1. Use UAAC to target your uaa server.

    tux > uaac target --skip-ssl-validation https://uaa.example.com:2793
  2. Authenticate to the uaa server as admin using the UAA_ADMIN_CLIENT_SECRET set in your kubecf-config-values.yaml file.

    tux > uaac token client get admin --secret password
  3. Create a new user.

    tux > uaac user add NEW-USER -p PASSWORD --emails NEW-USER-EMAIL
  4. Go to the UAA UI at https://uaa.example.com:2793/login, replacing example.com with your domain.

  5. Log in using the newly created user. Use the username and password as the credentials.

15 Cloud Controller Database Secret Rotation

The Cloud Controller Database (CCDB) encrypts sensitive information like passwords. The encryption key is generated when KubeCF is deployed. If it is compromised or needs to be rotated for any other reason, new keys can be added. Note that existing encrypted information will not be updated automatically. The encrypted information must be set again to have it re-encrypted with the new key. The old key cannot be dropped until all references to it are removed from the database.

Updating these secrets is a manual process that involves decrypting the current contents of the database using the old key and re-encrypting the contents using a new key. The following procedure outlines how this is done.

  1. For each label under key_labels, KubeCF will generate an encryption key. The current_key_label indicates which key is currently being used.

    ccdb:
      encryption:
        rotation:
          key_labels:
          - encryption_key_0
          current_key_label: encryption_key_0
  2. In order to rotate the CCDB encryption key, add a new label to key_labels (keeping the old labels), and mark the current_key_label with the newly added label:

    ccdb:
      encryption:
        rotation:
          key_labels:
          - encryption_key_0
          - encryption_key_1
          current_key_label: encryption_key_1
  3. Save the above information into a file, for example rotate-secret.yaml, and perform the rotation:

    1. Update the KubeCF Helm installation:

      tux > helm upgrade kubecf suse/kubecf --namespace kubecf --values rotate-secret.yaml --reuse-values
    2. After Helm finishes its updates, trigger the rotate-cc-database-key errand:

      tux > kubectl patch qjob kubecf-rotate-cc-database-key \
      --namespace kubecf \
      --type merge \
      --patch '{"spec":{"trigger":{"strategy":"now"}}}'

15.1 Tables with Encrypted Information

The CCDB contains several tables with encrypted information as follows:

apps

Environment variables

buildpack_lifecycle_buildpacks

Buildpack URLs may contain passwords

buildpack_lifecycle_data

Buildpack URLs may contain passwords

droplets

May contain Docker registry passwords

env_groups

Environment variables

packages

May contain Docker registry passwords

service_bindings

Contains service credentials

service_brokers

Contains service credentials

service_instances

Contains service credentials

service_keys

Contains service credentials

tasks

Environment variables

15.1.1 Update Existing Data with New Encryption Key

To ensure the encryption key is updated for existing data, the command listed below (or its update- equivalent) can be run again with the same parameters; a short example follows the list. Some objects need to be deleted and recreated to update the key label.

apps

Run cf set-env again

buildpack_lifecycle_buildpacks, buildpack_lifecycle_data, droplets

cf restage the app

packages

cf delete, then cf push the app (Docker apps with registry password)

env_groups

Run cf set-staging-environment-variable-group or cf set-running-environment-variable-group again

service_bindings

Run cf unbind-service and cf bind-service again

service_brokers

Run cf update-service-broker with the appropriate credentials

service_instances

Run cf update-service with the appropriate credentials

service_keys

Run cf delete-service-key and cf create-service-key again

tasks

While tasks have an encryption key label, they are generally meant to be a one-off event, and left to run to completion. If there is a task still running, it could be stopped with cf terminate-task, then run again with cf run-task.
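
For example, for the apps and droplets entries above, an existing environment variable can be set again with its current value and the application restaged; both actions re-encrypt the corresponding rows with the new key. APP_NAME, ENV_VAR_NAME, and CURRENT_VALUE are placeholders:

tux > cf env APP_NAME
tux > cf set-env APP_NAME ENV_VAR_NAME "CURRENT_VALUE"
tux > cf restage APP_NAME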

16 Rotating Automatically Generated Secrets

Cloud Application Platform uses a number of automatically generated secrets (passwords and certificates), provided by cf-operator, for internal use. This removes the burden from human operators while allowing for secure communication. From time to time, operators may wish to change such secrets, either manually or on a schedule. This is called rotating a secret.

16.1 Finding Secrets

Retrieve the list of all secrets maintained by KubeCF:

tux > kubectl get quarkssecret --namespace kubecf

To see information about a specific secret, for example the NATS password:

tux > kubectl get quarkssecret --namespace kubecf kubecf.var-nats-password --output yaml

Note that each quarkssecret has a corresponding regular Kubernetes secret that it controls:

tux > kubectl get secret --namespace kubecf
tux > kubectl get secret --namespace kubecf kubecf.var-nats-password --output yaml

16.2 Rotating Specific Secrets

To rotate a secret, for example kubecf.var-nats-password:

  1. Create a YAML file for a ConfigMap of the form:

    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rotate-kubecf.var-nats-password
      labels:
        quarks.cloudfoundry.org/secret-rotation: "true"
    data:
      secrets: '["kubecf.var-nats-password"]'

    The name of the ConfigMap can be anything allowed by Kubernetes syntax, but we recommend using a name derived from the name of the secret itself.

    Also, the example above rotates only a single secret, but the data.secrets key accepts an array of secret names, allowing simultaneous rotation of many secrets (see the additional example after this procedure).

  2. Apply the ConfigMap:

    tux > kubectl apply --namespace kubecf -f /path/to/your/yaml/file

    The result can be seen in the cf-operator's log.

  3. After the rotation is complete, that is after secrets have been changed and all affected pods have been restarted, delete the config map again:

    tux > kubectl delete --namespace kubecf -f /path/to/your/yaml/file
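
As an illustration of the simultaneous rotation mentioned in Step 1, the data.secrets array can list several names in one ConfigMap. The second name below is a placeholder; use names reported by kubectl get quarkssecret:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rotate-kubecf-multiple-secrets
  labels:
    quarks.cloudfoundry.org/secret-rotation: "true"
data:
  # The second entry is a placeholder secret name
  secrets: '["kubecf.var-nats-password", "kubecf.var-SECOND-SECRET"]'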

17 Backup and Restore

17.1 Backup and Restore Using cf-plugin-backup

cf-plugin-backup backs up and restores your Cloud Controller Database (CCDB), using the Cloud Foundry command line interface (cf CLI). (See Section 22.1, “Using the cf CLI with SUSE Cloud Application Platform”.)

cf-plugin-backup is not a general-purpose backup and restore plugin. It is designed to save the state of a KubeCF instance before making changes to it. If the changes cause problems, use cf-plugin-backup to restore the instance from scratch. Do not use it to restore to a non-pristine KubeCF instance. Some of the limitations for applying the backup to a non-pristine KubeCF instance are:

  • Application configuration is not restored to running applications, as the plugin does not have the ability to determine which applications should be restarted to load the restored configurations.

  • User information is managed by the User Account and Authentication (uaa) Server, not the Cloud Controller (CC). As the plugin talks only to the CC it cannot save full user information, nor restore users. Saving and restoring users must be performed separately, and user restoration must be performed before the backup plugin is invoked.

  • The set of available stacks is part of the KubeCF instance setup, and is not part of the CC configuration. Trying to restore applications using stacks not available on the target KubeCF instance will fail. Setting up the necessary stacks must be performed separately before the backup plugin is invoked.

  • Buildpacks are not saved. Applications using custom buildpacks not available on the target KubeCF instance will not be restored. Custom buildpacks must be managed separately, and relevant buildpacks must be in place before the affected applications are restored.

17.1.1 Installing the cf-plugin-backup

Download the plugin from https://github.com/SUSE/cf-plugin-backup/releases.

Then install it with cf, using the name of the plugin binary that you downloaded:

tux > cf install-plugin cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64
 Attention: Plugins are binaries written by potentially untrusted authors.
 Install and use plugins at your own risk.
 Do you want to install the plugin
 backup-plugin/cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64? [yN]: y
 Installing plugin backup...
 OK
 Plugin backup 1.0.8 successfully installed.

Verify installation by listing installed plugins:

tux > cf plugins
 Listing installed plugins...

 plugin   version   command name      command help
 backup   1.0.8     backup-info       Show information about the current snapshot
 backup   1.0.8     backup-restore    Restore the CloudFoundry state from a
  backup created with the snapshot command
 backup   1.0.8     backup-snapshot   Create a new CloudFoundry backup snapshot
  to a local file

 Use 'cf repo-plugins' to list plugins in registered repos available to install.

17.1.2 Using cf-plugin-backup

The plugin has three commands:

  • backup-info

  • backup-snapshot

  • backup-restore

View the online help for any command, like this example:

tux >  cf backup-info --help
 NAME:
   backup-info - Show information about the current snapshot

 USAGE:
   cf backup-info

Create a backup of your SUSE Cloud Application Platform data and applications. The command outputs progress messages until it is completed:

tux > cf backup-snapshot
 2018/08/18 12:48:27 Retrieving resource /v2/quota_definitions
 2018/08/18 12:48:30 org quota definitions done
 2018/08/18 12:48:30 Retrieving resource /v2/space_quota_definitions
 2018/08/18 12:48:32 space quota definitions done
 2018/08/18 12:48:32 Retrieving resource /v2/organizations
 [...]

Your Cloud Application Platform data is saved in the current directory in cf-backup.json, and application data in the app-bits/ directory.

View the current backup:

tux > cf backup-info
 - Org  system

Restore from backup:

tux > cf backup-restore

There are two additional restore options: --include-security-groups and --include-quota-definitions.

17.1.3 Scope of Backup

The following table lists the scope of the cf-plugin-backup backup. Organization and space users are backed up at the SUSE Cloud Application Platform level. The user account in uaa/LDAP, the service instances and their application bindings, and buildpacks are not backed up. The sections following the table go into more detail.

Scope                 Restore
Orgs                  Yes
Org auditors          Yes
Org billing-manager   Yes
Quota definitions     Optional
Spaces                Yes
Space developers      Yes
Space auditors        Yes
Space managers        Yes
Apps                  Yes
App binaries          Yes
Routes                Yes
Route mappings        Yes
Domains               Yes
Private domains       Yes
Stacks                not available
Feature flags         Yes
Security groups       Optional
Custom buildpacks     No

cf backup-info reads the cf-backup.json snapshot file found in the current working directory, and reports summary statistics on the content.

cf backup-snapshot extracts and saves the following information from the CC into a cf-backup.json snapshot file. Note that it does not save user information, but only the references needed for the roles. The full user information is handled by the uaa server, and the plugin talks only to the CC. The following information is saved:

  • Org Quota Definitions

  • Space Quota Definitions

  • Shared Domains

  • Security Groups

  • Feature Flags

  • Application droplets (zip files holding the staged app)

  • Orgs

    • Spaces

      • Applications

      • Users' references (role in the space)

cf backup-restore reads the cf-backup.json snapshot file found in the current working directory, and then talks to the targeted KubeCF instance to upload the following information, in the specified order:

  • Shared domains

  • Feature flags

  • Quota Definitions (iff --include-quota-definitions)

  • Orgs

    • Space Quotas (iff --include-quota-definitions)

    • UserRoles

    • (private) Domains

    • Spaces

      • UserRoles

      • Applications (+ droplet)

        • Bound Routes

      • Security Groups (iff --include-security-groups)

The following list provides more details of each action.

Shared Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Feature Flags

Attempts to update flags from the backup.

Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

Orgs

Attempts to create orgs from the backup. Attempts to update existing orgs from the backup.

Space Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

User roles

Expects the referenced user to exist. Will fail when the user is already associated with the space in the given role.

(private) Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Spaces

Attempts to create spaces from the backup. Attempts to update existing spaces from the backup.

User roles

Expects the referenced user to exist. Will fail when the user is already associated with the space in the given role.

Apps

Attempts to create apps from the backup. Attempts to update existing apps from the backup (memory, instances, buildpack, state, ...)

Security groups

Existing groups are overwritten from the backup.

17.2 Disaster Recovery through Raw Data Backup and Restore

An existing SUSE Cloud Application Platform deployment's data can be migrated to a new SUSE Cloud Application Platform deployment through a backup and restore of its raw data. The process involves performing a backup and restore of the kubecf components. This procedure is agnostic of the underlying Kubernetes infrastructure and can be included as part of your disaster recovery solution.

17.2.1 Prerequisites

In order to complete a raw data backup and restore, the following are required:

17.2.2 Scope of Raw Data Backup and Restore

The following lists the data that is included as part of the backup (and restore) procedure:

17.2.3 Performing a Raw Data Backup

Note
Note: Restore to the Same Version

This process is intended for backing up and restoring to a target deployment with the same version as the source deployment. For example, data from a backup of kubecf version 2.18.0 should be restored to a version 2.18.0 kubecf deployment.

Perform the following steps to create a backup of your source kubecf deployment.

  1. Connect to the blobstore pod:

    tux > kubectl exec --stdin --tty blobstore-0 --namespace kubecf -- env /bin/bash
  2. Create an archive of the blobstore directory to preserve all needed files (see the Cloud Controller Blobstore content of Section 17.2.2, “Scope of Raw Data Backup and Restore”) then disconnect from the pod:

    tux > tar cfvz blobstore-src.tgz /var/vcap/store/shared
    tux > exit
  3. Copy the archive to a location outside of the pod:

    tux > kubectl cp kubecf/blobstore-0:blobstore-src.tgz /tmp/blobstore-src.tgz
  4. Export the Cloud Controller Database (CCDB) into a file:

    tux > kubectl exec mysql-0 --namespace kubecf -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysqldump \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      ccdb' > /tmp/ccdb-src.sql
  5. Next, obtain the CCDB encryption key(s). The method used to capture the key will depend on whether current_key_label has been defined on the source cluster. This value is defined in /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml of the api-0 pod and also found in various tables of the MySQL database.

    Begin by examining the configuration file for the current_key_label setting:

    tux > kubectl exec --stdin --tty --namespace kubecf api-0 -- bash -c "cat /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml | grep -A 3 database_encryption"
    • If the output contains the current_key_label setting, save the output for the restoration process. Adjust the -A flag as needed to include all keys.

    • If the output does not contain the current_key_label setting, run the following command and save the output for the restoration process:

      tux > kubectl exec api-0 --namespace kubecf -- bash -c 'echo $DB_ENCRYPTION_KEY'

17.2.4 Performing a Raw Data Restore

Important
Important: Ensure Access to the Correct Deployment

Working with multiple Kubernetes clusters simultaneously can be confusing. Ensure you are communicating with the desired cluster by setting $KUBECONFIG correctly.

Perform the following steps to restore your backed up data to the target kubecf deployment.

  1. The target kubecf cluster needs to be deployed with the correct database encryption key(s) set in your kubecf-config-values.yaml before data can be restored. How the encryption key(s) will be prepared in your kubecf-config-values.yaml depends on the result of Step 5 in Section 17.2.3, “Performing a Raw Data Backup”.

    • If current_key_label was set, use the current_key_label obtained as the value of CC_DB_CURRENT_KEY_LABEL and define all the keys obtained under CC_DB_ENCRYPTION_KEYS. See the following example kubecf-config-values.yaml:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key_1
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key_1: "<key_goes_here>"
          migrated_key_2: "<key_goes_here>"
    • If current_key_label was not set, create one for the new cluster through kubecf-config-values.yaml and set it to the $DB_ENCRYPTION_KEY value from the old cluster. Also set up the db_encryption_key bosh property to use the previous key. In this example, migrated_key is the new current_key_label created:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key: "OLD_CLUSTER_DB_ENCRYPTION_KEY"
      bosh:
        instance_groups:
        - name: api-group
          jobs:
          - name: cloud_controller_ng
            properties:
              cc:
                db_encryption_key: "OLD_CLUSTER_DB_ENCRYPTION_KEY"
  2. Deploy a non-high-availability configuration of kubecf and wait until all pods are ready before proceeding.

  3. In the ccdb-src.sql file created earlier, replace the domain name of the source deployment with the domain name of the target deployment.

    tux > sed --in-place 's/old-example.com/new-example.com/g' /tmp/ccdb-src.sql
  4. Stop the monit services on the api-0, cc-worker-0, and cc-clock-0 pods:

    tux > for n in api-0 cc-worker-0 cc-clock-0; do
      kubectl exec --stdin --tty --namespace kubecf $n -- bash -l -c 'monit stop all'
    done
  5. Copy the blobstore-src.tgz archive to the blobstore pod:

    tux > kubectl cp /tmp/blobstore-src.tgz kubecf/blobstore-0:/.
  6. Restore the contents of the archive created during the backup process to the blobstore pod:

    tux > kubectl exec --stdin --tty --namespace kubecf blobstore-0 -- bash -l -c 'monit stop all && sleep 10 && rm -rf /var/vcap/store/shared/* && tar xvf blobstore-src.tgz && monit start all && rm blobstore-src.tgz'
  7. Recreate the CCDB on the mysql pod:

    tux > kubectl exec mysql-0 --namespace kubecf -- bash -c \
      "/var/vcap/packages/mariadb/bin/mysql \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      -e 'drop database ccdb; create database ccdb;'"
  8. Restore the CCDB on the mysql pod:

    tux > kubectl exec --stdin mysql-0 --namespace kubecf -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysql \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      ccdb' < /tmp/ccdb-src.sql
  9. Start the monit services on the api-0, cc-worker-0, and cc-clock-0 pods:

    tux > for n in api-0 cc-worker-0 cc-clock-0; do
      kubectl exec --stdin --tty --namespace kubecf $n -- bash -l -c 'monit start all'
    done
  10. If your old cluster did not have current_key_label defined, perform a key rotation. Otherwise, a key rotation is not necessary.

    1. Run the rotation for the encryption keys:

      tux > kubectl exec --namespace kubecf api-0 -- bash -c \
      "source /var/vcap/jobs/cloud_controller_ng/bin/ruby_version.sh; \
      export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml; \
      cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng; \
      bundle exec rake rotate_cc_database_key:perform"
    2. Restart the api pod.

      tux > kubectl delete pod api-0 --namespace kubecf --force --grace-period=0
  11. Perform a cf restage appname for existing applications to ensure their existing data is updated with the new encryption key.

  12. The data restore is now complete. Run some cf commands, such as cf apps, cf marketplace, or cf services, and verify that data from the old cluster is returned (a combined example follows this procedure).
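
A combined check of the last two steps might look like the following, where APP_NAME is a placeholder for one of your existing applications:

tux > cf restage APP_NAME
tux > cf apps
tux > cf services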

18 Service Brokers

The Open Service Broker API (OSBAPI) provides your SUSE Cloud Application Platform applications with access to external dependencies and platform-level capabilities, such as databases, filesystems, external repositories, and messaging systems. These resources are called services. Services are created, used, and deleted as needed, and provisioned on demand. This chapter focuses on Minibroker, but other OSBAPI-compliant service brokers can be used as well.

Use the following guideline to determine which service broker is most suitable for your situation.

18.1 Provisioning Services with Minibroker

Minibroker is an OSBAPI compliant broker created by members of the Microsoft Azure team. It provides a simple method to provision service brokers on Kubernetes clusters.

Important
Important: Minibroker Upstream Services

The services deployed by Minibroker are sourced from the stable upstream charts repository, see https://github.com/helm/charts/tree/master/stable, and maintained by contributors to the Helm project. Though SUSE supports Minibroker itself, it does not support the service charts it deploys. Operators should inspect the charts and images exposed by the service plans before deciding to use them in a production environment.

18.1.1 Deploy Minibroker

  1. Minibroker is deployed using a Helm chart. Ensure your SUSE Helm chart repository contains the most recent Minibroker chart:

    tux > helm repo update
  2. Use Helm to deploy Minibroker:

    tux > kubectl create namespace minibroker
    	     
    tux > helm install minibroker suse/minibroker \
    --namespace minibroker \
    --set "defaultNamespace=minibroker"

    The following tables list the services provided by Minibroker, along with the latest chart and application version combination known to work with Minibroker.

    If your deployment uses Kubernetes 1.15 or earlier, use the following versions.

    Service      Version   appVersion
    MariaDB      4.3.0     10.1.34
    MongoDB      5.3.3     4.0.6
    PostgreSQL   6.2.1     11.5.0
    Redis        3.7.2     4.0.10

    If your deployment uses Kubernetes 1.16 or later, use the following versions.

    Service      Version   appVersion
    MariaDB      7.0.0     10.3.18
    MongoDB      7.2.9     4.0.12
    PostgreSQL   7.0.0     11.5.0
    Redis        9.1.12    5.0.5
  3. Monitor the deployment progress. Wait until all pods are in a ready state before proceeding:

    tux > watch --color 'kubectl get pods --namespace minibroker'

18.1.2 Setting Up the Environment for Minibroker Usage

  1. Begin by logging into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
     tux > cf login -u admin -p password
     tux > cf create-org org
     tux > cf create-space space -o org
     tux > cf target -o org -s space
  2. Create the service broker. Note that Minibroker does not require authentication and the username and password parameters act as dummy values to pass to the cf command. These parameters do not need to be customized for the Cloud Application Platform installation:

    tux > cf create-service-broker minibroker username password http://minibroker-minibroker.minibroker.svc.cluster.local

    After the service broker is ready, it can be seen on your deployment:

    tux > cf service-brokers
     Getting service brokers as admin...
    
     name               url
     minibroker         http://minibroker-minibroker.minibroker.svc.cluster.local
  3. List the services and their associated plans the Minibroker has access to:

    tux > cf service-access -b minibroker
  4. Enable access to a service. Refer to the table in Section 18.1.1, “Deploy Minibroker” for service plans known to be working with Minibroker.

    This example enables access to the Redis service:

    tux > cf enable-service-access redis -b minibroker -p 4-0-10

    Use cf marketplace to verify the service has been enabled:

    tux > cf marketplace
     Getting services from marketplace in org org / space space as admin...
     OK
    
     service      plans     description
     redis        4-0-10    Helm Chart for redis
    
     TIP:  Use 'cf marketplace -s SERVICE' to view descriptions of individual plans of a given service.
  5. Define your Application Security Group (ASG) rules in a JSON file. Using the defined rules, create an ASG and bind it to an organization and space:

    tux > echo > redis.json '[{ "protocol": "tcp", "destination": "10.0.0.0/8", "ports": "6379", "description": "Allow Redis traffic" }]'
     tux > cf create-security-group redis_networking redis.json
     tux > cf bind-security-group redis_networking org space

    Use the following ports to define your ASG for the given service:

    Service      Port
    MariaDB      3306
    MongoDB      27017
    PostgreSQL   5432
    Redis        6379
  6. Create an instance of the Redis service. The cf marketplace or cf marketplace -s redis commands can be used to see the available plans for the service:

    tux > cf create-service redis 4-0-10 redis-example-service

    Monitor the progress of the pods and wait until all pods are in a ready state. The example below shows the additional redis pods with randomly generated names that have been created in the minibroker namespace:

    tux > watch --color 'kubectl get pods --namespace minibroker'
     NAME                                            READY     STATUS             RESTARTS   AGE
     alternating-frog-redis-master-0                 1/1       Running            2          1h
     alternating-frog-redis-slave-7f7444978d-z86nr   1/1       Running            0          1h
     minibroker-minibroker-5865f66bb8-6dxm7          2/2       Running            0          1h

18.1.3 Using Minibroker with Applications

This section demonstrates how to use Minibroker services with your applications. The example below uses the Redis service instance created in the previous section.

  1. Obtain the demo application from Github and use cf push with the --no-start flag to deploy the application without starting it:

    tux > git clone https://github.com/scf-samples/cf-redis-example-app
     tux > cd cf-redis-example-app
     tux > cf push --no-start
  2. Bind the service to your application and start the application:

    tux > cf bind-service redis-example-app redis-example-service
     tux > cf start redis-example-app
  3. When the application is ready, it can be tested by storing a value into the Redis service:

    tux > export APP=redis-example-app.example.com
     tux > curl --request GET $APP/foo
     tux > curl --request PUT $APP/foo --data 'data=bar'
     tux > curl --request GET $APP/foo

    The first GET will return key not present. After storing a value, it will return bar.

Important
Important: Database Names for PostgreSQL and MariaDB Instances

By default, Minibroker creates PostgreSQL and MariaDB server instances without a named database. A named database is required for normal usage with these services and will need to be added during the cf create-service step using the -c flag. For example:

tux > cf create-service postgresql 9-6-2 djangocms-db -c '{"postgresDatabase":"mydjango"}'
 tux > cf create-service mariadb 10-1-34 my-db  -c '{"mariadbDatabase":"mydb"}'

Other options can be set too, but vary by service type.

18.1.4 Upgrading SUSE Cloud Application Platform When Using Minibroker

If you are upgrading SUSE Cloud Application Platform to 1.5.2, already use Minibroker to connect to external databases, and are running Kubernetes 1.16 or higher (as is the case with SUSE CaaS Platform 4.1), you will need to update the database to a compatible version and migrate your data using the database's suggested mechanism. This may require a database export and import.

19 App-AutoScaler

The App-AutoScaler service is used for automatically managing an application's instance count when deployed on KubeCF. The scaling behavior is determined by a set of criteria defined in a policy (See Section 19.4, “Policies”).

19.1 Prerequisites

Using the App-AutoScaler service requires:

  • A running deployment of kubecf

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • The Cloud Foundry CLI AutoScaler Plug-in, see https://github.com/cloudfoundry/app-autoscaler-cli-plugin

    The plugin can be installed by running the following command:

    tux > cf install-plugin -r CF-Community app-autoscaler-plugin

    If the plugin repo is not found, add it first:

    tux > cf add-plugin-repo "CF-Community" "https://plugins.cloudfoundry.org"

19.2 Enabling and Disabling the App-AutoScaler Service

App-AutoScaler is disabled by default. To enable it, add the following block to your kubecf-config-values.yaml file.

features:
  autoscaler:
    enabled: true

To disable App-AutoScaler again, update the above block in your kubecf-config-values.yaml so that enabled is set to false.

After making the change above, and any other configuration changes, apply the update by doing the following:
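
The exact command depends on how KubeCF was installed. Assuming the release name kubecf and the suse/kubecf chart used in the deployment chapters, a typical upgrade looks like this; adjust the names and values file to match your installation:

tux > helm upgrade kubecf suse/kubecf \
--namespace kubecf \
--values kubecf-config-values.yaml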

19.3 Using the App-AutoScaler Service

Push the application without starting it first:

tux > cf push my_application --no-start

Attach autoscaling policy to the application:

tux > cf attach-autoscaling-policy my_application my-policy.json

The policy has to be defined as a JSON file (See Section 19.4, “Policies”) in the proper format (See https://github.com/cloudfoundry/app-autoscaler/blob/develop/docs/policy.md).

Start the application:

tux > cf start my_application

Autoscaling policies can be managed using cf CLI with the App-AutoScaler plugin as above (See Section 19.3.1, “The App-AutoScaler cf CLI Plugin”) or using the App-AutoScaler API (See Section 19.3.2, “App-AutoScaler API”).

19.3.1 The App-AutoScaler cf CLI Plugin

The App-AutoScaler plugin is used for managing the service with your applications and provides the following commands (with shortcuts in brackets). Refer to https://github.com/cloudfoundry/app-autoscaler-cli-plugin#command-list for details about each command:

autoscaling-api (asa)

Set or view AutoScaler service API endpoint. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-api for more information.

autoscaling-policy (asp)

Retrieve the scaling policy of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-policy for more information.

attach-autoscaling-policy (aasp)

Attach a scaling policy to an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-attach-autoscaling-policy for more information.

detach-autoscaling-policy (dasp)

Detach the scaling policy from an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-detach-autoscaling-policy for more information.

create-autoscaling-credential (casc)

Create custom metric credential for an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-create-autoscaling-credential for more information.

delete-autoscaling-credential (dasc)

Delete the custom metric credential of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-delete-autoscaling-credential for more information.

autoscaling-metrics (asm)

Retrieve the metrics of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-metrics for more information.

autoscaling-history (ash)

Retrieve the scaling history of an application. See https://github.com/cloudfoundry/app-autoscaler-cli-plugin#cf-autoscaling-history for more information.

19.4 Policies

A policy identifies characteristics including minimum instance count, maximum instance count, and the rules used to determine when the number of application instances is scaled up or down. These rules are categorized into two types, scheduled scaling and dynamic scaling. (See Section 19.4.1, “Scaling Types”). Multiple scaling rules can be specified in a policy, but App-AutoScaler does not detect or handle conflicts that may occur. Ensure there are no conflicting rules to avoid unintended scaling behavior.

Policies are defined using the JSON format and can be attached to an application either by passing the path to the policy file or directly as a parameter.

The following is an example of a policy file, called my-policy.json.

{
    "instance_min_count": 1,
    "instance_max_count": 4,
    "scaling_rules": [{
        "metric_type": "memoryused",
        "stat_window_secs": 60,
        "breach_duration_secs": 60,
        "threshold": 10,
        "operator": ">=",
        "cool_down_secs": 300,
        "adjustment": "+1"
    }]
}

For an example that demonstrates defining multiple scaling rules in a single policy, refer to the sample of a policy file at https://github.com/cloudfoundry/app-autoscaler/blob/develop/src/integration/fakePolicyWithSchedule.json. The complete list of configurable policy values can be found at https://github.com/cloudfoundry/app-autoscaler/blob/master/docs/policy.md.

19.4.1 Scaling Types

Scheduled Scaling

Modifies an application's instance count at a predetermined time. This option is suitable for workloads with predictable resource usage.

Dynamic Scaling

Modifies an application's instance count based on metrics criteria. This option is suitable for workloads with dynamic resource usage. The following metrics are available:

  • memoryused

  • memoryutil

  • cpu

  • responsetime

  • throughput

  • custom metric

See https://github.com/cloudfoundry/app-autoscaler/tree/develop/docs#scaling-type for additional details.

20 Integrating CredHub with SUSE Cloud Application Platform

SUSE Cloud Application Platform supports CredHub integration. You should already have a working CredHub instance and a CredHub service on your cluster; then apply the steps in this chapter to connect SUSE Cloud Application Platform.

20.1 Installing the CredHub Client

Start by creating a new directory for the CredHub client on your local workstation, then download and unpack the CredHub client. The following example is for the 2.2.0 Linux release. For other platforms and current releases, see the cloudfoundry-incubator/credhub-cli releases at https://github.com/cloudfoundry-incubator/credhub-cli/releases.

tux > mkdir chclient
tux > cd chclient
tux > wget https://github.com/cloudfoundry-incubator/credhub-cli/releases/download/2.2.0/credhub-linux-2.2.0.tgz
tux > tar zxf credhub-linux-2.2.0.tgz

20.2 Enabling and Disabling CredHub

CredHub is enabled by default. To disable it, add the following block to your kubecf-config-values.yaml file.

features:
  credhub:
    enabled: false

To enable CredHub again, update the above block in your kubecf-config-values.yaml so that enabled is set to true.

After making the change above, and any other configuration changes, apply the update by doing the following:

Warning
Warning

On occasion, the credhub pod may fail to start due to database migration failures; this has been spotted intermittently on Microsoft Azure Kubernetes Service and, to a lesser extent, other public clouds. In these situations, manual intervention is required to track the last completed transaction in the credhub_user database and update the flyway schema history table with the record of the last completed transaction. Please contact support for further instructions.

20.3 Connecting to the CredHub Service

Set environment variables for the CredHub client, your CredHub service location, and Cloud Application Platform namespace. In these guides the example namespace is kubecf:

tux > CH_CLI=~/chclient/credhub
tux > CH_SERVICE=https://credhub.example.com
tux > NAMESPACE=kubecf

Set up the CredHub service location:

tux > SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" | awk '/^secrets-/ { print $1 }')"
tux > CH_SECRET="$(kubectl get secrets --namespace "${NAMESPACE}" "${SECRET}" --output jsonpath="{.data['uaa-clients-credhub-user-cli-secret']}"|base64 --decode)"
tux > CH_CLIENT=credhub_user_cli
tux > echo Service ......@ $CH_SERVICE
tux > echo CH cli Secret @ $CH_SECRET

Set the CredHub target through its Kubernetes service, then log into CredHub:

tux > "${CH_CLI}" api --skip-tls-validation --server "${CH_SERVICE}"
tux > "${CH_CLI}" login --client-name="${CH_CLIENT}" --client-secret="${CH_SECRET}"

Test your new connection by inserting and retrieving some fake credentials:

tux > "${CH_CLI}" set --name FOX --type value --value 'fox over lazy dog'
tux > "${CH_CLI}" set --name DOG --type user --username dog --password fox
tux > "${CH_CLI}" get --name FOX
tux > "${CH_CLI}" get --name DOG

21 Buildpacks

Buildpacks are used to construct the environment needed to run your applications, including any required runtimes or frameworks as well as other dependencies. When you deploy an application, a buildpack can be specified or automatically detected by cycling through all available buildpacks to find one that is applicable. When there is a suitable buildpack for your application, the buildpack will then download any necessary dependencies during the staging process.

21.1 System Buildpacks

SUSE Cloud Application Platform releases include a set of system, or built-in, buildpacks for common languages and frameworks. These system buildpacks are based on the upstream versions of the buildpack, but are made compatible with the SLE-based stack(s) found in SUSE Cloud Application Platform.

The following table lists the default system buildpacks and their associated versions included as part of the SUSE Cloud Application Platform 2.0.1 release.

21.2 Using Buildpacks

When deploying an application, a buildpack can be selected through one of the following methods:

  • Using the -b option during the cf push command, for example:

    tux > cf push 12factor -b ruby_buildpack
  • Using the buildpacks attribute in your application's manifest.yml. For more information, see https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#buildpack.

    ---
    applications:
    - name: 12factor
      buildpacks:
        - ruby_buildpack
  • Using buildpack detection.

    Buildpack detection occurs when an application is pushed and a buildpack has not been specified using any of the other methods. The application is checked against the detection criteria of a buildpack to verify whether it is compatible. Each buildpack has its own detection criteria, defined in the /bin/detect file. The Ruby buildpack, for example, considers an application compatible if it contains a Gemfile file and Gemfile.lock file in its root directory.

    The detection process begins with the first buildpack in the detection priority list. If the buildpack is compatible with the application, the staging process continues. If the buildpack is not compatible with the application, the buildpack in the next position is checked. To see the detection priority list, run cf buildpacks and examine the position field. If there are no compatible buildpacks, the cf push command will fail.

    For more information, see https://docs.cloudfoundry.org/buildpacks/understand-buildpacks.html#buildpack-detection.

In the above, ruby_buildpack can be replaced with any of the following (an example follows this list):

  • The name of a buildpack. To list the currently available buildpacks, including any that were created or updated, examine the buildpack field after running:

    tux > cf buildpacks
  • The Git URL of a buildpack. For example, https://github.com/SUSE/cf-ruby-buildpack.

  • The Git URL of a buildpack with a specific branch or tag. For example, https://github.com/SUSE/cf-ruby-buildpack#1.7.40.
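
For example, to push the sample 12factor application using the tagged Git URL form shown above instead of an installed buildpack name:

tux > cf push 12factor -b https://github.com/SUSE/cf-ruby-buildpack#1.7.40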

For more information about using buildpacks, see https://docs.cloudfoundry.org/buildpacks/#using-buildpacks.

21.3 Adding Buildpacks

Additional buildpacks can be added to your SUSE Cloud Application Platform deployment to complement the ones already installed.

  1. List the currently installed buildpacks.

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack               position   enabled   locked   filename                                           stack
    staticfile_buildpack    1          true      false    staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
    nginx_buildpack         2          true      false    nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip
    java_buildpack          3          true      false    java-buildpack-v4.20.0.1-7b3efeee.zip
    ruby_buildpack          4          true      false    ruby-buildpack-v1.7.42.1-1.1-897dec18.zip
    nodejs_buildpack        5          true      false    nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
    go_buildpack            6          true      false    go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
    python_buildpack        7          true      false    python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
    php_buildpack           8          true      false    php-buildpack-v4.3.80.1-6.1-613615bf.zip
    binary_buildpack        9          true      false    binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
    dotnet-core_buildpack   10         true      false    dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip
  2. Add a new buildpack using the cf create-buildpack command.

    tux > cf create-buildpack another_ruby_buildpack https://cf-buildpacks.suse.com/ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip 10

    Where:

    • another_ruby_buildpack is the name of the buildpack.

    • https://cf-buildpacks.suse.com/ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip is the path to the buildpack release. It should be a zip file, a URL to a zip file, or a local directory.

    • 10 is the position of the buildpack and used to determine priority. A lower value indicates a higher priority.

    To see all available options, run:

    tux > cf create-buildpack -h
  3. Verify the new buildpack has been added.

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack                position   enabled   locked   filename                                           stack
    staticfile_buildpack     1          true      false    staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
    nginx_buildpack          2          true      false    nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip
    java_buildpack           3          true      false    java-buildpack-v4.20.0.1-7b3efeee.zip
    ruby_buildpack           4          true      false    ruby-buildpack-v1.7.42.1-1.1-897dec18.zip
    nodejs_buildpack         5          true      false    nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
    go_buildpack             6          true      false    go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
    python_buildpack         7          true      false    python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
    php_buildpack            8          true      false    php-buildpack-v4.3.80.1-6.1-613615bf.zip
    binary_buildpack         9          true      false    binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
    another_ruby_buildpack   10         true      false    ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip
    dotnet-core_buildpack    11         true      false    dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip

21.4 Updating Buildpacks

Currently installed buildpacks can be updated using the cf update-buildpack command. To see all values that can be updated, run cf update-buildpack -h.

  1. List the currently installed buildpacks that can be updated.

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack                position   enabled   locked   filename                                           stack
    staticfile_buildpack     1          true      false    staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
    nginx_buildpack          2          true      false    nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip
    java_buildpack           3          true      false    java-buildpack-v4.20.0.1-7b3efeee.zip
    ruby_buildpack           4          true      false    ruby-buildpack-v1.7.42.1-1.1-897dec18.zip
    nodejs_buildpack         5          true      false    nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
    go_buildpack             6          true      false    go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
    python_buildpack         7          true      false    python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
    php_buildpack            8          true      false    php-buildpack-v4.3.80.1-6.1-613615bf.zip
    binary_buildpack         9          true      false    binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
    another_ruby_buildpack   10         true      false    ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip
    dotnet-core_buildpack    11         true      false    dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip
  2. Use the cf update-buildpack command to update a buildpack.

    tux > cf update-buildpack another_ruby_buildpack -i 11

    To see all available options, run:

    tux > cf update-buildpack -h
  3. Verify the new buildpack has been updated.

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack                position   enabled   locked   filename                                           stack
    staticfile_buildpack     1          true      false    staticfile-buildpack-v1.4.43.1-1.1-53227ab3.zip
    nginx_buildpack          2          true      false    nginx-buildpack-v1.0.15.1-1.1-868e3dbf.zip
    java_buildpack           3          true      false    java-buildpack-v4.20.0.1-7b3efeee.zip
    ruby_buildpack           4          true      false    ruby-buildpack-v1.7.42.1-1.1-897dec18.zip
    nodejs_buildpack         5          true      false    nodejs-buildpack-v1.6.53.1-1.1-ca7738ac.zip
    go_buildpack             6          true      false    go-buildpack-v1.8.42.1-1.1-c93d1f83.zip
    python_buildpack         7          true      false    python-buildpack-v1.6.36.1-1.1-4c0057b7.zip
    php_buildpack            8          true      false    php-buildpack-v4.3.80.1-6.1-613615bf.zip
    binary_buildpack         9          true      false    binary-buildpack-v1.0.33.1-1.1-a53fa79d.zip
    dotnet-core_buildpack    10         true      false    dotnet-core-buildpack-v2.2.13.1-1.1-cf41131a.zip
    another_ruby_buildpack   11         true      false    ruby-buildpack-v1.7.41.1-1.1-c4cd5fed.zip

21.5 Offline Buildpacks

An offline, or cached, buildpack packages the runtimes, frameworks, and dependencies needed to run your applications into an archive that is then uploaded to your Cloud Application Platform deployment. When an application is deployed using an offline buildpack, access to the Internet to download dependencies is no longer required. This has the benefit of providing improved staging performance and allows for staging to take place on air-gapped environments.

21.5.1 Creating an Offline Buildpack

Offline buildpacks can be created using the cf-buildpack-packager-docker tool, which is available as a Docker image. The only requirement to use this tool is a system with Docker support.

Important
Important: Disclaimer

Some Cloud Foundry buildpacks can reference binaries with proprietary or mutually incompatible open source licenses which cannot be distributed together as offline/cached buildpack archives. Operators who wish to package and maintain offline buildpacks will be responsible for any required licensing or export compliance obligations.

For automation purposes, you can use the --accept-external-binaries option to accept this disclaimer without the interactive prompt (see the sketch after the usage description below).

Usage of the tool is as follows:

package [--accept-external-binaries] org [all [stack] | language [tag] [stack]]

Where:

  • org is the GitHub organization hosting the buildpack repositories, such as "cloudfoundry" or "SUSE"

  • A tag cannot be specified when using all as the language because the tag is different for each language

  • tag is not optional if a stack is specified. To specify the latest release, use "" as the tag

  • A maximum of one stack can be specified
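
For unattended builds, the disclaimer can be accepted on the command line rather than interactively. The following sketch reuses the image and arguments of the example that follows and only adds the flag:

tux > docker run --interactive --tty --rm -v $PWD:/out \
splatform/cf-buildpack-packager --accept-external-binaries SUSE ruby "" sle15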

The following example demonstrates packaging an offline Ruby buildpack and uploading it to your Cloud Application Platform deployment to use. The packaged buildpack will be a Zip file placed in the current working directory, $PWD.

  1. Build the latest released SUSE Ruby buildpack for the SUSE Linux Enterprise 15 stack:

    tux > docker run --interactive --tty --rm -v $PWD:/out splatform/cf-buildpack-packager SUSE ruby "" sle15
  2. Verify the archive has been created in your current working directory:

    tux > ls
    ruby_buildpack-cached-sle15-v1.7.30.1.zip
  3. Log into your Cloud Application Platform deployment. Select an organization and space to work with, creating them if needed:

    tux > cf api --skip-ssl-validation https://api.example.com
    tux > cf login -u admin -p password
    tux > cf create-org org
    tux > cf create-space space -o org
    tux > cf target -o org -s space
  4. List the currently available buildpacks:

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack               position   enabled   locked   filename
    staticfile_buildpack    1          true      false    staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
    java_buildpack          2          true      false    java-buildpack-v4.16.1-e638145.zip
    ruby_buildpack          3          true      false    ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip
    nodejs_buildpack        4          true      false    nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
    go_buildpack            5          true      false    go_buildpack-v1.8.28.1-1.1-7508400b.zip
    python_buildpack        6          true      false    python_buildpack-v1.6.23.1-1.1-99388428.zip
    php_buildpack           7          true      false    php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
    binary_buildpack        8          true      false    binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
    dotnet-core_buildpack   9          true      false    dotnet-core-buildpack-v2.0.3.zip
  5. Upload your packaged offline buildpack to your Cloud Application Platform deployment:

    tux > cf create-buildpack ruby_buildpack_cached /tmp/ruby_buildpack-cached-sle15-v1.7.30.1.zip 1 --enable
    Creating buildpack ruby_buildpack_cached...
    OK
    
    Uploading buildpack ruby_buildpack_cached...
    Done uploading
    OK
  6. Verify your buildpack is available:

    tux > cf buildpacks
    Getting buildpacks...
    
    buildpack               position   enabled   locked   filename
    ruby_buildpack_cached   1          true      false    ruby_buildpack-cached-sle15-v1.7.30.1.zip
    staticfile_buildpack    2          true      false    staticfile_buildpack-v1.4.34.1-1.1-1dd6386a.zip
    java_buildpack          3          true      false    java-buildpack-v4.16.1-e638145.zip
    ruby_buildpack          4          true      false    ruby_buildpack-v1.7.26.1-1.1-c2218d66.zip
    nodejs_buildpack        5          true      false    nodejs_buildpack-v1.6.34.1-3.1-c794e433.zip
    go_buildpack            6          true      false    go_buildpack-v1.8.28.1-1.1-7508400b.zip
    python_buildpack        7          true      false    python_buildpack-v1.6.23.1-1.1-99388428.zip
    php_buildpack           8          true      false    php_buildpack-v4.3.63.1-1.1-2515c4f4.zip
    binary_buildpack        9          true      false    binary_buildpack-v1.0.27.1-3.1-dc23dfe2.zip
    dotnet-core_buildpack   10         true      false    dotnet-core-buildpack-v2.0.3.zip
  7. Deploy a sample Rails app using the new buildpack:

    tux > git clone https://github.com/scf-samples/12factor
    tux > cd 12factor
    tux > cf push 12factor -b ruby_buildpack_cached
Warning
Warning: Deprecation of cflinuxfs2 and sle12 Stacks

As of SUSE Cloud Foundry 2.18.0, which is based on cf-deployment 9.5, the cflinuxfs2 stack is no longer supported, as was advised in SUSE Cloud Foundry 2.17.1 and Cloud Application Platform 1.4.1. The cflinuxfs2 buildpack is no longer shipped, but if you are upgrading from an earlier version, cflinuxfs2 will not be removed. For migration purposes, however, we encourage all administrators to move to cflinuxfs3 or sle15, as newer buildpacks will not work with the deprecated cflinuxfs2. If you still want to use the older stack, you will need to build an older version of a buildpack for your application to continue working, but this configuration is unsupported. (If you are running on sle12, we will be retiring that stack in a future version, so start planning your migration to sle15. The procedure is described below.)

  • Migrate applications to the new stack using one of the methods listed. Note that both methods will cause application downtime. Downtime can be avoided by following a Blue-Green Deployment strategy. See https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html for details.

    Note that stack association support is available as of cf CLI v6.39.0.

    • Option 1 - Migrating applications using the Stack Auditor plugin.

      Stack Auditor rebuilds the application onto the new stack without a change in the application source code. If you want to move to a new stack with updated code, please follow Option 2 below. For additional information about the Stack Auditor plugin, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html.

      1. Install the Stack Auditor plugin for the cf CLI. For instructions, see https://docs.cloudfoundry.org/adminguide/stack-auditor.html#install.

      2. Identify the stack applications are using. The audit lists all applications in orgs you have access to. To list all applications in your Cloud Application Platform deployment, ensure you are logged in as a user with access to all orgs.

        tux > cf audit-stack

        For each application requiring migration, perform the steps below.

      3. If necessary, switch to the org and space the application is deployed to.

        tux > cf target ORG SPACE
      4. Change the stack to sle15.

        tux > cf change-stack APP_NAME sle15
      5. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf buildpacks
      6. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf delete-buildpack BUILDPACK -s sle12
        
        tux > cf delete-buildpack BUILDPACK -s cflinuxfs2
      7. Remove the sle12 and cflinuxfs2 stacks.

        tux > cf delete-stack sle12
        
        tux > cf delete-stack cflinuxfs2
    • Option 2 - Migrating applications using the cf CLI.

      Perform the following for all orgs and spaces in your Cloud Application Platform deployment. Ensure you are logged in as a user with access to all orgs.

      1. Target an org and space.

        tux > cf target ORG SPACE
      2. Identify the stack an application in the org and space is using.

        tux > cf app APP_NAME
      3. Re-push the app with the sle15 stack using one of the following methods.

      4. Identify all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf buildpacks
      5. Remove all buildpacks associated with the sle12 and cflinuxfs2 stacks.

        tux > cf delete-buildpack BUILDPACK -s sle12
        
        tux > cf delete-buildpack BUILDPACK -s cflinuxfs2
      6. Remove the sle12 and cflinuxfs2 stacks using the CF API. See https://apidocs.cloudfoundry.org/7.11.0/#stacks for details.

        List all stacks, then find the GUIDs of the sle12 and cflinuxfs2 stacks.

        tux > cf curl /v2/stacks

        Delete the sle12 and cflinuxfs2 stacks.

        tux > cf curl -X DELETE /v2/stacks/SLE12_STACK_GUID
        
        tux > cf curl -X DELETE /v2/stacks/CFLINUXFS2_STACK_GUID
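
The following is a minimal sketch of re-pushing an application onto the sle15 stack, as referenced in Option 2 above. APP_NAME and BUILDPACK are placeholders for your own application and buildpack:

tux > cf push APP_NAME -s sle15 -b BUILDPACK

After the push completes, cf app APP_NAME shows the stack the application now runs on.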

Part IV SUSE Cloud Application Platform User Guide

22 Deploying and Managing Applications with the Cloud Foundry Client

22.1 Using the cf CLI with SUSE Cloud Application Platform

The Cloud Foundry command line interface (cf CLI) is for deploying and managing your applications. You may use it for all the orgs and spaces that you are a member of. Install the client on a workstation for remote administration of your SUSE Cloud Foundry instances.

The complete guide is at Using the Cloud Foundry Command Line Interface, and source code with a demo video is on GitHub at Cloud Foundry CLI.

The following examples demonstrate some of the commonly used commands. The first task is to log in to your new Cloud Application Platform instance. To do so, you need the API endpoint of your SUSE Cloud Application Platform instance, which is the system_domain value you provided in kubecf-config-values.yaml with the api. prefix added. Set your endpoint, and use --skip-ssl-validation when you have self-signed SSL certificates. The client asks for an e-mail address, but you must enter admin instead (you cannot change this to a different user name, though you may create additional users). The password is the one you created in kubecf-config-values.yaml:

tux > cf login --skip-ssl-validation -a https://api.example.com
API endpoint: https://api.example.com

Email> admin

Password>
Authenticating...
OK

Targeted org system

API endpoint:   https://api.example.com (API version: 2.134.0)
User:           admin
Org:            system
Space:          No space targeted, use 'cf target -s SPACE'

cf help displays a list of commands and options. cf help [command] provides information on specific commands.

You may pass in your credentials and set the API endpoint in a single command:

tux > cf login -u admin -p password --skip-ssl-validation -a https://api.example.com

Log out with cf logout.

Change the admin password:

tux > cf passwd
Current Password>
New Password>
Verify Password>
Changing password...
OK
Please log in again

View your current API endpoint, user, org, and space:

tux > cf target

Switch to a different org or space:

tux > cf target -o org
tux > cf target -s space

List all apps in the current space:

tux > cf apps

Query the health and status of a particular app:

tux > cf app appname

View app logs. The first example tails the log of a running app. The --recent option dumps recent logs instead of tailing, which is useful for stopped and crashed apps:

tux > cf logs appname
tux > cf logs --recent appname

Restart all instances of an app:

tux > cf restart appname

Restart a single instance of an app, identified by its index number; the restarted instance keeps the same index number:

tux > cf restart-app-instance appname index

After you have set up a service broker (see Chapter 18, Service Brokers), create new services:

tux > cf create-service service-name default mydb

Then you may bind a service instance to an app:

tux > cf bind-service appname service-instance
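
After binding or unbinding a service, restage the app so that it picks up the changed service credentials:

tux > cf restage appname

To review the service offerings and plans available from your brokers at any time, run cf marketplace.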

The most-used command is cf push, for pushing new apps and changes to existing apps.

tux > cf push new-app -b buildpack
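
Instead of repeating options on the command line, you can describe the application in a manifest.yml file in the application directory and then run cf push without arguments. The following is a minimal sketch; the name, memory, buildpack, and stack shown are placeholder values:

---
applications:
- name: new-app
  memory: 256M
  instances: 1
  buildpacks:
  - staticfile_buildpack
  stack: sle15

With this manifest in place, cf push deploys or updates new-app.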

If you need to debug your application or run one-off tasks, start an SSH session into your application container.

tux > cf ssh appname

When the SSH connection is established, run the following to have the environment match that of the application and its associated buildpack.

tux > /tmp/lifecycle/shell
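
For one-off tasks that do not need an interactive session, the cf CLI also provides cf run-task. The task command below is only an illustrative placeholder; use a command appropriate to your application:

tux > cf run-task appname "bundle exec rake db:migrate"

List an app's tasks and their states with cf tasks appname.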

Part V Troubleshooting

23 Troubleshooting

Cloud stacks are complex, and debugging deployment issues often requires digging through multiple layers to find the information you need. Remember that the KubeCF releases must be deployed in the correct order, and that each release must deploy successfully, with no failed pods, before deploying the next release.

Before proceeding with in-depth troubleshooting, ensure that the following requirements, as defined in the Support Statement at Section 5.2, “Platform Support”, have been met.

  1. The Kubernetes cluster satisfies the requirements listed at https://documentation.suse.com/suse-cap/2.0.1/html/cap-guides/cha-cap-depl-kube-requirements.html#sec-cap-changes-kube-reqs.

  2. The kube-ready-state-check.sh script has been run on the target Kubernetes cluster and does not show any configuration problems.

  3. A SUSE Services or Sales Engineer has verified that SUSE Cloud Application Platform works correctly on the target Kubernetes cluster.

23.1 Logging

There are two types of logs in a deployment of SUSE Cloud Application Platform: application logs and component logs. The following provides a brief overview of each log type and how to retrieve them for monitoring and debugging use.

  • Application logs provide information specific to a given application that has been deployed to your Cloud Application Platform cluster and can be accessed through:

    • The cf CLI using the cf logs command

    • The application's log stream within the Stratos console

  • Access to logs for a given component of your Cloud Application Platform deployment can be obtained by:

    • The kubectl logs command

      The following example retrieves the logs of the router container of the router-0 pod in the kubecf namespace:

      tux > kubectl logs --namespace kubecf router-0 router
    • Direct access to the log files, using the following steps:

      1. Open a shell to the container of the component using the kubectl exec command

      2. Navigate to the log directory at /var/vcap/sys/log, which contains subdirectories with the log files.

        tux > kubectl exec --stdin --tty --namespace kubecf router-0 /bin/bash
        
        router/0:/# cd /var/vcap/sys/log
        
        router/0:/var/vcap/sys/log# ls -R
        .:
        gorouter  loggregator_agent
        
        ./gorouter:
        access.log  gorouter.err.log  gorouter.log  post-start.err.log  post-start.log
        
        ./loggregator_agent:
        agent.log
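
To stream a component log continuously while reproducing an issue, add the --follow flag to the kubectl logs command shown above:

tux > kubectl logs --follow --namespace kubecf router-0 router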

23.2 Using Supportconfig

If you ever need to request support, or just want to generate detailed system information and logs, use the supportconfig utility. Run it with no options to collect basic system information, and also cluster logs including Docker, etcd, flannel, and Velum. supportconfig may give you all the information you need.

supportconfig -h prints the options. Read the "Gathering System Information for Support" chapter in any SUSE Linux Enterprise Administration Guide to learn more.

23.3 Deployment Is Taking Too Long

A deployment step seems to take too long, some pods are not in a ready state hours after all the others are ready, or a pod shows a lot of restarts. This example shows pods that are still not ready many hours after the others became ready:

tux > kubectl get pods --namespace kubecf
NAME                     READY STATUS    RESTARTS  AGE
router-3137013061-wlhxb  0/1   Running   0         16h
routing-api-0            0/1   Running   0         16h

The Running status means the pod is bound to a node and all of its containers have been created. However, it is not Ready, which means it is not ready to service requests. Use kubectl to print a detailed description of pod events and status:

tux > kubectl describe pod --namespace kubecf router-0

This prints a lot of information, including IP addresses, routine events, warnings, and errors. You should find the reason for the failure in this output.

Important
Important

During deployment, pods are spawned over time, starting with a single pod whose name starts with ig-. This pod will eventually disappear and will be replaced by other pods whose progress can then be followed as usual.

The whole process can take around 20 to 30 minutes to finish.

The initial stage may look like this:

tux > kubectl get pods --namespace kubecf
ig-kubecf-f9085246244fbe70-jvg4z   1/21    Running             0          8m28s

Later the progress may look like this:

NAME                        READY   STATUS       RESTARTS   AGE
adapter-0                   4/4     Running      0          6m45s
api-0                       0/15    Init:30/63   0          6m38s
bits-0                      0/6     Init:8/15    0          6m34s
bosh-dns-7787b4bb88-2wg9s   1/1     Running      0          7m7s
bosh-dns-7787b4bb88-t42mh   1/1     Running      0          7m7s
cc-worker-0                 0/4     Init:5/9     0          6m36s
credhub-0                   0/5     Init:6/11    0          6m33s
database-0                  2/2     Running      0          6m36s
diego-api-0                 6/6     Running      2          6m38s
doppler-0                   0/9     Init:7/16    0          6m40s
eirini-0                    9/9     Running      0          6m37s
log-api-0                   0/7     Init:6/13    0          6m35s
nats-0                      4/4     Running      0          6m39s
router-0                    0/5     Init:5/11    0          6m33s
routing-api-0               0/4     Init:5/10    0          6m42s
scheduler-0                 0/8     Init:8/17    0          6m35s
singleton-blobstore-0       0/6     Init:6/11    0          6m46s
tcp-router-0                0/5     Init:5/11    0          6m37s
uaa-0                       0/6     Init:8/13    0          6m36s
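
To follow this progress as it happens, watch the namespace; pods advance through their Init states and eventually report Ready:

tux > kubectl get pods --namespace kubecf --watch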

23.4 Deleting and Rebuilding a Deployment

There may be times when you want to delete and rebuild a deployment, for example when there are errors in your kubecf-config-values.yaml file, you wish to test configuration changes, or a deployment fails and you want to try it again.

  1. Remove the kubecf release. All resources associated with the release of the suse/kubecf chart will be removed. Replace the example release name with the one used during your installation.

    tux > helm uninstall kubecf
  2. Remove the kubecf namespace. Replace with the namespace where the suse/kubecf chart was installed.

    tux > kubectl delete namespace kubecf
  3. Remove the cf-operator release. All resources associated with the release of the suse/cf-operator chart will be removed. Replace the example release name with the one used during your installation.

    tux > helm uninstall cf-operator
  4. Remove the cf-operator namespace. Replace with the namespace where the suse/cf-operator chart was installed.

    tux > kubectl delete namespace cf-operator
  5. Verify all of the releases are removed.

    tux > helm list --all-namespaces
  6. Verify all of the namespaces are removed.

    tux > kubectl get namespaces
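
After all releases and namespaces are removed, redeploy using the same Helm charts as in the deployment chapters. The commands below are only a hedged sketch that assumes the default release names and namespaces and the watch namespace override corresponding to the global.operator.watchNamespace key shown in Section A.2; use the exact options documented for your platform:

tux > helm install cf-operator suse/cf-operator \
--namespace cf-operator \
--set "global.operator.watchNamespace=kubecf"

tux > helm install kubecf suse/kubecf \
--namespace kubecf \
--values kubecf-config-values.yaml

If Helm reports that a namespace does not exist, create it first as described in Section 23.7.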

23.5 Querying with Kubectl

You can safely query with kubectl to get information about resources inside your Kubernetes cluster. kubectl cluster-info dump | tee clusterinfo.txt outputs a large amount of information about the Kubernetes master and cluster services to a text file.

The following commands give more targeted information about your cluster.

  • List all cluster resources:

    tux > kubectl get all --all-namespaces
  • List all of your running pods:

    tux > kubectl get pods --all-namespaces
  • List all of your running pods, their internal IP addresses, and which Kubernetes nodes they are running on:

    tux > kubectl get pods --all-namespaces --output wide
  • See all pods, including those with Completed or Failed statuses (current kubectl versions show these by default; the former --show-all flag has been removed):

    tux > kubectl get pods --all-namespaces
  • List pods in one namespace:

    tux > kubectl get pods --namespace kubecf
  • Get detailed information about one pod:

    tux > kubectl describe --namespace kubecf po/diego-cell-0
  • Read the log file of a pod:

    tux > kubectl logs --namespace kubecf po/diego-cell-0
  • List all Kubernetes nodes, then print detailed information about a single node:

    tux > kubectl get nodes
    tux > kubectl describe node 6a2752b6fab54bb889029f60de6fa4d5.infra.caasp.local
  • List all container images in use in all namespaces, formatted for readability:

    tux > kubectl get pods --all-namespaces --output jsonpath="{..image}" |\
    tr -s '[[:space:]]' '\n' |\
    sort |\
    uniq -c
  • These two commands check node capacities, to verify that there are enough resources for the pods:

    tux > kubectl get nodes --output yaml | grep '\sname\|cpu\|memory'
    tux > kubectl get nodes --output json | \
    jq '.items[] | {name: .metadata.name, cap: .status.capacity}'

23.6 Admission webhook denied

When switching back to Diego from Eirini, the error below can occur:

tux > helm install kubecf suse/kubecf --namespace kubecf --values kubecf-config-values.yaml
Error: admission webhook "validate-boshdeployment.quarks.cloudfoundry.org" denied the request: Failed to resolve manifest: Failed to interpolate ops 'kubecf-user-provided-properties' for manifest 'kubecf': Applying ops on manifest obj failed in interpolator: Expected to find exactly one matching array item for path '/instance_groups/name=eirini' but found 0

To avoid this error, remove the eirini-persi-broker configuration before running the command.
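
For reference, the block to remove from kubecf-config-values.yaml corresponds to the eirinix section of the values file in Section A.1; your service plan entries may differ:

eirinix:
  persi-broker:
    service-plans:
    - id: default
      name: "default"
      description: "Existing default storage class"
      kube_storage_class: "default"
      free: true
      default_size: "1Gi"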

23.7 Namespace does not exist

When running a Helm command, an error occurs stating that a namespace does not exist. To avoid this error, create the namespace manually with kubectl before running the Helm command:

tux > kubectl create namespace name
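
For example, to create the namespaces used by the releases in this guide:

tux > kubectl create namespace cf-operator
tux > kubectl create namespace kubecf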

A Appendix

A.1 Complete suse/kubecf values.yaml File

This is the complete output of helm inspect values suse/kubecf for the current SUSE Cloud Application Platform 2.0.1 release.

# REQUIRED: the domain that the deployment will be visible to the user.
system_domain: ~

# Set or override job properties. The first level of the map is the instance group name. The second
# level of the map is the job name. E.g.:
#  properties:
#    adapter:
#      adapter:
#        scalablesyslog:
#          adapter:
#            logs:
#              addr: kubecf-log-api:8082
#
properties: {}

credentials: {}

variables: {}

kube:
  # The storage class to be used for the instance groups that need it (e.g. bits, database and
  # singleton-blobstore). If it's not set, the default storage class will be used.
  storage_class: ~
  # The psp key contains the configuration related to Pod Security Policies. By default, a PSP will
  # be generated with the necessary permissions for running KubeCF. To pass an existing PSP and
  # prevent KubeCF from creating a new one, set the kube.psp.default with the PSP name.
  psp:
    default: ~

releases:
  # The defaults for all releases, where we do not otherwise override them.
  defaults:
    url: registry.suse.com/cap
    stemcell:
      os: SLE_15_SP1
      version: 23.21-7.0.0_374.gb8e8e6af
  app-autoscaler:
    version: 3.0.0
  bits-service:
    version: 2.28.0
  brain-tests:
    version: v0.0.12
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
  cf-acceptance-tests:
    version: 0.0.13
    stemcell:
      os: SLE_15_SP1
      version: 23.21-7.0.0_374.gb8e8e6af
  cf-smoke-tests:
    version: 40.0.128
    stemcell:
      os: SLE_15_SP1
      version: 25.2-7.0.0_374.gb8e8e6af
  # pxc is not a BOSH release.
  pxc:
    image:
      repository: registry.suse.com/cap/pxc
      tag: 0.9.4
  eirini:
    version: 0.0.27
    stemcell:
      os: SLE_15_SP1
      version: 23.21-7.0.0_374.gb8e8e6af
  postgres:
    version: "39"
  sle15:
    version: "10.93"
  sync-integration-tests:
    version: v0.0.3
  suse-staticfile-buildpack:
    url: registry.suse.com/cap
    version: "1.5.5.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-staticfile-buildpack/packages/staticfile-buildpack-sle15/staticfile-buildpack-sle15-v1.5.5.1-5.1-eaf36a02.zip
  suse-java-buildpack:
    url: registry.suse.com/cap
    version: "4.29.1.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-java-buildpack/packages/java-buildpack-sle15/java-buildpack-sle15-v4.29.1.1-543ec059.zip
  suse-ruby-buildpack:
    url: registry.suse.com/cap
    version: "1.8.15.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-ruby-buildpack/packages/ruby-buildpack-sle15/ruby-buildpack-sle15-v1.8.15.1-4.1-2b6d6879.zip
  suse-dotnet-core-buildpack:
    url: registry.suse.com/cap
    version: "2.3.9.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-dotnet-core-buildpack/packages/dotnet-core-buildpack-sle15/dotnet-core-buildpack-sle15-v2.3.9.1-1.1-e74bd89e.zip
  suse-nodejs-buildpack:
    url: registry.suse.com/cap
    version: "1.7.17.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-nodejs-buildpack/packages/nodejs-buildpack-sle15/nodejs-buildpack-sle15-v1.7.17.1-1.1-7e96d2dd.zip
  suse-go-buildpack:
    url: registry.suse.com/cap
    version: "1.9.11.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-go-buildpack/packages/go-buildpack-sle15/go-buildpack-sle15-v1.9.11.1-2.1-d5c02636.zip
  suse-python-buildpack:
    url: registry.suse.com/cap
    version: "1.7.12.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-python-buildpack/packages/python-buildpack-sle15/python-buildpack-sle15-v1.7.12.1-2.1-ebd0f50d.zip
  suse-php-buildpack:
    url: registry.suse.com/cap
    version: "4.4.12.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-php-buildpack/packages/php-buildpack-sle15/php-buildpack-sle15-v4.4.12.1-4.1-2c4591cb.zip
  suse-nginx-buildpack:
    url: registry.suse.com/cap
    version: "1.1.7.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-nginx-buildpack/packages/nginx-buildpack-sle15/nginx-buildpack-sle15-v1.1.7.1-1.1-fbf90d1f.zip
  suse-binary-buildpack:
    url: registry.suse.com/cap
    version: "1.0.36.1"
    stemcell:
      os: SLE_15_SP1
      version: 25.1-7.0.0_374.gb8e8e6af
    file: suse-binary-buildpack/packages/binary-buildpack-sle15/binary-buildpack-sle15-v1.0.36.1-1.1-37ec2cbf.zip

multi_az: false
high_availability: false

# Instance sizing takes precedence over the high_availability property. I.e. setting the
# instance count for an instance group greater than 1 will make it highly available.
#
# It is also possible to specify custom affinity rules for each instance group. If no rule
# is provided, then each group as anti-affinity to itself, to try to spread the pods between
# different nodes. In addition diego-cell and router also have anti-affinity to each other.
#
# The default rules look like this:
#
# sizing:
#   sample_group:
#     affinity:
#       podAntiAffinity:
#         preferredDuringSchedulingIgnoredDuringExecution:
#         - weight: 100
#           podAffinityTerm:
#             labelSelector:
#               matchExpressions:
#               - key: quarks.cloudfoundry.org/quarks-statefulset-name
#                 operator: In
#                 values:
#                 - sample_group
#             topologyKey: kubernetes.io/hostname
#
# Any affinity rules specified here will *overwrite* the default rule and not merge with it.

sizing:
  adapter:
    instances: ~
  api:
    instances: ~
  asactors:
    instances: ~
  asapi:
    instances: ~
  asmetrics:
    instances: ~
  asnozzle:
    instances: ~
  auctioneer:
    instances: ~
  bits:
    instances: ~
  cc_worker:
    instances: ~
  credhub:
    instances: ~
  database:
    instances: ~
    persistence:
      size: 20Gi
  diego_api:
    instances: ~
  diego_cell:
    ephemeral_disk:
      # Size of the ephemeral disk used to store applications in MB
      size: 40960
      # The name of the storage class used for the ephemeral disk PVC.
      storage_class: ~
    instances: ~
  doppler:
    instances: ~
  eirini:
    instances: ~
  log_api:
    instances: ~
  nats:
    instances: ~
  router:
    instances: ~
  routing_api:
    instances: ~
  scheduler:
    instances: ~
  uaa:
    instances: ~
  tcp_router:
    instances: ~

#  External endpoints are created for the instance groups only if features.ingress.enabled is false.
services:
  router:
    annotations: ~
    type: LoadBalancer
    externalIPs: []
    clusterIP: ~
  ssh-proxy:
    annotations: ~
    type: LoadBalancer
    externalIPs: []
    clusterIP: ~
  tcp-router:
    annotations: ~
    type: LoadBalancer
    externalIPs: []
    clusterIP: ~
    port_range:
      start: 20000
      end: 20008

settings:
  router:
    # tls sets up the public TLS for the router. The tls keys:
    #   crt: the certificate in the PEM format. Required.
    #   key: the private key in the PEM format. Required.
    tls: {}
    # crt: |
    #   -----BEGIN CERTIFICATE-----
    #   ...
    #   -----END CERTIFICATE-----
    # key: |
    #   -----BEGIN PRIVATE KEY-----
    #   ...
    #   -----END PRIVATE KEY-----


features:
  eirini:
    # When eirini is enabled, both suse_default_stack and suse_buildpacks must be enabled as well.
    enabled: false
    registry:
      service:
        # This setting is not currently configurable and must be HIDDEN
        nodePort: 31666
  ingress:
    enabled: false
    tls:
      crt: ~
      key: ~
    annotations: {}
    labels: {}
  suse_default_stack:
    enabled:  true
  suse_buildpacks:
    enabled: true
  autoscaler:
    enabled: false
  credhub:
    enabled: true
  # Disabling routing_api will also disable the tcp_router instance_group
  routing_api:
    enabled: true
  # embedded_database enables the embedded PXC sub-chart. Disabling it allows using an external, already seeded,
  embedded_database:
    enabled: true
  blobstore:
    # Possible values for provider: singleton and s3.
    provider: singleton
    s3:
      aws_region: ~
      blobstore_access_key_id: ~
      blobstore_secret_access_key: ~
      blobstore_admin_users_password: ~
      # The following values are used as S3 bucket names.
      app_package_directory_key: ~
      buildpack_directory_key: ~
      droplet_directory_key: ~
      resource_directory_key: ~

  # The external database type can be either 'mysql' or 'postgres'.
  external_database:
    enabled: false
    require_ssl: false
    ca_cert: ~
    type: ~
    host: ~
    port: ~
    databases:
      uaa:
        name: uaa
        password: ~
        username: ~
      cc:
        name: cloud_controller
        password: ~
        username: ~
      bbs:
        name: diego
        password: ~
        username: ~
      routing_api:
        name: routing-api
        password: ~
        username: ~
      policy_server:
        name: network_policy
        password: ~
        username: ~
      silk_controller:
        name: network_connectivity
        password: ~
        username: ~
      locket:
        name: locket
        password: ~
        username: ~
      credhub:
        name: credhub
        password: ~
        username: ~

# Enable or disable instance groups for the different test suites.
# Only smoke tests should be run in production environments.
#
# __ATTENTION__: The brain tests do things with the cluster which
# required them to have `cluster-admin` permissions (i.e. root).
# Enabling them is thus potentially insecure. They should only be
# activated for isolated testing.

testing:
  brain_tests:
    enabled: false
  cf_acceptance_tests:
    enabled: false
  smoke_tests:
    enabled: true
  sync_integration_tests:
    enabled: false

ccdb:
  encryption:
    rotation:
      # Key labels must be <= 240 characters long.
      key_labels:
      - encryption_key_0
      current_key_label: encryption_key_0

operations:
  # A list of configmap names that should be applied to the BOSH manifest.
  custom: []
  # Inlined operations that get into generated ConfigMaps. E.g. adding a password variable:
  # operations:
  #   inline:
  #   - type: replace
  #     path: /variables/-
  #     value:
  #       name: my_password
  #       type: password
  inline: []

k8s-host-url: ""
k8s-service-token: ""
k8s-service-username: ""
k8s-node-ca: ""

eirini:
  global:
    labels: {}
    annotations: {}

  env:
    # This setting is not configurable and must be HIDDEN from the user.
    # It's a workaround to replace the port eirini uses for the registry
    DOMAIN: '127.0.0.1.nip.io:31666" #'
  services:
    loadbalanced: true
  opi:
    image_tag: "1.5.0"
    image: registry.suse.com/cap/opi
    metrics_collector_image: registry.suse.com/cap/metrics-collector
    bits_waiter_image: registry.suse.com/cap/bits-waiter
    route_collector_image: registry.suse.com/cap/route-collector
    route_pod_informer_image: registry.suse.com/cap/route-pod-informer
    route_statefulset_informer_image: registry.suse.com/cap/route-statefulset-informer
    event_reporter_image: registry.suse.com/cap/event-reporter
    event_reporter_image_tag: "1.5.0"
    staging_reporter_image: registry.suse.com/cap/staging-reporter
    staging_reporter_image_tag: "1.5.0"
    #
    registry_secret_name: eirini-registry-credentials
    namespace: eirini
    kubecf:
      enable: false
    use_registry_ingress: false
    ingress_endpoint: ~
    kube:
      external_ips: []
    deny_app_ingress: false
    cc_api:
      serviceName: "api"

    staging:
      downloader_image: registry.suse.com/cap/recipe-downloader
      downloader_image_tag: "1.5.0-24.1"
      executor_image: registry.suse.com/cap/recipe-executor
      executor_image_tag: "1.5.0-24.1"
      uploader_image: registry.suse.com/cap/recipe-uploader
      uploader_image_tag: "1.5.0-24.1"
      enable: true
      tls:
        client:
          secretName: "var-eirini-tls-client-cert"
          certPath: "certificate"
          keyPath: "private_key"
        cc_uploader:
          secretName: "var-cc-bridge-cc-uploader"
          certPath: "certificate"
          keyPath: "private_key"
        ca:
          secretName: "var-eirini-tls-client-cert"
          path: "ca"
        stagingReporter:
          secretName: "var-eirini-tls-client-cert"
          certPath: "certificate"
          keyPath: "private_key"
          caPath: "ca"

    tls:
      opiCapiClient:
        secretName: "var-eirini-tls-client-cert"
        keyPath: "private_key"
        certPath: "certificate"
      opiServer:
        secretName: "var-eirini-tls-server-cert"
        certPath: "certificate"
        keyPath: "private_key"
      capi:
        secretName: "var-eirini-tls-server-cert"
        caPath: "ca"
      eirini:
        secretName: "var-eirini-tls-server-cert"
        caPath: "ca"

    events:
      enable: true
      # All configs in this section should be HIDDEN from the user; they are
      # here to adapt the Eirini helm chart for KubeCF use.
      tls:
        capiClient:
          secretName: "var-cc-tls"
          keyPath: "private_key"
          certPath: "certificate"
        capi:
          secretName: "var-cc-tls"
          caPath: "ca"

    logs:
      # disable fluentd, use eirinix-loggregator-bridge (HIDDEN from the user).
      enable: false
      # HIDDEN from the user as changing this breaks logging.
      serviceName: doppler

    # All configs in this section should be HIDDEN from the user; they are here
    # to adapt the Eirini helm chart for KubeCF use.
    metrics:
      enable: true
      tls:
        client:
          secretName: "var-loggregator-tls-doppler"
          keyPath: "private_key"
          certPath: "certificate"
        server:
          secretName: "var-loggregator-tls-doppler"
          caPath: "ca"

    rootfsPatcher:
      enable: false
      timeout: 2m

    # All configs in this section should be HIDDEN from the user; they are here
    # to adapt the Eirini helm chart for KubeCF use.
    routing:
      enable: true
      nats:
        secretName: "var-nats-password"
        passwordPath: "password"
        serviceName: "nats"

    secretSmuggler:
      enable: false

bits:
  download_eirinifs: false
  global:
    labels: {}
    annotations: {}
    images:
      bits_service: registry.suse.com/cap/bits-service:bits-1.0.15-15.1.6.2.220-24.2
  env:
    # This setting is not configurable and must be HIDDEN from the user.
    DOMAIN: 127.0.0.1.nip.io
  ingress:
    endpoint: ~
    use: false
  kube:
    external_ips: []
  services:
    loadbalanced: true

  blobstore:
    serviceName: "singleton-blobstore"
    userName: "blobstore-user"
    secret:
      name: "var-blobstore-admin-users-password"
      passwordPath: "password"

  secrets:
    BITS_SERVICE_SECRET: "secret"
    BITS_SERVICE_SIGNING_USER_PASSWORD: "notpassword123"

  useExistingSecret: true
  tls_secret_name: bits-service-ssl
  tls_cert_name: certificate
  tls_key_name: private_key
  tls_ca_name: ca

eirinix:
  persi-broker:
    service-plans:
    - id: default
      name: "default"
      description: "Existing default storage class"
      kube_storage_class: "default"
      free: true
      default_size: "1Gi"

A.2 Complete suse/cf-operator values.yaml File

This is the complete output of helm inspect values suse/cf-operator for the current SUSE Cloud Application Platform 2.0.1 release.

## Default values for Cf-operator Helm Chart.
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.


# applyCRD is a boolean to control the installation of CRD's.
applyCRD: true

cluster:
  # domain is the the Kubernetes cluster domain
  domain: "cluster.local"

# createWatchNamespace is a boolean to control creation of the watched namespace.
createWatchNamespace: true

# fullnameOverride overrides the release name
fullnameOverride: ""

# image is the docker image of quarks job.
image:
  # repository that provides the operator docker image.
  repository: cf-operator
  # org that provides the operator docker image.
  org: registry.suse.com/cap
  # tag of the operator docker image
  tag: v4.5.13-0.gd4738712

# logrotateInterval is the time between logrotate calls for instance groups in minutes
logrotateInterval: 1440

# logLevel defines from which level the logs should be printed (trace,debug,info,warn).
logLevel: debug

# workers are the int values for running maximum number of workers of the respective controller.
workers:
  boshdeployment: 1
  quarksSecret: 1
  quarksStatefulset: 1

operator:
  webhook:
    # host under which the webhook server can be reached from the cluster
    host: ~
    # port the webhook server listens on
    port: "2999"
  # boshDNSDockerImage is the docker image used for emulating bosh DNS (a CoreDNS image).
  boshDNSDockerImage: "registry.suse.com/cap/coredns:0.1.0-1.6.7-bp152.1.2"

# nameOverride overrides the chart name part of the release name
nameOverride: ""

# serviceAccount contains the configuration
# values of the service account used by cf-operator.
serviceAccount:
  # create is a boolean to control the creation of service account name.
  create: true
  # name of the service account.
  name:

global:
  # Context Timeout for each K8's API request in seconds.
  contextTimeout: 300
  image:
    # pullPolicy defines the policy used for pulling docker images.
    pullPolicy: IfNotPresent
    # credentials is used for pulling docker images.
    credentials: ~
      # username:
      # password:
      # servername:
  operator:
    # watchNamespace is used for watching the BOSH deployments.
    watchNamespace: staging
    webhook:
      # useServiceReference is a boolean to control the use of the
      # service reference in the webhook spec instead of a url.
      useServiceReference: true

  rbac:
    # create is a boolean to control the installation of quarks job cluster role template.
    create: true

quarks-job:
  # createWatchNamespace is a boolean to control creation of the watched namespace.
  createWatchNamespace: false
  serviceAccount:
    # create is a boolean to control the creation of service account name.
    create: true
    # name of the service account.
    name: