Applies to SUSE Cloud Application Platform 1.5.2

5 Deploying SUSE Cloud Application Platform on SUSE CaaS Platform

Important
README first!

Before you start deploying SUSE Cloud Application Platform, review the following documents:

Read the Release Notes: Release Notes SUSE Cloud Application Platform

Read Chapter 3, Deployment and Administration Notes

SUSE Cloud Application Platform supports deployment on SUSE CaaS Platform. SUSE CaaS Platform is an enterprise-class container management solution that enables IT and DevOps professionals to more easily deploy, manage, and scale container-based applications and services. It includes Kubernetes to automate lifecycle management of modern applications, and surrounding technologies that enrich Kubernetes and make the platform itself easy to operate. As a result, enterprises that use SUSE CaaS Platform can reduce application delivery cycle times and improve business agility. This chapter describes the steps to prepare a SUSE Cloud Application Platform deployment on SUSE CaaS Platform. See https://documentation.suse.com/suse-caasp/4.1/ for more information on SUSE CaaS Platform.

5.1 Prerequisites

The following are required to deploy and use SUSE Cloud Application Platform on SUSE CaaS Platform:

  • Access to one of the following platforms to deploy SUSE CaaS Platform:

    • SUSE OpenStack Cloud 8

    • VMware ESXi 6.7.0.20000

    • Bare Metal x86_64

  • A management workstation, which is used to deploy and control a SUSE CaaS Platform cluster, that is capable of running skuba (see https://github.com/SUSE/skuba for installation instructions). The management workstation can be a regular desktop workstation or laptop running SUSE Linux Enterprise 15 SP1 or later.

  • cf, the Cloud Foundry command line interface. For more information, see https://docs.cloudfoundry.org/cf-cli/.

    For SUSE Linux Enterprise and openSUSE systems, install using zypper.

    tux > sudo zypper install cf-cli

    For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.

    tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64

    For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.

  • kubectl, the Kubernetes command line tool. For more information, refer to https://kubernetes.io/docs/reference/kubectl/overview/.

    For SLE 12 SP3 or 15 systems, install the package kubernetes-client from the Public Cloud module.

    For other systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/.

  • jq, a command line JSON processor. See https://stedolan.github.io/jq/ for more information and installation instructions.

  • curl, the Client URL (cURL) command line tool.

  • sed, the stream editor.
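Since jq and sed are used in later verification steps, a quick local sanity check can confirm both tools are installed. The JSON snippet below is illustrative only (it mimics the shape of a UAA response); it is not fetched from a live server:

```shell
# Sanity check for jq and sed using a sample JSON document.
# The JSON here is illustrative; it mimics the shape of a UAA response.
echo '{"issuer":"https://uaa.example.com:2793/oauth/token"}' \
  | jq -r '.issuer' \
  | sed 's|^https://||'
# prints uaa.example.com:2793/oauth/token
```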

Important

The prerequisites and configurations described in this chapter only reflect the requirements for a minimal SUSE Cloud Application Platform deployment. For a more production-ready environment, consider incorporating the following features:

Stratos Web Console

For details, see Chapter 6, Installing the Stratos Web Console.

High Availability

For details, see Chapter 7, SUSE Cloud Application Platform High Availability.

LDAP Integration

For details, see Chapter 8, LDAP Integration.

External Log Server Integration

For details, see Chapter 24, Logging.

Managing Certificates

For details, see Chapter 25, Managing Certificates.

Other Features

Refer to the Administration Guide at Part III, “SUSE Cloud Application Platform Administration” for additional features.

5.2 Creating a SUSE CaaS Platform Cluster

When creating a SUSE CaaS Platform cluster, take note of the following general guidelines to ensure there are sufficient resources available to run a SUSE Cloud Application Platform deployment:

  • Minimum 2.3 GHz processor

  • 2 vCPU per physical core

  • 4 GB RAM per vCPU

  • Worker nodes need a minimum of 4 vCPU and 16 GB RAM
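As a rough sketch of how these guidelines combine, the minimum RAM for a node follows from its vCPU count at 4 GB RAM per vCPU:

```shell
# Worked example of the sizing rule above: 4 GB RAM per vCPU.
# A worker node with the minimum 4 vCPU therefore needs at least 16 GB RAM.
VCPUS=4
RAM_GB=$((VCPUS * 4))
echo "${VCPUS} vCPU -> minimum ${RAM_GB} GB RAM"
# prints 4 vCPU -> minimum 16 GB RAM
```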

As a minimum, a SUSE Cloud Application Platform deployment with a basic workload will require:

  • 1 master node

    • vCPU: 2

    • RAM: 8 GB

    • Storage: 60 GB (SSD)

  • 2 worker nodes. Each node configured with:

    • (v)CPU: 4

    • RAM: 16 GB

    • Storage: 100 GB

  • Persistent storage: 40 GB

For steps to deploy a SUSE CaaS Platform cluster, refer to the SUSE CaaS Platform Deployment Guide at https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-deployment/

When proceeding through the instructions, ensure the SUSE CaaS Platform cluster meets the resource requirements listed above so that it is suitable for a deployment of SUSE Cloud Application Platform.

5.3 Installing Helm Client and Tiller

Helm is a Kubernetes package manager. It consists of a client and server component, both of which are required in order to install and manage Cloud Application Platform.

The Helm client, helm, is part of the SUSE CaaS Platform package repository and can be installed on your management workstation by running

tux > sudo zypper install helm

Alternatively, refer to the upstream documentation at https://docs.helm.sh/using_helm/#installing-helm. Cloud Application Platform is compatible with both Helm 2 and Helm 3. Examples in this guide are based on Helm 2. To use Helm 3, refer to the Helm documentation at https://helm.sh/docs/.

Tiller, the Helm server component, needs to be installed on your Kubernetes cluster. To install, follow the instructions at https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-admin/#_installing_tiller. Alternatively, refer to the upstream documentation at https://helm.sh/docs/using_helm/#installing-tiller to install Tiller with a service account and ensure your installation is appropriately secured according to your requirements as described in https://helm.sh/docs/using_helm/#securing-your-helm-installation.

5.4 Storage Class

SUSE Cloud Application Platform requires a persistent storage class to store persistent data inside a persistent volume (PV). When a PV is provisioned, the storage class's provisioner determines the volume plugin used. Create a storage class using a provisioner (see https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner) that fits your storage strategy.

By default SUSE Cloud Application Platform assumes your storage class is named persistent. Examples in this guide follow this assumption. If your storage class uses a different name, adjust the kube.storage_class.persistent value in your configuration file.
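As an illustration, a minimal StorageClass manifest matching the default name persistent might look like the following. This is a sketch only: the vSphere provisioner shown is an assumption for example purposes, and you should substitute the provisioner that matches your storage backend.

```yaml
# Hypothetical StorageClass sketch; the provisioner is an example only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: persistent
provisioner: kubernetes.io/vsphere-volume  # assumption: replace with your provisioner
```

Apply the manifest with kubectl create, then confirm it is listed by kubectl get storageclass.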

5.5 Deployment Configuration

SUSE Cloud Application Platform is configured using Helm values (see https://helm.sh/docs/chart_template_guide/values_files/). Helm values can be set as either command line parameters or using a values.yaml file. The following values.yaml file, called scf-config-values.yaml in this guide, provides an example of a SUSE Cloud Application Platform configuration.

Ensure DOMAIN maps to the load balancer configured for your SUSE CaaS Platform cluster (see https://documentation.suse.com/suse-caasp/4.1/single-html/caasp-deployment/#loadbalancer).

Warning
Warning: Supported Domains

When selecting a domain, SUSE Cloud Application Platform expects DOMAIN to be either a subdomain or a root domain. Setting DOMAIN to a top-level domain, such as suse, is not supported.

### example deployment configuration file
### scf-config-values.yaml

env:
  DOMAIN: example.com
  # the UAA prefix is required
  UAA_HOST: uaa.example.com
  UAA_PORT: 2793
  GARDEN_ROOTFS_DRIVER: "btrfs"

kube:
  external_ips: ["<CAASP_MASTER_NODE_EXTERNAL_IP>","<CAASP_MASTER_NODE_INTERNAL_IP>"]

  storage_class:
    persistent: "persistent"
    shared: "shared"

  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
  organization: "cap"

secrets:
  # Create a very strong password for user 'admin'
  CLUSTER_ADMIN_PASSWORD: password

  # Create a very strong password, and protect it because it
  # provides root access to everything
  UAA_ADMIN_CLIENT_SECRET: password

Take note of the following Helm values when defining your scf-config-values.yaml file.

GARDEN_ROOTFS_DRIVER

For SUSE® CaaS Platform and other Kubernetes deployments where the nodes are based on SUSE Linux Enterprise, the btrfs file system driver must be used; it is selected by default.

For Microsoft AKS, Amazon EKS, Google GKE, and other Kubernetes deployments where the nodes are based on other operating systems, the overlay-xfs file system driver must be used.

Important
Important: Protect UAA_ADMIN_CLIENT_SECRET

The UAA_ADMIN_CLIENT_SECRET is the master password for access to your Cloud Application Platform cluster. Make this a very strong password, and protect it just as carefully as you would protect any root password.

5.6 Add the Kubernetes Charts Repository

Download the SUSE Kubernetes charts repository with Helm:

tux > helm repo add suse https://kubernetes-charts.suse.com/

You may replace the example suse name with any name. Verify with helm:

tux > helm repo list
NAME       URL
stable     https://kubernetes-charts.storage.googleapis.com
local      http://127.0.0.1:8879/charts
suse       https://kubernetes-charts.suse.com/

List your chart names, as you will need these for some operations:

tux > helm search suse
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
suse/cf                         2.20.3          1.5.2           A Helm chart for SUSE Cloud Foundry
suse/cf-usb-sidecar-mysql       1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/cf-usb-sidecar-postgres    1.0.1                           A Helm chart for SUSE Universal Service Broker Sidecar fo...
suse/console                    3.1.0           1.5.2           A Helm chart for deploying Stratos UI Console
suse/log-agent-rsyslog          1.0.1           8.39.0          Log Agent for forwarding logs of K8s control pl...
suse/metrics                    1.1.2           1.5.2           A Helm chart for Stratos Metrics
suse/minibroker                 0.3.1                           A minibroker for your minikube
suse/nginx-ingress              0.28.4          0.15.0          An nginx Ingress controller that uses ConfigMap to store ...
suse/uaa                        2.20.3          1.5.2           A Helm chart for SUSE UAA

5.7 Deploying SUSE Cloud Application Platform

5.7.1 Deploy uaa

Note
Note: Embedded uaa in scf

The User Account and Authentication (uaa) Server is included as an optional feature of the scf Helm chart. This simplifies the Cloud Application Platform deployment process, as a separate installation or upgrade of uaa is no longer a prerequisite to installing or upgrading scf.

It is important to note that:

  • This feature should only be used when uaa is not shared with other projects.

  • You cannot migrate from an existing external uaa to an embedded one. In this situation, enabling this feature during an upgrade will result in a single admin account.

To enable this feature, add the following to your scf-config-values.yaml.

enable:
  uaa: true

When deploying or upgrading scf, run helm install or helm upgrade and note that:

  • Installing or upgrading uaa using helm install suse/uaa ... or helm upgrade is no longer required.

  • It is no longer necessary to set the UAA_CA_CERT parameter. Previously, this parameter was passed the CA_CERT variable, which was assigned the CA certificate of uaa.

Use Helm to deploy the uaa server:

tux > helm install suse/uaa \
--name susecf-uaa \
--namespace uaa \
--values scf-config-values.yaml

Wait until you have a successful uaa deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace uaa'

When uaa is successfully deployed, the following is observed:

  • For the secret-generation pod, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

Use curl to verify you are able to connect to the uaa OAuth server on the DNS name configured:

tux > curl --insecure https://uaa.example.com:2793/.well-known/openid-configuration

This should return a JSON object, as this abbreviated example shows:

{"issuer":"https://uaa.example.com:2793/oauth/token",
"authorization_endpoint":"https://uaa.example.com:2793
/oauth/authorize","token_endpoint":"https://uaa.example.com:2793/oauth/token"
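To pull a single field out of this response, the output can be piped through jq. The following self-contained sketch uses a here-document with a sample of the JSON above in place of live curl output:

```shell
# Extract the token_endpoint field from a sample OpenID configuration.
# A here-document stands in for live curl output so the example is
# self-contained; against a real deployment you would pipe curl into jq.
jq -r '.token_endpoint' <<'EOF'
{"issuer":"https://uaa.example.com:2793/oauth/token",
"authorization_endpoint":"https://uaa.example.com:2793/oauth/authorize",
"token_endpoint":"https://uaa.example.com:2793/oauth/token"}
EOF
# prints https://uaa.example.com:2793/oauth/token
```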

5.7.2 Deploy scf

Use Helm to deploy scf:

Note
Note: Setting UAA_CA_CERT

Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well-known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

If you need to set UAA_CA_CERT:

  1. Obtain your UAA secret and certificate:

    tux > SECRET=$(kubectl get pods --namespace uaa \
    --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
    
    tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
    --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
  2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

tux > helm install suse/cf \
--name susecf-scf \
--namespace scf \
--values scf-config-values.yaml

Wait until you have a successful scf deployment before going to the next steps, which you can monitor with the watch command:

tux > watch --color 'kubectl get pods --namespace scf'

When scf is successfully deployed, the following is observed:

  • For the secret-generation and post-deployment-setup pods, the STATUS is Completed and the READY column is at 0/1.

  • All other pods have a Running STATUS and a READY value of n/n.

Press Ctrl+C to exit the watch command.

When the deployment completes, use the Cloud Foundry command line interface to log in to SUSE Cloud Foundry to deploy and manage your applications. (See Section 30.1, “Using the cf CLI with SUSE Cloud Application Platform”)

5.8 Expanding Capacity of a Cloud Application Platform Deployment on SUSE® CaaS Platform

If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.

These instructions assume you have followed the procedure in Chapter 5, Deploying SUSE Cloud Application Platform on SUSE CaaS Platform and have a running Cloud Application Platform deployment on SUSE® CaaS Platform.

  1. Add additional nodes to your SUSE® CaaS Platform cluster as described in https://documentation.suse.com/suse-caasp/4.1/html/caasp-admin/_cluster_management.html#adding_nodes.

  2. Verify the new nodes are in a Ready state before proceeding.

    tux > kubectl get nodes
  3. Add or update the following in your scf-config-values.yaml file to increase the number of diego-cell instances in your Cloud Application Platform deployment. Replace the example value with the number required by your workload.

    sizing:
      diego_cell:
        count: 4
  4. Perform a helm upgrade to apply the change.

    Note
    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well-known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --version 2.20.3
  5. Monitor progress of the additional diego-cell pods:

    tux > watch --color 'kubectl get pods --namespace scf'