Applies to SUSE Cloud Application Platform 1.5.2

14 Upgrading SUSE Cloud Application Platform

uaa, scf, and Stratos together make up a SUSE Cloud Application Platform release. Maintenance updates are delivered as container images from the SUSE registry and applied with Helm.

For additional upgrade information, always review the release notes published at https://www.suse.com/releasenotes/x86_64/SUSE-CAP/1/.

14.1 Important Considerations

Before performing an upgrade, be sure to take note of the following:

Perform Upgrades in Sequence

Cloud Application Platform only supports upgrading releases in sequential order. If there are any intermediate releases between your current release and your target release, they must be installed. Skipping releases is not supported. See Section 14.3, “Installing Skipped Releases” for more information.

Preserve Helm Value Changes during Upgrades

During a helm upgrade, always ensure your scf-config-values.yaml file is passed. This preserves any previously set Helm values while still allowing additional Helm value changes to be made.

helm rollback Is Not Supported

helm rollback is not supported in SUSE Cloud Application Platform or in upstream Cloud Foundry, and may break your cluster completely, because database migrations only run forward and cannot be reversed. The database schema can change over time. During an upgrade, pods of both the current and the next release may run concurrently, so the schema must stay compatible with the immediately previous release, but there is no way to guarantee such compatibility for future upgrades. One way to address this is to perform a full raw data backup and restore. (See Section 21.2, “Disaster Recovery through Raw Data Backup and Restore”.)

14.2 Upgrading SUSE Cloud Application Platform

The supported upgrade method is to install all upgrades, in order. Skipping releases is not supported. This table matches the Helm chart versions to each release:

| CAP Release | SCF and UAA Helm Chart Version | Stratos Helm Chart Version | Stratos Metrics Helm Chart Version | Minimum Kubernetes Version Required | CF API Implemented | Known Compatible CF CLI Version | CF CLI URL |
|---|---|---|---|---|---|---|---|
| 1.5.2 (current release) | 2.20.3 | 3.1.0 | 1.1.2 | 1.10 | 2.144.0 | 6.49.0 | https://github.com/cloudfoundry/cli/releases/tag/v6.49.0 |
| 1.5.1 | 2.19.1 | 2.6.0 | 1.1.0 | | 2.138.0 | 6.46.1 | https://github.com/cloudfoundry/cli/releases/tag/v6.46.1 |
| 1.5 | 2.18.0 | 2.5.3 | 1.1.0 | | 2.138.0 | 6.46.1 | https://github.com/cloudfoundry/cli/releases/tag/v6.46.1 |
| 1.4.1 | 2.17.1 | 2.4.0 | 1.0.0 | | 2.134.0 | 6.46.0 | https://github.com/cloudfoundry/cli/releases/tag/v6.46.0 |
| 1.4 | 2.16.4 | 2.4.0 | 1.0.0 | | 2.134.0 | 6.44.1 | https://github.com/cloudfoundry/cli/releases/tag/v6.44.1 |
| 1.3.1 | 2.15.2 | 2.3.0 | 1.0.0 | | 2.120.0 | 6.42.0 | https://github.com/cloudfoundry/cli/releases/tag/v6.42.0 |
| 1.3 | 2.14.5 | 2.2.0 | 1.0.0 | | 2.115.0 | 6.40.1 | https://github.com/cloudfoundry/cli/releases/tag/v6.40.1 |
| 1.2.1 | 2.13.3 | 2.1.0 | | | 2.115.0 | 6.39.1 | https://github.com/cloudfoundry/cli/releases/tag/v6.39.1 |
| 1.2.0 | 2.11.0 | 2.0.0 | | | 2.112.0 | 6.38.0 | https://github.com/cloudfoundry/cli/releases/tag/v6.38.0 |
| 1.1.1 | 2.10.1 | 1.1.0 | | | 2.106.0 | 6.37.0 | https://github.com/cloudfoundry/cli/releases/tag/v6.37.0 |
| 1.1.0 | 2.8.0 | 1.1.0 | | | 2.103.0 | 6.36.1 | https://github.com/cloudfoundry/cli/releases/tag/v6.36.1 |
| 1.0.1 | 2.7.0 | 1.0.2 | | | 2.101.0 | 6.34.1 | https://github.com/cloudfoundry/cli/releases/tag/v6.34.1 |
| 1.0 | 2.6.11 | 1.0.0 | | | | 6.34.0 | https://github.com/cloudfoundry/cli/releases/tag/v6.34.0 |

Use helm list to see the version of your installed release. Verify that the latest release is the next sequential release from your installed release. If it is, proceed with the commands below to perform the upgrade. If any releases have been missed, see Section 14.3, “Installing Skipped Releases”.
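As a quick sketch, the installed chart version can be extracted from the helm list output (this assumes the release names susecf-uaa and susecf-scf used throughout this guide, and Helm 2's table layout; the echo line stands in for real helm output so the parsing is visible):

```shell
# Real commands (run against your cluster; release names may differ):
# helm list susecf-uaa
# helm list susecf-scf

# Helm 2 prints NAME REVISION UPDATED STATUS CHART NAMESPACE; the
# chart is the next-to-last whitespace-separated field. Strip the
# chart name prefix to get the bare version.
echo "susecf-uaa 1 Mon Jan 6 10:00:00 2020 DEPLOYED uaa-2.19.1 uaa" \
  | awk '{print $(NF-1)}' | sed 's/^[a-z-]*-//'
# → 2.19.1
```

Compare the result against the table above to confirm the next sequential release.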

Important
Important: Upgrading SUSE Cloud Application Platform When Using Minibroker

If you are upgrading SUSE Cloud Application Platform to 1.5.2, already use Minibroker to connect to external databases, and are running Kubernetes 1.16 or higher (as is the case with SUSE CaaS Platform 4.1), you will need to update the database to a compatible version and migrate your data over via the database’s suggested mechanism. This may require a database export/import.

Warning
Warning: Upgrading from 1.5.1 When Using External Database

If you are upgrading SUSE Cloud Application Platform to 1.5.2 and use an external database in 1.5.1, please contact support for further instructions.

For upgrades from SUSE Cloud Application Platform 1.5.1 to 1.5.2, if the mysql roles of uaa and scf are in high availability mode, it is recommended to first scale them to single availability. Performing an upgrade with these roles in HA mode risks database migration failures, which will prevent pods from starting.

The following sections outline the upgrade process for SUSE Cloud Application Platform deployments in each of the following scenarios:

14.2.1 High Availability mysql through config.HA

This section describes the upgrade process for deployments where the mysql roles of uaa and scf are currently in high availability mode through setting config.HA to true (see Section 7.1.3, “Simple High Availability Configuration”).

The following example assumes your current scf-config-values.yaml file contains:

config:
  HA: true
  1. For the uaa deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.

    1. Scale the mysql instance count to 1.

      Create a custom configuration file, called ha-strict-false-single-mysql.yaml in this example, with the values below. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

      config:
        HA_strict: false
      sizing:
        mysql:
          count: 1

      Perform the scaling.

      tux > helm upgrade susecf-uaa suse/uaa \
      --values scf-config-values.yaml \
      --values ha-strict-false-single-mysql.yaml \
      --version 2.19.1

      Monitor progress using the watch command.

      tux > watch --color 'kubectl get pods --namespace uaa'
    2. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the uaa namespace. Do not delete the persistent volumes (PV).

      When config.HA is set to true, there are 3 instances of the mysql role. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.

      tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-1
      
      tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-2
  2. For the scf deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.

    1. Scale the mysql instance count to 1.

      Reuse the custom configuration file, called ha-strict-false-single-mysql.yaml, created earlier. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

      Perform the scaling.

      tux > helm upgrade susecf-scf suse/cf \
      --values scf-config-values.yaml \
      --values ha-strict-false-single-mysql.yaml \
      --version 2.19.1

      Monitor progress using the watch command.

      tux > watch --color 'kubectl get pods --namespace scf'
    2. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the scf namespace. Do not delete the persistent volumes (PV).

      When config.HA is set to true, there are 3 instances of the mysql role. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.

      tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-1
      
      tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-2
  3. Get the latest updates from your Helm chart repositories. This will allow upgrading to newer releases of the SUSE Cloud Application Platform charts.

    tux > helm repo update
  4. Upgrade uaa.

    Reuse the custom configuration file, called ha-strict-false-single-mysql.yaml, created earlier. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

    tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml \
    --values ha-strict-false-single-mysql.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace uaa'
  5. Upgrade scf.

    Reuse the custom configuration file, called ha-strict-false-single-mysql.yaml, created earlier. The file uses the config.HA_strict feature to allow setting the mysql role to a single instance while other roles are kept HA. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

    Perform the upgrade.

    Note
    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --values ha-strict-false-single-mysql.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace scf'
  6. Scale the entire uaa deployment, including the mysql role, back to the default high availability mode.

    Create a custom configuration file, called ha-strict-true.yaml in this example, with the values below. The file sets config.HA_strict to true, which ensures all roles, including mysql, run at the minimum required instance count of 3 for a default HA configuration. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

    config:
      HA_strict: true

    Perform the scaling.

    tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml \
    --values ha-strict-true.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace uaa'
  7. Scale the entire scf deployment, including the mysql role, back to the default high availability mode.

    Reuse the custom configuration file, called ha-strict-true.yaml in this example, created earlier. The file sets config.HA_strict to true, which ensures all roles, including mysql, run at the minimum required instance count of 3 for a default HA configuration. Note that it is assumed config.HA=true is set in your existing scf-config-values.yaml file.

    Note
    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --values ha-strict-true.yaml \
    --version 2.20.3
  8. If installed, upgrade Stratos.

    tux > helm upgrade --recreate-pods susecf-console suse/console \
    --values scf-config-values.yaml \
    --version 2.6.0
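After scaling back to HA in steps 6 and 7, it is worth confirming that the mysql role is running three instances again in each namespace. A minimal sketch (the printf block stands in for kubectl get pods output so the counting logic is visible; the real command is commented):

```shell
# Real check (cluster command; repeat with --namespace scf):
# kubectl get pods --namespace uaa | grep -c '^mysql-'

# Illustration against sample output: three Running mysql pods
# should yield a count of 3.
printf '%s\n' \
  "mysql-0 1/1 Running 0 5m" \
  "mysql-1 1/1 Running 0 3m" \
  "mysql-2 1/1 Running 0 1m" \
  "uaa-0 1/1 Running 0 5m" \
  | grep -c '^mysql-'
# → 3
```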

14.2.2 High Availability mysql through Custom Sizing Values

This section describes the upgrade process for deployments where the mysql roles of uaa and scf are currently in high availability mode by configuring custom sizing values (see Section 7.1.4, “Example Custom High Availability Configurations”).

  1. For the uaa deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.

    1. Scale the mysql instance count to 1.

      Modify your existing custom sizing configuration file, called uaa-sizing.yaml in this example, so that the mysql role has a count of 1.

      sizing:
        mysql:
          count: 1

      Perform the scaling.

      tux > helm upgrade susecf-uaa suse/uaa \
      --values scf-config-values.yaml \
      --values uaa-sizing.yaml \
      --version 2.19.1

      Monitor progress using the watch command.

      tux > watch --color 'kubectl get pods --namespace uaa'
    2. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the uaa namespace. Do not delete the persistent volumes (PV).

      This example assumes mysql.count was previously set to 3 instances in your custom sizing configuration file. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.

      If your mysql.count was previously set to 5 or 7 instances, repeat the procedure. For 5 instances, the PVCs associated with mysql-3 and mysql-4 would need to be deleted as well. For 7 instances, the PVCs associated with mysql-3, mysql-4, mysql-5, and mysql-6 would need to be deleted as well.

      tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-1
      
      tux > kubectl delete persistentvolumeclaim --namespace uaa mysql-data-mysql-2
  2. For the scf deployment, scale the instance count of the mysql role to 1 and remove the persistent volume claims associated with the removed mysql instances.

    1. Scale the mysql instance count to 1.

      Modify your existing custom sizing configuration file, called scf-sizing.yaml in this example, so that the mysql role has a count of 1.

      sizing:
        mysql:
          count: 1

      Perform the scaling.

      tux > helm upgrade susecf-scf suse/cf \
      --values scf-config-values.yaml \
      --values scf-sizing.yaml \
      --version 2.19.1

      Monitor progress using the watch command.

      tux > watch --color 'kubectl get pods --namespace scf'
    2. Delete the persistent volume claims (PVC) associated with the removed mysql instances from the scf namespace. Do not delete the persistent volumes (PV).

      This example assumes mysql.count was previously set to 3 instances in your custom sizing configuration file. This means that after scaling to single availability, the PVCs associated with mysql-1 and mysql-2 need to be deleted.

      If your mysql.count was previously set to 5 or 7 instances, repeat the procedure. For 5 instances, the PVCs associated with mysql-3 and mysql-4 would need to be deleted as well. For 7 instances, the PVCs associated with mysql-3, mysql-4, mysql-5, and mysql-6 would need to be deleted as well.

      tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-1
      
      tux > kubectl delete persistentvolumeclaim --namespace scf mysql-data-mysql-2
  3. Get the latest updates from your Helm chart repositories. This will allow upgrading to newer releases of the SUSE Cloud Application Platform charts.

    tux > helm repo update
  4. Upgrade uaa.

    Reuse your existing custom sizing configuration file, called uaa-sizing.yaml in this example, where the mysql role has a count of 1.

    tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml \
    --values uaa-sizing.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace uaa'
  5. Upgrade scf.

    Reuse your existing custom sizing configuration file, called scf-sizing.yaml in this example, where the mysql role has a count of 1.

    Note
    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --values scf-sizing.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace scf'
  6. Scale the entire uaa deployment, including the mysql role, back to high availability mode. This example assumes the mysql.count was previously set to 3 instances.

    Note that the mysql role requires at least 3 instances for high availability and must have an odd number of instances.

    Modify your existing custom sizing configuration file, called uaa-sizing.yaml in this example, so that the mysql role has a count of 3.

    sizing:
      mysql:
        count: 3

    Perform the scaling.

    tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml \
    --values uaa-sizing.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace uaa'
  7. Scale the entire scf deployment, including the mysql role, back to high availability mode. This example assumes the mysql.count was previously set to 3 instances.

    Note the mysql role requires at least 3 instances for high availability and must have an odd number of instances.

    Modify your existing custom sizing configuration file, called scf-sizing.yaml in this example, so that the mysql role has a count of 3.

    Note
    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --values scf-sizing.yaml \
    --version 2.20.3
  8. If installed, upgrade Stratos.

    tux > helm upgrade --recreate-pods susecf-console suse/console \
    --values scf-config-values.yaml \
    --version 2.6.0
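When mysql.count was previously set to 5 or 7, the PVC clean-up in steps 1 and 2 above deletes every claim except the one for mysql-0, following the naming pattern mysql-data-mysql-N. That can be expressed as a loop (a sketch; COUNT is your previous instance count, and the kubectl line is commented out so the generated names are visible):

```shell
COUNT=5  # previous mysql.count from your sizing configuration file
for i in $(seq 1 $((COUNT - 1))); do
  # Real command (shown for the uaa namespace; repeat for scf):
  # kubectl delete persistentvolumeclaim --namespace uaa "mysql-data-mysql-$i"
  echo "mysql-data-mysql-$i"
done
# Prints mysql-data-mysql-1 through mysql-data-mysql-4
```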

14.2.3 Single Availability mysql

This section describes the upgrade process for deployments where the mysql roles of uaa and scf are currently in single availability mode.

  1. Get the latest updates from your Helm chart repositories. This will allow upgrading to newer releases of the SUSE Cloud Application Platform charts.

    tux > helm repo update
  2. Upgrade uaa.

    tux > helm upgrade susecf-uaa suse/uaa \
    --values scf-config-values.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace uaa'
  3. Upgrade scf.

    Note
    Note: Setting UAA_CA_CERT

    Starting with SUSE Cloud Application Platform 1.5.2, you no longer need to set UAA_CA_CERT when using an external UAA with a certificate signed by a well known Certificate Authority (CA). It is only needed when you use an external UAA with either a certificate generated by the secret-generator or a self-signed certificate.

    If you need to set UAA_CA_CERT:

    1. Obtain your UAA secret and certificate:

      tux > SECRET=$(kubectl get pods --namespace uaa \
      --output jsonpath='{.items[?(.metadata.name=="uaa-0")].spec.containers[?(.name=="uaa")].env[?(.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
      
      tux > CA_CERT="$(kubectl get secret $SECRET --namespace uaa \
      --output jsonpath="{.data['internal-ca-cert']}" | base64 --decode -)"
    2. Then pass --set "secrets.UAA_CA_CERT=${CA_CERT}" as part of your helm command.

    tux > helm upgrade susecf-scf suse/cf \
    --values scf-config-values.yaml \
    --version 2.20.3

    Monitor progress using the watch command.

    tux > watch --color 'kubectl get pods --namespace scf'
  4. If installed, upgrade Stratos.

    tux > helm upgrade --recreate-pods susecf-console suse/console \
    --values scf-config-values.yaml \
    --version 2.6.0
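Once the upgrade completes, the table at the beginning of this chapter lists 6.49.0 as the known compatible CF CLI version for 1.5.2. A quick sketch for extracting your installed CLI version for comparison (the echo stands in for real cf version output, whose exact build suffix varies):

```shell
# cf version prints a line like "cf version 6.49.0+d0dfa93bb.2020-01-07";
# the third field up to the '+' is the bare version number.
echo "cf version 6.49.0+d0dfa93bb.2020-01-07" \
  | awk '{print $3}' | cut -d+ -f1
# → 6.49.0
```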

14.2.4 Change in URL of Internal cf-usb Broker Endpoint

This change is only applicable for upgrades from Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 and upgrades from Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1. The URL of the internal cf-usb broker endpoint has changed. Brokers for PostgreSQL and MySQL that use cf-usb will require the following manual fix after upgrading to reconnect with SCF/CAP:

For Cloud Application Platform 1.2.1 to Cloud Application Platform 1.3 upgrades:

  1. Get the name of the secret (for example secrets-2.14.5-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.14.5-1 --output yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb part doubled with a dash separator:

    tux > cf update-service-broker usb broker-admin password https://cf-usb-cf-usb.scf.svc.cluster.local:24054

For Cloud Application Platform 1.3 to Cloud Application Platform 1.3.1 upgrades:

  1. Get the name of the secret (for example secrets-2.15.2-1):

    tux > kubectl get secret --namespace scf
  2. Get the URL of the cf-usb host (for example https://cf-usb-cf-usb.scf.svc.cluster.local:24054):

    tux > cf service-brokers
  3. Get the current cf-usb password. Use the name of the secret obtained in the first step:

    tux > kubectl get secret --namespace scf secrets-2.15.2-1 --output yaml | grep \\scf-usb-password: | cut -d: -f2 | base64 -id
  4. Update the service broker, where password is the password from the previous step, and the URL is the result of the second step with the leading cf-usb- part removed:

    tux > cf update-service-broker usb broker-admin password https://cf-usb.scf.svc.cluster.local:24054
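The URL rewrites in step 4 of both procedures are mechanical, so they can also be derived from the cf service-brokers output with sed (a sketch; the echo lines stand in for the URL you obtained in step 2):

```shell
# 1.2.1 -> 1.3 upgrade: double the leading "cf-usb" host label.
echo "https://cf-usb.scf.svc.cluster.local:24054" \
  | sed 's#//cf-usb\.#//cf-usb-cf-usb.#'
# → https://cf-usb-cf-usb.scf.svc.cluster.local:24054

# 1.3 -> 1.3.1 upgrade: drop the duplicated "cf-usb-" prefix again.
echo "https://cf-usb-cf-usb.scf.svc.cluster.local:24054" \
  | sed 's#//cf-usb-cf-usb\.#//cf-usb.#'
# → https://cf-usb.scf.svc.cluster.local:24054
```

Pass the resulting URL to cf update-service-broker as shown in the steps above.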

14.3 Installing Skipped Releases

By default, Helm always installs the latest release. What if you accidentally skipped a release and need to apply it before upgrading to the current release? Install the missing release by specifying the Helm chart version number. For example, your current uaa and scf versions are 2.10.1. Consult the table at the beginning of this chapter to see which releases you have missed. In this example, the missing Helm chart version for uaa and scf is 2.11.0. Use the --version option to install a specific version:

tux > helm upgrade susecf-uaa suse/uaa \
--values scf-config-values.yaml \
--recreate-pods \
--version 2.11.0

Be sure to install the corresponding versions for scf and Stratos.
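When scripting a sequential upgrade, the chart versions from the table at the beginning of this chapter can be encoded as a small lookup (a sketch covering a few releases; extend the cases as needed from the table):

```shell
# Map a CAP release to "uaa/scf chart version, Stratos chart version",
# taken from the version table at the beginning of this chapter.
chart_versions() {
  case "$1" in
    1.2.0) echo "2.11.0 2.0.0" ;;
    1.2.1) echo "2.13.3 2.1.0" ;;
    1.3)   echo "2.14.5 2.2.0" ;;
    *)     return 1 ;;
  esac
}

chart_versions 1.2.0
# → 2.11.0 2.0.0
```

Pass the first value as --version to the suse/uaa and suse/cf upgrades, and the second to the suse/console upgrade.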
