Applies to SUSE Cloud Application Platform 1.5.2

21 Backup and Restore

21.1 Backup and Restore Using cf-plugin-backup

cf-plugin-backup backs up and restores your Cloud Controller Database (CCDB), using the Cloud Foundry command line interface (cf CLI). (See Section 30.1, “Using the cf CLI with SUSE Cloud Application Platform”.)

cf-plugin-backup is not a general-purpose backup and restore plugin. It is designed to save the state of a SUSE Cloud Foundry instance before making changes to it. If the changes cause problems, use cf-plugin-backup to restore the instance from scratch. Do not use it to restore to a non-pristine SUSE Cloud Foundry instance. Some of the limitations for applying the backup to a non-pristine SUSE Cloud Foundry instance are:

  • Application configuration is not restored to running applications, as the plugin does not have the ability to determine which applications should be restarted to load the restored configurations.

  • User information is managed by the User Account and Authentication (uaa) Server, not the Cloud Controller (CC). As the plugin talks only to the CC it cannot save full user information, nor restore users. Saving and restoring users must be performed separately, and user restoration must be performed before the backup plugin is invoked.

  • The set of available stacks is part of the SUSE Cloud Foundry instance setup, and is not part of the CC configuration. Trying to restore applications using stacks not available on the target SUSE Cloud Foundry instance will fail. Setting up the necessary stacks must be performed separately before the backup plugin is invoked.

  • Buildpacks are not saved. Applications using custom buildpacks not available on the target SUSE Cloud Foundry instance will not be restored. Custom buildpacks must be managed separately, and relevant buildpacks must be in place before the affected applications are restored.
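Custom buildpacks can be listed and re-created on the target instance before the affected applications are restored. The following sketch wraps the standard cf buildpacks and cf create-buildpack commands; the ensure_buildpack helper and its arguments are illustrative, not part of the plugin, and a logged-in cf CLI is assumed:

```shell
# Illustrative helper: re-create a custom buildpack on the target
# instance only if it is not already present.
ensure_buildpack() {
  local name="$1" archive="$2" position="${3:-1}"
  # "cf buildpacks" lists buildpack names in the first column.
  if cf buildpacks | awk '{print $1}' | grep -qx "$name"; then
    echo "buildpack $name already present"
  else
    cf create-buildpack "$name" "$archive" "$position"
  fi
}

# Example: ensure_buildpack my_custom_buildpack /tmp/my_custom_buildpack.zip 5
```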

21.1.1 Installing cf-plugin-backup

Download the plugin from https://github.com/SUSE/cf-plugin-backup/releases.

Then install it with cf, using the name of the plugin binary that you downloaded:

tux > cf install-plugin cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64
 Attention: Plugins are binaries written by potentially untrusted authors.
 Install and use plugins at your own risk.
 Do you want to install the plugin
 backup-plugin/cf-plugin-backup-1.0.8.0.g9e8438e.linux-amd64? [yN]: y
 Installing plugin backup...
 OK
 Plugin backup 1.0.8 successfully installed.

Verify installation by listing installed plugins:

tux > cf plugins
 Listing installed plugins...

 plugin   version   command name      command help
 backup   1.0.8     backup-info       Show information about the current snapshot
 backup   1.0.8     backup-restore    Restore the CloudFoundry state from a
  backup created with the snapshot command
 backup   1.0.8     backup-snapshot   Create a new CloudFoundry backup snapshot
  to a local file

 Use 'cf repo-plugins' to list plugins in registered repos available to install.

21.1.2 Using cf-plugin-backup

The plugin has three commands:

  • backup-info

  • backup-snapshot

  • backup-restore

View the online help for any command, like this example:

tux >  cf backup-info --help
 NAME:
   backup-info - Show information about the current snapshot

 USAGE:
   cf backup-info

Create a backup of your SUSE Cloud Application Platform data and applications. The command outputs progress messages until it completes:

tux > cf backup-snapshot
 2018/08/18 12:48:27 Retrieving resource /v2/quota_definitions
 2018/08/18 12:48:30 org quota definitions done
 2018/08/18 12:48:30 Retrieving resource /v2/space_quota_definitions
 2018/08/18 12:48:32 space quota definitions done
 2018/08/18 12:48:32 Retrieving resource /v2/organizations
 [...]

Your Cloud Application Platform data is saved in the current directory in cf-backup.json, and application data in the app-bits/ directory.
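Before relying on a snapshot, it can be worth confirming that the file was written and parses as JSON. The check_snapshot helper below is an illustrative sketch, not part of the plugin, and assumes python3 is available on the workstation:

```shell
# Illustrative check: confirm cf-backup.json exists and is valid JSON.
check_snapshot() {
  local dir="${1:-.}"
  if python3 -m json.tool "$dir/cf-backup.json" > /dev/null 2>&1; then
    echo "cf-backup.json: valid JSON"
  else
    echo "cf-backup.json: missing or corrupt" >&2
    return 1
  fi
}

# Example: check_snapshot .
```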

View the current backup:

tux > cf backup-info
 - Org  system

Restore from backup:

tux > cf backup-restore

There are two additional restore options: --include-security-groups and --include-quota-definitions.

21.1.3 Scope of Backup

The following table lists the scope of the cf-plugin-backup backup. Organization and space users are backed up at the SUSE Cloud Application Platform level. The user account in uaa/LDAP, the service instances and their application bindings, and buildpacks are not backed up. The sections following the table go into more detail.

Scope                 Restore
Orgs                  Yes
Org auditors          Yes
Org billing-manager   Yes
Quota definitions     Optional
Spaces                Yes
Space developers      Yes
Space auditors        Yes
Space managers        Yes
Apps                  Yes
App binaries          Yes
Routes                Yes
Route mappings        Yes
Domains               Yes
Private domains       Yes
Stacks                not available
Feature flags         Yes
Security groups       Optional
Custom buildpacks     No

cf backup-info reads the cf-backup.json snapshot file found in the current working directory, and reports summary statistics on the content.

cf backup-snapshot extracts and saves the following information from the CC into a cf-backup.json snapshot file. Note that it does not save full user information, only the references needed for the roles; the full user information is handled by the uaa server, and the plugin talks only to the CC.

  • Org Quota Definitions

  • Space Quota Definitions

  • Shared Domains

  • Security Groups

  • Feature Flags

  • Application droplets (zip files holding the staged app)

  • Orgs

    • Spaces

      • Applications

      • Users' references (role in the space)

cf backup-restore reads the cf-backup.json snapshot file found in the current working directory, and then talks to the targeted SUSE Cloud Foundry instance to upload the following information, in the specified order:

  • Shared domains

  • Feature flags

  • Quota Definitions (only if --include-quota-definitions is set)

  • Orgs

    • Space Quotas (only if --include-quota-definitions is set)

    • UserRoles

    • (private) Domains

    • Spaces

      • UserRoles

      • Applications (+ droplet)

        • Bound Routes

      • Security Groups (only if --include-security-groups is set)

The following list provides more details of each action.

Shared Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Feature Flags

Attempts to update flags from the backup.

Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

Orgs

Attempts to create orgs from the backup. Attempts to update existing orgs from the backup.

Space Quota Definitions

Existing quotas are overwritten from the backup (deleted, re-created).

User roles

Expects the referenced user to exist. Fails when the user is already associated with the org in the given role.

(private) Domains

Attempts to create domains from the backup. Existing domains are retained, and not overwritten.

Spaces

Attempts to create spaces from the backup. Attempts to update existing spaces from the backup.

User roles

Expects the referenced user to exist. Fails when the user is already associated with the space in the given role.

Apps

Attempts to create apps from the backup. Attempts to update existing apps from the backup (memory, instances, buildpack, state, and so on).

Security groups

Existing groups are overwritten from the backup.

21.2 Disaster Recovery through Raw Data Backup and Restore

An existing SUSE Cloud Application Platform deployment's data can be migrated to a new SUSE Cloud Application Platform deployment through a backup and restore of its raw data. The process involves backing up and restoring both the uaa and scf components. This procedure is agnostic of the underlying Kubernetes infrastructure and can be included as part of your disaster recovery solution.

21.2.1 Prerequisites

In order to complete a raw data backup and restore, the following are required:

21.2.2 Scope of Raw Data Backup and Restore

The following lists the data that is included as part of the backup (and restore) procedure:

21.2.3 Performing a Raw Data Backup

Note
Note: Restore to the Same Version

This process is intended for backing up and restoring to a target deployment with the same version as the source deployment. For example, data from a backup of uaa version 2.18.0 should be restored to a version 2.18.0 uaa deployment.

Perform the following steps to create a backup of your source uaa deployment.

  • Export the UAA database into a file:

    tux > kubectl exec --tty mysql-0 --namespace uaa -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysqldump \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      uaadb' > /tmp/uaadb-src.sql
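A quick sanity check that the export completed is to confirm the dump file is non-empty before moving on. The check_dump helper below is an illustrative sketch, not part of the product; the same check applies to the CCDB dump created later:

```shell
# Illustrative check: fail early if a database dump is missing or empty.
check_dump() {
  if [ -s "$1" ]; then
    echo "dump OK: $1"
  else
    echo "dump missing or empty: $1" >&2
    return 1
  fi
}

# Example: check_dump /tmp/uaadb-src.sql
```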

Perform the following steps to create a backup of your source scf deployment.

  1. Connect to the blobstore pod:

    tux > kubectl exec --stdin --tty blobstore-0 --namespace scf -- env /bin/bash
  2. Create an archive of the blobstore directory to preserve all needed files (see the Cloud Controller Blobstore content of Section 21.2.2, “Scope of Raw Data Backup and Restore”) then disconnect from the pod:

    tux > tar cfvz blobstore-src.tgz /var/vcap/store/shared
    tux > exit
  3. Copy the archive to a location outside of the pod:

    tux > kubectl cp scf/blobstore-0:blobstore-src.tgz /tmp/blobstore-src.tgz
  4. Export the Cloud Controller Database (CCDB) into a file:

    tux > kubectl exec mysql-0 --namespace scf -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysqldump \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      ccdb' > /tmp/ccdb-src.sql
  5. Next, obtain the CCDB encryption key(s). The method used to capture the key will depend on whether current_key_label has been defined on the source cluster. This value is defined in /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml of the api-group-0 pod and also found in various tables of the MySQL database.

    Begin by examining the configuration file for the current_key_label setting:

    tux > kubectl exec --stdin --tty --namespace scf api-group-0 -- bash -c "cat /var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml | grep -A 3 database_encryption"
    • If the output contains the current_key_label setting, save the output for the restoration process. Adjust the -A flag as needed to include all keys.

    • If the output does not contain the current_key_label setting, run the following command and save the output for the restoration process:

      tux > kubectl exec api-group-0 --namespace scf -- bash -c 'echo $DB_ENCRYPTION_KEY'
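If the database_encryption output from the step above was saved to a local file, the current_key_label value can be pulled out of it for reuse when writing scf-config-values.yaml during the restore. A minimal sketch; the helper name and file path are hypothetical:

```shell
# Illustrative helper: extract the current_key_label value from a saved
# copy of the database_encryption configuration block.
current_key_label() {
  awk '/current_key_label:/ {print $2; exit}' "$1"
}

# Example: current_key_label /tmp/database_encryption.yml
```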

21.2.4 Performing a Raw Data Restore

Important
Important: Ensure Access to the Correct Deployment

Working with multiple Kubernetes clusters simultaneously can be confusing. Ensure you are communicating with the desired cluster by setting $KUBECONFIG correctly.

Perform the following steps to restore your backed up data to the target uaa deployment.

  1. Recreate the UAA database on the mysql pod:

    tux > kubectl exec -t mysql-0 --namespace uaa -- bash -c \
      "/var/vcap/packages/mariadb/bin/mysql \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      -e 'drop database uaadb; create database uaadb;'";
  2. Restore the UAA database on the mysql pod:

    tux > kubectl exec --stdin mysql-0 --namespace uaa -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysql \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      uaadb' < /tmp/uaadb-src.sql

Perform the following steps to restore your backed up data to the target scf deployment.

  1. The target scf cluster needs to be deployed with the correct database encryption key(s) set in your scf-config-values.yaml before data can be restored. How the encryption key(s) are prepared in your scf-config-values.yaml depends on the result of Step 5 in Section 21.2.3, “Performing a Raw Data Backup”.

    • If current_key_label was set, use the current_key_label obtained as the value of CC_DB_CURRENT_KEY_LABEL, and define all the keys listed under keys under CC_DB_ENCRYPTION_KEYS. See the following example scf-config-values.yaml:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key_1
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key_1: "<key_goes_here>"
          migrated_key_2: "<key_goes_here>"
    • If current_key_label was not set, create one for the new cluster through scf-config-values.yaml and set it to the $DB_ENCRYPTION_KEY value from the old cluster. In this example, migrated_key is the new current_key_label created:

      env:
        CC_DB_CURRENT_KEY_LABEL: migrated_key
      
      secrets:
        CC_DB_ENCRYPTION_KEYS:
          migrated_key: "OLD_CLUSTER_DB_ENCRYPTION_KEY"
  2. Deploy a non-high-availability configuration of scf and wait until all pods are ready before proceeding.

  3. In the ccdb-src.sql file created earlier, replace the domain name of the source deployment with the domain name of the target deployment.

    tux > sed --in-place 's/old-example.com/new-example.com/g' /tmp/ccdb-src.sql
  4. Stop the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods:

    tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
      kubectl exec --stdin --tty --namespace scf $n -- bash -l -c 'monit stop all'
    done
  5. Copy the blobstore-src.tgz archive to the blobstore pod:

    tux > kubectl cp /tmp/blobstore-src.tgz scf/blobstore-0:/.
  6. Restore the contents of the archive created during the backup process to the blobstore pod:

    tux > kubectl exec --stdin --tty --namespace scf blobstore-0 -- bash -l -c 'monit stop all && sleep 10 && rm -rf /var/vcap/store/shared/* && tar xvf blobstore-src.tgz && monit start all && rm blobstore-src.tgz'
  7. Recreate the CCDB on the mysql pod:

    tux > kubectl exec mysql-0 --namespace scf -- bash -c \
      "/var/vcap/packages/mariadb/bin/mysql \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      -e 'drop database ccdb; create database ccdb;'"
  8. Restore the CCDB on the mysql pod:

    tux > kubectl exec --stdin mysql-0 --namespace scf -- bash -c \
      '/var/vcap/packages/mariadb/bin/mysql \
      --defaults-file=/var/vcap/jobs/mysql/config/mylogin.cnf \
      --socket /var/vcap/sys/run/pxc-mysql/mysqld.sock \
      ccdb' < /tmp/ccdb-src.sql
  9. Start the monit services on the api-group-0, cc-worker-0, and cc-clock-0 pods:

    tux > for n in api-group-0 cc-worker-0 cc-clock-0; do
      kubectl exec --stdin --tty --namespace scf $n -- bash -l -c 'monit start all'
    done
  10. If your old cluster did not have current_key_label defined, perform a key rotation. Otherwise, a key rotation is not necessary.

    1. Run the rotation for the encryption keys:

      tux > kubectl exec --namespace scf api-group-0 -- bash -c \
      "source /var/vcap/jobs/cloud_controller_ng/bin/ruby_version.sh; \
      export CLOUD_CONTROLLER_NG_CONFIG=/var/vcap/jobs/cloud_controller_ng/config/cloud_controller_ng.yml; \
      cd /var/vcap/packages/cloud_controller_ng/cloud_controller_ng; \
      bundle exec rake rotate_cc_database_key:perform"
    2. Restart the api-group pod.

      tux > kubectl delete pod api-group-0 --namespace scf --force --grace-period=0
  11. Perform a cf restage APP_NAME for each existing application to ensure its existing data is updated with the new encryption key.

  12. The data restore is now complete. Run some cf commands, such as cf apps, cf marketplace, or cf services, and verify data from the old cluster is returned.
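If domain or route data looks wrong after the restore, one thing to re-check is whether the earlier sed rewrite caught every occurrence of the source domain in the dump. The verify_rewrite helper below is an illustrative sketch, not part of the product:

```shell
# Illustrative check: verify that no references to the source domain
# remain in the rewritten CCDB dump.
verify_rewrite() {
  if grep -q "$1" "$2"; then
    echo "WARNING: $1 still present in $2" >&2
    return 1
  fi
  echo "no references to $1 remain"
}

# Example: verify_rewrite old-example.com /tmp/ccdb-src.sql
```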
