Applies to SUSE OpenStack Cloud 9

17 Backup and Restore

The following sections cover backup and restore operations. Before installing your cloud, there are several things you must do so that you achieve the backup and recovery results you need. SUSE OpenStack Cloud comes with playbooks and procedures to recover the control plane from various disaster scenarios.

As of SUSE OpenStack Cloud 9, Freezer (a distributed backup, restore, and disaster recovery service) is no longer supported; backup and restore are manual operations.

Consider Section 17.2, “Enabling Backups to a Remote Server”, so that your backups survive even if you lose the cloud servers they were taken from.

The following features are supported:

  • File system backup using a point-in-time snapshot.

  • Strong encryption: AES-256-CFB.

  • MariaDB database backup with LVM snapshot.

  • Restoring your data from a previous backup.

  • Low storage requirement: backups are stored as compressed files.

  • Flexible backup (both incremental and differential).

  • Data is archived in GNU Tar format for file-based incremental backup and restore.

  • When a key is provided, OpenSSL is used to encrypt data (AES-256-CFB).

17.1 Manual Backup Overview

This section covers manual backup and some restore processes. Full documentation for restore operations is in Section 15.2, “Unplanned System Maintenance”. To back up outside the cluster, refer to Section 17.2, “Enabling Backups to a Remote Server”. Backups of the following types of resources are covered:

  • Cloud Lifecycle Manager Data.  All important information on the Cloud Lifecycle Manager

  • MariaDB database that is part of the Control Plane.  The MariaDB database contains most of the data needed to restore services. MariaDB supports full backup and recovery for all services. Logging data in Elasticsearch is not backed up. swift objects are not backed up because of the redundant nature of swift.

  • swift Rings used in the swift storage deployment.  swift rings are backed up so that you can recover more quickly than rebuilding with swift. swift can rebuild the rings without this backup data, but automatically rebuilding the rings is slower than restoring from a backup.

  • Audit Logs.  Audit Logs are backed up to provide retrospective information and statistical data for performance and security purposes.

The following services will be backed up. Specifically, the data needed to restore the services is backed up. This includes databases and configuration-related files.

Important

Data content for some services is not backed up, as indicated below.

  • ceilometer. There is no backup of metrics data.

  • cinder. There is no backup of the volumes.

  • glance. There is no backup of the images.

  • heat

  • horizon

  • keystone

  • neutron

  • nova. There is no backup of the images.

  • swift. There is no backup of the objects. swift has its own high availability and redundancy. swift rings are backed up. Although swift can rebuild the rings itself, restoring from backup is faster.

  • Operations Console

  • monasca. There is no backup of the metrics.

17.2 Enabling Backups to a Remote Server

We recommend that you set up a remote server to store your backups, so that you can restore the control plane nodes. This may be necessary if you lose all of your control plane nodes at the same time.

Important

A remote backup server must be set up before proceeding.

You do not have to restore from the remote server if only one or two control plane nodes are lost. In that case, the control planes can be recovered from the data on a remaining control plane node following the restore procedures in Section 15.2.3.2, “Recovering the Control Plane”.

17.2.1 Securing your SSH backup server

You can do the following to harden an SSH server:

  • Disable root login

  • Move SSH to a non-default port (the default SSH port is 22)

  • Disable password login (only allow RSA keys)

  • Disable SSH v1

  • Authorize Secure File Transfer Protocol (SFTP) only for the designated backup user (disable SSH shell)

  • Firewall SSH traffic to ensure it comes from the SUSE OpenStack Cloud address range

  • Install a Fail2Ban solution

  • Restrict users who are allowed to SSH

  • Additional suggestions are available online
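As an illustration, most of the points above can be expressed in an sshd_config fragment. The values below (port number, user name, chroot path) are examples, not settings mandated by SUSE OpenStack Cloud:

```
# Example /etc/ssh/sshd_config fragment (illustrative values)
Port 2222                        # non-default SSH port
Protocol 2                       # disable SSH v1
PermitRootLogin no               # disable root login
PasswordAuthentication no        # keys only, no password login
AllowUsers backupuser            # restrict which users may SSH
Match User backupuser
    ForceCommand internal-sftp   # SFTP only, no interactive shell
    ChrootDirectory /mnt/backups
```

Restart the SSH service after editing so the changes take effect.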

Remove the key pair generated earlier on the backup server; the only file needed there is .ssh/authorized_keys. You can remove the .ssh/id_rsa and .ssh/id_rsa.pub files, but be sure to save a copy of them elsewhere first.

17.2.2 General tips

  • Provide adequate space in the directory that is used for backup.

  • Monitor the space left on that directory.

  • Keep the system up to date on that server.

17.3 Manual Backup and Restore Procedures

Each backup requires the following steps:

  1. Create a snapshot.

  2. Mount the snapshot.

  3. Generate a TAR archive and save it.

  4. Unmount and delete the snapshot.

17.3.1 Cloud Lifecycle Manager Data Backup

The following procedure is used for each of the BACKUP_TARGETS listed below. Incremental backup instructions follow the full backup procedure. For both full and incremental backups, the last step of the procedure is to unmount and delete the snapshot after the TAR archive has been created and saved. A new snapshot must be created every time a backup is created.

Procedure 17.1: Manual Backup Setup
  1. Create a snapshot on the Cloud Lifecycle Manager in ardana-vg, the volume group where all Cloud Lifecycle Manager data is stored.

    ardana > sudo lvcreate --size 2G --snapshot --permission r \
    --name lvm_clm_snapshot /dev/ardana-vg/root
    Note

    If you have stored additional data or files in your ardana-vg directory, you may need more space than the 2G indicated for the size parameter. In this situation, create a preliminary TAR archive with the tar command on the directory before creating a snapshot, and set the snapshot --size parameter larger than the size of that archive.

  2. Mount the snapshot

    ardana > sudo mkdir /var/tmp/clm_snapshot
    ardana > sudo mount -o ro /dev/ardana-vg/lvm_clm_snapshot /var/tmp/clm_snapshot
  3. Generate a TAR archive with an appropriate BACKUP_TAR_ARCHIVE_NAME.tar.gz backup file for each of the following BACKUP_TARGETS. (For incremental backups, use the modified tar commands in Section 17.3.1.1 instead.)

    Backup Targets

    • home

    • ssh

    • shadow

    • passwd

    • group

    The backup TAR archive should contain only the necessary data; nothing extra. Some of the archives will be stored as directories, others as files. The backup commands are slightly different for each type.

    If the BACKUP_TARGET is a directory, append that directory to /var/tmp/clm_snapshot to form the TARGET_DIR. If the BACKUP_TARGET is a file, append its parent directory to /var/tmp/clm_snapshot instead.

    In the commands that follow, replace BACKUP_TARGET with the appropriate BACKUP_PATH (replacement table is below).

    ardana > sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek \
    --ignore-failed-read --file BACKUP_TAR_ARCHIVE_NAME.tar.gz -C \
    /var/tmp/clm_snapshotTARGET_DIR|BACKUP_TARGET_WITHOUT_LEADING_DIR
    • If BACKUP_TARGET is a directory, replace TARGET_DIR with BACKUP_PATH.

      For example, where BACKUP_PATH=/etc/ssh/ (a directory):

      ardana > sudo tar --create -z --warning=none --no-check-device \
      --one-file-system --preserve-permissions --same-owner --seek \
      --ignore-failed-read --file ssh.tar.gz -C /var/tmp/clm_snapshot/etc/ssh .
    • If BACKUP_TARGET is a file (not a directory), replace TARGET_DIR with the parent directory of BACKUP_PATH.

      For example, where BACKUP_PATH=/etc/passwd (a file):

      ardana > sudo tar --create -z --warning=none --no-check-device \
      --one-file-system --preserve-permissions --same-owner --seek \
      --ignore-failed-read --file passwd.tar.gz -C /var/tmp/clm_snapshot/etc passwd
  4. Save the TAR archive to the remote server.

    ardana > scp TAR_ARCHIVE USER@REMOTE_SERVER:REMOTE_PATH
  5. Use the following commands to unmount and delete a snapshot.

    ardana > sudo umount -l -f /var/tmp/clm_snapshot; rm -rf /var/tmp/clm_snapshot
    ardana > sudo lvremove -f /dev/ardana-vg/lvm_clm_snapshot
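The directory-versus-file distinction in step 3 can be captured in a small helper. This is only a sketch (the tar_args function name is made up here), but it reproduces the two -C forms used in the examples above:

```shell
#!/bin/sh
# Sketch: compute the tar -C directory and archive member for a backup
# target. TYPE is "dir" or "file", matching Table 17.1; the snapshot is
# assumed to be mounted at /var/tmp/clm_snapshot as in step 2.
tar_args() {
    type=$1; path=$2
    snap=/var/tmp/clm_snapshot
    if [ "$type" = dir ]; then
        # Directories: -C into the directory itself, archive "." .
        printf '%s %s\n' "${snap}${path%/}" "."
    else
        # Files: -C into the parent directory, archive the file name.
        printf '%s %s\n' "${snap}$(dirname "$path")" "$(basename "$path")"
    fi
}

tar_args dir  /etc/ssh/    # prints: /var/tmp/clm_snapshot/etc/ssh .
tar_args file /etc/passwd  # prints: /var/tmp/clm_snapshot/etc passwd
```

The two printed pairs are exactly the `-C DIR MEMBER` arguments used in the ssh.tar.gz and passwd.tar.gz examples.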

The table below shows Cloud Lifecycle Manager backup_targets and their respective backup_paths.

Table 17.1: Cloud Lifecycle Manager Backup Paths

backup_name        backup_path

home_backup        /var/lib/ardana (directory)
etc_ssh_backup     /etc/ssh/ (directory)
shadow_backup      /etc/shadow (file)
passwd_backup      /etc/passwd (file)
group_backup       /etc/group (file)

17.3.1.1 Cloud Lifecycle Manager Incremental Backup

Incremental backups require a meta file. If you use the incremental backup option, a meta file must be included in the tar command in the initial backup and whenever you do an incremental backup. A copy of the original meta file should be stored in each backup. The meta file is used to determine the incremental changes from the previous backup, so it is rewritten with each incremental backup.

Versions are useful for incremental backup because they provide a way to differentiate between each backup. Versions are included in the tar command.

Every incremental backup requires creating and mounting a separate snapshot. After the TAR archive is created, the snapshot is unmounted and deleted.

To prepare for incremental backup, follow the steps in Procedure 17.1, “Manual Backup Setup” with the following differences in the commands for generating a tar archive.

  • First time full backup

    ardana > sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek \
    --ignore-failed-read --listed-incremental=PATH_TO_YOUR_META \
    --file BACKUP_TAR_ARCHIVE_NAME.tar.gz -C \
    /var/tmp/clm_snapshotTARGET_DIR|BACKUP_TARGET_WITHOUT_LEADING_DIR

    For example, where BACKUP_PATH=/etc/ssh/

    ardana > sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek --ignore-failed-read \
    --listed-incremental=mysshMeta --file ssh.tar.gz -C \
    /var/tmp/clm_snapshot/etc/ssh .
  • Incremental backup

    ardana > sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek \
    --ignore-failed-read --listed-incremental=PATH_TO_YOUR_META \
    --file BACKUP_TAR_ARCHIVE_NAME_VERSION.tar.gz -C \
    /var/tmp/clm_snapshotTARGET_DIR|BACKUP_TARGET_WITHOUT_LEADING_DIR

    For example, where BACKUP_PATH=/etc/ssh/:

    ardana > sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek --ignore-failed-read \
    --listed-incremental=mysshMeta --file \
    ssh_v1.tar.gz -C \
    /var/tmp/clm_snapshot/etc/ssh .

After creating an incremental backup, use the following commands to unmount and delete a snapshot.

ardana > sudo umount -l -f /var/tmp/clm_snapshot; rm -rf /var/tmp/clm_snapshot
ardana > sudo lvremove -f /dev/ardana-vg/lvm_clm_snapshot
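The meta-file mechanics can be rehearsed safely outside the snapshot workflow. The sketch below uses a scratch directory instead of the LVM snapshot (GNU tar assumed; all paths are illustrative). The first run writes a full archive and creates the meta file; the second run, against the same meta file, captures only the change:

```shell
#!/bin/sh
# Demonstrate --listed-incremental with scratch data.
set -e
work=$(mktemp -d)
mkdir "$work/data"
echo one > "$work/data/a.txt"

# Level-0 (full) backup: creates the meta file.
tar --create -z --listed-incremental="$work/meta" \
    --file "$work/full.tar.gz" -C "$work/data" .

# Change the data, then take a versioned incremental backup
# against the same meta file.
echo two > "$work/data/b.txt"
tar --create -z --listed-incremental="$work/meta" \
    --file "$work/incr_v1.tar.gz" -C "$work/data" .

# The incremental archive contains the new file but not a.txt.
tar -tzf "$work/incr_v1.tar.gz"
```

This is why the meta file must travel with the backups: without it, the next incremental run cannot tell what changed.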

17.3.1.2 Encryption

When a key is provided, OpenSSL is used to encrypt data (AES-256-CFB). Backup files can be encrypted with the following command:

ardana > sudo openssl enc -aes-256-cfb -pass file:ENCRYPT_PASS_FILE_PATH -in \
YOUR_BACKUP_TAR_ARCHIVE_NAME.tar.gz -out YOUR_BACKUP_TAR_ARCHIVE_NAME.tar.gz.enc

For example, using the ssh.tar.gz generated above:

ardana > sudo openssl enc -aes-256-cfb -pass file:myEncFile -in ssh.tar.gz -out ssh.tar.gz.enc
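Decryption, which the text does not show, uses the same cipher and key file with the -d flag. A round-trip check on throwaway data (all paths illustrative):

```shell
#!/bin/sh
# Encrypt and decrypt a sample archive with the same key file.
set -e
work=$(mktemp -d)
echo "my-secret-passphrase" > "$work/myEncFile"
echo payload > "$work/f"
tar -czf "$work/ssh.tar.gz" -C "$work" f

# Encrypt, as in the command above ...
openssl enc -aes-256-cfb -pass "file:$work/myEncFile" \
    -in "$work/ssh.tar.gz" -out "$work/ssh.tar.gz.enc"
# ... and decrypt with -d.
openssl enc -d -aes-256-cfb -pass "file:$work/myEncFile" \
    -in "$work/ssh.tar.gz.enc" -out "$work/ssh.dec.tar.gz"

cmp "$work/ssh.tar.gz" "$work/ssh.dec.tar.gz" && echo "round trip OK"
```

Keep the key file itself off the backup server; without it the .enc archives cannot be recovered.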

17.3.2 MariaDB Database Backup

When backing up MariaDB, the following process must be performed on all nodes in the cluster. It is similar to the backup procedure above for the Cloud Lifecycle Manager (see Procedure 17.1, “Manual Backup Setup”). The difference is the addition of SQL commands, which are run with the create_db_snapshot.yml playbook.

Create the create_db_snapshot.yml file in ~/scratch/ansible/next/ardana/ansible/ on the deployer with the following content:

- hosts: FND-MDB
  vars:
    snapshot_name: lvm_mysql_snapshot
    lvm_target: /dev/ardana-vg/mysql

  tasks:
    - name: Cleanup old snapshots
      become: yes
      shell: |
        lvremove -f /dev/ardana-vg/{{ snapshot_name }}
      ignore_errors: True

    - name: Create snapshot
      become: yes
      shell: |
        lvcreate --size 2G --snapshot --permission r --name {{ snapshot_name }} {{ lvm_target }}
      register: snapshot_st
      ignore_errors: True

    - fail:
        msg: "Failed to create snapshot on {{ lvm_target }}"
      when: snapshot_st.rc != 0
Note

Verify the validity of the lvm_target variable (which refers to the actual database LVM volume) before proceeding with the backup.

Doing the MariaDB backup

  1. We recommend storing the MariaDB version with your backup. The following command saves the MariaDB version to the file MARIADB_VER.

    mysql -V | grep -oE '[^ ,]+-MariaDB' > MARIADB_VER
  2. Open a MariaDB client session on all controllers.

  3. Run the following command to take a global read lock on all controllers, and keep the MariaDB session open.

    >> FLUSH TABLES WITH READ LOCK;
  4. Open a new terminal and run the create_db_snapshot.yml playbook created above.

    ardana > cd ~/scratch/ansible/next/ardana/ansible/
    ardana > ansible-playbook -i hosts/verb_hosts create_db_snapshot.yml
  5. Go back to the open MariaDB session and run the command to flush the lock on all controllers.

    >> UNLOCK TABLES;
  6. Mount the snapshot

    dbnode>> mkdir /var/tmp/mysql_snapshot
    dbnode>> sudo mount -o ro /dev/ardana-vg/lvm_mysql_snapshot  /var/tmp/mysql_snapshot
  7. On each database node, generate a TAR archive with an appropriate BACKUP_TAR_ARCHIVE_NAME.tar.gz backup file for the BACKUP_TARGET.

    The backup_name is mysql_backup and the backup_path (BACKUP_TARGET) is /var/lib/mysql/.

    dbnode>> sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek --ignore-failed-read \
    --file mydb.tar.gz -C /var/tmp/mysql_snapshot/var/lib/mysql .
  8. Unmount and delete the MariaDB snapshot on each database node.

    dbnode>> sudo  umount -l -f /var/tmp/mysql_snapshot; \
    sudo rm -rf /var/tmp/mysql_snapshot; sudo lvremove -f /dev/ardana-vg/lvm_mysql_snapshot
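The version capture in step 1 can be sanity-checked without a live server by feeding a typical mysql -V banner through the same kind of filter (the sample banner below is illustrative; on a controller you would pipe the real mysql -V output instead):

```shell
#!/bin/sh
# Extract the "VERSION-MariaDB" token from a sample `mysql -V` banner.
sample='mysql  Ver 15.1 Distrib 10.2.36-MariaDB, for Linux (x86_64)'
echo "$sample" | grep -oE '[^ ,]+-MariaDB'   # prints: 10.2.36-MariaDB
```

Storing this token with the backup lets you confirm, at restore time, that the target MariaDB version matches the one the data was taken from.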

17.3.2.1 Incremental MariaDB Database Backup

Incremental backups require a meta file. If you use the incremental backup option, a meta file must be included in the tar command in the initial backup and whenever you do an incremental backup. A copy of the original meta file should be stored in each backup. The meta file is used to determine the incremental changes from the previous backup, so it is rewritten with each incremental backup.

Versions are useful for incremental backup because they provide a way to differentiate between each backup. Versions are included in the tar command.

To prepare for incremental backup, follow the steps in the previous section except for the tar commands. Incremental backup tar commands must have additional information.

  • First time MariaDB database full backup

    dbnode>> sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek \
    --ignore-failed-read --listed-incremental=PATH_TO_YOUR_DB_META \
    --file mydb.tar.gz -C /var/tmp/mysql_snapshot/var/lib/mysql .

    For example, where BACKUP_PATH=/var/lib/mysql/:

    dbnode>> sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek --ignore-failed-read \
    --listed-incremental=mydbMeta --file mydb.tar.gz -C \
    /var/tmp/mysql_snapshot/var/lib/mysql .
  • Incremental MariaDB database backup

    dbnode>> sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek \
    --ignore-failed-read --listed-incremental=PATH_TO_YOUR_META \
    --file BACKUP_TAR_ARCHIVE_NAME_VERSION.tar.gz -C \
    /var/tmp/mysql_snapshotTARGET_DIR

    For example, where BACKUP_PATH=/var/lib/mysql/:

    dbnode>> sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek --ignore-failed-read \
    --listed-incremental=mydbMeta --file \
    mydb_v1.tar.gz -C /var/tmp/mysql_snapshot/var/lib/mysql .

After creating and saving the TAR archive, unmount and delete the snapshot.

dbnode>> sudo  umount -l -f /var/tmp/mysql_snapshot; \
sudo rm -rf /var/tmp/mysql_snapshot; sudo lvremove -f /dev/ardana-vg/lvm_mysql_snapshot

17.3.2.2 MariaDB Database Encryption

  1. Encrypt your MariaDB database backup following the instructions in Section 17.3.1.2, “Encryption”.

  2. Upload your BACKUP_TARGET.tar.gz to your preferred remote server.

17.3.3 swift Ring Backup

The following procedure is used to back up swift rings. It is similar to the Cloud Lifecycle Manager backup (see Procedure 17.1, “Manual Backup Setup”).

Important

The steps must be performed only on the swift ring building server (for more information, see Section 18.6.2.4, “Identifying the Swift Ring Building Server”).

The backup_name is swift_builder_dir_backup and the backup_path is /etc/swiftlm/.

  1. Create a snapshot

    ardana > sudo lvcreate --size 2G --snapshot --permission r \
    --name lvm_root_snapshot /dev/ardana-vg/root
  2. Mount the snapshot

    ardana > mkdir /var/tmp/root_snapshot; sudo mount -o ro \
    /dev/ardana-vg/lvm_root_snapshot /var/tmp/root_snapshot
  3. Create the TAR archive

    ardana > sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek --ignore-failed-read \
    --file swring.tar.gz -C /var/tmp/root_snapshot/etc/swiftlm .
  4. Upload your swring.tar.gz TAR archive to your preferred remote server.

  5. Unmount and delete the snapshot

    ardana > sudo umount -l -f /var/tmp/root_snapshot; sudo rm -rf \
    /var/tmp/root_snapshot; sudo lvremove -f /dev/ardana-vg/lvm_root_snapshot

17.3.4 Audit Log Backup and Restore

17.3.4.1 Audit Log Backup

The following procedure is used to back up Audit Logs. It is similar to the Cloud Lifecycle Manager backup (see Procedure 17.1, “Manual Backup Setup”). The steps must be performed on all nodes; there will be a backup TAR archive for each node. Before performing the following steps, run through Section 13.2.7.2, “Enable Audit Logging”.

The backup_name is audit_log_backup and the backup_path is /var/audit.

  1. Create a snapshot

    ardana > sudo lvcreate --size 2G --snapshot --permission r --name \
    lvm_root_snapshot /dev/ardana-vg/root
  2. Mount the snapshot

    ardana > mkdir /var/tmp/root_snapshot; sudo mount -o ro \
    /dev/ardana-vg/lvm_root_snapshot /var/tmp/root_snapshot
  3. Create the TAR archive

    ardana > sudo tar --create -z --warning=none --no-check-device \
    --one-file-system --preserve-permissions --same-owner --seek --ignore-failed-read \
    --file audit.tar.gz -C /var/tmp/root_snapshot/var/audit .
  4. Upload your audit.tar.gz TAR archive to your preferred remote server.

  5. Unmount and delete a snapshot

    ardana > sudo umount -l -f /var/tmp/root_snapshot; sudo rm -rf \
    /var/tmp/root_snapshot; sudo lvremove -f /dev/ardana-vg/lvm_root_snapshot

17.3.4.2 Audit Logs Restore

Restore the Audit Logs backup with the following commands

  1. Retrieve the Audit Logs TAR archive

  2. Extract the TAR archive to the proper backup location

    ardana > sudo tar -z --incremental --extract --ignore-zeros \
    --warning=none --overwrite --directory /var/audit/  -f audit.tar.gz

17.4 Full Disaster Recovery Test

17.4.1 High Level View of the Recovery Process

  1. Back up the control plane using the manual backup procedure

  2. Back up the Cassandra Database

  3. Re-install Controller 1 with the SUSE OpenStack Cloud ISO

  4. Use manual restore steps to recover deployment data (and model)

  5. Re-install SUSE OpenStack Cloud on Controllers 1, 2, 3

  6. Recover the backup of the MariaDB database

  7. Recover the Cassandra Database

  8. Verify the recovery with tests

17.4.2 Description of the testing environment

The testing environment is similar to the Entry Scale model.

It uses five servers: three Control Nodes and two Compute Nodes.

Each Control Node has three disks. The first is reserved for the system; the others are used for swift.

Note

For this Disaster Recovery test, data has been saved on disks 2 and 3 of the swift controllers, which allows swift objects to be restored during the recovery. If these disks were also wiped, swift data would be lost, but the procedure would not change. The only difference is that glance images would be lost and would have to be uploaded again.

Unless specified otherwise, all commands should be executed on controller 1, which is also the deployer node.

17.4.3 Pre-Disaster testing

In order to validate the procedure after recovery, we need to create some workloads.

  1. Source the service credential file

    ardana > source ~/service.osrc
  2. Copy an image to the platform and create a glance image with it. In this example, Cirros is used

    ardana > openstack image create --disk-format raw --container-format \
    bare --public --file ~/cirros-0.3.5-x86_64-disk.img cirros
  3. Create a network

    ardana > openstack network create test_net
  4. Create a subnet

    ardana > openstack subnet create --network 07c35d11-13f9-41d4-8289-fa92147b1d44 --subnet-range 192.168.42.0/24 test_subnet
  5. Create some instances

    ardana > openstack server create server_1 --image 411a0363-7f4b-4bbc-889c-b9614e2da52e --flavor m1.small --nic net-id=07c35d11-13f9-41d4-8289-fa92147b1d44
    ardana > openstack server create server_2 --image 411a03...e2da52e --flavor m1.small --nic net-id=07c35d...147b1d44
    ardana > openstack server create server_3 --image 411a03...e2da52e --flavor m1.small --nic net-id=07c35d...147b1d44
    ardana > openstack server create server_4 --image 411a03...e2da52e --flavor m1.small --nic net-id=07c35d...147b1d44
    ardana > openstack server create server_5 --image 411a03...e2da52e --flavor m1.small --nic net-id=07c35d...147b1d44
    ardana > openstack server list
  6. Create containers and objects

    ardana > openstack object create container_1 ~/service.osrc
    var/lib/ardana/service.osrc

    ardana > openstack object create container_1 ~/backup.osrc
    var/lib/ardana/backup.osrc

    ardana > openstack object list container_1
    var/lib/ardana/backup.osrc
    var/lib/ardana/service.osrc

17.4.4 Preparation of the test backup server

17.4.4.1 Preparation to store backups

In this example, backups are stored on the server 192.168.69.132

  1. Connect to the backup server

  2. Create the user

    root # useradd BACKUPUSER --create-home --home-dir /mnt/backups/
  3. Switch to that user

    root # su BACKUPUSER
  4. Create the SSH keypair

    backupuser > ssh-keygen -t rsa
    > # Leave the default for the first question and do not set any passphrase
    > Generating public/private rsa key pair.
    > Enter file in which to save the key (/mnt/backups//.ssh/id_rsa):
    > Created directory '/mnt/backups//.ssh'.
    > Enter passphrase (empty for no passphrase):
    > Enter same passphrase again:
    > Your identification has been saved in /mnt/backups//.ssh/id_rsa
    > Your public key has been saved in /mnt/backups//.ssh/id_rsa.pub
    > The key fingerprint is:
    > a9:08:ae:ee:3c:57:62:31:d2:52:77:a7:4e:37:d1:28 backupuser@padawan-ccp-c0-m1-mgmt
    > The key's randomart image is:
    > +---[RSA 2048]----+
    > |          o      |
    > |   . . E + .     |
    > |  o . . + .      |
    > | o +   o +       |
    > |  + o o S .      |
    > | . + o o         |
    > |  o + .          |
    > |.o .             |
    > |++o              |
    > +-----------------+
  5. Add the public key to the list of the keys authorized to connect to that user on this server

    backupuser > cat /mnt/backups/.ssh/id_rsa.pub >> /mnt/backups/.ssh/authorized_keys
  6. Print the private key. This will be used for the backup configuration (ssh_credentials.yml file)

    backupuser > cat /mnt/backups/.ssh/id_rsa
    
    > -----BEGIN RSA PRIVATE KEY-----
    > MIIEogIBAAKCAQEAvjwKu6f940IVGHpUj3ffl3eKXACgVr3L5s9UJnb15+zV3K5L
    > BZuor8MLvwtskSkgdXNrpPZhNCsWSkryJff5I335Jhr/e5o03Yy+RqIMrJAIa0X5
    > ...
    > ...
    > ...
    > iBKVKGPhOnn4ve3dDqy3q7fS5sivTqCrpaYtByJmPrcJNjb2K7VMLNvgLamK/AbL
    > qpSTZjicKZCCl+J2+8lrKAaDWqWtIjSUs29kCL78QmaPOgEvfsw=
    > -----END RSA PRIVATE KEY-----

17.4.4.2 Preparation to store Cassandra backups

In this example, backups will be stored on the server 192.168.69.132, in the /mnt/backups/cassandra_backups/ directory.

  1. Create a directory on the backup server to store Cassandra backups.

    backupuser > mkdir /mnt/backups/cassandra_backups
  2. Copy the private SSH key from the backup server to all controller nodes.

    backupuser > scp /mnt/backups/.ssh/id_rsa ardana@CONTROLLER:~/.ssh/id_rsa_backup

    Replace CONTROLLER with each control node, for example doc-cp1-c1-m1-mgmt, doc-cp1-c1-m2-mgmt, and so on.

  3. Log in to each controller node and copy the private SSH key to the .ssh directory of the root user.

    ardana >  sudo cp /var/lib/ardana/.ssh/id_rsa_backup /root/.ssh/
  4. Verify that you can SSH to the backup server as backupuser using the private key.

    root # ssh -i ~/.ssh/id_rsa_backup backupuser@192.168.69.132

17.4.5 Perform Backups for disaster recovery test

17.4.5.1 Execute backup of Cassandra

Create the following cassandra-backup-extserver.sh script on all controller nodes.

root # cat > ~/cassandra-backup-extserver.sh << EOF
#!/bin/sh

# backup user
BACKUP_USER=backupuser
# backup server
BACKUP_SERVER=192.168.69.132
# backup directory
BACKUP_DIR=/mnt/backups/cassandra_backups/

# Setup variables
DATA_DIR=/var/cassandra/data/data
NODETOOL=/usr/bin/nodetool

# example: cassandra-snp-2018-06-26-1003
SNAPSHOT_NAME=cassandra-snp-\$(date +%F-%H%M)
HOST_NAME=\$(/bin/hostname)_

# Take a snapshot of Cassandra database
\$NODETOOL snapshot -t \$SNAPSHOT_NAME monasca

# Collect a list of directories that make up the snapshot
SNAPSHOT_DIR_LIST=\$(find \$DATA_DIR -type d -name \$SNAPSHOT_NAME)
for d in \$SNAPSHOT_DIR_LIST
  do
    # copy snapshot directories to external server
    rsync -avR -e "ssh -i /root/.ssh/id_rsa_backup" \$d \$BACKUP_USER@\$BACKUP_SERVER:\$BACKUP_DIR/\$HOST_NAME\$SNAPSHOT_NAME
  done

\$NODETOOL clearsnapshot monasca
EOF
root # chmod +x ~/cassandra-backup-extserver.sh

Execute the following steps on all controller nodes.

Note

The ~/cassandra-backup-extserver.sh script should be executed on all three controller nodes at the same time (within seconds of each other) for a successful backup.

  1. Edit the ~/cassandra-backup-extserver.sh script

    Set BACKUP_USER and BACKUP_SERVER to the desired backup user (for example, backupuser) and desired backup server (for example, 192.168.69.132), respectively.

    BACKUP_USER=backupuser
    BACKUP_SERVER=192.168.69.132
    BACKUP_DIR=/mnt/backups/cassandra_backups/
  2. Execute ~/cassandra-backup-extserver.sh on all controller nodes, which are also Cassandra nodes.

    root # ~/cassandra-backup-extserver.sh
    
    Requested creating snapshot(s) for [monasca] with snapshot name [cassandra-snp-2018-06-28-0251] and options {skipFlush=false}
    Snapshot directory: cassandra-snp-2018-06-28-0251
    sending incremental file list
    created directory /mnt/backups/cassandra_backups//doc-cp1-c1-m1-mgmt_cassandra-snp-2018-06-28-0251
    /var/
    /var/cassandra/
    /var/cassandra/data/
    /var/cassandra/data/data/
    /var/cassandra/data/data/monasca/
    
    ...
    ...
    ...
    
    /var/cassandra/data/data/monasca/measurements-e29033d0488d11e8bdabc32666406af1/snapshots/cassandra-snp-2018-06-28-0306/mc-72-big-Summary.db
    /var/cassandra/data/data/monasca/measurements-e29033d0488d11e8bdabc32666406af1/snapshots/cassandra-snp-2018-06-28-0306/mc-72-big-TOC.txt
    /var/cassandra/data/data/monasca/measurements-e29033d0488d11e8bdabc32666406af1/snapshots/cassandra-snp-2018-06-28-0306/schema.cql
    sent 173,691 bytes  received 531 bytes  116,148.00 bytes/sec
    total size is 171,378  speedup is 0.98
    Requested clearing snapshot(s) for [monasca]
  3. Verify the Cassandra backup directory on the backup server.

    backupuser > ls -alt /mnt/backups/cassandra_backups
    total 16
    drwxr-xr-x 4 backupuser users 4096 Jun 28 03:06 .
    drwxr-xr-x 3 backupuser users 4096 Jun 28 03:06 doc-cp1-c1-m2-mgmt_cassandra-snp-2018-06-28-0306
    drwxr-xr-x 3 backupuser users 4096 Jun 28 02:51 doc-cp1-c1-m1-mgmt_cassandra-snp-2018-06-28-0251
    drwxr-xr-x 8 backupuser users 4096 Jun 27 20:56 ..
    
    backupuser > du -shx /mnt/backups/cassandra_backups/*
    6.2G    /mnt/backups/cassandra_backups/doc-cp1-c1-m1-mgmt_cassandra-snp-2018-06-28-0251
    6.3G    /mnt/backups/cassandra_backups/doc-cp1-c1-m2-mgmt_cassandra-snp-2018-06-28-0306

17.4.5.2 Execute backup of SUSE OpenStack Cloud

  1. Back up the Cloud Lifecycle Manager using the procedure at Section 17.3.1, “Cloud Lifecycle Manager Data Backup”

  2. Back up the MariaDB database using the procedure at Section 17.3.2, “MariaDB Database Backup”

  3. Back up swift rings using the procedure at Section 17.3.3, “swift Ring Backup”

17.4.5.2.1 Restore the first controller
  1. Log in to the Cloud Lifecycle Manager.

  2. Retrieve the Cloud Lifecycle Manager backups that were created with Section 17.3.1, “Cloud Lifecycle Manager Data Backup”. There are multiple backups; directories are handled differently than files.

  3. Extract the TAR archives for each of the backed-up locations.

    ardana > sudo tar -z --incremental --extract --ignore-zeros \
    --warning=none --overwrite --directory RESTORE_TARGET \
    -f BACKUP_TARGET.tar.gz

    For example, with a directory such as BACKUP_TARGET=/etc/ssh/

    ardana > sudo tar -z --incremental --extract --ignore-zeros \
    --warning=none --overwrite --directory /etc/ssh/ -f ssh.tar.gz

    With a file such as BACKUP_TARGET=/etc/passwd

    ardana > sudo tar -z --incremental --extract --ignore-zeros --warning=none --overwrite --directory /etc/ -f passwd.tar.gz
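The behavior of --incremental on extract can be rehearsed with scratch data before touching a real controller. Note that restoring an incremental archive also replays deletions recorded in it (GNU tar assumed; all paths below are illustrative):

```shell
#!/bin/sh
# Create a full and an incremental backup, then restore both in
# order with --incremental, as in the extract commands above.
set -e
work=$(mktemp -d)
mkdir "$work/src" "$work/restore"
echo one > "$work/src/a.txt"
tar --create -z --listed-incremental="$work/meta" \
    --file "$work/full.tar.gz" -C "$work/src" .

echo two > "$work/src/b.txt"
rm "$work/src/a.txt"              # this deletion is recorded
tar --create -z --listed-incremental="$work/meta" \
    --file "$work/v1.tar.gz" -C "$work/src" .

# Restore level 0 first, then level 1.
tar -z --incremental --extract --ignore-zeros --warning=none \
    --overwrite --directory "$work/restore" -f "$work/full.tar.gz"
tar -z --incremental --extract --ignore-zeros --warning=none \
    --overwrite --directory "$work/restore" -f "$work/v1.tar.gz"

ls "$work/restore"                # b.txt only; a.txt was deleted
```

Because extraction replays the recorded state, incremental archives must always be restored in the order they were created, after the full backup.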
17.4.5.2.2 Re-deployment of controllers 1, 2 and 3
  1. Change back to the default ardana user.

  2. Run the cobbler-deploy.yml playbook.

    ardana > cd ~/openstack/ardana/ansible
    ardana > ansible-playbook -i hosts/localhost cobbler-deploy.yml
  3. Run the bm-reimage.yml playbook limited to the second and third controllers.

    ardana > ansible-playbook -i hosts/localhost bm-reimage.yml -e nodelist=controller2,controller3

    controller2 and controller3 are the cobbler names of the second and third controller nodes. Use the bm-power-status.yml playbook to check the cobbler names of these nodes.

  4. Run the site.yml playbook limited to the three controllers and localhost—in this example, doc-cp1-c1-m1-mgmt, doc-cp1-c1-m2-mgmt, doc-cp1-c1-m3-mgmt, and localhost

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts site.yml --limit \
    doc-cp1-c1-m1-mgmt,doc-cp1-c1-m2-mgmt,doc-cp1-c1-m3-mgmt,localhost
17.4.5.2.3 Restore Databases
17.4.5.2.3.1 Restore MariaDB database
  1. Log in to the first controller node.

  2. Retrieve the MariaDB backup that was created with Section 17.3.2, “MariaDB Database Backup”.

  3. Create a temporary directory and extract the TAR archive (for example, mydb.tar.gz).

    ardana > mkdir /tmp/mysql_restore; sudo tar -z --incremental \
    --extract --ignore-zeros --warning=none --overwrite --directory /tmp/mysql_restore/ \
    -f mydb.tar.gz
  4. Verify that the files have been restored on the controller.

    ardana > sudo du -shx /tmp/mysql_restore/*
    16K     /tmp/mysql_restore/aria_log.00000001
    4.0K    /tmp/mysql_restore/aria_log_control
    3.4M    /tmp/mysql_restore/barbican
    8.0K    /tmp/mysql_restore/ceilometer
    4.2M    /tmp/mysql_restore/cinder
    2.9M    /tmp/mysql_restore/designate
    129M    /tmp/mysql_restore/galera.cache
    2.1M    /tmp/mysql_restore/glance
    4.0K    /tmp/mysql_restore/grastate.dat
    4.0K    /tmp/mysql_restore/gvwstate.dat
    2.6M    /tmp/mysql_restore/heat
    752K    /tmp/mysql_restore/horizon
    4.0K    /tmp/mysql_restore/ib_buffer_pool
    76M     /tmp/mysql_restore/ibdata1
    128M    /tmp/mysql_restore/ib_logfile0
    128M    /tmp/mysql_restore/ib_logfile1
    12M     /tmp/mysql_restore/ibtmp1
    16K     /tmp/mysql_restore/innobackup.backup.log
    313M    /tmp/mysql_restore/keystone
    716K    /tmp/mysql_restore/magnum
    12M     /tmp/mysql_restore/mon
    8.3M    /tmp/mysql_restore/monasca_transform
    0       /tmp/mysql_restore/multi-master.info
    11M     /tmp/mysql_restore/mysql
    4.0K    /tmp/mysql_restore/mysql_upgrade_info
    14M     /tmp/mysql_restore/nova
    4.4M    /tmp/mysql_restore/nova_api
    14M     /tmp/mysql_restore/nova_cell0
    3.6M    /tmp/mysql_restore/octavia
    208K    /tmp/mysql_restore/opsconsole
    38M     /tmp/mysql_restore/ovs_neutron
    8.0K    /tmp/mysql_restore/performance_schema
    24K     /tmp/mysql_restore/tc.log
    4.0K    /tmp/mysql_restore/test
    8.0K    /tmp/mysql_restore/winchester
    4.0K    /tmp/mysql_restore/xtrabackup_galera_info
  5. Stop SUSE OpenStack Cloud services on the three controllers (using the hostnames of the controllers in your configuration).

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts ardana-stop.yml --limit \
    doc-cp1-c1-m1-mgmt,doc-cp1-c1-m2-mgmt,doc-cp1-c1-m3-mgmt,localhost
  6. Delete the files in the mysql directory and copy the restored backup to that directory.

    root # cd /var/lib/mysql/
    root # rm -rf ./*
    root # cp -pr /tmp/mysql_restore/* ./
  7. Switch back to the ardana user when the copy is finished.
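Before the contents of /var/lib/mysql are deleted in step 6, it is worth confirming that the restore directory is complete. The following is a hypothetical sanity check, not part of the product tooling; the list of database directories is an assumption based on the listing in step 4, so adjust it to the services deployed in your cloud.

```shell
# Hypothetical check: verify that the expected per-service database
# directories exist under the extracted restore directory.
check_restore() {
  dir="$1"
  rc=0
  for db in keystone nova glance cinder heat barbican; do
    if [ ! -d "$dir/$db" ]; then
      echo "missing database directory: $db" >&2
      rc=1
    fi
  done
  return $rc
}

# Example:
#   check_restore /tmp/mysql_restore && echo "restore directory looks complete"
```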

17.4.5.2.3.2 Restore Cassandra database Edit source

Create a script called cassandra-restore-extserver.sh on all controller nodes:

root # cat > ~/cassandra-restore-extserver.sh << EOF
#!/bin/sh

# backup user
BACKUP_USER=backupuser
# backup server
BACKUP_SERVER=192.168.69.132
# backup directory
BACKUP_DIR=/mnt/backups/cassandra_backups/

# Setup variables
DATA_DIR=/var/cassandra/data/data
NODETOOL=/usr/bin/nodetool

HOST_NAME=\$(/bin/hostname)_

#Get snapshot name from command line.
if [ -z "\$*"  ]
then
  echo "usage \$0 <snapshot to restore>"
  exit 1
fi
SNAPSHOT_NAME=\$1

# restore
rsync -av -e "ssh -i /root/.ssh/id_rsa_backup" \$BACKUP_USER@\$BACKUP_SERVER:\$BACKUP_DIR/\$HOST_NAME\$SNAPSHOT_NAME/ /

# set ownership of newly restored files
chown -R cassandra:cassandra \$DATA_DIR/monasca/*

# Get a list of snapshot directories that have files to be restored.
RESTORE_LIST=\$(find \$DATA_DIR -type d -name \$SNAPSHOT_NAME)

# Use RESTORE_LIST to move the snapshot files back into the database directories.
for d in \$RESTORE_LIST
do
  cd \$d
  mv * ../..
  KEYSPACE=\$(pwd | rev | cut -d '/' -f4 | rev)
  TABLE_NAME=\$(pwd | rev | cut -d '/' -f3 |rev | cut -d '-' -f1)
  \$NODETOOL refresh \$KEYSPACE \$TABLE_NAME
done
cd
# Cleanup snapshot directories
\$NODETOOL clearsnapshot \$KEYSPACE
EOF
root # chmod +x ~/cassandra-restore-extserver.sh

Execute the following steps on all controller nodes.

  1. Edit the ~/cassandra-restore-extserver.sh script.

    Set BACKUP_USER and BACKUP_SERVER to the desired backup user (for example, backupuser) and the desired backup server (for example, 192.168.69.132), respectively.

    BACKUP_USER=backupuser
    BACKUP_SERVER=192.168.69.132
    BACKUP_DIR=/mnt/backups/cassandra_backups/
  2. Execute ~/cassandra-restore-extserver.sh SNAPSHOT_NAME.

    Find SNAPSHOT_NAME in the listing of /mnt/backups/cassandra_backups. All directories there have the format HOST_SNAPSHOT_NAME.

    ardana > ls -alt /mnt/backups/cassandra_backups
    total 16
    drwxr-xr-x 4 backupuser users 4096 Jun 28 03:06 .
    drwxr-xr-x 3 backupuser users 4096 Jun 28 03:06 doc-cp1-c1-m2-mgmt_cassandra-snp-2018-06-28-0306
    root # ~/cassandra-restore-extserver.sh cassandra-snp-2018-06-28-0306
    
    receiving incremental file list
    ./
    var/
    var/cassandra/
    var/cassandra/data/
    var/cassandra/data/data/
    var/cassandra/data/data/monasca/
    var/cassandra/data/data/monasca/alarm_state_history-e6bbdc20488d11e8bdabc32666406af1/
    var/cassandra/data/data/monasca/alarm_state_history-e6bbdc20488d11e8bdabc32666406af1/snapshots/
    var/cassandra/data/data/monasca/alarm_state_history-e6bbdc20488d11e8bdabc32666406af1/snapshots/cassandra-snp-2018-06-28-0306/
    var/cassandra/data/data/monasca/alarm_state_history-e6bbdc20488d11e8bdabc32666406af1/snapshots/cassandra-snp-2018-06-28-0306/manifest.json
    var/cassandra/data/data/monasca/alarm_state_history-e6bbdc20488d11e8bdabc32666406af1/snapshots/cassandra-snp-2018-06-28-0306/mc-37-big-CompressionInfo.db
    var/cassandra/data/data/monasca/alarm_state_history-e6bbdc20488d11e8bdabc32666406af1/snapshots/cassandra-snp-2018-06-28-0306/mc-37-big-Data.db
    ...
    ...
    ...
    /usr/bin/nodetool clearsnapshot monasca
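Because every backup directory follows the HOST_SNAPSHOT_NAME pattern described above, the available snapshot names for a host can be derived mechanically. This helper is a hypothetical sketch (its name and interface are not part of the product); it strips the hostname prefix from each matching directory.

```shell
# Hypothetical helper: list snapshot names available for one host by
# stripping the "HOSTNAME_" prefix from the backup directory names.
list_snapshots() {
  backup_dir="$1"
  host="$2"
  for d in "$backup_dir/${host}_"*; do
    [ -d "$d" ] || continue
    # Snapshot name starts after the hostname and the "_" separator.
    basename "$d" | cut -c "$((${#host} + 2))-"
  done
}

# Example: list_snapshots /mnt/backups/cassandra_backups "$(hostname)"
```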
17.4.5.2.3.3 Restart SUSE OpenStack Cloud services Edit source
  1. Restart the MariaDB database.

    On the deployer node, execute the galera-bootstrap.yml playbook. It determines the log sequence number, bootstraps the main node, and starts the database cluster.

    ardana > cd ~/scratch/ansible/next/ardana/ansible
    ardana > ansible-playbook -i hosts/verb_hosts galera-bootstrap.yml

    If this process fails to recover the database cluster, refer to Section 15.2.3.1.2, “Recovering the MariaDB Database”.

  2. Restart SUSE OpenStack Cloud services on the three controllers as in the following example.

    ardana > ansible-playbook -i hosts/verb_hosts ardana-start.yml \
    --limit doc-cp1-c1-m1-mgmt,doc-cp1-c1-m2-mgmt,doc-cp1-c1-m3-mgmt,localhost
  3. Reconfigure SUSE OpenStack Cloud

    ardana > ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml
17.4.5.2.4 Post restore testing Edit source
  1. Source the service credential file

    ardana > source ~/service.osrc
  2. swift

    ardana > openstack container list
    container_1
    volumebackups
    
    ardana > openstack object list container_1
    var/lib/ardana/backup.osrc
    var/lib/ardana/service.osrc
    
    ardana > openstack object save container_1 /tmp/backup.osrc
  3. neutron

    ardana > openstack network list
    +--------------------------------------+---------------------+--------------------------------------+
    | ID                                   | Name                | Subnets                              |
    +--------------------------------------+---------------------+--------------------------------------+
    | 07c35d11-13f9-41d4-8289-fa92147b1d44 | test-net            | 02d5ca3b-1133-4a74-a9ab-1f1dc2853ec8 |
    +--------------------------------------+---------------------+--------------------------------------+
  4. glance

    ardana > openstack image list
    +--------------------------------------+----------------------+--------+
    | ID                                   | Name                 | Status |
    +--------------------------------------+----------------------+--------+
    | 411a0363-7f4b-4bbc-889c-b9614e2da52e | cirros-0.4.0-x86_64  | active |
    +--------------------------------------+----------------------+--------+
    ardana > openstack image save --file /tmp/cirros 411a0363-7f4b-4bbc-889c-b9614e2da52e
    ardana > ls -lah /tmp/cirros
    -rw-r--r-- 1 ardana ardana 12716032 Jul  2 20:52 /tmp/cirros
  5. nova

    ardana > openstack server list
    
    ardana > openstack server create server_6 --image 411a0363-7f4b-4bbc-889c-b9614e2da52e  --flavor m1.small --nic net-id=07c35d11-13f9-41d4-8289-fa92147b1d44
    +-------------------------------------+------------------------------------------------------------+
    | Field                               | Value                                                      |
    +-------------------------------------+------------------------------------------------------------+
    | OS-DCF:diskConfig                   | MANUAL                                                     |
    | OS-EXT-AZ:availability_zone         |                                                            |
    | OS-EXT-SRV-ATTR:host                | None                                                       |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                       |
    | OS-EXT-SRV-ATTR:instance_name       |                                                            |
    | OS-EXT-STS:power_state              | NOSTATE                                                    |
    | OS-EXT-STS:task_state               | scheduling                                                 |
    | OS-EXT-STS:vm_state                 | building                                                   |
    | OS-SRV-USG:launched_at              | None                                                       |
    | OS-SRV-USG:terminated_at            | None                                                       |
    | accessIPv4                          |                                                            |
    | accessIPv6                          |                                                            |
    | addresses                           |                                                            |
    | adminPass                           | iJBoBaj53oUd                                               |
    | config_drive                        |                                                            |
    | created                             | 2018-07-02T21:02:01Z                                       |
    | flavor                              | m1.small (2)                                               |
    | hostId                              |                                                            |
    | id                                  | ce7689ff-23bf-4fe9-b2a9-922d4aa9412c                       |
    | image                               | cirros-0.4.0-x86_64 (411a0363-7f4b-4bbc-889c-b9614e2da52e) |
    | key_name                            | None                                                       |
    | name                                | server_6                                                   |
    | progress                            | 0                                                          |
    | project_id                          | cca416004124432592b2949a5c5d9949                           |
    | properties                          |                                                            |
    | security_groups                     | name='default'                                             |
    | status                              | BUILD                                                      |
    | updated                             | 2018-07-02T21:02:01Z                                       |
    | user_id                             | 8cb1168776d24390b44c3aaa0720b532                           |
    | volumes_attached                    |                                                            |
    +-------------------------------------+------------------------------------------------------------+
    
    ardana > openstack server list
    +--------------------------------------+----------+--------+---------------------------------+---------------------+-----------+
    | ID                                   | Name     | Status | Networks                        | Image               | Flavor    |
    +--------------------------------------+----------+--------+---------------------------------+---------------------+-----------+
    | ce7689ff-23bf-4fe9-b2a9-922d4aa9412c | server_6 | ACTIVE | n1=1.1.1.8                      | cirros-0.4.0-x86_64 | m1.small  |
    +--------------------------------------+----------+--------+---------------------------------+---------------------+-----------+
    
    ardana > openstack server delete ce7689ff-23bf-4fe9-b2a9-922d4aa9412c
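The read-only checks above can also be run in sequence from a single wrapper that stops at the first failure, which is convenient when repeating the verification after each restore attempt. This is a hypothetical sketch: the function name is illustrative, and it assumes the openstack CLI is on PATH and that credentials have already been sourced from ~/service.osrc.

```shell
# Hypothetical wrapper running the read-only post-restore checks in order.
run_smoke_checks() {
  for args in "container list" "network list" "image list" "server list"; do
    # $args is intentionally unquoted so it splits into subcommand words.
    if ! openstack $args; then
      echo "smoke check failed: openstack $args" >&2
      return 1
    fi
  done
  echo "all smoke checks passed"
}

# Example: source ~/service.osrc && run_smoke_checks
```

Write operations such as server create and delete are left as manual steps, since they change state and need cleanup.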