Applies to SUSE OpenStack Cloud 9

8 Managing Block Storage

Information about managing and configuring the Block Storage service.

8.1 Managing Block Storage using Cinder

SUSE OpenStack Cloud uses the OpenStack cinder service to manage Block Storage volumes. This includes creating volumes, attaching volumes to and detaching them from nova instances, creating volume snapshots, and configuring volumes.
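
As an illustration, the most common of these operations can be driven from the OpenStack CLI (a minimal sketch; the volume, snapshot, and instance names are placeholders, and suitable credentials must be sourced first):

    # Create a 10 GB volume (name and size are examples)
    openstack volume create --size 10 my-volume

    # Create a snapshot of the volume
    openstack volume snapshot create --volume my-volume my-snapshot

    # Attach the volume to a nova instance, then detach it again
    openstack server add volume my-instance my-volume
    openstack server remove volume my-instance my-volume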

SUSE OpenStack Cloud supports multiple storage back ends for block storage volumes and backup datastore configuration.

8.1.1 Setting Up Multiple Block Storage Back-ends

SUSE OpenStack Cloud supports setting up multiple block storage back-ends and multiple volume types.

Whether you have a single block storage back-end or multiple back-ends defined in your cinder.conf.j2 file, you can create one or more volume types using the specific attributes associated with each back-end. Details are given in the configuration section for each supported back-end type (for 3PAR, see Section 35.1, “Configuring for 3PAR Block Storage Backend”).
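
For illustration, a cinder.conf.j2 defining two back-ends might contain stanzas along these lines (a sketch only; the section names, back-end names, and drivers are examples, and each driver requires additional options not shown here):

    [DEFAULT]
    enabled_backends = 3par_FC,lvm

    # Each back-end gets its own stanza; volume_backend_name is the value
    # that a volume type's extra specs reference later.
    [3par_FC]
    volume_backend_name = 3par_FC
    volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver

    [lvm]
    volume_backend_name = lvm
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver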

8.1.2 Creating a Volume Type for your Volumes

Creating volume types allows you to create standard specifications for your volumes.

Volume types are used to specify a standard Block Storage back-end and a collection of extra specifications for your volumes. This allows an administrator to offer users a variety of options while simplifying the process of creating volumes.

The tasks involved in this process are described in the following sections.

8.1.2.1 Create a Volume Type for your Volumes

The default volume type will be thin provisioned and will have no fault tolerance (RAID 0). You should configure cinder to fully provision volumes, and you may want to configure fault tolerance. Follow the instructions below to create a new volume type that is fully provisioned and fault tolerant:

Perform the following steps to create a volume type using the horizon GUI:

  1. Log in to the horizon dashboard.

  2. Ensure that you are scoped to your admin Project. Then under the Admin menu in the navigation pane, click on Volumes under the System subheading.

  3. Select the Volume Types tab and then click the Create Volume Type button to display a dialog box.

  4. Enter a unique name for the volume type and then click the Create Volume Type button to complete the action.

The newly created volume type will be displayed in the Volume Types list confirming its creation.
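
If you prefer the command line, the same result can be achieved with the OpenStack CLI (a sketch; the type name my-full-provisioned is an example):

    # Create the volume type (run with admin credentials)
    openstack volume type create my-full-provisioned

    # Verify that it appears in the list of volume types
    openstack volume type list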

Important

You must set a default_volume_type in cinder.conf.j2, whether it is the default type or one you have created. For more information, see Section 35.1.4, “Configure 3PAR FC as a Cinder Backend”.
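
For example, the corresponding setting in cinder.conf.j2 might look like the following (the volume type name my-full-provisioned is an example):

    [DEFAULT]
    default_volume_type = my-full-provisioned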

8.1.2.2 Associate the Volume Type to the Back-end

After the volume type(s) have been created, you can assign extra specification attributes to the volume types. Each Block Storage back-end option has unique attributes that can be used.

To map a volume type to a back-end, do the following:

  1. Log in to the horizon dashboard.

  2. Ensure that you are scoped to your admin Project (for more information, see Section 5.10.7, “Scope Federated User to Domain”). Then under the Admin menu in the navigation pane, click on Volumes under the System subheading.

  3. Click the Volume Types tab to list the volume types.

  4. In the Actions column of the volume type you created earlier, click the drop-down and select View Extra Specs, which brings up the Volume Type Extra Specs options.

  5. Click the Create button on the Volume Type Extra Specs screen.

  6. In the Key field, enter one of the keys from the table in the next section. In the Value field, enter its corresponding value. Then click the Create button to create the extra volume type spec.

Once the volume type is mapped to a back-end, you can create volumes with this volume type.
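
The same mapping can also be performed from the OpenStack CLI (a sketch; the type name my-full-provisioned and back-end name 3par_FC are examples and must match what you defined in cinder.conf.j2):

    # Map the volume type to a back-end using the volume_backend_name key
    openstack volume type set --property volume_backend_name=3par_FC my-full-provisioned

    # Create a volume of that type to confirm the mapping works
    openstack volume create --type my-full-provisioned --size 10 my-volume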

8.1.2.3 Extra Specification Options for 3PAR

3PAR supports volume creation with additional attributes. These attributes can be specified using the extra specs options for your volume type. The administrator is expected to define appropriate extra specs for the 3PAR volume type as per the guidelines provided at http://docs.openstack.org/liberty/config-reference/content/hp-3par-supported-ops.html.

The following cinder Volume Type extra-specs options enable control over the 3PAR storage provisioning type:

Key: volume_backend_name
Value: <volume backend name>
Description: The name of the back-end to which you want to associate the volume type, which you also specified earlier in the cinder.conf.j2 file.

Key: hp3par:provisioning (optional)
Value: thin, full, or dedup
Description: Controls the 3PAR storage provisioning type used for volumes of this type.

For more information, see Section 35.1, “Configuring for 3PAR Block Storage Backend”.
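
For example, to have volumes of this type fully provisioned on 3PAR, you could set the key from the CLI as well (a sketch; the type name is the example used earlier):

    # Request full provisioning for volumes created with this type
    openstack volume type set --property hp3par:provisioning=full my-full-provisioned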

8.1.3 Managing cinder Volume and Backup Services

Important: Use Only When Needed

If the host running the cinder-volume service fails for any reason, it should be restarted as quickly as possible. Often, the host running cinder services also runs high availability (HA) services such as MariaDB and RabbitMQ. These HA services are at risk while one of the nodes in the cluster is down. If it will take a significant amount of time to recover the failed node, then you may migrate the cinder-volume service and its backup service to one of the other controller nodes. When the node has been recovered, you should migrate the cinder-volume service and its backup service to the original (default) node.

The cinder-volume service and its backup service migrate as a pair. If you migrate the cinder-volume service, its backup service will also be migrated.

8.1.3.1 Migrating the cinder-volume service

The following steps will migrate the cinder-volume service and its backup service.

  1. Log in to the Cloud Lifecycle Manager node.

  2. Determine the host index number for each of your control plane nodes; these numbers are used in a later step. They can be obtained by running this playbook:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts cinder-show-volume-hosts.yml

    Here is an example snippet showing the output for a single three-node control plane; the host index numbers appear in the Index fields:

    TASK: [_CND-CMN | show_volume_hosts | Show cinder Volume hosts index and hostname] ***
    ok: [ardana-cp1-c1-m1] => (item=(0, 'ardana-cp1-c1-m1')) => {
        "item": [
            0,
            "ardana-cp1-c1-m1"
        ],
        "msg": "Index 0 Hostname ardana-cp1-c1-m1"
    }
    ok: [ardana-cp1-c1-m1] => (item=(1, 'ardana-cp1-c1-m2')) => {
        "item": [
            1,
            "ardana-cp1-c1-m2"
        ],
        "msg": "Index 1 Hostname ardana-cp1-c1-m2"
    }
    ok: [ardana-cp1-c1-m1] => (item=(2, 'ardana-cp1-c1-m3')) => {
        "item": [
            2,
            "ardana-cp1-c1-m3"
        ],
        "msg": "Index 2 Hostname ardana-cp1-c1-m3"
    }
  3. Locate the control plane fact file for the control plane you need to migrate the service from. It will be located in the following directory:

    /etc/ansible/facts.d/

    These fact files use the following naming convention:

    cinder_volume_run_location_<control_plane_name>.fact
  4. Edit the fact file to contain the host index number of the control plane node you wish to migrate the cinder-volume services to. For example, if they currently reside on your first controller node, host index 0, and you wish to migrate them to your second controller, change the value in the fact file to 1 (a worked example follows these steps).

  5. If you are using data encryption on your Cloud Lifecycle Manager, ensure that you have included the encryption key in your environment variables. For more information, see Chapter 10, Encryption of Passwords and Sensitive Data.

    export HOS_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
  6. After you have edited the control plane fact file, run the cinder volume migration playbook for the control plane nodes involved in the migration. At minimum, this includes the node on which the cinder-volume service will be started and the node on which it will be stopped:

    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts cinder-migrate-volume.yml --limit=<limit_pattern1,limit_pattern2>
    Note

    <limit_pattern> is the pattern used to limit the hosts that are selected to those within a specific control plane. For example, with the nodes in the snippet shown above, --limit=ardana-cp1-c1-m1,ardana-cp1-c1-m2

  7. The playbook summary should report no errors. You may disregard informational messages such as:

    msg: Marking ardana_notify_cinder_restart_required to be cleared from the fact cache
  8. Once your maintenance or other tasks are completed, ensure that you migrate the cinder-volume services back to their original node by repeating these same steps.
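
As a worked example, the full sequence might look like the following (a sketch; the control plane name ccp, the hostnames, and the move from host index 0 to host index 1 are illustrative):

    # 1. Show the current host indexes
    cd ~/scratch/ansible/next/ardana/ansible
    ansible-playbook -i hosts/verb_hosts cinder-show-volume-hosts.yml

    # 2. Inspect the fact file, then edit it so it contains the target index (1)
    cat /etc/ansible/facts.d/cinder_volume_run_location_ccp.fact
    sudo vi /etc/ansible/facts.d/cinder_volume_run_location_ccp.fact

    # 3. Run the migration, limited to the source and destination nodes
    ansible-playbook -i hosts/verb_hosts cinder-migrate-volume.yml \
        --limit=ardana-cp1-c1-m1,ardana-cp1-c1-m2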