8 Managing Block Storage #
Information about managing and configuring the Block Storage service.
8.1 Managing Block Storage using Cinder #
SUSE OpenStack Cloud Block Storage volume operations use the OpenStack cinder service to manage storage volumes, which includes creating volumes, attaching/detaching volumes to nova instances, creating volume snapshots, and configuring volumes.
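For example, assuming the standard OpenStack command-line client is available and you are authenticated against your cloud, typical volume operations look like the following (the volume, instance, and snapshot names are placeholders):
openstack volume create --size 10 my-volume                   # create a 10 GB volume
openstack server add volume my-instance my-volume             # attach the volume to a nova instance
openstack volume snapshot create --volume my-volume my-snap   # create a snapshot of the volume
openstack server remove volume my-instance my-volume          # detach the volume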
SUSE OpenStack Cloud supports the following storage back ends for block storage volumes and backup datastore configuration:
Volumes
SUSE Enterprise Storage; for more information, see Section 35.3, “SUSE Enterprise Storage Integration”.
3PAR FC or iSCSI; for more information, see Section 35.1, “Configuring for 3PAR Block Storage Backend”.
Backup
swift
8.1.1 Setting Up Multiple Block Storage Back-ends #
SUSE OpenStack Cloud supports setting up multiple block storage back-ends and multiple volume types.
Whether you have a single block storage back-end or multiple back-ends defined in your cinder.conf.j2 file, you can create one or more volume types using the specific attributes associated with each back-end. Details on how to do that for each of the supported back-end types are given in the sections referenced above (Section 35.3 for SUSE Enterprise Storage and Section 35.1 for 3PAR); a minimal configuration sketch follows.
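As an illustrative sketch only (the back-end section names and driver choices are placeholders, and driver-specific options such as credentials are omitted), a cinder.conf.j2 with two back-ends might contain entries along these lines:
[DEFAULT]
enabled_backends = 3par-fc,ses
default_volume_type = my-default-type

[3par-fc]
volume_backend_name = 3par-fc
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver

[ses]
volume_backend_name = ses
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes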
8.1.2 Creating a Volume Type for your Volumes #
Creating volume types allows you to define standard specifications for your volumes.
Volume types are used to specify a standard Block Storage back-end and a collection of extra specifications for your volumes. This allows an administrator to give users a variety of options while simplifying the process of creating volumes.
The tasks involved in this process are described in the following subsections:
8.1.2.1 Create a Volume Type for your Volumes #
The default volume type will be thin provisioned and will have no fault tolerance (RAID 0). You should configure cinder to fully provision volumes, and you may want to configure fault tolerance. Follow the instructions below to create a new volume type that is fully provisioned and fault tolerant:
Perform the following steps to create a volume type using the horizon GUI:
Log in to the horizon dashboard.
Ensure that you are scoped to your admin Project. Then, under the Admin menu in the navigation pane, click Volumes.
Select the Volume Types tab and then click the Create Volume Type button to display a dialog box.
Enter a unique name for the volume type and then click the Create Volume Type button to complete the action.
The newly created volume type will be displayed in the Volume Types list, confirming its creation.
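If you prefer the command line, the equivalent operation can be performed with the OpenStack client; the type name below is only an example:
openstack volume type create my-fully-provisioned-type
openstack volume type list        # verify that the new type appears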
You must set a default_volume_type
in
cinder.conf.j2
, whether it is
default_type
or one you have created. For more
information, see Section 35.1.4, “Configure 3PAR FC as a Cinder Backend”.
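As a minimal, hypothetical example (the type name is a placeholder), the setting in cinder.conf.j2 would look like:
[DEFAULT]
default_volume_type = my-fully-provisioned-type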
8.1.2.2 Associate the Volume Type to the Back-end #
After the volume type(s) have been created, you can assign extra specification attributes to the volume types. Each Block Storage back-end option has unique attributes that can be used.
To map a volume type to a back-end, do the following:
Log in to the horizon dashboard.
Ensure that you are scoped to your admin Project (for more information, see Section 5.10.7, “Scope Federated User to Domain”). Then, under the Admin menu in the navigation pane, click Volumes.
Click the Volume Types tab to list the volume types.
In the Actions column of the Volume Type you created earlier, click the drop-down option and select View Extra Specs, which will bring up the extra specs options.
Click the Create button on the Volume Type Extra Specs screen.
In the Key field, enter one of the key values from the table in the next section. In the Value box, enter its corresponding value. Once you have completed that, click the Create button to create the extra volume type specs.
Once the volume type is mapped to a back-end, you can create volumes with this volume type.
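The same mapping can be done from the command line by setting the extra spec as a property on the volume type; the back-end and type names below are examples only:
openstack volume type set --property volume_backend_name=3par-fc my-fully-provisioned-type
openstack volume create --size 10 --type my-fully-provisioned-type my-3par-volume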
8.1.2.3 Extra Specification Options for 3PAR #
3PAR supports volume creation with additional attributes. These attributes can be specified using the extra specs options for your volume type. The administrator is expected to define appropriate extra specs for the 3PAR volume type per the guidelines provided at http://docs.openstack.org/liberty/config-reference/content/hp-3par-supported-ops.html.
The following cinder Volume Type extra-specs options enable control over the 3PAR storage provisioning type:
Key | Value | Description |
---|---|---|
volume_backend_name | volume backend name | The name of the back-end to which you want to associate the volume type, which you also specified earlier in the cinder.conf.j2 file. |
hp3par:provisioning (optional) | thin, full, or dedup | The 3PAR provisioning type to use for volumes of this volume type. |
For more information, see Section 35.1, “Configuring for 3PAR Block Storage Backend”.
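For instance, a 3PAR-backed, fully provisioned volume type could be defined from the command line roughly as follows (the type and back-end names are placeholders; see Section 35.1 for the authoritative configuration):
openstack volume type create 3par-full
openstack volume type set --property volume_backend_name=3par-fc --property hp3par:provisioning=full 3par-full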
8.1.3 Managing cinder Volume and Backup Services #
If the host running the cinder-volume
service fails for
any reason, it should be restarted as quickly as possible. Often, the host
running cinder services also runs high availability (HA) services
such as MariaDB and RabbitMQ. These HA services are at risk while one of the
nodes in the cluster is down. If it will take a significant amount of time
to recover the failed node, then you may migrate the
cinder-volume
service and its backup service to one of
the other controller nodes. When the node has been recovered, you should
migrate the cinder-volume
service and its backup service
to the original (default) node.
The cinder-volume
service and its backup service migrate
as a pair. If you migrate the cinder-volume
service, its
backup service will also be migrated.
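To see which controller node is currently hosting the cinder-volume and cinder-backup services, you can, for example, query the Block Storage service list with the OpenStack client:
openstack volume service list     # shows the cinder-volume and cinder-backup hosts, status, and state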
8.1.3.1 Migrating the cinder-volume service #
The following steps will migrate the cinder-volume service and its backup service.
Log in to the Cloud Lifecycle Manager node.
Determine the host index numbers for each of your control plane nodes. These host index numbers will be used in a later step. They can be obtained by running this playbook:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cinder-show-volume-hosts.yml
Here is an example snippet showing the output for a single three-node control plane; the host index numbers appear in the Index fields:
TASK: [_CND-CMN | show_volume_hosts | Show cinder Volume hosts index and hostname] ***
ok: [ardana-cp1-c1-m1] => (item=(0, 'ardana-cp1-c1-m1')) => {
    "item": [
        0,
        "ardana-cp1-c1-m1"
    ],
    "msg": "Index 0 Hostname ardana-cp1-c1-m1"
}
ok: [ardana-cp1-c1-m1] => (item=(1, 'ardana-cp1-c1-m2')) => {
    "item": [
        1,
        "ardana-cp1-c1-m2"
    ],
    "msg": "Index 1 Hostname ardana-cp1-c1-m2"
}
ok: [ardana-cp1-c1-m1] => (item=(2, 'ardana-cp1-c1-m3')) => {
    "item": [
        2,
        "ardana-cp1-c1-m3"
    ],
    "msg": "Index 2 Hostname ardana-cp1-c1-m3"
}
Locate the control plane fact file for the control plane you need to migrate the service from. It will be located in the following directory:
/etc/ansible/facts.d/
These fact files use the following naming convention:
cinder_volume_run_location_<control_plane_name>.fact
Edit the fact file to include the host index number of the control plane node you wish to migrate the cinder-volume services to. For example, if they currently reside on your first controller node, host index 0, and you wish to migrate them to your second controller, you would change the value in the fact file to 1.
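As a hypothetical illustration only (assuming a control plane named cp1 and that the fact file holds just the index value), the change could look like this:
cat /etc/ansible/facts.d/cinder_volume_run_location_cp1.fact   # inspect the current value, for example 0
vi /etc/ansible/facts.d/cinder_volume_run_location_cp1.fact    # change the value to 1, the target host index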
If you are using data encryption on your Cloud Lifecycle Manager, ensure you have included the encryption key in your environment variables. For more information, see Chapter 10, Encryption of Passwords and Sensitive Data.
export HOS_USER_PASSWORD_ENCRYPT_KEY=<encryption key>
After you have edited the control plane fact file, run the cinder volume migration playbook for the control plane nodes involved in the migration. At minimum, this includes the node on which the cinder-volume manager will be started and the node on which it will be stopped:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts cinder-migrate-volume.yml --limit=<limit_pattern1,limit_pattern2>
Note: <limit_pattern> is the pattern used to limit the hosts that are selected to those within a specific control plane. For example, with the nodes in the snippet shown above:
--limit=ardana-cp1-c1-m1,ardana-cp1-c1-m2
Even though the playbook summary reports no errors, it may include informational messages such as the following, which you may disregard:
msg: Marking ardana_notify_cinder_restart_required to be cleared from the fact cache
Once your maintenance or other tasks are completed, ensure that you migrate the cinder-volume services back to their original node using these same steps.