Applies to SUSE OpenStack Cloud 8

7 Block Storage

The OpenStack Block Storage service works through the interaction of a series of daemon processes named cinder-* that reside persistently on the host machine or machines. You can run all the binaries from a single node, or spread across multiple nodes. You can also run them on the same node as other OpenStack services.

To administer the OpenStack Block Storage service, it is helpful to understand a number of concepts. You must make certain choices when you configure the Block Storage service in OpenStack. The bulk of the options come down to two choices - single node or multi-node install. You can read a longer discussion about Storage Decisions in the OpenStack Operations Guide.

OpenStack Block Storage enables you to add extra block-level storage to your OpenStack Compute instances. This service is similar to the Amazon EC2 Elastic Block Storage (EBS) offering.

7.1 Increase Block Storage API service throughput

By default, the Block Storage API service runs in one process. This limits the number of API requests that the Block Storage service can process at any given time. In a production environment, you should increase the Block Storage API throughput by allowing the Block Storage API service to run in as many processes as the machine capacity allows.

Note

The Block Storage API service is named openstack-cinder-api on the following distributions: CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, and SUSE Linux Enterprise. In Ubuntu and Debian distributions, the Block Storage API service is named cinder-api.

To do so, use the Block Storage API service option osapi_volume_workers. This option allows you to specify the number of API service workers (or OS processes) to launch for the Block Storage API service.

To configure this option, open the /etc/cinder/cinder.conf configuration file and set the osapi_volume_workers configuration key to the number of CPU cores/threads on a machine.
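
For example, a minimal cinder.conf sketch, assuming a hypothetical machine with 16 CPU threads:

[DEFAULT]
osapi_volume_workers = 16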

On distributions that include openstack-config, you can configure this by running the following command instead:

# openstack-config --set /etc/cinder/cinder.conf \
  DEFAULT osapi_volume_workers CORES

Replace CORES with the number of CPU cores/threads on a machine.

7.2 Manage volumes

The default OpenStack Block Storage service implementation is an iSCSI solution that uses Logical Volume Manager (LVM) for Linux.

Note

The OpenStack Block Storage service is not a shared storage solution like Network Attached Storage (NAS) or NFS, where you can attach a volume to multiple servers. With the OpenStack Block Storage service, you can attach a volume to only one instance at a time.

The OpenStack Block Storage service also provides drivers that enable you to use several vendors' back-end storage devices in addition to, or instead of, the base LVM implementation.

This high-level procedure shows you how to create and attach a volume to a server instance.

To create and attach a volume to an instance

  1. Configure the OpenStack Compute and the OpenStack Block Storage services through the /etc/cinder/cinder.conf file.

  2. Use the openstack volume create command to create a volume. This command creates a logical volume (LV) in the volume group (VG) cinder-volumes.

  3. Use the openstack server add volume command to attach the volume to an instance. This command creates a unique iSCSI Qualified Name (IQN) that is exposed to the compute node.

    • The compute node, which runs the instance, now has an active iSCSI session and new local storage (usually a /dev/sdX disk).

    • Libvirt uses that local storage as storage for the instance. The instance gets a new disk (usually a /dev/vdX disk).

For this particular walkthrough, one cloud controller runs nova-api, nova-scheduler, nova-objectstore, nova-network and cinder-* services. Two additional compute nodes run nova-compute. The walkthrough uses a custom partitioning scheme that carves out 60 GB of space and labels it as LVM. The network uses the FlatManager and NetworkManager settings for OpenStack Compute.

The network mode does not interfere with OpenStack Block Storage operations, but you must set up networking for Block Storage to work. For details, see Chapter 9, Networking.

To set up Compute to use volumes, ensure that Block Storage is installed along with lvm2. This guide describes how to troubleshoot your installation and back up your Compute volumes.

7.2.1 Boot from volume

In some cases, you can store and run instances from inside volumes. For information, see the Launch an instance from a volume section in the OpenStack End User Guide.

7.2.2 Configure an NFS storage back end

This section explains how to configure OpenStack Block Storage to use NFS storage. You must be able to access the NFS shares from the server that hosts the cinder volume service.

Note

The cinder volume service is named openstack-cinder-volume on the following distributions:

  • CentOS

  • Fedora

  • openSUSE

  • Red Hat Enterprise Linux

  • SUSE Linux Enterprise

In Ubuntu and Debian distributions, the cinder volume service is named cinder-volume.

Configure Block Storage to use an NFS storage back end

  1. Log in as root to the system hosting the cinder volume service.

  2. Create a text file named nfsshares in the /etc/cinder/ directory.

  3. Add an entry to /etc/cinder/nfsshares for each NFS share that the cinder volume service should use for back end storage. Each entry should be a separate line, and should use the following format:

    HOST:SHARE

    Where:

    • HOST is the IP address or host name of the NFS server.

    • SHARE is the absolute path to an existing and accessible NFS share.
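
    For example, a hypothetical entry for an NFS server at 192.168.1.200 exporting /srv/nfs/cinder might look like:

    192.168.1.200:/srv/nfs/cinder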

  4. Set /etc/cinder/nfsshares to be owned by the root user and the cinder group:

    # chown root:cinder /etc/cinder/nfsshares
  5. Set /etc/cinder/nfsshares to be readable by members of the cinder group:

    # chmod 0640 /etc/cinder/nfsshares
  6. Configure the cinder volume service to use the /etc/cinder/nfsshares file created earlier. To do so, open the /etc/cinder/cinder.conf configuration file and set the nfs_shares_config configuration key to /etc/cinder/nfsshares.

    On distributions that include openstack-config, you can configure this by running the following command instead:

    # openstack-config --set /etc/cinder/cinder.conf \
      DEFAULT nfs_shares_config /etc/cinder/nfsshares

    The following distributions include openstack-config:

    • CentOS

    • Fedora

    • openSUSE

    • Red Hat Enterprise Linux

    • SUSE Linux Enterprise

  7. Optionally, provide any additional NFS mount options required in your environment in the nfs_mount_options configuration key of /etc/cinder/cinder.conf. If your NFS shares do not require any additional mount options (or if you are unsure), skip this step.

    On distributions that include openstack-config, you can configure this by running the following command instead:

    # openstack-config --set /etc/cinder/cinder.conf \
      DEFAULT nfs_mount_options OPTIONS

    Replace OPTIONS with the mount options to be used when accessing NFS shares. See the manual page for NFS for more information on available mount options (man nfs).
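
    For instance, to request NFS version 4 (an illustrative choice only; check man nfs for options that suit your environment), the entry in /etc/cinder/cinder.conf might look like:

    nfs_mount_options = vers=4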

  8. Configure the cinder volume service to use the correct volume driver, namely cinder.volume.drivers.nfs.NfsDriver. To do so, open the /etc/cinder/cinder.conf configuration file and set the volume_driver configuration key to cinder.volume.drivers.nfs.NfsDriver.

    On distributions that include openstack-config, you can configure this by running the following command instead:

    # openstack-config --set /etc/cinder/cinder.conf \
      DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
  9. You can now restart the service to apply the configuration.
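
    For example, on SUSE Linux Enterprise, where the service is named openstack-cinder-volume as noted earlier, the restart might look like:

    # service openstack-cinder-volume restart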

    Note

    The nfs_sparsed_volumes configuration key determines whether volumes are created as sparse files and grown as needed or fully allocated up front. The default and recommended value is true, which ensures volumes are initially created as sparse files.

    Setting nfs_sparsed_volumes to false will result in volumes being fully allocated at the time of creation. This leads to increased delays in volume creation.

    However, should you choose to set nfs_sparsed_volumes to false, you can do so directly in /etc/cinder/cinder.conf.

    On distributions that include openstack-config, you can configure this by running the following command instead:

    # openstack-config --set /etc/cinder/cinder.conf \
      DEFAULT nfs_sparsed_volumes false
    Warning

    If a client host has SELinux enabled, the virt_use_nfs boolean should also be enabled if the host requires access to NFS volumes on an instance. To enable this boolean, run the following command as the root user:

    # setsebool -P virt_use_nfs on

    This command also makes the boolean persistent across reboots. Run this command on all client hosts that require access to NFS volumes on an instance. This includes all compute nodes.

7.2.3 Configure a GlusterFS back end

This section explains how to configure OpenStack Block Storage to use GlusterFS as a back end. You must be able to access the GlusterFS shares from the server that hosts the cinder volume service.

Note

The cinder volume service is named openstack-cinder-volume on the following distributions:

  • CentOS

  • Fedora

  • openSUSE

  • Red Hat Enterprise Linux

  • SUSE Linux Enterprise

In Ubuntu and Debian distributions, the cinder volume service is named cinder-volume.

Mounting GlusterFS volumes requires utilities and libraries from the glusterfs-fuse package. This package must be installed on all systems that will access volumes backed by GlusterFS.

Note

The utilities and libraries required for mounting GlusterFS volumes on Ubuntu and Debian distributions are available from the glusterfs-client package instead.

For information on how to install and configure GlusterFS, refer to the Gluster Documentation page.

Configure GlusterFS for OpenStack Block Storage

The GlusterFS server must also be configured accordingly in order to allow OpenStack Block Storage to use GlusterFS shares:

  1. Log in as root to the GlusterFS server.

  2. Set each Gluster volume to use the same UID and GID as the cinder user:

    # gluster volume set VOL_NAME storage.owner-uid CINDER_UID
    # gluster volume set VOL_NAME storage.owner-gid CINDER_GID

    Where:

    • VOL_NAME is the Gluster volume name.

    • CINDER_UID is the UID of the cinder user.

    • CINDER_GID is the GID of the cinder user.

    Note

    The default UID and GID of the cinder user is 165 on most distributions.
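
    For instance, with a hypothetical Gluster volume named myvol and the default ID of 165, the commands might look like:

    # gluster volume set myvol storage.owner-uid 165
    # gluster volume set myvol storage.owner-gid 165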

  3. Configure each Gluster volume to accept libgfapi connections. To do this, set each Gluster volume to allow insecure ports:

    # gluster volume set VOL_NAME server.allow-insecure on
  4. Enable client connections from unprivileged ports. To do this, add the following line to /etc/glusterfs/glusterd.vol:

    option rpc-auth-allow-insecure on
  5. Restart the glusterd service:

    # service glusterd restart

Configure Block Storage to use a GlusterFS back end

After you configure the GlusterFS service, complete these steps:

  1. Log in as root to the system hosting the Block Storage service.

  2. Create a text file named glusterfs in the /etc/cinder/ directory.

  3. Add an entry to /etc/cinder/glusterfs for each GlusterFS share that OpenStack Block Storage should use for back end storage. Each entry should be a separate line, and should use the following format:

    HOST:/VOL_NAME

    Where:

    • HOST is the IP address or host name of the GlusterFS server.

    • VOL_NAME is the name of an existing and accessible volume on the GlusterFS server.

    Optionally, if your environment requires additional mount options for a share, you can add them to the share's entry:

    HOST:/VOL_NAME -o OPTIONS

    Replace OPTIONS with a comma-separated list of mount options.
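
    For example, a hypothetical entry for a GlusterFS server at 192.168.1.200 serving a volume named myvol might look like:

    192.168.1.200:/myvol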

  4. Set /etc/cinder/glusterfs to be owned by the root user and the cinder group:

    # chown root:cinder /etc/cinder/glusterfs
  5. Set /etc/cinder/glusterfs to be readable by members of the cinder group:

    # chmod 0640 /etc/cinder/glusterfs
  6. Configure OpenStack Block Storage to use the /etc/cinder/glusterfs file created earlier. To do so, open the /etc/cinder/cinder.conf configuration file and set the glusterfs_shares_config configuration key to /etc/cinder/glusterfs.

    On distributions that include openstack-config, you can configure this by running the following command instead:

    # openstack-config --set /etc/cinder/cinder.conf \
      DEFAULT glusterfs_shares_config /etc/cinder/glusterfs

    The following distributions include openstack-config:

    • CentOS

    • Fedora

    • openSUSE

    • Red Hat Enterprise Linux

    • SUSE Linux Enterprise

  7. Configure OpenStack Block Storage to use the correct volume driver, namely cinder.volume.drivers.glusterfs.GlusterfsDriver. To do so, open the /etc/cinder/cinder.conf configuration file and set the volume_driver configuration key to cinder.volume.drivers.glusterfs.GlusterfsDriver.

    On distributions that include openstack-config, you can configure this by running the following command instead:

    # openstack-config --set /etc/cinder/cinder.conf \
      DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
  8. You can now restart the service to apply the configuration.

OpenStack Block Storage is now configured to use a GlusterFS back end.

Warning

If a client host has SELinux enabled, the virt_use_fusefs boolean should also be enabled if the host requires access to GlusterFS volumes on an instance. To enable this Boolean, run the following command as the root user:

# setsebool -P virt_use_fusefs on

This command also makes the Boolean persistent across reboots. Run this command on all client hosts that require access to GlusterFS volumes on an instance. This includes all compute nodes.

7.2.4 Configure multiple-storage back ends

When you configure multiple-storage back ends, you can create several back-end storage solutions that serve the same OpenStack Compute configuration. One cinder-volume process is launched for each back end or back-end storage pool.

In a multiple-storage back-end configuration, each back end has a name (volume_backend_name). Several back ends can have the same name. In that case, the scheduler properly decides which back end the volume has to be created in.

The name of the back end is declared as an extra-specification of a volume type (such as, volume_backend_name=LVM). When a volume is created, the scheduler chooses an appropriate back end to handle the request, according to the volume type specified by the user.

7.2.4.1 Enable multiple-storage back ends

To enable multiple-storage back ends, you must set the enabled_backends flag in the cinder.conf file. This flag defines the names (separated by a comma) of the configuration groups for the different back ends: one name is associated with one configuration group for a back end (such as [lvmdriver-1]).

Note

The configuration group name is not related to the volume_backend_name.

Note

After you set the enabled_backends flag on an existing cinder service and restart the Block Storage services, the original host service is replaced with a new host service. The new service appears with a name like host@backend. Use:

$ cinder-manage volume update_host --currenthost CURRENTHOST --newhost CURRENTHOST@BACKEND

to convert current block devices to the new host name.

The options for a configuration group must be defined in the group (or default options are used). All the standard Block Storage configuration options (volume_group, volume_driver, and so on) might be used in a configuration group. Configuration values in the [DEFAULT] configuration group are not used.

These examples show three back ends:

enabled_backends=lvmdriver-1,lvmdriver-2,lvmdriver-3
[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM
[lvmdriver-2]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM
[lvmdriver-3]
volume_group=cinder-volumes-3
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM_b

In this configuration, lvmdriver-1 and lvmdriver-2 have the same volume_backend_name. If a volume creation requests the LVM back end name, the scheduler uses the capacity filter scheduler to choose the most suitable driver, which is either lvmdriver-1 or lvmdriver-2. The capacity filter scheduler is enabled by default; the next section provides more information. In addition, this example presents an lvmdriver-3 back end.

Note

For Fibre Channel drivers that support multipath, the configuration group requires the use_multipath_for_image_xfer=true option. The example below shows details for HPE 3PAR and EMC Fibre Channel drivers.

[3par]
use_multipath_for_image_xfer = true
volume_driver = cinder.volume.drivers.hpe.hpe_3par_fc.HPE3PARFCDriver
volume_backend_name = 3parfc

[emc]
use_multipath_for_image_xfer = true
volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
volume_backend_name = emcfc

7.2.4.2 Configure Block Storage scheduler multi back end

You must enable the filter_scheduler option to use multiple-storage back ends. The filter scheduler:

  1. Filters the available back ends. By default, AvailabilityZoneFilter, CapacityFilter and CapabilitiesFilter are enabled.

  2. Weights the previously filtered back ends. By default, the CapacityWeigher option is enabled. When this option is enabled, the filter scheduler assigns the highest weight to back ends with the most available capacity.

The scheduler uses filters and weights to pick the best back end to handle the request. The scheduler uses volume types to explicitly create volumes on specific back ends. For more information about filtering and weighing, see Section 7.2.13, “Configure and use driver filter and weighing for scheduler”.
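
For example, a minimal cinder.conf sketch that makes these defaults explicit; the option names and driver path below are assumptions to verify against the configuration reference for your release:

scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers=CapacityWeigher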

7.2.4.3 Volume type

Before you can use a volume type, it must be declared to Block Storage. You can do so with the following command:

$ openstack --os-username admin --os-tenant-name admin volume type create lvm

Then, an extra-specification has to be created to link the volume type to a back end name. Run this command:

$ openstack --os-username admin --os-tenant-name admin volume type set lvm \
  --property volume_backend_name=LVM_iSCSI

This example creates a lvm volume type with volume_backend_name=LVM_iSCSI as extra-specifications.

Create another volume type:

$ openstack --os-username admin --os-tenant-name admin volume type create lvm_gold

$ openstack --os-username admin --os-tenant-name admin volume type set lvm_gold \
  --property volume_backend_name=LVM_iSCSI_b

This second volume type is named lvm_gold and has LVM_iSCSI_b as back end name.

Note

To list the extra-specifications, use this command:

$ cinder --os-username admin --os-tenant-name admin extra-specs-list
Note

If a volume type points to a volume_backend_name that does not exist in the Block Storage configuration, the filter_scheduler returns an error that it cannot find a valid host with the suitable back end.

7.2.4.4 Usage

When you create a volume, you must specify the volume type. The extra-specifications of the volume type determine which back end is used.

$ openstack volume create --size 1 --type lvm test_multi_backend

Considering the cinder.conf described previously, the scheduler creates this volume on lvmdriver-1 or lvmdriver-2.

$ openstack volume create --size 1 --type lvm_gold test_multi_backend

This second volume is created on lvmdriver-3.

7.2.5 Back up Block Storage service disks

You can use LVM snapshots not only to create snapshots, but also to back up your volumes. Using LVM snapshots reduces the size of the backup: only data that is in use is backed up instead of the entire volume.

To back up a volume, you must create a snapshot of it. An LVM snapshot is an exact copy of a logical volume, containing its data in a frozen state. This prevents data corruption, because the data cannot be manipulated while the backup is taken. Remember that volumes created through an openstack volume create command exist in an LVM logical volume.

You must also make sure that the operating system is not using the volume and that all data has been flushed to the guest file systems. This usually means that those file systems have to be unmounted during the snapshot creation. They can be mounted again as soon as the logical volume snapshot has been created.

Before you create the snapshot you must have enough space to save it. As a precaution, you should have at least twice as much space as the potential snapshot size. If insufficient space is available, the snapshot might become corrupted.

For this example, assume that a 100 GB volume named volume-00000001 was created for an instance, while only 4 GB are used. This example uses these commands to back up only those 4 GB:

  • lvm2 command. Directly manipulates the volumes.

  • kpartx command. Discovers the partition table created inside the instance.

  • tar command. Creates a minimum-sized backup.

  • sha1sum command. Calculates the backup checksum to check its consistency.

You can apply this process to volumes of any size.

To back up Block Storage service disks

  1. Create a snapshot of a used volume

    • Use this command to list all volumes:

      # lvdisplay
    • Create the snapshot; you can do this while the volume is attached to an instance:

      # lvcreate --size 10G --snapshot --name volume-00000001-snapshot \
        /dev/cinder-volumes/volume-00000001

      Use the --snapshot configuration option to tell LVM that you want a snapshot of an already existing volume. The command includes the size of the space reserved for the snapshot volume, the name of the snapshot, and the path of an already existing volume. Generally, this path is /dev/cinder-volumes/VOLUME_NAME.

      The size of the snapshot does not have to be the same as that of the volume being snapshotted. The --size parameter defines the space that LVM reserves for the snapshot volume. As a precaution, the size should be the same as that of the original volume, even if the whole space is not currently used by the snapshot.

    • Run the lvdisplay command again to verify the snapshot:

      --- Logical volume ---
      LV Name                /dev/cinder-volumes/volume-00000001
      VG Name                cinder-volumes
      LV UUID                gI8hta-p21U-IW2q-hRN1-nTzN-UC2G-dKbdKr
      LV Write Access        read/write
      LV snapshot status     source of
                             /dev/cinder-volumes/volume-00000001-snapshot [active]
      LV Status              available
      # open                 1
      LV Size                15,00 GiB
      Current LE             3840
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           251:13
      
      --- Logical volume ---
      LV Name                /dev/cinder-volumes/volume-00000001-snapshot
      VG Name                cinder-volumes
      LV UUID                HlW3Ep-g5I8-KGQb-IRvi-IRYU-lIKe-wE9zYr
      LV Write Access        read/write
      LV snapshot status     active destination for /dev/cinder-volumes/volume-00000001
      LV Status              available
      # open                 0
      LV Size                15,00 GiB
      Current LE             3840
      COW-table size         10,00 GiB
      COW-table LE           2560
      Allocated to snapshot  0,00%
      Snapshot chunk size    4,00 KiB
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           251:14
  2. Partition table discovery

    • To exploit the snapshot with the tar command, mount your partition on the Block Storage service server.

      The kpartx utility discovers and maps partition tables. You can use it to view partitions that are created inside the instance. Without the partitions created inside instances, you cannot see their contents or create efficient backups.

      # kpartx -av /dev/cinder-volumes/volume-00000001-snapshot
      Note

      On a Debian-based distribution, you can use the apt-get install kpartx command to install kpartx.

      If the tools successfully find and map the partition table, no errors are returned.

    • To check the partition table map, run this command:

      $ ls /dev/mapper/cinder*

      You can see the cinder--volumes-volume--00000001--snapshot1 partition.

      If you created more than one partition on that volume, you see several partitions; for example: cinder--volumes-volume--00000001--snapshot2, cinder--volumes-volume--00000001--snapshot3, and so on.

    • Mount your partition

      # mount /dev/mapper/cinder--volumes-volume--00000001--snapshot1 /mnt

      If the partition mounts successfully, no errors are returned.

      You can directly access the data inside the instance. If a message prompts you for a partition or you cannot mount it, determine whether enough space was allocated for the snapshot or the kpartx command failed to discover the partition table.

      Allocate more space to the snapshot and try the process again.

  3. Use the tar command to create archives

    Create a backup of the volume:

    $ tar --exclude="lost+found" --exclude="some/data/to/exclude" -czf \
      /backup/destination/volume-00000001.tar.gz -C /mnt/ .

    This command creates a tar.gz file that contains the data, and data only. This ensures that you do not waste space by backing up empty sectors.

  4. Checksum calculation

    You should always have the checksum for your backup files. When you transfer the same file over the network, you can run a checksum calculation to ensure that your file was not corrupted during its transfer. The checksum is a unique ID for a file. If the checksums are different, the file is corrupted.

    Run this command to run a checksum for your file and save the result to a file:

    $ sha1sum volume-00000001.tar.gz > volume-00000001.checksum
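
    Later, after transferring the file, run the check from the directory containing the archive to verify it against the saved checksum:

    $ sha1sum -c volume-00000001.checksum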
    Note

    Use the sha1sum command carefully because the time it takes to complete the calculation is directly proportional to the size of the file.

    Depending on your CPU, the process might take a long time for files larger than around 4 to 6 GB.

  5. Clean up after the backup

    Now that you have an efficient and consistent backup, use the following commands to clean up the file system:

    • Unmount the volume.

      # umount /mnt
    • Delete the partition table.

      # kpartx -dv /dev/cinder-volumes/volume-00000001-snapshot
    • Remove the snapshot.

      # lvremove -f /dev/cinder-volumes/volume-00000001-snapshot

    Repeat these steps for all your volumes.

  6. Automate your backups

    Because more and more volumes might be allocated to your Block Storage service, you might want to automate your backups. The SCR_5005_V01_NUAC-OPENSTACK-EBS-volumes-backup.sh script assists you with this task. The script performs the operations from the previous example, but also provides a mail report and runs the backup based on the backups_retention_days setting.

    Launch this script from the server that runs the Block Storage service.

    This example shows a mail report:

    Backup Start Time - 07/10 at 01:00:01
    Current retention - 7 days
    
    The backup volume is mounted. Proceed...
    Removing old backups...  : /BACKUPS/EBS-VOL/volume-00000019/volume-00000019_28_09_2011.tar.gz
         /BACKUPS/EBS-VOL/volume-00000019 - 0 h 1 m and 21 seconds. Size - 3,5G
    
    The backup volume is mounted. Proceed...
    Removing old backups...  : /BACKUPS/EBS-VOL/volume-0000001a/volume-0000001a_28_09_2011.tar.gz
         /BACKUPS/EBS-VOL/volume-0000001a - 0 h 4 m and 15 seconds. Size - 6,9G
    ---------------------------------------
    Total backups size - 267G - Used space : 35%
    Total execution time - 1 h 75 m and 35 seconds

    The script also enables you to SSH to your instances and run a mysqldump command into them. To make this work, enable the connection to the Compute project keys. If you do not want to run the mysqldump command, you can add enable_mysql_dump=0 to the script to turn off this functionality.

7.2.6 Migrate volumes

OpenStack has the ability to migrate volumes between back ends that support its volume type. Migrating a volume transparently moves its data from the volume's current back end to a new one. This is an administrator function, and can be used for storage evacuation (for maintenance or decommissioning) or for manual optimizations (for example, performance, reliability, or cost).

These workflows are possible for a migration:

  1. If the storage can migrate the volume on its own, it is given the opportunity to do so. This allows the Block Storage driver to enable optimizations that the storage might be able to perform. If the back end is not able to perform the migration, the Block Storage service uses one of two generic flows, as follows.

  2. If the volume is not attached, the Block Storage service creates a volume and copies the data from the original to the new volume.

    Note

    While most back ends support this function, not all do. See the driver documentation in the OpenStack Configuration Reference for more details.

  3. If the volume is attached to a VM instance, the Block Storage service creates a volume, and calls Compute to copy the data from the original to the new volume. Currently this is supported only by the Compute libvirt driver.

As an example, this scenario shows two LVM back ends and migrates an attached volume from one to the other. This scenario uses the third migration flow.

First, list the available back ends:

# cinder get-pools
+----------+----------------------------------------------------+
| Property |                       Value                        |
+----------+----------------------------------------------------+
|   name   |           server1@lvmstorage-1#lvmstorage-1        |
+----------+----------------------------------------------------+
+----------+----------------------------------------------------+
| Property |                      Value                         |
+----------+----------------------------------------------------+
|   name   |           server2@lvmstorage-2#lvmstorage-2        |
+----------+----------------------------------------------------+
Note

Only Block Storage V2 API supports cinder get-pools.

You can also list the available back ends as follows:

# cinder-manage host list
server1@lvmstorage-1    zone1
server2@lvmstorage-2    zone1

However, this command does not show the pool name, which you must append to the host name. For example, server1@lvmstorage-1#lvmstorage-1.

Next, as the admin user, you can see the current status of the volume (replace the example ID with your own):

$ openstack volume show 6088f80a-f116-4331-ad48-9afb0dfb196c

+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | zone1                                |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2013-09-01T14:53:22.000000           |
| description                    | test                                 |
| encrypted                      | False                                |
| id                             | 6088f80a-f116-4331-ad48-9afb0dfb196c |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test                                 |
| os-vol-host-attr:host          | server1@lvmstorage-1#lvmstorage-1    |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | d88310717a8e4ebcae84ed075f82c51e     |
| properties                     | readonly='False'                     |
| replication_status             | disabled                             |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | in-use                               |
| type                           | None                                 |
| updated_at                     | 2016-07-31T07:22:19.000000           |
| user_id                        | d8e5e5727f3a4ce1886ac8ecec058e83     |
+--------------------------------+--------------------------------------+

Note these attributes:

  • os-vol-host-attr:host - the volume's current back end.

  • os-vol-mig-status-attr:migstat - the status of this volume's migration (None means that a migration is not currently in progress).

  • os-vol-mig-status-attr:name_id - the volume ID that this volume's name on the back end is based on. Before a volume is ever migrated, its name on the back end storage may be based on the volume's ID (see the volume_name_template configuration parameter). For example, if volume_name_template is kept as the default value (volume-%s), your first LVM back end has a logical volume named volume-6088f80a-f116-4331-ad48-9afb0dfb196c. During the course of a migration, if a volume is created and the data copied over, the new volume gets a new name but keeps its original ID. This is exposed by the name_id attribute.

    Note

    If you plan to decommission a block storage node, you must stop the cinder volume service on the node after performing the migration.

    On nodes that run CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, or SUSE Linux Enterprise, run:

    # service openstack-cinder-volume stop
    # chkconfig openstack-cinder-volume off

    On nodes that run Ubuntu or Debian, run:

    # service cinder-volume stop
    # update-rc.d cinder-volume disable

    Stopping the cinder volume service will prevent volumes from being allocated to the node.

Migrate this volume to the second LVM back end:

$ cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c \
  server2@lvmstorage-2#lvmstorage-2

You can use the openstack volume show command to see the status of the migration. While migrating, the migstat attribute shows states such as migrating or completing. On error, migstat is set to None and the host attribute shows the original host. On success, in this example, the output looks like:

+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | zone1                                |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2013-09-01T14:53:22.000000           |
| description                    | test                                 |
| encrypted                      | False                                |
| id                             | 6088f80a-f116-4331-ad48-9afb0dfb196c |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test                                 |
| os-vol-host-attr:host          | server2@lvmstorage-2#lvmstorage-2    |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | 133d1f56-9ffc-4f57-8798-d5217d851862 |
| os-vol-tenant-attr:tenant_id   | d88310717a8e4ebcae84ed075f82c51e     |
| properties                     | readonly='False'                     |
| replication_status             | disabled                             |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | in-use                               |
| type                           | None                                 |
| updated_at                     | 2016-07-31T07:22:19.000000           |
| user_id                        | d8e5e5727f3a4ce1886ac8ecec058e83     |
+--------------------------------+--------------------------------------+

Note that migstat is None, host is the new host, and name_id holds the ID of the volume created by the migration. If you look at the second LVM back end, you find the logical volume volume-133d1f56-9ffc-4f57-8798-d5217d851862.

Note

The migration is not visible to non-admin users (for example, through the volume status). However, some operations are not allowed while a migration is taking place, such as attaching/detaching a volume and deleting a volume. If a user performs such an action during a migration, an error is returned.

Note

Migrating volumes that have snapshots is currently not allowed.

7.2.7 Gracefully remove a GlusterFS volume from usage

Configuring the cinder volume service to use GlusterFS involves creating a shares file (for example, /etc/cinder/glusterfs). This shares file lists each GlusterFS volume (with its corresponding storage server) that the cinder volume service can use for back end storage.

To remove a GlusterFS volume from usage as a back end, delete the volume's corresponding entry from the shares file. After doing so, restart the Block Storage services.

Restarting the Block Storage services will prevent the cinder volume service from exporting the deleted GlusterFS volume. This will prevent any instances from mounting the volume from that point onwards.

However, the removed GlusterFS volume might still be mounted on an instance at this point. Typically, this is the case when the volume was already mounted while its entry was deleted from the shares file. Whenever this occurs, you will have to unmount the volume as normal after the Block Storage services are restarted.

7.2.8 Back up and restore volumes and snapshots

The openstack command-line interface provides the tools for creating a volume backup. You can restore a volume from a backup as long as the backup's associated database information (or backup metadata) is intact in the Block Storage database.

Run this command to create a backup of a volume:

$ openstack volume backup create [--incremental] [--force] VOLUME

Where VOLUME is the name or ID of the volume, incremental is a flag that indicates whether an incremental backup should be performed, and force is a flag that allows or disallows backup of a volume when the volume is attached to an instance.

Without the incremental flag, a full backup is created by default. With the incremental flag, an incremental backup is created.

Without the force flag, the volume will be backed up only if its status is available. With the force flag, the volume will be backed up whether its status is available or in-use. A volume is in-use when it is attached to an instance. The backup of an in-use volume means your data is crash consistent. The force flag is False by default.
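
For example, assuming a hypothetical volume named myvolume that is attached to an instance, a first full backup followed by an incremental backup might look like:

$ openstack volume backup create --force myvolume
$ openstack volume backup create --force --incremental myvolume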

Note

The incremental and force flags are only available for block storage API v2. You have to specify [--os-volume-api-version 2] in the cinder command-line interface to use this parameter.

Note

The force flag is new in OpenStack Liberty.

The incremental backup is based on a parent backup, which is the existing backup with the latest timestamp. The parent backup can be a full backup or an incremental backup, depending on the timestamp.

Note

The first backup of a volume has to be a full backup. Attempting to do an incremental backup without any existing backups will fail. There is an is_incremental flag that indicates whether a backup is incremental when showing details on the backup. Another flag, has_dependent_backups, returned when showing backup details, will indicate whether the backup has dependent backups. If it is true, attempting to delete this backup will fail.

A new configuration option, backup_swift_block_size, is introduced into cinder.conf for the default Swift backup driver. This is the size in bytes that changes are tracked for incremental backups. The existing backup_swift_object_size option, the size in bytes of Swift backup objects, has to be a multiple of backup_swift_block_size. The default is 32768 for backup_swift_block_size, and the default is 52428800 for backup_swift_object_size.
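
For example, a cinder.conf sketch that simply restates these defaults:

backup_swift_block_size = 32768
backup_swift_object_size = 52428800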

The configuration option backup_swift_enable_progress_timer in cinder.conf is used when backing up the volume to Object Storage back end. This option enables or disables the timer. It is enabled by default to send the periodic progress notifications to the Telemetry service.

This command also returns a backup ID. Use this backup ID when restoring the volume:

$ openstack volume backup restore BACKUP_ID VOLUME_ID

When restoring from a full backup, it is a full restore.

When restoring from an incremental backup, a list of backups is built based on the IDs of the parent backups. A full restore is performed from the full backup first; each incremental backup is then restored on top of it, in order.

You can view a backup list with the cinder backup-list command. Optional arguments to clarify the status of your backups include --name, --status, and --volume-id, which filter backups by the specified name, status, or volume ID. Search with --all-tenants for details of the projects associated with the listed backups.

Because volume backups are dependent on the Block Storage database, you must also back up your Block Storage database regularly to ensure data recovery.

Note

Alternatively, you can export and save the metadata of selected volume backups. Doing so precludes the need to back up the entire Block Storage database. This is useful if you need only a small subset of volumes to survive a catastrophic database failure.

If you specify a UUID encryption key when setting up the volume specifications, the backup metadata ensures that the key will remain valid when you back up and restore the volume.

For more information about how to export and import volume backup metadata, see Section 7.2.9, “Export and import backup metadata”.

By default, the swift object store is used for the backup repository.

If instead you want to use an NFS export as the backup repository, add the following configuration options to the [DEFAULT] section of the cinder.conf file and restart the Block Storage services:

backup_driver = cinder.backup.drivers.nfs
backup_share = HOST:EXPORT_PATH

For the backup_share option, replace HOST with the DNS resolvable host name or the IP address of the storage server for the NFS share, and EXPORT_PATH with the path to that share. If your environment requires that non-default mount options be specified for the share, set these as follows:

backup_mount_options = MOUNT_OPTIONS

MOUNT_OPTIONS is a comma-separated string of NFS mount options as detailed in the NFS man page.

There are several other options whose default values may be overridden as appropriate for your environment:

backup_compression_algorithm = zlib
backup_sha_block_size_bytes = 32768
backup_file_size = 1999994880

The option backup_compression_algorithm can be set to bz2 or None. The latter can be a useful setting when the server providing the share for the backup repository itself performs deduplication or compression on the backup data.

The option backup_file_size must be a multiple of backup_sha_block_size_bytes. It is effectively the maximum file size to be used, given your environment, to hold backup data. Volumes larger than this will be stored in multiple files in the backup repository. The backup_sha_block_size_bytes option determines the size of blocks from the cinder volume being backed up on which digital signatures are calculated in order to enable incremental backup capability.

You also have the option of resetting the state of a backup. When creating or restoring a backup, it may sometimes get stuck in the creating or restoring state due to problems like the database or RabbitMQ being down. In situations like these, resetting the state of the backup can return it to a functional status.

Run this command to reset the state of a backup:

$ cinder backup-reset-state [--state STATE] BACKUP_ID-1 BACKUP_ID-2 ...

Run this command to create a backup of a snapshot:

$ openstack volume backup create [--incremental] [--force] \
  [--snapshot SNAPSHOT_ID] VOLUME

Where VOLUME is the name or ID of the volume, and SNAPSHOT_ID is the ID of the volume's snapshot.

7.2.9 Export and import backup metadata

A volume backup can only be restored on the same Block Storage service. This is because restoring a volume from a backup requires metadata available on the database used by the Block Storage service.

Note

For information about how to back up and restore a volume, see Section 7.2.8, “Back up and restore volumes and snapshots”.

You can, however, export the metadata of a volume backup. To do so, run this command as an OpenStack admin user (presumably, after creating a volume backup):

$ cinder backup-export BACKUP_ID

Where BACKUP_ID is the volume backup's ID. This command should return the backup's corresponding database information as encoded string metadata.

Exporting and storing this encoded string metadata allows you to completely restore the backup, even in the event of a catastrophic database failure. This will preclude the need to back up the entire Block Storage database, particularly if you only need to keep complete backups of a small subset of volumes.

If you have placed encryption on your volumes, the encryption will still be in place when you restore the volume if a UUID encryption key is specified when creating volumes. Using backup metadata support, UUID keys set up for a volume (or volumes) will remain valid when you restore a backed-up volume. The restored volume will remain encrypted, and will be accessible with your credentials.

In addition, having a volume backup and its backup metadata also provides volume portability. Specifically, backing up a volume and exporting its metadata will allow you to restore the volume on a completely different Block Storage database, or even on a different cloud service. To do so, first import the backup metadata to the Block Storage database and then restore the backup.

To import backup metadata, run the following command as an OpenStack admin:

$ cinder backup-import METADATA

Where METADATA is the backup metadata exported earlier.

Once you have imported the backup metadata into a Block Storage database, restore the volume (see Section 7.2.8, “Back up and restore volumes and snapshots”).

7.2.10 Use LIO iSCSI support

The default mode for the iscsi_helper tool is tgtadm. To use LIO iSCSI, install the python-rtslib package, and set iscsi_helper=lioadm in the cinder.conf file.
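
For example, a minimal cinder.conf sketch, assuming the option lives in the [DEFAULT] section:

[DEFAULT]
iscsi_helper = lioadm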

Once configured, you can use the cinder-rtstool command to manage the volumes. This command enables you to create, delete, and verify volumes, as well as determine targets and add iSCSI initiators to the system.

7.2.11 Configure and use volume number weigher

OpenStack Block Storage enables you to choose a volume back end according to free_capacity and allocated_capacity. The volume number weigher feature lets the scheduler choose a volume back end based on the number of volumes already placed on it. This can provide another means to improve the back ends' I/O balance and the volumes' I/O performance.

7.2.11.1 Enable volume number weigher

To enable the volume number weigher, set scheduler_default_weighers to VolumeNumberWeigher in the cinder.conf file to define it as the selected weigher.

7.2.11.2 Configure multiple-storage back ends

To configure VolumeNumberWeigher, use LVMVolumeDriver as the volume driver.

This example configuration uses two LVM volume groups: stack-volumes with 10 GB capacity and stack-volumes-1 with 60 GB capacity, and defines two back ends:

scheduler_default_weighers=VolumeNumberWeigher
enabled_backends=lvmdriver-1,lvmdriver-2
[lvmdriver-1]
volume_group=stack-volumes
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM

[lvmdriver-2]
volume_group=stack-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM

7.2.11.3 Volume type

Define a volume type in Block Storage:

$ openstack volume type create lvm

Create an extra specification that links the volume type to a back-end name:

$ openstack volume type set lvm --property volume_backend_name=LVM

This example creates a lvm volume type with volume_backend_name=LVM as extra specifications.

7.2.11.4 Usage

To create six 1-GB volumes, run the openstack volume create --size 1 --type lvm volume1 command six times:

$ openstack volume create --size 1 --type lvm volume1

This command creates three volumes in stack-volumes and three volumes in stack-volumes-1.

List the available volumes:

# lvs
LV                                          VG              Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
volume-3814f055-5294-4796-b5e6-1b7816806e5d stack-volumes   -wi-a----  1.00g
volume-72cf5e79-99d2-4d23-b84e-1c35d3a293be stack-volumes   -wi-a----  1.00g
volume-96832554-0273-4e9d-902b-ad421dfb39d1 stack-volumes   -wi-a----  1.00g
volume-169386ef-3d3e-4a90-8439-58ceb46889d9 stack-volumes-1 -wi-a----  1.00g
volume-460b0bbb-d8a0-4bc3-9882-a129a5fe8652 stack-volumes-1 -wi-a----  1.00g
volume-9a08413b-0dbc-47c9-afb8-41032ab05a41 stack-volumes-1 -wi-a----  1.00g

7.2.12 Consistency groups

Consistency group support is available in OpenStack Block Storage, including support for creating snapshots of consistency groups. This feature leverages storage-level consistency technology: it allows snapshots of multiple volumes in the same consistency group to be taken at the same point in time to ensure data consistency. You can perform consistency group operations using the Block Storage command line.

Note

Only the Block Storage V2 API supports consistency groups. You can specify --os-volume-api-version 2 when using the Block Storage command line for consistency group operations.

Before using consistency groups, make sure the Block Storage driver that you are running has consistency group support by reading the Block Storage manual or consulting the driver maintainer. There are a small number of drivers that have implemented this feature. The default LVM driver does not support consistency groups yet because the consistency technology is not available at the storage level.

Before using consistency groups, you must change policies for the consistency group APIs in the /etc/cinder/policy.json file. By default, the consistency group APIs are disabled. Enable them before running consistency group operations.

Here are existing policy entries for consistency groups:

"consistencygroup:create": "group:nobody",
"consistencygroup:delete": "group:nobody",
"consistencygroup:update": "group:nobody",
"consistencygroup:get": "group:nobody",
"consistencygroup:get_all": "group:nobody",
"consistencygroup:create_cgsnapshot" : "group:nobody",
"consistencygroup:delete_cgsnapshot": "group:nobody",
"consistencygroup:get_cgsnapshot": "group:nobody",
"consistencygroup:get_all_cgsnapshots": "group:nobody",

Remove group:nobody to enable these APIs:

"consistencygroup:create": "",
"consistencygroup:delete": "",
"consistencygroup:update": "",
"consistencygroup:get": "",
"consistencygroup:get_all": "",
"consistencygroup:create_cgsnapshot" : "",
"consistencygroup:delete_cgsnapshot": "",
"consistencygroup:get_cgsnapshot": "",
"consistencygroup:get_all_cgsnapshots": "",

Restart the Block Storage API service after changing policies.
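
For example, on SUSE Linux Enterprise, where the API service is named openstack-cinder-api as noted earlier, the restart might look like:

# service openstack-cinder-api restart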

The following consistency group operations are supported:

  • Create a consistency group, given volume types.

    Note

    A consistency group can support more than one volume type. The scheduler is responsible for finding a back end that can support all given volume types.

    A consistency group can only contain volumes hosted by the same back end.

    A consistency group is empty upon its creation. Volumes need to be created and added to it later.

  • Show a consistency group.

  • List consistency groups.

  • Create a volume and add it to a consistency group, given volume type and consistency group id.

  • Create a snapshot for a consistency group.

  • Show a snapshot of a consistency group.

  • List consistency group snapshots.

  • Delete a snapshot of a consistency group.

  • Delete a consistency group.

  • Modify a consistency group.

  • Create a consistency group from the snapshot of another consistency group.

  • Create a consistency group from a source consistency group.

The following operations are not allowed if a volume is in a consistency group:

  • Volume migration.

  • Volume retype.

  • Volume deletion.

    Note

    A consistency group has to be deleted as a whole with all the volumes.

The following operations are not allowed if a volume snapshot is in a consistency group snapshot:

  • Volume snapshot deletion.

    Note

    A consistency group snapshot has to be deleted as a whole with all the volume snapshots.

The details of consistency group operations are shown in the following examples.

Note

Currently, no OpenStack client command is available to run in place of the cinder consistency group creation commands. Use the cinder commands detailed in the following examples.

Create a consistency group:

cinder consisgroup-create
[--name name]
[--description description]
[--availability-zone availability-zone]
volume-types
Note

The parameter volume-types is required. It can be a list of names or UUIDs of volume types separated by commas without spaces in between. For example, volumetype1,volumetype2,volumetype3.

$ cinder consisgroup-create --name bronzeCG2 volume_type_1

+-------------------+--------------------------------------+
|      Property     |                Value                 |
+-------------------+--------------------------------------+
| availability_zone |                 nova                 |
|     created_at    |      2014-12-29T12:59:08.000000      |
|    description    |                 None                 |
|         id        | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
|        name       |              bronzeCG2               |
|       status      |               creating               |
+-------------------+--------------------------------------+

Show a consistency group:

$ cinder consisgroup-show 1de80c27-3b2f-47a6-91a7-e867cbe36462

+-------------------+--------------------------------------+
|      Property     |                Value                 |
+-------------------+--------------------------------------+
| availability_zone |                 nova                 |
|     created_at    |      2014-12-29T12:59:08.000000      |
|    description    |                 None                 |
|         id        | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
|        name       |              bronzeCG2               |
|       status      |              available               |
|     volume_types  |              volume_type_1           |
+-------------------+--------------------------------------+

List consistency groups:

$ cinder consisgroup-list

+--------------------------------------+-----------+-----------+
|                  ID                  |   Status  |    Name   |
+--------------------------------------+-----------+-----------+
| 1de80c27-3b2f-47a6-91a7-e867cbe36462 | available | bronzeCG2 |
| 3a2b3c42-b612-479a-91eb-1ed45b7f2ad5 |   error   |  bronzeCG |
+--------------------------------------+-----------+-----------+

Create a volume and add it to a consistency group:

Note

When creating a volume and adding it to a consistency group, a volume type and a consistency group id must be provided. This is because a consistency group can support more than one volume type.

$ openstack volume create --type volume_type_1 --consistency-group \
  1de80c27-3b2f-47a6-91a7-e867cbe36462 --size 1 cgBronzeVol

+---------------------------------------+--------------------------------------+
| Field                                 | Value                                |
+---------------------------------------+--------------------------------------+
|              attachments              |                  []                  |
|           availability_zone           |                 nova                 |
|                bootable               |                false                 |
|          consistencygroup_id          | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
|               created_at              |      2014-12-29T13:16:47.000000      |
|              description              |                 None                 |
|               encrypted               |                False                 |
|                   id                  | 5e6d1386-4592-489f-a56b-9394a81145fe |
|                metadata               |                  {}                  |
|                  name                 |             cgBronzeVol              |
|         os-vol-host-attr:host         |      server-1@backend-1#pool-1       |
|     os-vol-mig-status-attr:migstat    |                 None                 |
|     os-vol-mig-status-attr:name_id    |                 None                 |
|      os-vol-tenant-attr:tenant_id     |   1349b21da2a046d8aa5379f0ed447bed   |
|   os-volume-replication:driver_data   |                 None                 |
| os-volume-replication:extended_status |                 None                 |
|           replication_status          |               disabled               |
|                  size                 |                  1                   |
|              snapshot_id              |                 None                 |
|              source_volid             |                 None                 |
|                 status                |               creating               |
|                user_id                |   93bdea12d3e04c4b86f9a9f172359859   |
|              volume_type              |            volume_type_1             |
+---------------------------------------+--------------------------------------+

Create a snapshot for a consistency group:

$ cinder cgsnapshot-create 1de80c27-3b2f-47a6-91a7-e867cbe36462

+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
| consistencygroup_id | 1de80c27-3b2f-47a6-91a7-e867cbe36462 |
|      created_at     |      2014-12-29T13:19:44.000000      |
|     description     |                 None                 |
|          id         | d4aff465-f50c-40b3-b088-83feb9b349e9 |
|         name        |                 None                 |
|        status       |               creating               |
+---------------------+--------------------------------------+

Show a snapshot of a consistency group:

$ cinder cgsnapshot-show d4aff465-f50c-40b3-b088-83feb9b349e9

List consistency group snapshots:

$ cinder cgsnapshot-list

+--------------------------------------+-----------+------+
|                  ID                  |   Status  | Name |
+--------------------------------------+-----------+------+
| 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 | available | None |
| aa129f4d-d37c-4b97-9e2d-7efffda29de0 | available | None |
| bb5b5d82-f380-4a32-b469-3ba2e299712c | available | None |
| d4aff465-f50c-40b3-b088-83feb9b349e9 | available | None |
+--------------------------------------+-----------+------+

Delete a snapshot of a consistency group:

$ cinder cgsnapshot-delete d4aff465-f50c-40b3-b088-83feb9b349e9

Delete a consistency group:

Note

The force flag is needed when there are volumes in the consistency group:

$ cinder consisgroup-delete --force 1de80c27-3b2f-47a6-91a7-e867cbe36462

Modify a consistency group:

cinder consisgroup-update
[--name NAME]
[--description DESCRIPTION]
[--add-volumes UUID1,UUID2,......]
[--remove-volumes UUID3,UUID4,......]
CG

The parameter CG is required. It can be a name or UUID of a consistency group. UUID1,UUID2,...... are UUIDs of one or more volumes to be added to the consistency group, separated by commas. Default is None. UUID3,UUID4,...... are UUIDs of one or more volumes to be removed from the consistency group, separated by commas. Default is None.

$ cinder consisgroup-update --name 'new name' \
  --description 'new description' \
  --add-volumes 0b3923f5-95a4-4596-a536-914c2c84e2db,1c02528b-3781-4e32-929c-618d81f52cf3 \
  --remove-volumes 8c0f6ae4-efb1-458f-a8fc-9da2afcc5fb1,a245423f-bb99-4f94-8c8c-02806f9246d8 \
  1de80c27-3b2f-47a6-91a7-e867cbe36462

Create a consistency group from the snapshot of another consistency group:

cinder consisgroup-create-from-src
[--cgsnapshot CGSNAPSHOT]
[--name NAME]
[--description DESCRIPTION]

The parameter CGSNAPSHOT is a name or UUID of a snapshot of a consistency group:

$ cinder consisgroup-create-from-src --cgsnapshot 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 \
  --name 'new cg' --description 'new cg from cgsnapshot'

Create a consistency group from a source consistency group:

cinder consisgroup-create-from-src
[--source-cg SOURCECG]
[--name NAME]
[--description DESCRIPTION]

The parameter SOURCECG is a name or UUID of a source consistency group:

$ cinder consisgroup-create-from-src --source-cg 6d9dfb7d-079a-471e-b75a-6e9185ba0c38 \
  --name 'new cg' --description 'new cloned cg'

7.2.13 Configure and use driver filter and weighing for scheduler

OpenStack Block Storage enables you to choose a volume back end based on back-end specific properties by using the DriverFilter and GoodnessWeigher for the scheduler. The driver filter and weigher scheduling can help ensure that the scheduler chooses the best back end based on requested volume properties as well as various back-end specific properties.

7.2.13.1 What is driver filter and weigher and when to use it

The driver filter and weigher give you the ability to more finely control how the OpenStack Block Storage scheduler chooses the best back end to use when handling a volume request. One example scenario where the driver filter and weigher can be useful is a back end that utilizes thin-provisioning. The default filters use the free capacity property to determine the best back end, but that is not always perfect. If a back end can provide a more accurate back-end specific value, you can use that as part of the weighing. Another example of when the driver filter and weigher can prove useful is a back end with a hard limit of 1000 volumes and a maximum volume size of 500 GB, whose performance degrades once 75% of the total space is occupied. The driver filter and weigher provide a way to check for these limits.

7.2.13.2 Enable driver filter and weighing

To enable the driver filter, set the scheduler_default_filters option in the cinder.conf file to DriverFilter or add it to the list if other filters are already present.

To enable the goodness filter as a weigher, set the scheduler_default_weighers option in the cinder.conf file to GoodnessWeigher or add it to the list if other weighers are already present.

You can choose to use the DriverFilter without the GoodnessWeigher, or vice versa. However, using the filter and weigher together provides the most benefit in helping the scheduler choose an ideal back end.

Important

The support for the DriverFilter and GoodnessWeigher is optional for back ends. If you are using a back end that does not support the filter and weigher functionality you may not get the full benefit.

Example cinder.conf configuration file:

scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
Note

It is useful to use the other filters and weighers available in OpenStack in combination with these custom ones. For example, the CapacityFilter and CapacityWeigher can be combined with these.
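
For example, a cinder.conf scheduler configuration that combines the custom filter and weigher with the capacity-based ones might look like this (a sketch; include any other filters and weighers your deployment already uses):

scheduler_default_filters = DriverFilter,CapacityFilter
scheduler_default_weighers = GoodnessWeigher,CapacityWeigher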

7.2.13.3 Defining your own filter and goodness functions

You can define your own filter and goodness functions through the use of various properties that OpenStack Block Storage has exposed. Properties exposed include information about the volume request being made, volume_type settings, and back-end specific information about drivers. All of these allow for a lot of control over how the ideal back end for a volume request will be decided.

The filter_function option is a string defining an equation that will determine whether a back end should be considered as a potential candidate in the scheduler.

The goodness_function option is a string defining an equation that will rate the quality of the potential host (0 to 100, 0 lowest, 100 highest).

Important

The driver filter and weigher use default values for the filter and goodness functions for each back end if you do not define them yourself. If complete control is desired, define a filter and goodness function for each back end in the cinder.conf file.

7.2.13.4 Supported operations in filter and goodness functions

The following table lists the operations currently usable in custom filter and goodness functions:

Operations                      Type
+, -, *, /, ^                   standard math
not, and, or, &, |, !           logic
>, >=, <, <=, ==, <>, !=        equality
+, -                            sign
x ? a : b                       ternary
abs(x), max(x, y), min(x, y)    math helper functions
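
For example, these operations can be combined in a single expression. The following hypothetical goodness function rates volumes of 10 GB or less at 100, and larger volumes on a sliding scale with a floor of 25:

goodness_function = "(volume.size <= 10) ? 100 : max(25, 100 - volume.size)"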

Important

Syntax errors in the filter or goodness strings you define are raised at volume request time.

7.2.13.5 Available properties when creating custom functions

There are various properties that can be used in either the filter_function or the goodness_function strings. The properties allow access to volume info, qos settings, extra specs, and so on.

The following properties and their sub-properties are currently available for use:

7.2.13.5.1 Host stats for a back end
host

The host's name

volume_backend_name

The volume back end name

vendor_name

The vendor name

driver_version

The driver version

storage_protocol

The storage protocol

QoS_support

Boolean signifying whether QoS is supported

total_capacity_gb

The total capacity in GB

allocated_capacity_gb

The allocated capacity in GB

reserved_percentage

The reserved storage percentage

7.2.13.5.2 Capabilities specific to a back end

These properties are determined by the specific back end you are creating filter and goodness functions for. Some back ends may not have any properties available here.

7.2.13.5.3 Requested volume properties
status

Status for the requested volume

volume_type_id

The volume type ID

display_name

The display name of the volume

volume_metadata

Any metadata the volume has

reservations

Any reservations the volume has

user_id

The volume's user ID

attach_status

The attach status for the volume

display_description

The volume's display description

id

The volume's ID

replication_status

The volume's replication status

snapshot_id

The volume's snapshot ID

encryption_key_id

The volume's encryption key ID

source_volid

The source volume ID

volume_admin_metadata

Any admin metadata for this volume

source_replicaid

The source replication ID

consistencygroup_id

The consistency group ID

size

The size of the volume in GB

metadata

General metadata

The most commonly used property here is likely the size sub-property.

7.2.13.6 Extra specs for the requested volume type

View the available properties for volume types by running:

$ cinder extra-specs-list

7.2.13.7 Current QoS specs for the requested volume type

View the current QoS specs for volume types by running:

$ cinder qos-list

In order to access these properties in a custom string use the following format:

<property>.<sub_property>
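
For example, assuming the scheduler exposes volume type extra specs under the extra scope, a filter string could compare the requested volume's size against a hypothetical extra spec key:

filter_function = "volume.size <= extra.max_size_gb"

Here, max_size_gb is an illustrative extra spec key, not a predefined one.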

7.2.13.8 Driver filter and weigher usage examples

Below are examples for using the filter and weigher separately, together, and using driver-specific properties.

Example cinder.conf file configuration for customizing the filter function:

[DEFAULT]
scheduler_default_filters = DriverFilter
enabled_backends = lvm-1, lvm-2

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size < 10"

[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "volume.size >= 10"

The above example will filter volumes to different back ends depending on the size of the requested volume. Default OpenStack Block Storage scheduler weighing is done. Volumes with a size less than 10 GB are sent to lvm-1 and volumes with a size greater than or equal to 10 GB are sent to lvm-2.

Example cinder.conf file configuration for customizing the goodness function:

[DEFAULT]
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size < 5) ? 100 : 50"

[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
goodness_function = "(volume.size >= 5) ? 100 : 25"

The above example determines the goodness rating of a back end based on the requested volume's size. Default OpenStack Block Storage scheduler filtering is done. The example shows how the ternary if statement can be used in a filter or goodness function. If a requested volume is 10 GB in size, lvm-1 is rated 50 and lvm-2 is rated 100; in this case, lvm-2 wins. If a requested volume is 3 GB in size, lvm-1 is rated 100 and lvm-2 is rated 25; in this case, lvm-1 wins.

Example cinder.conf file configuration for customizing both the filter and goodness functions:

[DEFAULT]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1, lvm-2

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb < 500"
goodness_function = "(volume.size < 25) ? 100 : 50"

[lvm-2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = sample_LVM
filter_function = "stats.total_capacity_gb >= 500"
goodness_function = "(volume.size >= 25) ? 100 : 75"

The above example combines the techniques from the first two examples. The best back end is now decided based on the total capacity of the back end and the requested volume's size.

Example cinder.conf file configuration for accessing driver specific properties:

[DEFAULT]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = lvm-1,lvm-2,lvm-3

[lvm-1]
volume_group = stack-volumes-lvmdriver-1
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-1
filter_function = "volume.size < 5"
goodness_function = "(capabilities.total_volumes < 3) ? 100 : 50"

[lvm-2]
volume_group = stack-volumes-lvmdriver-2
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-2
filter_function = "volumes.size < 5"
goodness_function = "(capabilities.total_volumes < 8) ? 100 : 50"

[lvm-3]
volume_group = stack-volumes-lvmdriver-3
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvmdriver-3
goodness_function = "55"

The above is an example of how back-end specific properties can be used in the filter and goodness functions. In this example, the LVM driver's total_volumes capability is used to determine which host is used during a volume request. The lvm-1 and lvm-2 back ends handle volume requests for all volumes with a size less than 5 GB. The lvm-1 host has priority until it contains three or more volumes. After that, lvm-2 has priority until it contains eight or more volumes. The lvm-3 back end collects all volumes of 5 GB or more, as well as all volumes once lvm-1 and lvm-2 lose priority.

7.2.14 Rate-limit volume copy bandwidth

When you create a new volume from an image or an existing volume, or when you upload a volume image to the Image service, a large data copy can stress disk and network bandwidth. To mitigate slowdowns in data access from the instances, OpenStack Block Storage supports rate-limiting of volume data copy bandwidth.

7.2.14.1 Configure volume copy bandwidth limit

To configure the volume copy bandwidth limit, set the volume_copy_bps_limit option in the configuration group for each back end in the cinder.conf file. This option takes an integer specifying the maximum bandwidth allowed for volume data copy, in bytes per second. If this option is set to 0, the rate limit is disabled.

While multiple volume data copy operations are running on the same back end, the specified bandwidth is divided among the copies. For example, with a 100 MiB/s limit, two simultaneous copy operations would each receive roughly 50 MiB/s.

Example cinder.conf configuration file to limit volume copy bandwidth of lvmdriver-1 up to 100 MiB/s:

[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM
volume_copy_bps_limit=104857600
Note

This feature requires libcgroup to set up blkio cgroup for disk I/O bandwidth limit. The libcgroup is provided by the cgroup-bin package in Debian and Ubuntu, or by the libcgroup-tools package in Fedora, Red Hat Enterprise Linux, CentOS, openSUSE, and SUSE Linux Enterprise.

Note

Back ends that use remote file systems, such as NFS, are not supported by this feature.

7.2.15 Oversubscription in thin provisioning

OpenStack Block Storage enables you to choose a volume back end based on virtual capacities for thin provisioning using the oversubscription ratio.

A reference implementation is provided for the default LVM driver. The illustration below uses the LVM driver as an example.

7.2.15.1 Configure oversubscription settings

To support oversubscription in thin provisioning, a max_over_subscription_ratio flag is introduced into cinder.conf. This is a float representation of the oversubscription ratio when thin provisioning is involved. The default ratio is 20.0, meaning provisioned capacity can be 20 times the total physical capacity. A ratio of 10.5 means provisioned capacity can be 10.5 times the total physical capacity. A ratio of 1.0 means provisioned capacity cannot exceed the total physical capacity. A ratio lower than 1.0 is ignored and the default value is used instead.
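
For example, a minimal back-end section setting a 10.5 oversubscription ratio for a thin-provisioned LVM back end could look like this (a sketch; the section name lvmdriver-1 is illustrative):

[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = LVM
lvm_type = thin
max_over_subscription_ratio = 10.5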

Note

max_over_subscription_ratio can be configured for each back end when multiple-storage back ends are enabled. It is provided as a reference implementation and is used by the LVM driver. However, it is not a requirement for a driver to use this option from cinder.conf.

max_over_subscription_ratio is for configuring a back end. For a driver that supports multiple pools per back end, it can report this ratio for each pool. The LVM driver does not support multiple pools.

The existing reserved_percentage flag is used to prevent over provisioning. This flag represents the percentage of the back-end capacity that is reserved.

Note

The way reserved_percentage is used has changed. In the past, it was measured against the free capacity. It is now measured against the total capacity.

7.2.15.2 Capabilities

Drivers can report the following capabilities for a back end or a pool:

thin_provisioning_support = True (or False)
thick_provisioning_support = True (or False)
provisioned_capacity_gb = PROVISIONED_CAPACITY
max_over_subscription_ratio = MAX_RATIO

Where PROVISIONED_CAPACITY is the apparent allocated space indicating how much capacity has been provisioned and MAX_RATIO is the maximum oversubscription ratio. For the LVM driver, it is max_over_subscription_ratio in cinder.conf.

Two capabilities are added here to allow a back end or pool to claim support for thin provisioning, or thick provisioning, or both.

The LVM driver reports thin_provisioning_support=True and thick_provisioning_support=False if the lvm_type flag in cinder.conf is thin. Otherwise it reports thin_provisioning_support=False and thick_provisioning_support=True.

7.2.15.3 Volume type extra specs

If volume type is provided as part of the volume creation request, it can have the following extra specs defined:

'capabilities:thin_provisioning_support': '<is> True' or '<is> False'
'capabilities:thick_provisioning_support': '<is> True' or '<is> False'
Note

The capabilities scope key before thin_provisioning_support and thick_provisioning_support is not required, so the following works too:

'thin_provisioning_support': '<is> True' or '<is> False'
'thick_provisioning_support': '<is> True' or '<is> False'

The above extra specs are used by the scheduler to find a back end that supports thin provisioning, thick provisioning, or both to match the needs of a specific volume type.
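
For example, a volume type that requires thin provisioning support could be created as follows (a sketch; the type name thin-volumes is illustrative):

$ cinder type-create thin-volumes
$ cinder type-key thin-volumes set 'capabilities:thin_provisioning_support'='<is> True'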

7.2.15.4 Volume replication extra specs

OpenStack Block Storage has the ability to create volume replicas. Administrators can define a storage policy that includes replication by adjusting the cinder volume driver. Volume replication for OpenStack Block Storage helps safeguard OpenStack environments from data loss during disaster recovery.

To enable replication when creating volume types, configure the cinder volume with capabilities:replication="<is> True".

Each volume created with the replication capability set to True generates a copy of the volume on a storage back end.
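
For example, a replicated volume type might be defined as follows (a sketch; the type name replicated-volumes is illustrative):

$ cinder type-create replicated-volumes
$ cinder type-key replicated-volumes set capabilities:replication='<is> True'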

One use case for replication involves an OpenStack cloud environment installed across two data centers located near each other. In this use case, the distance between the two data centers is roughly the span of a city.

At each data center, a cinder host supports the Block Storage service. Both data centers include storage back ends.

Depending on the storage requirements, there can be one or two cinder hosts. The administrator accesses the /etc/cinder/cinder.conf configuration file and sets capabilities:replication="<is> True".

If one data center experiences a service failure, administrators can redeploy the VM. The VM will run using a replicated, backed up volume on a host in the second data center.

7.2.15.5 Capacity filter

In the capacity filter, max_over_subscription_ratio is used when choosing a back end if thin_provisioning_support is True and max_over_subscription_ratio is greater than 1.0.

7.2.15.6 Capacity weigher

In the capacity weigher, virtual free capacity is used for ranking if thin_provisioning_support is True. Otherwise, real free capacity will be used as before.
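
As a rough worked example (assuming virtual free capacity is computed as the total capacity multiplied by the oversubscription ratio, minus the provisioned capacity and the reserved portion): with total_capacity_gb = 100, max_over_subscription_ratio = 20.0, provisioned_capacity_gb = 1500, and reserved_percentage = 10, the virtual free capacity would be 100 * 20.0 - 1500 - 100 * 0.10 = 490 GB.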

7.2.16 Image-Volume cache

OpenStack Block Storage has an optional Image cache which can dramatically improve the performance of creating a volume from an image. The improvement depends on many factors, primarily how quickly the configured back end can clone a volume.

When a volume is first created from an image, a new cached image-volume will be created that is owned by the Block Storage Internal Tenant. Subsequent requests to create volumes from that image will clone the cached version instead of downloading the image contents and copying data to the volume.

The cache itself is configurable per back end and will contain the most recently used images.

7.2.16.1 Configure the Internal Tenant

The Image-Volume cache requires that the Internal Tenant be configured for the Block Storage services. This project owns the cached image-volumes so that they can be managed like volumes belonging to normal users, including with tools such as volume quotas. This keeps the cached image-volumes out of normal users' view, but does not make them globally hidden.

To enable the Block Storage services to have access to an Internal Tenant, set the following options in the cinder.conf file:

cinder_internal_tenant_project_id = PROJECT_ID
cinder_internal_tenant_user_id = USER_ID

An example cinder.conf configuration file:

cinder_internal_tenant_project_id = b7455b8974bb4064ad247c8f375eae6c
cinder_internal_tenant_user_id = f46924c112a14c80ab0a24a613d95eef
Note

The actual user and project that are configured for the Internal Tenant do not require any special privileges. They can be the Block Storage service project or can be any normal project and user.

7.2.16.2 Configure the Image-Volume cache

To enable the Image-Volume cache, set the following configuration option in the cinder.conf file:

image_volume_cache_enabled = True
Note

If you use Ceph as a back end, set the following configuration option in the cinder.conf file:

[ceph]
image_volume_cache_enabled = True

This can be scoped per back end definition or in the default options.

There are optional configuration settings that can limit the size of the cache. These can also be scoped per back end or in the default options in the cinder.conf file:

image_volume_cache_max_size_gb = SIZE_GB
image_volume_cache_max_count = MAX_COUNT

By default they will be set to 0, which means unlimited.

For example, a configuration that limits the cache to a maximum size of 200 GB and 50 entries is configured as:

image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50

7.2.16.3 Notifications

Cache actions trigger Telemetry messages. The following messages can be sent:

  • image_volume_cache.miss - A volume is being created from an image which was not found in the cache. Typically this will mean a new cache entry would be created for it.

  • image_volume_cache.hit - A volume is being created from an image which was found in the cache and the fast path can be taken.

  • image_volume_cache.evict - A cached image-volume has been deleted from the cache.

7.2.16.4 Managing cached Image-Volumes

In normal usage there should be no need for manual intervention with the cache. The entries and their backing Image-Volumes are managed automatically.

If needed, you can delete these volumes manually to clear the cache. By using the standard volume deletion APIs, the Block Storage service will clean up correctly.

7.2.17 Volume-backed image

OpenStack Block Storage can quickly create a volume from an image that refers to a volume storing the image data (an Image-Volume). Compared to other stores such as file and swift, creating a volume from a Volume-backed image performs better when the block storage driver supports efficient volume cloning.

If the image is set to public in the Image service, the volume data can be shared among projects.

7.2.17.1 Configure the Volume-backed image

The Volume-backed image feature requires locations information from the cinder store of the Image service. To enable the Image service to use the cinder store, add cinder to the stores option in the glance_store section of the glance-api.conf file:

stores = file, http, swift, cinder

To expose locations information, set the following options in the DEFAULT section of the glance-api.conf file:

show_multiple_locations = True

To enable the Block Storage services to create a new volume by cloning an Image-Volume, set the following options in the DEFAULT section of the cinder.conf file. For example:

glance_api_version = 2
allowed_direct_url_schemes = cinder

To enable the openstack image create --volume <volume> command to create an image that refers to an Image-Volume, set the following options in each back-end section of the cinder.conf file:

image_upload_use_cinder_backend = True

By default, the openstack image create --volume <volume> command creates the Image-Volume in the current project. To store the Image-Volume into the internal project, set the following options in each back-end section of the cinder.conf file:

image_upload_use_internal_tenant = True

To make the Image-Volume in the internal project accessible from the Image service, set the following options in the glance_store section of the glance-api.conf file:

  • cinder_store_auth_address

  • cinder_store_user_name

  • cinder_store_password

  • cinder_store_project_name
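
A glance_store section bringing these options together might look like the following sketch; the credential values are illustrative and must match the internal project and user you configured:

[glance_store]
stores = file, http, swift, cinder
cinder_store_auth_address = http://controller:5000/v3
cinder_store_user_name = cinder
cinder_store_password = CINDER_PASS
cinder_store_project_name = service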

7.2.17.2 Creating a Volume-backed image

To register an existing volume as a new Volume-backed image, use the following commands:

$ openstack image create --disk-format raw --container-format bare IMAGE_NAME

$ glance location-add <image-uuid> --url cinder://<volume-uuid>

If the image_upload_use_cinder_backend option is enabled, the following command creates a new Image-Volume by cloning the specified volume and then registers its location to a new image. The disk format and the container format must be raw and bare (default). Otherwise, the image is uploaded to the default store of the Image service.

$ openstack image create --volume SOURCE_VOLUME IMAGE_NAME

7.2.18 Get capabilities

When an administrator configures volume type and extra specs of storage on the back end, the administrator has to read the right documentation that corresponds to the version of the storage back end. Deep knowledge of storage is also required.

OpenStack Block Storage enables administrators to configure volume type and extra specs without specific knowledge of the storage back end.

Note
  • Volume Type: A group of volume policies.

  • Extra Specs: The definition of a volume type. This is a group of policies, for example the provisioning type and the QoS that will be used to define a volume at creation time.

  • Capabilities: What the currently deployed back end in Cinder is able to do. These correspond to extra specs.

7.2.18.1 Usage of cinder client

When an administrator wants to define new volume types for their OpenStack cloud, the administrator would fetch a list of capabilities for a particular back end using the cinder client.

First, get a list of the services:

$ openstack volume service list
+------------------+-------------------+------+---------+-------+----------------------------+
| Binary           | Host              | Zone | Status  | State | Updated At                 |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller        | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
| cinder-volume    | block1@ABC-driver | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+

Pass one of the listed hosts to the get-capabilities command to obtain the volume stats and back-end capabilities, as shown below.

$ cinder get-capabilities block1@ABC-driver
+---------------------+----------------------------------------------+
|     Volume stats    |                    Value                     |
+---------------------+----------------------------------------------+
|     description     |                     None                     |
|     display_name    |   Capabilities of Cinder Vendor ABC driver   |
|    driver_version   |                    2.0.0                     |
|      namespace      | OS::Storage::Capabilities::block1@ABC-driver |
|      pool_name      |                     None                     |
| replication_targets |                      []                      |
|   storage_protocol  |                    iSCSI                     |
|     vendor_name     |                  Vendor ABC                  |
|      visibility     |                     pool                     |
| volume_backend_name |                  ABC-driver                  |
+---------------------+----------------------------------------------+
+----------------------+-----------------------------------------------------+
|  Backend properties  |                     Value                           |
+----------------------+-----------------------------------------------------+
|      compression     | {u'type':u'boolean', u'title':u'Compression',  ...} |
| ABC:compression_type | {u'enum':u'['lossy', 'lossless', 'special']',  ...} |
|         qos          | {u'type':u'boolean', u'title':u'QoS',          ...} |
|     replication      | {u'type':u'boolean', u'title':u'Replication',  ...} |
|  thin_provisioning   | {u'type':u'boolean', u'title':u'Thin Provisioning'} |
|     ABC:minIOPS      | {u'type':u'integer', u'title':u'Minimum IOPS QoS',} |
|     ABC:maxIOPS      | {u'type':u'integer', u'title':u'Maximum IOPS QoS',} |
|    ABC:burstIOPS     | {u'type':u'integer', u'title':u'Burst IOPS QoS',..} |
+----------------------+-----------------------------------------------------+

7.2.18.2 Disable a service

When an administrator wants to disable a service, identify the Binary and the Host of the service. Use the cinder service-disable command combined with the Binary and Host to disable the service:

  1. First, determine the binary and host of the service you want to remove.

    $ openstack volume service list
    +------------------+----------------------+------+---------+-------+----------------------------+
    | Binary           | Host                 | Zone | Status  | State | Updated At                 |
    +------------------+----------------------+------+---------+-------+----------------------------+
    | cinder-scheduler | devstack             | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
    | cinder-volume    | devstack@lvmdriver-1 | nova | enabled | up    | 2016-10-24T13:53:35.000000 |
    +------------------+----------------------+------+---------+-------+----------------------------+
  2. Disable the service using the Binary and Host name, placing the Host before the Binary name.

    $ cinder service-disable HOST_NAME BINARY_NAME
  3. Remove the service from the database.

    $ cinder-manage service remove BINARY_NAME HOST_NAME

7.2.18.3 Usage of REST API

A new endpoint is also available to get the capabilities list for a specific storage back end. For more details, refer to the Block Storage API reference.

API request:

GET /v2/{tenant_id}/capabilities/{hostname}

Example of return value:

{
  "namespace": "OS::Storage::Capabilities::block1@ABC-driver",
  "volume_backend_name": "ABC-driver",
  "pool_name": "pool",
  "driver_version": "2.0.0",
  "storage_protocol": "iSCSI",
  "display_name": "Capabilities of Cinder Vendor ABC driver",
  "description": "None",
  "visibility": "public",
  "properties": {
   "thin_provisioning": {
      "title": "Thin Provisioning",
      "description": "Sets thin provisioning.",
      "type": "boolean"
    },
    "compression": {
      "title": "Compression",
      "description": "Enables compression.",
      "type": "boolean"
    },
    "ABC:compression_type": {
      "title": "Compression type",
      "description": "Specifies compression type.",
      "type": "string",
      "enum": [
        "lossy", "lossless", "special"
      ]
    },
    "replication": {
      "title": "Replication",
      "description": "Enables replication.",
      "type": "boolean"
    },
    "qos": {
      "title": "QoS",
      "description": "Enables QoS.",
      "type": "boolean"
    },
    "ABC:minIOPS": {
      "title": "Minimum IOPS QoS",
      "description": "Sets minimum IOPS if QoS is enabled.",
      "type": "integer"
    },
    "ABC:maxIOPS": {
      "title": "Maximum IOPS QoS",
      "description": "Sets maximum IOPS if QoS is enabled.",
      "type": "integer"
    },
    "ABC:burstIOPS": {
      "title": "Burst IOPS QoS",
      "description": "Sets burst IOPS if QoS is enabled.",
      "type": "integer"
    }
  }
}

7.2.18.4 Usage of volume type access extension

Some volume types should be restricted. Examples include test volume types where you are testing a new technology, or ultra-high-performance volumes (for special cases) where you do not want most users to be able to select them. An administrator or operator can define private volume types using the cinder client. The volume type access extension adds the ability to manage volume type access. Volume types are public by default. Private volume types can be created by setting the is_public Boolean field to False at creation time. Access to a private volume type can be controlled by adding or removing a project from it. Private volume types without projects are only visible to users with the admin role/context.

Create a public volume type by setting the is_public field to True:

$ openstack volume type create vol_Type1 --description test1 --public
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | test1                                |
| id          | b7dbed9e-de78-49f8-a840-651ae7308592 |
| is_public   | True                                 |
| name        | vol_Type1                            |
+-------------+--------------------------------------+

Create a private volume type by setting the is_public field to False:

$ openstack volume type create vol_Type2 --description test2 --private
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | test2                                |
| id          | 154baa73-d2c4-462f-8258-a2df251b0d39 |
| is_public   | False                                |
| name        | vol_Type2                            |
+-------------+--------------------------------------+

Get a list of the volume types:

$ openstack volume type list
+--------------------------------------+-------------+
| ID                                   | Name        |
+--------------------------------------+-------------+
| 0a948c84-bad5-4fba-88a2-c062006e4f6b | vol_Type1   |
| 87e5be6f-9491-4ea5-9906-9ac56494bb91 | lvmdriver-1 |
| fd508846-213f-4a07-aaf2-40518fb9a23f | vol_Type2   |
+--------------------------------------+-------------+

Get a list of the projects:

$ openstack project list
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 4105ead90a854100ab6b121266707f2b | alt_demo           |
| 4a22a545cedd4fcfa9836eb75e558277 | admin              |
| 71f9cdb1a3ab4b8e8d07d347a2e146bb | service            |
| c4860af62ffe465e99ed1bc08ef6082e | demo               |
| e4b648ba5108415cb9e75bff65fa8068 | invisible_to_admin |
+----------------------------------+--------------------+

Add volume type access for the given demo project, using its project-id:

$ openstack volume type set --project c4860af62ffe465e99ed1bc08ef6082e \
  vol_Type2

List the access information about the given volume type:

$ cinder type-access-list --volume-type vol_Type2
+--------------------------------------+----------------------------------+
|            Volume_type_ID            |            Project_ID            |
+--------------------------------------+----------------------------------+
| fd508846-213f-4a07-aaf2-40518fb9a23f | c4860af62ffe465e99ed1bc08ef6082e |
+--------------------------------------+----------------------------------+

Remove volume type access for the given project:

$ openstack volume type unset --project c4860af62ffe465e99ed1bc08ef6082e \
  vol_Type2
$ cinder type-access-list --volume-type vol_Type2
+----------------+------------+
| Volume_type_ID | Project_ID |
+----------------+------------+
+----------------+------------+

7.2.19 Generic volume groups

Generic volume group support has been available in OpenStack Block Storage (cinder) since the Newton release. Support is included for creating group types and group specs, creating groups of volumes, and creating snapshots of groups. The group operations can be performed using the Block Storage command line.

A group type is a type for a group just like a volume type for a volume. A group type can also have associated group specs similar to extra specs for a volume type.

In cinder, there is a group construct called a consistency group. Consistency groups only support consistent group snapshots, and only a small number of drivers support them. The following is a list of drivers that support consistency groups and the release in which the support was added:

  • Juno: EMC VNX

  • Kilo: EMC VMAX, IBM (GPFS, Storwize, SVC, and XIV), ProphetStor, Pure

  • Liberty: Dell Storage Center, EMC XtremIO, HPE 3Par and LeftHand

  • Mitaka: EMC ScaleIO, NetApp Data ONTAP and E-Series, SolidFire

  • Newton: CoprHD, FalconStor, Huawei

A consistency group cannot easily be extended to serve other purposes. A tenant may want to put volumes used in the same application together in a group so that they are easier to manage together, and this group of volumes may or may not support consistent group snapshots. The generic volume group was introduced to solve this problem.

There is a plan to migrate existing consistency group operations to use generic volume group operations in future releases. More information can be found in Cinder specs.

Note

Only Block Storage V3 API supports groups. You can specify --os-volume-api-version 3.x when using the cinder command line for group operations where 3.x contains a microversion value for that command. The generic volume group feature was completed in several patches. As a result, the minimum required microversion is different for group types, groups, and group snapshots APIs.

The following group type operations are supported:

  • Create a group type.

  • Delete a group type.

  • Set group spec for a group type.

  • Unset group spec for a group type.

  • List group types.

  • Show group type details.

  • Update a group type.

  • List group types and group specs.

The following group and group snapshot operations are supported:

  • Create a group, given group type and volume types.

    Note

    A group must have one group type. A group can support more than one volume type. The scheduler is responsible for finding a back end that can support the given group type and volume types.

    A group can only contain volumes hosted by the same back end.

    A group is empty upon its creation. Volumes need to be created and added to it later.

  • Show a group.

  • List groups.

  • Delete a group.

  • Modify a group.

  • Create a volume and add it to a group.

  • Create a snapshot for a group.

  • Show a group snapshot.

  • List group snapshots.

  • Delete a group snapshot.

  • Create a group from a group snapshot.

  • Create a group from a source group.

The following operations are not allowed if a volume is in a group:

  • Volume migration.

  • Volume retype.

  • Volume deletion.

    Note

    A group has to be deleted as a whole with all the volumes.

The following operations are not allowed if a volume snapshot is in a group snapshot:

  • Volume snapshot deletion.

    Note

    A group snapshot has to be deleted as a whole with all the volume snapshots.

The details of group type operations are shown below. The minimum microversion to support group types and group specs is 3.11:

Create a group type:

cinder --os-volume-api-version 3.11 group-type-create
[--description DESCRIPTION]
[--is-public IS_PUBLIC]
NAME
Note

The parameter NAME is required. The --is-public IS_PUBLIC option determines whether the group type is accessible to the public; it is True by default. By default, the policy on privileges for creating a group type is admin-only.
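
For example, to create a private group type (a sketch; the name my_group_type is illustrative):

$ cinder --os-volume-api-version 3.11 group-type-create --is-public False my_group_type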

Show a group type:

cinder --os-volume-api-version 3.11 group-type-show
GROUP_TYPE
Note

The parameter GROUP_TYPE is the name or UUID of a group type.

List group types:

cinder --os-volume-api-version 3.11 group-type-list
Note

Only admin can see private group types.

Update a group type:

cinder --os-volume-api-version 3.11 group-type-update
[--name NAME]
[--description DESCRIPTION]
[--is-public IS_PUBLIC]
GROUP_TYPE_ID
Note

The parameter GROUP_TYPE_ID is the UUID of a group type. By default, the policy on privileges for updating a group type is admin-only.

Delete group type or types:

cinder --os-volume-api-version 3.11 group-type-delete
GROUP_TYPE [GROUP_TYPE ...]
Note

The parameter GROUP_TYPE is name or UUID of the group type or group types to be deleted. By default, the policy on privileges for deleting a group type is admin-only.

Set or unset group spec for a group type:

cinder --os-volume-api-version 3.11 group-type-key
GROUP_TYPE ACTION KEY=VALUE [KEY=VALUE ...]
Note

The parameter GROUP_TYPE is the name or UUID of a group type. Valid values for the parameter ACTION are set or unset. KEY=VALUE is the group specs key and value pair to set or unset. For unset, specify only the key. By default, the policy on privileges for setting or unsetting group specs key is admin-only.
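
For example, to set and then unset a group spec (a sketch; consistent_group_snapshot_enabled is a commonly used group spec key):

$ cinder --os-volume-api-version 3.11 group-type-key my_group_type set consistent_group_snapshot_enabled='<is> True'
$ cinder --os-volume-api-version 3.11 group-type-key my_group_type unset consistent_group_snapshot_enabled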

List group types and group specs:

cinder --os-volume-api-version 3.11 group-specs-list
Note

By default, the policy on privileges for seeing group specs is admin-only.

The details of group operations are shown below. The minimum microversion to support group operations is 3.13.

Create a group:

cinder --os-volume-api-version 3.13 group-create
[--name NAME]
[--description DESCRIPTION]
[--availability-zone AVAILABILITY_ZONE]
GROUP_TYPE VOLUME_TYPES
Note

The parameters GROUP_TYPE and VOLUME_TYPES are required. GROUP_TYPE is the name or UUID of a group type. VOLUME_TYPES can be a list of names or UUIDs of volume types separated by commas, without spaces in between. For example, volumetype1,volumetype2,volumetype3.
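
For example (a sketch; the group type and volume type names are illustrative):

$ cinder --os-volume-api-version 3.13 group-create --name mygroup \
  my_group_type volume_type_1,volume_type_2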

Show a group:

cinder --os-volume-api-version 3.13 group-show
GROUP
Note

The parameter GROUP is the name or UUID of a group.

List groups:

cinder --os-volume-api-version 3.13 group-list
[--all-tenants [<0|1>]]
Note

--all-tenants specifies whether to list groups for all tenants. Only admin can use this option.

Create a volume and add it to a group:

cinder --os-volume-api-version 3.13 create
--volume-type VOLUME_TYPE
--group-id GROUP_ID SIZE
Note

When creating a volume and adding it to a group, the parameters VOLUME_TYPE and GROUP_ID must be provided. This is because a group can support more than one volume type.

Delete a group:

cinder --os-volume-api-version 3.13 group-delete
[--delete-volumes]
GROUP [GROUP ...]
Note

--delete-volumes allows or disallows groups to be deleted if they are not empty. If the group is empty, it can be deleted without --delete-volumes. If the group is not empty, the flag is required for it to be deleted. When the flag is specified, the group and all volumes in the group will be deleted.

Modify a group:

cinder --os-volume-api-version 3.13 group-update
[--name NAME]
[--description DESCRIPTION]
[--add-volumes UUID1,UUID2,......]
[--remove-volumes UUID3,UUID4,......]
GROUP
Note

The parameter UUID1,UUID2,...... is the UUID of one or more volumes to be added to the group, separated by commas. Similarly the parameter UUID3,UUID4,...... is the UUID of one or more volumes to be removed from the group, separated by commas.

The details of group snapshot operations are shown below. The minimum microversion to support group snapshot operations is 3.14.

Create a snapshot for a group:

cinder --os-volume-api-version 3.14 group-snapshot-create
[--name NAME]
[--description DESCRIPTION]
GROUP
Note

The parameter GROUP is the name or UUID of a group.

Show a group snapshot:

cinder --os-volume-api-version 3.14 group-snapshot-show
GROUP_SNAPSHOT
Note

The parameter GROUP_SNAPSHOT is the name or UUID of a group snapshot.

List group snapshots:

cinder --os-volume-api-version 3.14 group-snapshot-list
[--all-tenants [<0|1>]]
[--status STATUS]
[--group-id GROUP_ID]
Note

--all-tenants specifies whether to list group snapshots for all tenants. Only admin can use this option. --status STATUS filters results by a status. --group-id GROUP_ID filters results by a group id.

Delete group snapshot:

cinder --os-volume-api-version 3.14 group-snapshot-delete
GROUP_SNAPSHOT [GROUP_SNAPSHOT ...]
Note

The parameter GROUP_SNAPSHOT specifies the name or UUID of one or more group snapshots to be deleted.

Create a group from a group snapshot or a source group:

cinder --os-volume-api-version 3.14 group-create-from-src
[--group-snapshot GROUP_SNAPSHOT]
[--source-group SOURCE_GROUP]
[--name NAME]
[--description DESCRIPTION]
Note

The parameter GROUP_SNAPSHOT is a name or UUID of a group snapshot. The parameter SOURCE_GROUP is a name or UUID of a source group. Either GROUP_SNAPSHOT or SOURCE_GROUP must be specified, but not both.

7.3 Troubleshoot your installation

This section provides useful tips to help you troubleshoot your Block Storage installation.

7.3.1 Troubleshoot the Block Storage configuration

Most Block Storage errors are caused by incorrect volume configurations that result in volume creation failures. To resolve these failures, review these logs:

  • cinder-api log (/var/log/cinder/api.log)

  • cinder-volume log (/var/log/cinder/volume.log)

The cinder-api log is useful for determining if you have endpoint or connectivity issues. If you send a request to create a volume and it fails, review the cinder-api log to determine whether the request made it to the Block Storage service. If the request is logged and you see no errors or tracebacks, check the cinder-volume log for errors or tracebacks.

Note

Create commands are listed in the cinder-api log.

These entries, implemented by the cinder.openstack.common.log module and set in the cinder.conf file, can be used to assist in troubleshooting your Block Storage configuration.

# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
# debug=false

# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
# verbose=false

# Log output to standard error (boolean value)
# use_stderr=true

# Default file mode used when creating log files (string
# value)
# logfile_mode=0644

# format string to use for log messages with context (string
# value)
# logging_context_format_string=%(asctime)s.%(msecs)03d %(levelname)s
# %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s

# format string to use for log messages without context
# (string value)
# logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d
# %(levelname)s %(name)s [-] %(instance)s%(message)s

# data to append to log format when level is DEBUG (string
# value)
# logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d

# prefix each line of exception output with this format
# (string value)
# logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s
# %(instance)s

# list of logger=LEVEL pairs (list value)
# default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,
# keystone=INFO,eventlet.wsgi.server=WARN

# If an instance is passed with the log message, format it
# like this (string value)
# instance_format="[instance: %(uuid)s]"

# If an instance UUID is passed with the log message, format
# it like this (string value)
# instance_uuid_format="[instance: %(uuid)s] "

# Format string for %%(asctime)s in log records. Default:
# %(default)s (string value)
# log_date_format=%Y-%m-%d %H:%M:%S

# (Optional) Name of log file to output to. If not set,
# logging will go to stdout. (string value)
# log_file=<None>

# (Optional) The directory to keep log files in (will be
# prepended to --log-file) (string value)
# log_dir=<None>

# If this option is specified, the logging configuration file
# specified is used and overrides any other logging options
# specified. Please see the Python logging module
# documentation for details on logging configuration files.
# (string value)
# log_config=<None>

# Use syslog for logging. (boolean value)
# use_syslog=false

# syslog facility to receive log lines (string value)
# syslog_log_facility=LOG_USER

The following common issues might occur during configuration; potential solutions for each are described below.

7.3.1.1 Issues with state_path and volumes_dir settings

7.3.1.1.1 Problem

The OpenStack Block Storage uses tgtd as the default iSCSI helper and implements persistent targets. This means that in the case of a tgt restart, or even a node reboot, your existing volumes on that node will be restored automatically with their original iSCSI Qualified Name (IQN).

By default, Block Storage uses a state_path variable, which, if installing with Yum or APT, should be set to /var/lib/cinder/. The next part is the volumes_dir variable; by default, this appends a volumes directory to the state_path. The result is the file tree /var/lib/cinder/volumes/.

7.3.1.1.2 Solution

In order to ensure nodes are restored to their original IQN, the iSCSI target information needs to be stored in a file on creation so that it can be queried in case the tgt daemon restarts. While the installer should handle all of this, it can go wrong.

If you have trouble creating volumes and this directory does not exist, you should see an error message in the cinder-volume log indicating that the volumes_dir does not exist; the message should also indicate which path was expected.
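
If your installation deviates from these defaults, the paths can be set explicitly in the DEFAULT section of cinder.conf (a sketch; the values shown are the usual defaults):

state_path = /var/lib/cinder
volumes_dir = $state_path/volumes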

7.3.1.2 The persistent tgt include file

7.3.1.2.1 Problem

The Block Storage service may have issues locating the persistent tgt include file. Along with the volumes_dir option, the iSCSI target driver also needs to be configured to look in the correct place for the persistent tgt include file. This is an entry in the /etc/tgt/conf.d directory that should have been set up during the OpenStack installation.

7.3.1.2.2 Solution

If issues occur, verify that you have a /etc/tgt/conf.d/cinder.conf file. If the file is not present, create it with:

# echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/conf.d/cinder.conf

7.3.1.3 No sign of attach call in the cinder-api log

7.3.1.3.1 Problem

The attach call is unavailable, or not appearing in the cinder-api log.

7.3.1.3.2 Solution

Adjust the nova.conf file, and make sure that your nova.conf has this entry:

volume_api_class=nova.volume.cinder.API

7.3.1.4 Failed to create iscsi target error in the cinder-volume.log file

7.3.1.4.1 Problem

You might see this error in cinder-volume.log after trying to create a volume that is 1 GB:

2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp \
ISCSITargetCreateFailed: \
Failed to create iscsi target for volume \
volume-137641b2-af72-4a2f-b243-65fdccd38780.

7.3.1.4.2 Solution

To fix this issue, change the content of the /etc/tgt/targets.conf file from include /etc/tgt/conf.d/*.conf to include /etc/tgt/conf.d/cinder_tgt.conf, as follows:

include /etc/tgt/conf.d/cinder_tgt.conf
include /etc/tgt/conf.d/cinder.conf
default-driver iscsi

Restart tgt and cinder-* services, so they pick up the new configuration.

7.3.2 Multipath call failed exit

7.3.2.1 Problem

Multipath call failed exit. This warning occurs in the Compute log if you do not have the optional multipath-tools package installed on the compute node. This is an optional package, and volume attachment works without it. If the multipath-tools package is installed on the compute node, it is used to perform the volume attachment. The IDs in your message are unique to your system.

WARNING nova.storage.linuxscsi [req-cac861e3-8b29-4143-8f1b-705d0084e571
    admin admin|req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin]
    Multipath call failed exit (96)

7.3.2.2 Solution

Run the following command on the compute node to install the multipath-tools package:

# apt-get install multipath-tools

7.3.3 Addressing discrepancies in reported volume sizes for EqualLogic storage

7.3.3.1 Problem

There is a discrepancy between the actual volume size in EqualLogic (EQL) storage and the image size in the Image service, compared with what is reported to the OpenStack database. This could lead to confusion if a user is creating volumes from an image that was uploaded from an EQL volume (through the Image service). The image size is slightly larger than the target volume size; this is because EQL size reporting accounts for additional storage used by EQL for internal volume metadata.

To reproduce the issue, follow the steps in the following procedure.

This procedure assumes that the EQL array is provisioned, and that appropriate configuration settings have been included in /etc/cinder/cinder.conf to connect to the EQL array.

Create a new volume. Note the ID and size of the volume. In the following example, the ID and size are 74cf9c04-4543-47ae-a937-a9b7c6c921e7 and 1, respectively:

$ openstack volume create volume1 --size 1

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-12-06T11:33:30.957318           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 74cf9c04-4543-47ae-a937-a9b7c6c921e7 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | iscsi                                |
| updated_at          | None                                 |
| user_id             | c36cec73b0e44876a4478b1e6cd749bb     |
+---------------------+--------------------------------------+

Verify the volume size on the EQL array by using its command-line interface.

The actual size (VolReserve) is 1.01 GB. The EQL Group Manager should also report a volume size of 1.01 GB:

eql> volume select volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
eql (volume_volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7)> show
_______________________________ Volume Information ________________________________
Name: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
Size: 1GB
VolReserve: 1.01GB
VolReserveInUse: 0MB
ReplReserveInUse: 0MB
iSCSI Alias: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-19f91850c-067000000b4532c1-volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
ActualMembers: 1
Snap-Warn: 10%
Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 100%
Snap-Reserve-Avail: 100% (1.01GB)
Permission: read-write
DesiredStatus: online
Status: online
Connections: 0
Snapshots: 0
Bind:
Type: not-replicated
ReplicationReserveSpace: 0MB

Create a new image from this volume:

$ openstack image create --volume volume1 \
  --disk-format raw --container-format bare image_from_volume1

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| container_format    | bare                                 |
| disk_format         | raw                                  |
| display_description | None                                 |
| id                  | 850fd393-a968-4259-9c65-6b495cba5209 |
| image_id            | 3020a21d-ba37-4495-8899-07fc201161b9 |
| image_name          | image_from_volume1                   |
| is_public           | False                                |
| protected           | False                                |
| size                | 1                                    |
| status              | uploading                            |
| updated_at          | 2016-12-05T12:43:56.000000           |
| volume_type         | iscsi                                |
+---------------------+--------------------------------------+

When you uploaded the volume in the previous step, the Image service reported the volume's size as 1 (GB). However, when using openstack image show to show the image, the displayed size is 1085276160 bytes, or roughly 1.01 GB:

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | cd573cfaace07e7949bc0c46028904ff     |
| container_format | bare                                 |
| created_at       | 2016-12-06T11:39:06Z                 |
| disk_format      | raw                                  |
| id               | 3020a21d-ba37-4495-8899-07fc201161b9 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | image_from_volume1                   |
| owner            | 5669caad86a04256994cdf755df4d3c1     |
| protected        | False                                |
| size             | 1085276160                           |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-12-06T11:39:24Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+
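The displayed byte count is consistent with the size on the array: 1085276160 bytes divided by 2^30 bytes per GB gives roughly 1.01 GB. A quick sanity check from the shell (a hedged sketch; any Python interpreter will do):

$ python -c 'print(1085276160 / float(2**30))'
1.0107421875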

Create a new volume using the previous image (image_id 3020a21d-ba37-4495-8899-07fc201161b9 in this example) as the source. Set the target volume size to 1 GB; this is the size reported by the cinder tool when you uploaded the volume to the Image service:

$ openstack volume create volume2 --size 1 --image 3020a21d-ba37-4495-8899-07fc201161b9
ERROR: Invalid input received: Size of specified image 2 is larger
than volume size 1. (HTTP 400) (Request-ID: req-4b9369c0-dec5-4e16-a114-c0cd16b5d210)

The attempt to create a new volume based on the size reported by the cinder tool will then fail.

7.3.3.2 Solution

To work around this problem, increase the target size of the new volume to the next whole number. In the problem example, you created a 1 GB volume to be used as a volume-backed image, so a new volume created from this volume-backed image should use a size of 2 GB:

$ openstack volume create volume2 --size 2 --image 3020a21d-ba37-4495-8899-07fc201161b9
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-12-06T11:49:06.031768           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | a70d6305-f861-4382-84d8-c43128be0013 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume2                              |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 2                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | iscsi                                |
| updated_at          | None                                 |
| user_id             | c36cec73b0e44876a4478b1e6cd749bb     |
+---------------------+--------------------------------------+
Note
Note

The dashboard suggests a suitable size when you create a new volume based on a volume-backed image.

You can then verify the new volume on the EQL array:

eql> volume select volume-64e8eb18-d23f-437b-bcac-b352afa6843a
eql (volume_volume-64e8eb18-d23f-437b-bcac-b352afa6843a)> show
______________________________ Volume Information _______________________________
Name: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
Size: 2GB
VolReserve: 2.01GB
VolReserveInUse: 1.01GB
ReplReserveInUse: 0MB
iSCSI Alias: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-e3091850e-eae000000b7532c1-volume-64e8eb18-d23f-437b-bcac-b352afa6843a
ActualMembers: 1
Snap-Warn: 10%
Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 100%
Snap-Reserve-Avail: 100% (2GB)
Permission: read-write
DesiredStatus: online
Status: online
Connections: 1
Snapshots: 0
Bind:
Type: not-replicated
ReplicationReserveSpace: 0MB

7.3.4 Failed to Attach Volume, Missing sg_scan

7.3.4.1 Problem

Failed to attach volume to an instance, sg_scan file not found. This error occurs when the sg3-utils package is not installed on the compute node. The IDs in your message are unique to your system:

ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin]
[instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
Failed to attach volume 4cc104c4-ac92-4bd6-9b95-c6686746414a at /dev/vdc
TRACE nova.compute.manager
[instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
Stdout: '/usr/local/bin/nova-rootwrap: Executable not found: /usr/bin/sg_scan'

7.3.4.2 Solution

Run this command on the compute node to install the sg3-utils package:

# apt-get install sg3-utils
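After installation, confirm that the executable nova-rootwrap was looking for is now present:

# ls -l /usr/bin/sg_scan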

7.3.5 HTTP bad request in cinder volume log

7.3.5.1 Problem

These errors appear in the cinder-volume.log file:

2013-05-03 15:16:33 INFO [cinder.volume.manager] Updating volume status
2013-05-03 15:16:33 DEBUG [hp3parclient.http]
REQ: curl -i https://10.10.22.241:8080/api/v1/cpgs -X GET -H "X-Hp3Par-Wsapi-Sessionkey: 48dc-b69ed2e5
f259c58e26df9a4c85df110c-8d1e8451" -H "Accept: application/json" -H "User-Agent: python-3parclient"

2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP:{'content-length': 311, 'content-type': 'text/plain',
'status': '400'}

2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP BODY:Second simultaneous read on fileno 13 detected.
Unless you really know what you're doing, make sure that only one greenthread can read any particular socket.
Consider using a pools.Pool. If you do know what you're doing and want to disable this error,
call eventlet.debug.hub_multiple_reader_prevention(False)

2013-05-03 15:16:33 ERROR [cinder.manager] Error during VolumeManager._report_driver_status: Bad request (HTTP 400)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 167, in periodic_tasks
    task(self, context)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 690, in _report_driver_status
    volume_stats = self.driver.get_volume_stats(refresh=True)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_fc.py", line 77, in get_volume_stats
    stats = self.common.get_volume_stats(refresh, self.client)
  File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_common.py", line 421, in get_volume_stats
    cpg = client.getCPG(self.config.hp3par_cpg)
  File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 231, in getCPG
    cpgs = self.getCPGs()
  File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 217, in getCPGs
    response, body = self.http.get('/cpgs')
  File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 255, in get
    return self._cs_request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 224, in _cs_request
    **kwargs)
  File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 198, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 192, in request
    raise exceptions.from_response(resp, body)
HTTPBadRequest: Bad request (HTTP 400)

7.3.5.2 Solution

Update your copy of the hp_3par_fc.py driver to the latest version, which contains the synchronization code that prevents simultaneous reads on the same socket.

7.3.6 Duplicate 3PAR host

7.3.6.1 Problem

This error may be caused by a volume being exported outside of OpenStack using a host name different from the system name that OpenStack expects. This error could be displayed with the iSCSI Qualified Name (IQN) if the host was exported using iSCSI:

Duplicate3PARHost: 3PAR Host already exists: Host wwn 50014380242B9750 \
already used by host cld4b5ubuntuW(id = 68. The hostname must be called\
'cld4b5ubuntu'.

7.3.6.2 Solution

Change the 3PAR host name to match the one that OpenStack expects. The 3PAR host constructed by the driver uses just the local host name, not the fully qualified domain name (FQDN) of the compute host. For example, if the FQDN was myhost.example.com, just myhost would be used as the 3PAR host name. IP addresses are not allowed as host names on the 3PAR storage server.
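To see which name the driver will use, print the short host name on the compute node; the 3PAR host name should match it:

$ hostname -s
myhost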

7.3.7 Failed to attach volume after detaching

7.3.7.1 Problem

Failed to attach a volume after detaching the same volume.

7.3.7.2 Solution

You must change the device name on the nova volume-attach command. The VM might not clean up after a nova volume-detach command runs. This example shows why the nova volume-attach command fails when you use the vdb, vdc, or vdd device names; the listing shows that those device paths are still in use:

# ls -al /dev/disk/by-path/
total 0
drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1

You might also have this problem after attaching and detaching the same volume from the same VM with the same mount point multiple times. In this case, restart the KVM host.
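Before restarting, you can also retry the attach with a device name that does not appear in the listing above. A hedged example using the legacy nova client, with placeholder instance and volume identifiers:

$ nova volume-attach INSTANCE_ID VOLUME_ID /dev/vde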

7.3.8 Failed to attach volume, systool is not installed

7.3.8.1 Problem

This warning and error occur if you do not have the required sysfsutils package installed on the compute node:

WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb\
admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool\
is not installed
ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin\
admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin]
[instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-47\
7a-be9b-47c97626555c]
Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.

7.3.8.2 Solution

Run the following command on the compute node to install the sysfsutils package:

# apt-get install sysfsutils
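After installation, you can confirm that systool runs; the Compute service uses it to enumerate devices such as Fibre Channel HBAs (a hedged check; the device class depends on your transport):

# systool -c fc_host -v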

7.3.9 Failed to connect volume in FC SAN

7.3.9.1 Problem

The compute node failed to connect to a volume in a Fibre Channel (FC) SAN configuration. The WWN may not be zoned correctly in your FC SAN that links the compute host to the storage array:

ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin\
demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd\
6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3\
d5f3]
Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while\
attaching at /dev/vdj
TRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4\
bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
Traceback (most recent call last):…f07aa4c3d5f3] ClientException: The\
server has either erred or is incapable of performing the requested\
operation. (HTTP 500) (Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00)

7.3.9.2 Solution

The network administrator must configure the FC SAN fabric by correctly zoning the WWN (port names) from your compute node HBAs.
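To give the storage network administrator the correct port names, you can read the WWPNs of the compute node's HBAs from sysfs (the path exists only when an FC HBA driver is loaded):

$ cat /sys/class/fc_host/host*/port_name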

7.3.10 Cannot find suitable emulator for x86_64

7.3.10.1 Problem

When you attempt to create a VM, it enters the BUILD state and then falls into the ERROR state.

7.3.10.2 Solution

On the KVM host, run cat /proc/cpuinfo. Make sure the vmx or svm flags are set.
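A quick way to verify is to count the flags directly; if the command prints 0, the CPU or BIOS does not expose hardware virtualization:

$ egrep -c '(vmx|svm)' /proc/cpuinfo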

Follow the instructions in the Enable KVM section in the OpenStack Configuration Reference to enable hardware virtualization support in your BIOS.

7.3.11 Non-existent host

7.3.11.1 Problem

This error could be caused by a volume being exported outside of OpenStack using a host name different from the system name that OpenStack expects. This error could be displayed with the iSCSI Qualified Name (IQN) if the host was exported using iSCSI.

2013-04-19 04:02:02.336 2814 ERROR cinder.openstack.common.rpc.common [-] Returning exception Not found (HTTP 404)
NON_EXISTENT_HOST - HOST '10' was not found to caller.

7.3.11.2 Solution

Host names constructed by the driver use just the local host name, not the fully qualified domain name (FQDN) of the Compute host. For example, if the FQDN was myhost.example.com, just myhost would be used as the 3PAR host name. IP addresses are not allowed as host names on the 3PAR storage server.

7.3.12 Non-existent VLUN

7.3.12.1 Problem

This error occurs if the 3PAR host exists with the correct host name that the OpenStack Block Storage drivers expect but the volume was created in a different domain.

HTTPNotFound: Not found (HTTP 404) NON_EXISTENT_VLUN - VLUN 'osv-DqT7CE3mSrWi4gZJmHAP-Q' was not found.

7.3.12.2 Solution

Either update the hpe3par_domain configuration option to use the domain in which the 3PAR host currently resides, or move the 3PAR host into the domain in which the volume was created.
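A minimal sketch of the relevant cinder.conf fragment, assuming a backend section named [3par_iscsi] and a 3PAR domain named mydomain (both names are placeholders for your deployment):

[3par_iscsi]
hpe3par_domain = mydomain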
