
6 Manage RADOS Block Device

To list all available RADOS Block Devices (RBDs), click Block › Images from the main menu.

The list shows brief information about the device, such as the device's name, the related pool name, namespace, size of the device, number and size of objects on the device, details on the provisioning of the device, and the parent.

List of RBD images
Figure 6.1: List of RBD images

6.1 Viewing details about RBDs

To view more detailed information about a device, click its row in the table:

RBD details
Figure 6.2: RBD details

6.2 Viewing RBD's configuration

To view detailed configuration of a device, click its row in the table and then the Configuration tab in the lower table:

RBD configuration
Figure 6.3: RBD configuration
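The same details are available from the command line with the rbd tool. A minimal sketch, assuming a hypothetical pool named image-pool and an image named image1:

cephuser@adm > rbd info image-pool/image1
cephuser@adm > rbd config image list image-pool/image1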

6.3 Creating RBDs

To add a new device, click Create in the top left of the table heading and do the following on the Create RBD screen:

Adding a new RBD
Figure 6.4: Adding a new RBD
  1. Enter the name of the new device. Refer to Section 2.11, “Name limitations” for naming limitations.

  2. Select the pool with the rbd application assigned from which the new RBD device will be created.

  3. Specify the size of the new device.

  4. Specify additional options for the device. To fine-tune the device parameters, click Advanced and enter values for object size, stripe unit, or stripe count. To enter Quality of Service (QoS) limits, click Quality of Service and enter them.

  5. Confirm with Create RBD.
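The same image can also be created with the rbd command. A hedged sketch of the steps above, assuming a hypothetical pool image-pool with the rbd application enabled and hypothetical image names; the advanced options map to the --object-size, --stripe-unit, and --stripe-count arguments:

cephuser@adm > rbd create --size 10G image-pool/image1
cephuser@adm > rbd create --size 10G --object-size 8M --stripe-unit 4M \
 --stripe-count 2 image-pool/image2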

6.4 Deleting RBDs

To delete a device, select its table row. Click the drop-down arrow next to the Create button and click Delete. Confirm the deletion with Delete RBD.

Tip
Tip: Moving RBDs to trash

Deleting an RBD is an irreversible action. If you Move to Trash instead, you can restore the device later on by selecting it on the Trash tab of the main table and clicking Restore in the top left of the table heading.
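The delete and trash actions also have rbd command equivalents. A minimal sketch, assuming a hypothetical image image-pool/image1; IMAGE_ID is a placeholder for the ID reported by rbd trash ls:

cephuser@adm > rbd rm image-pool/image1
cephuser@adm > rbd trash mv image-pool/image1
cephuser@adm > rbd trash ls image-pool
cephuser@adm > rbd trash restore --pool image-pool IMAGE_ID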

6.5 Creating RADOS Block Device snapshots

To create a RADOS Block Device snapshot, select the device's table row; the detailed configuration content pane appears. Select the Snapshots tab and click Create in the top left of the table heading. Enter the snapshot's name and confirm with Create RBD Snapshot.

After selecting a snapshot, you can perform additional actions on the device, such as rename, protect, clone, copy, or delete. Rollback restores the device's state from the current snapshot.

RBD snapshots
Figure 6.5: RBD snapshots
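The snapshot actions described above correspond to the rbd snap subcommands. A minimal sketch, assuming a hypothetical image image-pool/image1 and a snapshot named snap1:

cephuser@adm > rbd snap create image-pool/image1@snap1
cephuser@adm > rbd snap protect image-pool/image1@snap1
cephuser@adm > rbd clone image-pool/image1@snap1 image-pool/image1-clone
cephuser@adm > rbd snap rollback image-pool/image1@snap1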

6.6 RBD mirroring

RADOS Block Device images can be asynchronously mirrored between two Ceph clusters. You can use the Ceph Dashboard to configure replication of RBD images between two or more clusters. This capability is available in two modes:

Journal-based

This mode uses the RBD journaling image feature to ensure point-in-time, crash-consistent replication between clusters.

Snapshot-based

This mode uses periodically scheduled or manually created RBD image mirror-snapshots to replicate crash-consistent RBD images between clusters.

Mirroring is configured on a per-pool basis within peer clusters. It can be configured for a specific subset of images within the pool, or, when using journal-based mirroring, it can be configured to automatically mirror all images within a pool.

Mirroring is configured using the rbd command, which is installed by default in SUSE Enterprise Storage 7.1. The rbd-mirror daemon is responsible for pulling image updates from the remote, peer cluster and applying them to the image within the local cluster. See Section 6.6.2, “Enabling the rbd-mirror daemon” for more information on enabling the rbd-mirror daemon.
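In SUSE Enterprise Storage 7.1, the rbd-mirror daemon itself is typically deployed as a cephadm service. A hedged sketch, assuming the orchestrator is available and SECONDARY_HOST is a placeholder host name:

cephuser@adm > ceph orch apply rbd-mirror --placement=SECONDARY_HOST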

Depending on the need for replication, RADOS Block Device mirroring can be configured for either one- or two-way replication:

One-way Replication

When data is only mirrored from a primary cluster to a secondary cluster, the rbd-mirror daemon runs only on the secondary cluster.

Two-way Replication

When data is mirrored from primary images on one cluster to non-primary images on another cluster (and vice-versa), the rbd-mirror daemon runs on both clusters.

Important
Important

Each instance of the rbd-mirror daemon must be able to connect to both the local and remote Ceph clusters simultaneously, for example, to all Ceph Monitor and OSD hosts. Additionally, the network must have sufficient bandwidth between the two data centers to handle the mirroring workload.

Tip
Tip: General information

For general information and the command line approach to RADOS Block Device mirroring, refer to Section 20.4, “RBD image mirrors”.

6.6.1 Configuring primary and secondary clusters

A primary cluster is where the original pool with images is created. A secondary cluster is where the pool or images are replicated from the primary cluster.

Note
Note: Relative naming

The primary and secondary terms can be relative in the context of replication because they relate more to individual pools than to clusters. For example, in two-way replication, one pool can be mirrored from the primary cluster to the secondary one, while another pool can be mirrored from the secondary cluster to the primary one.

6.6.2 Enabling the rbd-mirror daemon

The following procedures demonstrate how to perform the basic administrative tasks to configure mirroring using the rbd command. Mirroring is configured on a per-pool basis within the Ceph clusters.

The pool configuration steps should be performed on both peer clusters. For clarity, these procedures assume that two clusters, named “primary” and “secondary”, are accessible from a single host.

The rbd-mirror daemon performs the actual cluster data replication.

  1. Copy the ceph.conf and keyring files from the primary host to the secondary host, and rename the local and copied files so that each cluster can be addressed by name:

    cephuser@secondary > cp /etc/ceph/ceph.conf /etc/ceph/secondary.conf
    cephuser@secondary > cp /etc/ceph/ceph.client.admin.keyring \
     /etc/ceph/secondary.client.admin.keyring
    cephuser@secondary > scp PRIMARY_HOST:/etc/ceph/ceph.conf \
     /etc/ceph/primary.conf
    cephuser@secondary > scp PRIMARY_HOST:/etc/ceph/ceph.client.admin.keyring \
     /etc/ceph/primary.client.admin.keyring
  2. To enable mirroring on a pool with rbd, specify the mirror pool enable command, the pool name, and the mirroring mode:

    cephuser@adm > rbd mirror pool enable POOL_NAME MODE
    Note
    Note

    The mirroring mode can either be image or pool. For example:

    cephuser@secondary > rbd --cluster primary mirror pool enable image-pool image
    cephuser@secondary > rbd --cluster secondary mirror pool enable image-pool image
  3. On the Ceph Dashboard, navigate to Block › Mirroring. The Daemons table to the left shows actively running rbd-mirror daemons and their health.

    Running rbd-mirror daemons
    Figure 6.6: Running rbd-mirror daemons

6.6.3 Disabling mirroring

To disable mirroring on a pool with rbd, specify the mirror pool disable command and the pool name:

cephuser@adm > rbd mirror pool disable POOL_NAME

When mirroring is disabled on a pool in this way, mirroring will also be disabled on any images (within the pool) for which mirroring was enabled explicitly.
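When the image mirroring mode is used, mirroring is enabled or disabled per image rather than per pool. A minimal sketch with the rbd command, assuming a hypothetical image image-pool/image1; the mode argument of mirror image enable can be journal or snapshot:

cephuser@adm > rbd mirror image enable image-pool/image1 snapshot
cephuser@adm > rbd mirror image disable image-pool/image1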

6.6.4 Bootstrapping peers

In order for the rbd-mirror daemon to discover its peer cluster, the peer needs to be registered to the pool and a user account needs to be created. This process can be automated with rbd by using the mirror pool peer bootstrap create and mirror pool peer bootstrap import commands.

To manually create a new bootstrap token with rbd, specify the mirror pool peer bootstrap create command, a pool name, and an optional site name to describe the local cluster:

cephuser@adm > rbd mirror pool peer bootstrap create [--site-name local-site-name] pool-name

The output of mirror pool peer bootstrap create will be a token that should be provided to the mirror pool peer bootstrap import command. For example, on the primary cluster:

cephuser@adm > rbd --cluster primary mirror pool peer bootstrap create \
 --site-name primary image-pool
eyJmc2lkIjoiOWY1MjgyZGItYjg5OS00NTk2LTgwOTgtMzIwYzFmYzM5NmYzIiwiY2xpZW50X2lkIjoicmJkL \
W1pcnJvci1wZWVyIiwia2V5IjoiQVFBUnczOWQwdkhvQmhBQVlMM1I4RmR5dHNJQU50bkFTZ0lOTVE9PSIsIm1vbl9ob3N0I \
joiW3YyOjE5Mi4xNjguMS4zOjY4MjAsdjE6MTkyLjE2OC4xLjM6NjgyMV0ifQ==

To manually import the bootstrap token created by another cluster with the rbd command, specify the mirror pool peer bootstrap import command, the pool name, a file path to the created token (or '-' to read from standard input), an optional site name to describe the local cluster, and a mirroring direction (the default is rx-tx for bidirectional mirroring, but it can also be set to rx-only for unidirectional mirroring):

cephuser@adm > rbd mirror pool peer bootstrap import [--site-name local-site-name] \
[--direction rx-only or rx-tx] pool-name token-path

For example, on the secondary cluster:

cephuser@adm > cat << EOF > token
eyJmc2lkIjoiOWY1MjgyZGItYjg5OS00NTk2LTgwOTgtMzIwYzFmYzM5NmYzIiwiY2xpZW50X2lkIjoicmJkLW1pcn \
Jvci1wZWVyIiwia2V5IjoiQVFBUnczOWQwdkhvQmhBQVlMM1I4RmR5dHNJQU50bkFTZ0lOTVE9PSIsIm1vbl9ob3N0I \
joiW3YyOjE5Mi4xNjguMS4zOjY4MjAsdjE6MTkyLjE2OC4xLjM6NjgyMV0ifQ==
EOF
cephuser@adm > rbd --cluster secondary mirror pool peer bootstrap import --site-name secondary image-pool token

6.6.5 Removing cluster peer

To remove a mirroring peer Ceph cluster with the rbd command, specify the mirror pool peer remove command, the pool name, and the peer UUID (available from the rbd mirror pool info command):

cephuser@adm > rbd mirror pool peer remove pool-name peer-uuid

6.6.6 Configuring pool replication in the Ceph Dashboard

The rbd-mirror daemon needs to have access to the primary cluster to be able to mirror RBD images. Ensure you have followed the steps in Section 6.6.4, “Bootstrapping peers” before continuing.

  1. On both the primary and secondary cluster, create pools with an identical name and assign the rbd application to them. Refer to Section 5.1, “Adding a new pool” for more details on creating a new pool.

    Creating a pool with RBD application
    Figure 6.7: Creating a pool with RBD application
  2. On both the primary and secondary cluster's dashboards, navigate to Block › Mirroring. In the Pools table on the right, click the name of the pool to replicate, and after clicking Edit Mode, select the replication mode. In this example, we will work with a pool replication mode, which means that all images within a given pool will be replicated. Confirm with Update.

    Configuring the replication mode
    Figure 6.8: Configuring the replication mode
    Important
    Important: Error or warning on the primary cluster

    After updating the replication mode, an error or warning flag appears in the corresponding column on the right. This is because the pool does not yet have a peer user assigned for replication. Ignore this flag on the primary cluster, because you assign a peer user on the secondary cluster only.

  3. On the secondary cluster's Dashboard, navigate to Block › Mirroring. Add the pool mirror peer by selecting Add Peer. Provide the primary cluster's details:

    Adding peer credentials
    Figure 6.9: Adding peer credentials
    Cluster Name

    An arbitrary unique string that identifies the primary cluster, such as 'primary'. The cluster name needs to be different from the real secondary cluster's name.

    CephX ID

    The Ceph user ID that you created as a mirroring peer. In this example it is 'rbd-mirror-peer'.

    Monitor Addresses

    Comma-separated list of IP addresses of the primary cluster's Ceph Monitor nodes.

    CephX Key

    The key related to the peer user ID. You can retrieve it by running the following example command on the primary cluster:

    cephuser@adm > ceph auth print_key pool-mirror-peer-name

    Confirm with Submit.

    List of replicated pools
    Figure 6.10: List of replicated pools
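If the peer user referenced above does not exist yet, it can be created on the primary cluster with ceph auth. A hedged sketch using the 'rbd-mirror-peer' ID from the example; the capability profiles shown are an assumption sufficient for RBD access:

cephuser@adm > ceph auth get-or-create client.rbd-mirror-peer \
 mon 'profile rbd' osd 'profile rbd'
cephuser@adm > ceph auth print_key client.rbd-mirror-peer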

6.6.7 Verifying that RBD image replication works

When the rbd-mirror daemon is running and RBD image replication is configured on the Ceph Dashboard, it is time to verify whether the replication actually works:

  1. On the primary cluster's Ceph Dashboard, create an RBD image so that its parent pool is the pool that you already created for replication purposes. Enable the Exclusive lock and Journaling features for the image. Refer to Section 6.3, “Creating RBDs” for details on how to create RBD images.

    New RBD image
    Figure 6.11: New RBD image
  2. After you create the image that you want to replicate, open the secondary cluster's Ceph Dashboard and navigate to Block › Mirroring. The Pools table on the right reflects the change in the number of # Remote images, and the number of # Local images catches up as the image is synchronized.

    New RBD image synchronized
    Figure 6.12: New RBD image synchronized
    Tip
    Tip: Replication progress

    The Images table at the bottom of the page shows the status of replication of RBD images. The Issues tab includes possible problems, the Syncing tab displays the progress of image replication, and the Ready tab lists all images with successful replication.

    RBD images' replication status
    Figure 6.13: RBD images' replication status
  3. On the primary cluster, write data to the RBD image. On the secondary cluster's Ceph Dashboard, navigate to Block › Images and monitor whether the corresponding image's size is growing as the data on the primary cluster is written.
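Replication can also be verified from the command line. A minimal sketch, assuming the hypothetical image image-pool/image1; journaling can be enabled on an existing image with rbd feature enable:

cephuser@adm > rbd feature enable image-pool/image1 journaling
cephuser@adm > rbd mirror pool status --verbose image-pool
cephuser@adm > rbd mirror image status image-pool/image1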

6.7 Managing iSCSI Gateways

Tip
Tip: More information on iSCSI Gateways

For more general information about iSCSI Gateways, refer to Chapter 22, Ceph iSCSI gateway.

To list all available gateways and mapped images, click Block › iSCSI from the main menu. An Overview tab opens, listing currently configured iSCSI Gateways and mapped RBD images.

The Gateways table lists each gateway's state, number of iSCSI targets, and number of sessions. The Images table lists each mapped image's name, related pool name, backstore type, and other statistical details.

The Targets tab lists currently configured iSCSI targets.

List of iSCSI targets
Figure 6.14: List of iSCSI targets

To view more detailed information about a target, click the drop-down arrow on the target table row. A tree-structured schema opens, listing disks, portals, initiators, and groups. Click an item to expand it and view its detailed contents, optionally with a related configuration in the table on the right.

iSCSI target details
Figure 6.15: iSCSI target details
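The gateways listed on this page can also be inspected from the command line. A hedged sketch, assuming the iSCSI service was deployed with cephadm and the gateways were registered with the dashboard:

cephuser@adm > ceph orch ls iscsi
cephuser@adm > ceph dashboard iscsi-gateway-list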

6.7.1 Adding iSCSI targets

To add a new iSCSI target, click Create in the top left of the Targets table and enter the required information.

Adding a new target
Figure 6.16: Adding a new target
  1. Enter the target address of the new gateway.

  2. Click Add portal and select one or multiple iSCSI portals from the list.

  3. Click Add image and select one or multiple RBD images for the gateway.

  4. If you need to use authentication to access the gateway, activate the ACL Authentication check box and enter the credentials. You can find more advanced authentication options after activating Mutual authentication and Discovery authentication.

  5. Confirm with Create Target.

6.7.2 Editing iSCSI targets

To edit an existing iSCSI target, click its row in the Targets table and click Edit in the top left of the table.

You can then modify the iSCSI target, add or delete portals, and add or delete related RBD images. You can also adjust authentication information for the gateway.

6.7.3 Deleting iSCSI targets

To delete an iSCSI target, select its table row, click the drop-down arrow next to the Edit button, and select Delete. Activate Yes, I am sure and confirm with Delete iSCSI target.

6.8 RBD Quality of Service (QoS)

Tip
Tip: For more information

For more general information and a description of RBD QoS configuration options, refer to Section 20.6, “QoS settings”.

The QoS options can be configured at different levels.

  • Globally

  • On a per-pool basis

  • On a per-image basis

The global configuration is at the top of the list and will be used for all newly created RBD images and for those images that do not override these values on the pool or RBD image layer. An option value specified globally can be overridden on a per-pool or per-image basis. Options specified on a pool will be applied to all RBD images of that pool unless overridden by a configuration option set on an image. Options specified on an image will override options specified on a pool and will override options specified globally.

This way it is possible to define defaults globally, adapt them for all RBD images of a specific pool, and override the pool configuration for individual RBD images.
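The same hierarchy can be expressed with the rbd config subcommands. A minimal sketch, assuming a hypothetical pool image-pool and image image1, using rbd_qos_iops_limit as an example option (see Section 20.6, “QoS settings”):

cephuser@adm > rbd config global set global rbd_qos_iops_limit 2000
cephuser@adm > rbd config pool set image-pool rbd_qos_iops_limit 1000
cephuser@adm > rbd config image set image-pool/image1 rbd_qos_iops_limit 500

In this sketch, the image-level value applies to image1, other images in image-pool fall back to the pool value, and all remaining images use the global value.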

6.8.1 Configuring options globally

To configure the RADOS Block Device options globally, select Cluster › Configuration from the main menu.

  1. To list all available global configuration options, next to Level, choose Advanced from the drop-down menu.

  2. Filter the table results by entering rbd_qos in the search field. This lists all available configuration options for QoS.

  3. To change a value, click the row in the table, then select Edit at the top left of the table. The Edit dialog contains six fields for specifying values. The RBD configuration option value is required in the mgr text box.

    Note
    Note

    Unlike the other dialogs, this one does not allow you to specify the value in convenient units. You need to set these values in either bytes or IOPS, depending on the option you are editing.

6.8.2 Configuring options on a new pool

To create a new pool and configure RBD configuration options on it, click Pools › Create. Select replicated as pool type. You will then need to add the rbd application tag to the pool to be able to configure the RBD QoS options.

Note
Note

It is not possible to configure RBD QoS configuration options on an erasure coded pool. To configure the RBD QoS options for erasure coded pools, you need to edit the replicated metadata pool of an RBD image. The configuration will then be applied to the erasure coded data pool of that image.
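For reference, such an image stores its data in the erasure coded pool while keeping its metadata in a replicated pool. A hedged sketch with hypothetical pool and image names:

cephuser@adm > rbd create --size 10G --data-pool ec-data-pool rbd-meta-pool/image1

The RBD QoS options are then configured on rbd-meta-pool or on the image itself.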

6.8.3 Configuring options on an existing pool

To configure RBD QoS options on an existing pool, click Pools, then click the pool's table row and select Edit at the top left of the table.

You should see the RBD Configuration section in the dialog, followed by a Quality of Service section.

Note
Note

If you see neither the RBD Configuration nor the Quality of Service section, you are likely either editing an erasure coded pool, which cannot be used to set RBD configuration options, or the pool is not configured to be used by RBD images. In the latter case, assign the rbd application tag to the pool and the corresponding configuration sections will show up.

6.8.4 Configuration options

Click Quality of Service + to expand the configuration options. A list of all available options will show up. The units of the configuration options are already shown in the text boxes. For any bytes per second (BPS) option, you can use shortcuts such as '1M' or '5G'. They will be automatically converted to '1 MB/s' and '5 GB/s' respectively.

Clicking the reset button to the right of a text box removes any value set on the pool. This does not remove configuration values of options configured globally or on an RBD image.
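Clicking reset corresponds to removing the pool-level override with the rbd command, as in this minimal sketch with a hypothetical pool name:

cephuser@adm > rbd config pool remove image-pool rbd_qos_iops_limit
cephuser@adm > rbd config pool list image-pool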

6.8.5 Creating RBD QoS options with a new RBD image

To create an RBD image with RBD QoS options set on that image, select Block › Images and then click Create. Click Advanced... to expand the advanced configuration section. Click Quality of Service + to open all available configuration options.

6.8.6 Editing RBD QoS options on existing images

To edit RBD QoS options on an existing image, select Block › Images, then click the image's table row, and lastly click Edit. The edit dialog will show up. Click Advanced... to expand the advanced configuration section. Click Quality of Service + to open all available configuration options.

6.8.7 Changing configuration options when copying or cloning images

If an RBD image is cloned or copied, the values set on that particular image will be copied too, by default. If you want to change them while copying or cloning, you can do so by specifying the updated configuration values in the copy/clone dialog, the same way as when creating or editing an RBD image. Doing so will only set (or reset) the values for the RBD image that is copied or cloned. This operation changes neither the source RBD image configuration, nor the global configuration.

If you choose to reset the option value on copying/cloning, no value for that option will be set on that image. This means that any value of that option specified for the parent pool will be used if the parent pool has the value configured. Otherwise, the global default will be used.