Applies to SUSE Linux Enterprise High Availability Extension 11 SP4

15 DRBD

The distributed replicated block device (DRBD*) allows you to create a mirror of two block devices that are located at two different sites across an IP network. When used with OpenAIS, DRBD supports distributed high-availability Linux clusters. This chapter shows you how to install and set up DRBD.

15.1 Conceptual Overview

DRBD replicates data on the primary device to the secondary device in a way that ensures that both copies of the data remain identical. Think of it as a networked RAID 1. It mirrors data in real-time, so its replication occurs continuously. Applications do not need to know that in fact their data is stored on different disks.

Important
Important: Unencrypted Data

The data traffic between mirrors is not encrypted. For secure data exchange, you should deploy a Virtual Private Network (VPN) solution for the connection.

DRBD is a Linux Kernel module that sits between the I/O scheduler at the lower end and the file system at the upper end; see Figure 15.1, “Position of DRBD within Linux”. To communicate with DRBD, use the high-level command drbdadm. For maximum flexibility, DRBD also comes with the low-level tool drbdsetup.

Position of DRBD within Linux
Figure 15.1: Position of DRBD within Linux
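
To see how the high-level command maps to the low-level tools, you can run drbdadm in dry-run mode. The following is a minimal sketch, assuming a resource named r0 as configured later in this chapter; with -d, drbdadm only prints the drbdsetup and drbdmeta calls it would execute:

root # drbdadm -d up r0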

DRBD allows you to use any block device supported by Linux, usually:

  • partition or complete hard disk

  • software RAID

  • Logical Volume Manager (LVM)

  • Enterprise Volume Management System (EVMS)

By default, DRBD uses the TCP ports 7788 and higher for communication between DRBD nodes. Make sure that your firewall does not prevent communication on the used ports.
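
For example, with the SuSEfirewall2 setup that SUSE Linux Enterprise 11 uses by default, a minimal sketch for opening port 7788 looks like the following; the variable shown applies to the external zone, so adjust it to the zone assigned to your replication interface:

# in /etc/sysconfig/SuSEfirewall2
FW_SERVICES_EXT_TCP="7788"

root # rcSuSEfirewall2 restart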

You must set up the DRBD devices before creating file systems on them. Everything pertaining to user data should be done solely via the /dev/drbdN device and not on the raw device, as DRBD uses the last part of the raw device for its metadata. Using the raw device directly will cause inconsistent data.

With udev integration, you will also get symbolic links in the form /dev/drbd/by-res/RESOURCE, which are easier to use and protect against misremembering a device's minor number.

For example, if the raw device is 1024 MB in size, the DRBD device has only 1023 MB available for data, with about 70 KB hidden and reserved for the metadata. Any attempt to access the remaining kilobytes via /dev/drbdN fails because it is not available for user data.
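
Once a resource is up, you can observe this difference yourself. The following sketch assumes /dev/sda1 backs /dev/drbd0, as in the configuration example later in this chapter; blockdev reports each device's usable size in bytes:

root # blockdev --getsize64 /dev/sda1
root # blockdev --getsize64 /dev/drbd0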

15.2 Installing DRBD Services

To install the needed packages for DRBD, install the High Availability Extension Add-On product on both SUSE Linux Enterprise Server machines in your networked cluster as described in Part I, “Installation and Setup”. Installing High Availability Extension also installs the DRBD program files.

If you do not need the complete cluster stack but just want to use DRBD, refer to Table 15.1, “DRBD RPM Packages”. It contains a list of all RPM packages for DRBD. Recently, the drbd package has been split into separate packages.

Table 15.1: DRBD RPM Packages

  • drbd: Convenience package, split into the packages listed below

  • drbd-bash-completion: Programmable Bash completion support for drbdadm

  • drbd-heartbeat: Heartbeat resource agent for DRBD (only needed for Heartbeat)

  • drbd-kmp-default: Kernel module for DRBD (needed)

  • drbd-kmp-xen: Xen Kernel module for DRBD

  • drbd-udev: udev integration scripts for DRBD, managing symbolic links to DRBD devices in /dev/drbd/by-res and /dev/drbd/by-disk

  • drbd-utils: Management utilities for DRBD (needed)

  • drbd-pacemaker: Pacemaker resource agent for DRBD

  • drbd-xen: Xen block device management script for DRBD

  • yast2-drbd: YaST DRBD configuration module (recommended)

To simplify working with drbdadm, use the Bash completion support in the RPM package drbd-bash-completion. To enable it in your current shell session, run the following command:

root # source /etc/bash_completion.d/drbdadm.sh

To enable it permanently for root, create or extend the file /root/.bashrc and insert the previous line.
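
For example, the following command appends the line to root's .bashrc (using the file path from the package above):

root # echo 'source /etc/bash_completion.d/drbdadm.sh' >> /root/.bashrc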

15.3 Configuring the DRBD Service

Note
Note: Adjustments Needed

The following procedure uses the server names alice and bob, and the cluster resource name r0. It sets up alice as the primary node and /dev/sda1 for storage. Make sure to modify the instructions to use your own nodes and file names.

Before you start configuring DRBD, make sure the block devices in your Linux nodes are ready and partitioned (if needed). The following procedure assumes you have two nodes, alice and bob, and that they should use the TCP port 7788. Make sure this port is open in your firewall.
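
If you still need to create a partition that spans a spare disk, the following is a minimal sketch; the device name /dev/sdb is only an assumption, and the commands overwrite the existing partition table:

root # parted -s /dev/sdb mklabel msdos
root # parted -s /dev/sdb mkpart primary 0% 100%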

To set up DRBD manually, proceed as follows:

Procedure 15.1: Manually Configuring DRBD
  1. Put your cluster in maintenance mode, if the cluster is already using DRBD:

    root # crm configure property maintenance-mode=true

    If you skip this step while your cluster already uses DRBD, a syntax error in the live configuration will lead to a service shutdown.

  2. Log in as user root.

  3. Change DRBD's configuration files:

    1. Open the file /etc/drbd.conf and insert the following lines, if they do not exist yet:

      include "drbd.d/global_common.conf";
      include "drbd.d/*.res";

      Beginning with DRBD 8.3 the configuration file is split into separate files, located under the directory /etc/drbd.d/.

    2. Open the file /etc/drbd.d/global_common.conf. It already contains some predefined values. Go to the startup section and insert these lines:

      startup {
          # wfc-timeout degr-wfc-timeout outdated-wfc-timeout
          # wait-after-sb;
          wfc-timeout 100;
          degr-wfc-timeout 120;
      }

      These options are used to reduce the timeouts when booting, see http://www.drbd.org/users-guide-emb/re-drbdconf.html for more details.

    3. Create the file /etc/drbd.d/r0.res, change the lines according to your situation, and save it:

      resource r0 { 1
        device /dev/drbd0; 2
        disk /dev/sda1; 3
        meta-disk internal; 4
        on alice { 5
          address  192.168.1.10:7788; 6
        }
        on bob { 5
          address 192.168.1.11:7788; 6
        }
        syncer {
          rate  7M; 7
        }
      }

      1

      A name that allows some association with the service that needs the resource. For example: nfs, http, mysql_0, postgres_wal.

      2

      The device name for DRBD and its minor number.

      In the example above, the minor number 0 is used for DRBD. The udev integration scripts will give you a symbolic link /dev/drbd/by-res/r0/0. Alternatively, omit the device node name in the configuration and use the following line instead:

      drbd0 minor 0 (/dev/ is optional) or /dev/drbd0

      3

      The raw device that is replicated between nodes. Note that in this example the devices are the same on both nodes. If you need different devices, move the disk parameter into the respective on section.

      4

      The meta-disk parameter usually contains the value internal, but it is possible to specify an explicit device to hold the meta data. See http://www.drbd.org/users-guide-emb/ch-internals.html#s-metadata for more information.

      5

      The on section states which host this configuration statement applies to.

      6

      The IP address and port number of the respective node. Each resource needs an individual port, usually starting with 7788.

      7

      The synchronization rate. Set it to one third of the lower of the disk and network bandwidths. For example, with Gigabit Ethernet (roughly 110 MB/s) and disks that sustain more than that, one third of 110 MB/s suggests a rate of about 35M. The rate only limits the resynchronization, not the replication.

  4. Check the syntax of your configuration file(s). If the following command returns an error, verify your files:

    root # drbdadm dump all
  5. If you have configured Csync2 (which should be the default), the DRBD configuration files are already included in the list of files which need to be synchronized. To synchronize them, use:

    root # csync2 -xv /etc/drbd.d/

    If you do not have Csync2 (or do not want to use it), copy the DRBD configuration files manually to the other node:

    root # scp /etc/drbd.conf bob:/etc/
    scp /etc/drbd.d/*  bob:/etc/drbd.d/
  6. Initialize the meta data on both systems by entering the following on each node:

    root # drbdadm create-md r0
    root # rcdrbd start

    If your disk already contains a file system that you do not need anymore, destroy the file system structure with the following command and repeat this step:

    root # dd if=/dev/zero of=/dev/sda1 count=16 bs=1M
  7. Watch the DRBD status by entering the following on each node:

    root # rcdrbd status

    You should get something like this:

    [... version string omitted ...]
    m:res  cs         ro                   ds                         p  mounted  fstype
    0:r0   Connected  Secondary/Secondary  Inconsistent/Inconsistent  C
  8. Start the resync process on your intended primary node (alice in this case):

    root # drbdadm -- --overwrite-data-of-peer primary r0
  9. Check the status again with rcdrbd status and you get:

    ...
    m:res  cs         ro                 ds                 p  mounted  fstype
    0:r0   Connected  Primary/Secondary  UpToDate/UpToDate  C

    The status in the ds row (disk status) must be UpToDate on both nodes.

  10. Create your file system on top of your DRBD device, for example:

    root # mkfs.ext3 /dev/drbd/by-res/r0/0
  11. Mount the file system and use it:

    root # mount /dev/drbd/by-res/r0/0 /mnt/
  12. Reset the cluster's maintenance mode flag:

    root # crm configure property maintenance-mode=false

Alternatively, to use YaST to configure DRBD, proceed as follows:

Procedure 15.2: Using YaST to Configure DRBD
  1. Start YaST and select the configuration module High Availability › DRBD. If you already have a DRBD configuration, YaST warns you. YaST will change your configuration and will save your old DRBD configuration files as *.YaSTsave.

    Leave the booting flag in Start-up Configuration › Booting as it is (by default it is off); do not change that as Pacemaker manages this service.

  2. The actual configuration of the resource is done in Resource Configuration (see Figure 15.2, “Resource Configuration”).

    Resource Configuration
    Figure 15.2: Resource Configuration

    Press Add to create a new resource. The following parameters need to be set twice:

    Resource Name

    The name of the resource (mandatory)

    Name

    The host name of the relevant node

    Address:Port

    The IP address and port number (default 7788) for the respective node

    Device

    The block device path that is used to access the replicated data. If the device contains a minor number, the associated block device is usually named /dev/drbdX, where X is the device minor number. If the device does not contain a minor number, make sure to add minor 0 after the device name.

    Disk

    The device that is replicated between both nodes.

    Meta-disk

    The Meta-disk is either set to the value internal or specifies an explicit device extended by an index to hold the meta data needed by DRBD.

    A real device may also be used for multiple DRBD resources. For example, if your Meta-Disk is /dev/sda6[0] for the first resource, you may use /dev/sda6[1] for the second resource. However, there must be at least 128 MB of space available on this disk for each resource. The fixed metadata size limits the maximum data size that you can replicate.

    All of these options are explained in the examples in the /usr/share/doc/packages/drbd-utils/drbd.conf file and in the man page of drbd.conf(5).

  3. If you have configured Csync2 (which should be the default), the DRBD configuration files are already included in the list of files which need to be synchronized. To synchronize them, use:

    root # csync2 -xv /etc/drbd.d/

    If you do not have Csync2 (or do not want to use it), copy the DRBD configuration files manually to the other node (here, another node with the name bob):

    root # scp /etc/drbd.conf bob:/etc/
    scp /etc/drbd.d/*  bob:/etc/drbd.d/
  4. Initialize and start the DRBD service on both systems by entering the following on each node:

    root # drbdadm create-md r0
    root # rcdrbd start
  5. Configure alice as the primary node by entering the following on alice:

    root # drbdsetup /dev/drbd0 primary --overwrite-data-of-peer
  6. Check the DRBD service status by entering the following on each node:

    root # rcdrbd status

    Before proceeding, wait until the block devices on both nodes are fully synchronized. Repeat the rcdrbd status command to follow the synchronization progress.

  7. After the block devices on both nodes are fully synchronized, format the DRBD device on the primary with your preferred file system. Any Linux file system can be used. It is recommended to use the /dev/drbd/by-res/RESOURCE name.
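
    For example, sticking with the resource name used in this chapter:

    root # mkfs.ext3 /dev/drbd/by-res/r0/0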

15.4 Testing the DRBD Service

If the install and configuration procedures worked as expected, you are ready to run a basic test of the DRBD functionality. This test also helps with understanding how the software works.

  1. Test the DRBD service on alice.

    1. Open a terminal console, then log in as root.

    2. Create a mount point on alice, such as /srv/r0:

      root # mkdir -p /srv/r0
    3. Mount the drbd device:

      root # mount -o rw /dev/drbd0 /srv/r0
    4. Create a file from the primary node:

      root # touch /srv/r0/from_alice
    5. Unmount the disk on alice:

      root # umount /srv/r0
    6. Downgrade the DRBD service on alice by typing the following command on alice:

      root # drbdadm secondary r0
  2. Test the DRBD service on bob.

    1. Open a terminal console, then log in as root on bob.

    2. On bob, promote the DRBD service to primary:

      root # drbdadm primary r0
    3. On bob, check to see if bob is primary:

      root # rcdrbd status
    4. On bob, create a mount point such as /srv/r0mount:

      root # mkdir /srv/r0mount
    5. On bob, mount the DRBD device:

      root # mount -o rw /dev/drbd0 /srv/r0mount
    6. Verify that the file you created on alice exists:

      root # ls /srv/r0mount

      The /srv/r0mount/from_alice file should be listed.

  3. If the service is working on both nodes, the DRBD setup is complete.

  4. Set up alice as the primary again.

    1. Unmount the disk on bob by typing the following command on bob:

      root # umount /srv/r0mount
    2. Downgrade the DRBD service on bob by typing the following command on bob:

      root # drbdadm secondary r0
    3. On alice, promote the DRBD service to primary:

      root # drbdadm primary r0
    4. On alice, check to see if alice is primary:

      root # rcdrbd status
  5. To get the service to automatically start and fail over if the server has a problem, you can set up DRBD as a high availability service with OpenAIS. For information about installing and configuring OpenAIS for SUSE Linux Enterprise 11 see Part II, “Configuration and Administration”.
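
    As a starting point, the following crm sketch creates a DRBD primitive and a master/slave resource for r0. It is only an outline, not a complete cluster configuration; colocation and ordering constraints for the file system and services on top of DRBD still need to be added:

    root # crm configure primitive drbd_r0 ocf:linbit:drbd \
      params drbd_resource="r0" \
      op monitor interval="15" role="Master" \
      op monitor interval="30" role="Slave"
    root # crm configure ms ms_drbd_r0 drbd_r0 \
      meta master-max="1" master-node-max="1" \
      clone-max="2" clone-node-max="1" notify="true"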

15.5 Tuning DRBD

There are several ways to tune DRBD:

  1. Use an external disk for your metadata. This might improve performance, but makes maintenance less convenient.

  2. Create a udev rule to change the read-ahead of the DRBD device. Save the following line in the file /etc/udev/rules.d/82-dm-ra.rules and change the read_ahead_kb value to your workload:

    ACTION=="add", KERNEL=="dm-*", ATTR{bdi/read_ahead_kb}="4100"

    This line only works if you use LVM.

  3. Tune your network connection, by changing the receive and send buffer settings via sysctl.

  4. Change the max-buffers, max-epoch-size or both in the DRBD configuration; the sketch after this list shows where these options fit.

  5. Increase the al-extents value, depending on your IO patterns.

  6. If you have a hardware RAID controller with a BBU (Battery Backup Unit), you might benefit from setting no-disk-flushes, no-disk-barrier and/or no-md-flushes.

  7. Enable read-balancing depending on your workload. See http://www.linbit.com/en/read-balancing/ for more details.
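
The following sketch shows where several of these options are placed in a DRBD 8.3-style resource configuration. The values are only illustrative assumptions for a setup with a battery-backed RAID controller; option names and sections differ in later DRBD versions, so check drbd.conf(5) for your release:

resource r0 {
  net {
    max-buffers     8000;
    max-epoch-size  8000;
  }
  syncer {
    al-extents      3389;
  }
  disk {
    # only safe with a battery-backed write cache (BBU)
    no-disk-flushes;
    no-disk-barrier;
    no-md-flushes;
  }
}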

15.6 Troubleshooting DRBD

The DRBD setup involves many different components and problems may arise from different sources. The following sections cover several common scenarios and recommend various solutions.

15.6.1 Configuration

If the initial DRBD setup does not work as expected, there is probably something wrong with your configuration.

To get information about the configuration:

  1. Open a terminal console, then log in as root.

  2. Test the configuration file by running drbdadm with the -d option. Enter the following command:

    root # drbdadm -d adjust r0

    In a dry run of the adjust option, drbdadm compares the actual configuration of the DRBD resource with your DRBD configuration file, but it does not execute the calls. Review the output to make sure you know the source and cause of any errors.

  3. If there are errors in the /etc/drbd.d/* and drbd.conf files, correct them before continuing.

  4. If the partitions and settings are correct, run drbdadm again without the -d option.

    root # drbdadm adjust r0

    This applies the configuration file to the DRBD resource.

15.6.2 Host Names

For DRBD, host names are case-sensitive (Node0 would be a different host than node0) and are compared to the host name stored in the Kernel (see the output of uname -n).

If you have several network devices and want to use a dedicated network device, the host name will likely not resolve to the used IP address. In this case, use the parameter disable-ip-verification.
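
The following sketch shows where the parameter goes; in DRBD 8.x it belongs to the global section, for example in /etc/drbd.d/global_common.conf:

global {
  # skip drbdadm's check that the host name resolves to the configured IP address
  disable-ip-verification;
}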

15.6.3 TCP Port 7788

If your system cannot connect to the peer, this might be a problem with your local firewall. By default, DRBD uses the TCP port 7788 to access the other node. Make sure that this port is accessible on both nodes.
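
As a quick local check after the DRBD service has been started, the following sketch verifies that something is listening on the port and, depending on how your firewall rules are written, that the port appears in them:

root # netstat -tln | grep 7788
root # iptables -L -n | grep 7788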

15.6.4 DRBD Devices Broken after Reboot

In cases when DRBD does not know which of the real devices holds the latest data, it changes to a split brain condition. The respective DRBD subsystems then come up as secondary and do not connect to each other. In this case, the following message can be found in the logging data:

Split-Brain detected, dropping connection!

To resolve this situation, enter the following on the node which has data to be discarded:

root # drbdadm secondary r0 
root # drbdadm -- --discard-my-data connect r0

On the node which has the latest data enter the following:

root # drbdadm connect r0

That resolves the issue by overwriting one node's data with the peer's data, thereby restoring a consistent view on both nodes.

15.7 For More Information

The following open source resources are available for DRBD:

  • The project home page http://www.drbd.org.

  • See Article “Highly Available NFS Storage with DRBD and Pacemaker”.

  • http://clusterlabs.org/wiki/DRBD_HowTo_1.0 by the Linux Pacemaker Cluster Stack Project.

  • The following man pages for DRBD are available in the distribution: drbd(8), drbddisk(8), drbdsetup(8), drbdadm(8), drbd.conf(5).

  • Find a commented example configuration for DRBD at /usr/share/doc/packages/drbd/drbd.conf

  • Furthermore, for easier storage administration across your cluster, see the recent announcement about the DRBD-Manager at http://www.linbit.com/en/drbd-manager.
