Highly Available NFS Storage with DRBD and Pacemaker #
This document describes how to set up highly available NFS storage in a two-node cluster, using the following components: DRBD* (Distributed Replicated Block Device), LVM (Logical Volume Manager), and Pacemaker as cluster resource manager.
The method described in this version of the guide is outdated and may cause issues in some setups. For more information, see https://www.suse.com/support/kb/doc/?id=000020396.
The process for configuring highly available NFS storage has been improved in version 15 SP3.
Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with this copyright notice and license kept unchanged. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see http://www.suse.com/company/legal/. All other product and company names are trademarks or registered trademarks of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.
1 Usage Scenario #
This document will help you set up a highly available NFS server. The cluster used for the highly available NFS storage has the following properties:
Two nodes: alice (IP: 192.168.1.1) and bob (IP: 192.168.1.2), connected to each other via network.
Two floating, virtual IP addresses (192.168.1.10 and 192.168.1.11), allowing clients to connect to the service no matter which physical node it is running on. One IP address is used for cluster administration with Hawk2, the other IP address is used exclusively for the NFS exports.
SBD used as a STONITH fencing device to avoid split-brain scenarios. STONITH is mandatory for the HA cluster.
Failover of resources from one node to the other if the active host breaks down (active/passive setup).
Local storage on each host. The data is synchronized between the hosts using DRBD on top of LVM.
A file system exported through NFS.
After installing and setting up the basic two-node cluster, and extending it with storage and cluster resources for NFS, you will have a highly available NFS storage server.
2 Installing a Basic Two-Node Cluster #
Before you proceed, install and set up a basic two-node cluster. This task is described in the Installation and Setup Quick Start, which explains how to use the crm shell to set up a cluster with minimal effort.
3 Creating an LVM Device #
LVM (Logical Volume Manager) enables flexible distribution of hard disk space over several file systems.
To prepare your disks for LVM, do the following:
Create an LVM physical volume, replacing /dev/disk/by-id/DEVICE_ID with your corresponding device for LVM:

# pvcreate /dev/disk/by-id/DEVICE_ID

Create an LVM volume group nfs that includes this physical volume:

# vgcreate nfs /dev/disk/by-id/DEVICE_ID

Create one or more logical volumes in the volume group nfs. This example assumes a 20 gigabyte volume, named work:

# lvcreate -n work -L 20G nfs

Activate the volume group:

# vgchange -ay nfs

After you have successfully executed the above steps, your system makes the following device available: /dev/VOLGROUP/LOGICAL_VOLUME. In this case, it is /dev/nfs/work.
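To verify the LVM setup before layering DRBD on top, you can optionally run the standard LVM reporting commands and check that the volume group nfs and the logical volume work are listed and active:

# pvs
# vgs nfs
# lvs nfs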
4 Creating a DRBD Device #
This section describes how to set up a DRBD device on top of LVM. The configuration of LVM as a back-end of DRBD has some benefits:
Easier setup than with LVM on top of DRBD.
Easier administration in case the LVM disks need to be resized or more disks are added to the volume group.
As the LVM volume group is named nfs, the DRBD resource uses the same name.
4.1 Creating DRBD Configuration #
For consistency reasons, it is highly recommended to follow this advice:
Use the directory /etc/drbd.d/ for your configuration.
Name the file according to the purpose of the resource.
Put your resource configuration in a file with a .res extension. In the following examples, the file /etc/drbd.d/nfs.res is used.
Proceed as follows:
Create the file /etc/drbd.d/nfs.res with the following contents:

resource nfs {
   device /dev/drbd0; 1
   disk /dev/nfs/work; 2
   meta-disk internal; 3

   net {
     protocol C; 4
     fencing resource-and-stonith; 5
   }

   handlers { 6
     fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
     after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
     # ...
   }

   connection-mesh { 7
     hosts alice bob;
   }

   on alice { 8
     address 192.168.1.1:7790;
     node-id 0;
   }

   on bob {
     address 192.168.1.2:7790;
     node-id 1;
   }
}
1. The DRBD device that applications are supposed to access.

2. The lower-level block device used by DRBD to store the actual data. This is the LVM device that was created in Section 3, “Creating an LVM Device”.

3. Where the metadata format is stored. Using internal, the metadata is stored together with the user data on the same device. See the man page for further information.

4. The protocol to use for this connection. Protocol C is the default option. It provides better data availability and does not consider a write to be complete until it has reached all local and remote disks.

5. Specifies the fencing policy resource-and-stonith at the DRBD level. This policy immediately suspends active I/O operations until STONITH completes.

6. Enables resource-level fencing to prevent Pacemaker from starting a service with outdated data. If the DRBD replication link becomes disconnected, the crm-fence-peer.9.sh script stops the DRBD resource from being promoted to another node until the replication link becomes connected again and DRBD completes its synchronization process.

7. Defines all nodes of a mesh. The hosts parameter contains all host names that share the same DRBD setup.

8. Contains the IP address and a unique identifier for each node.
Open /etc/csync2/csync2.cfg and check whether the following two lines exist:

include /etc/drbd.conf;
include /etc/drbd.d/*.res;

If not, add them to the file.

Copy the file to the other nodes:

# csync2 -xv

For information about Csync2, refer to Section 4.5, “Transferring the Configuration to All Nodes”.
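Optionally, you can let drbdadm parse the new configuration and print the resulting resource definition. This serves as a quick syntax check before activating the device:

# drbdadm dump nfs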
4.2 Activating the DRBD Device #
After you have prepared your DRBD configuration, proceed as follows:
If you use a firewall in your cluster, open port 7790 in your firewall configuration.

The first time you do this, execute the following commands on both nodes (in our example, alice and bob):

# drbdadm create-md nfs
# drbdadm up nfs

This initializes the metadata storage and creates the /dev/drbd0 device.

If the DRBD devices on all nodes have the same data, skip the initial resynchronization. Use the following command:

# drbdadm new-current-uuid --clear-bitmap nfs/0

Make alice primary:

# drbdadm primary --force nfs

Check the DRBD status:

# drbdadm status nfs

This returns the following message:
nfs role:Primary
  disk:UpToDate
  bob role:Secondary
    peer-disk:UpToDate
After the synchronization is complete, you can access the DRBD resource on the block device /dev/drbd0. Use this device for creating your file system.
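The initial synchronization can take some time, depending on the size of the logical volume. To follow its progress, you can repeat the status command, for example with watch:

# watch -n1 drbdadm status nfs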
Find more information about DRBD in Chapter 20, “DRBD”.
4.3 Creating the File System #
After you have finished Section 4.2, “Activating the DRBD Device”, you should see a DRBD device on /dev/drbd0. Create an ext3 file system on it:

# mkfs.ext3 /dev/drbd0
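The cluster will later mount this file system to /srv/nfs/work (see Section 6.3, “File System Resource”). Depending on your resource-agents version, the Filesystem agent may not create the mount point for you, so it is safest to create it on both nodes beforehand:

# mkdir -p /srv/nfs/work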
5 Adjusting Pacemaker's Configuration #
A resource might fail back to its original node when that node is back online and in the cluster. To prevent a resource from failing back to the node that it was running on, or to specify a different node for the resource to fail back to, change its resource stickiness value. You can either specify resource stickiness when you are creating a resource or afterward.
To adjust the option, open the crm shell as root (or any non-root user that is part of the haclient group) and run the following commands:

# crm configure
crm(live)configure# rsc_defaults resource-stickiness="200"
crm(live)configure# commit

For more information about global cluster options, refer to Section 6.2, “Quorum Determination”.
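To confirm the new default, you can display the current cluster configuration and look for the resource-stickiness entry:

# crm configure show | grep resource-stickiness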
6 Creating Cluster Resources #
The following sections cover the configuration of the required resources for a highly available NFS cluster. The configuration steps use the crm shell. The following list shows the necessary cluster resources:
- DRBD Primitive and Promotable Clone Resources
These resources are used to replicate data. The promotable clone resource is switched between the Primary and Secondary roles as deemed necessary by the cluster resource manager.
- NFS Kernel Server Resource
With this resource, Pacemaker ensures that the NFS server daemons are always available.
- NFS Exports
One or more NFS exports, typically corresponding to the file system.
The following configuration examples assume that 192.168.1.11 is the virtual IP address to use for an NFS server which serves clients in the 192.168.1.0/24 subnet.

The service exports data served from /srv/nfs/work.

Into this export directory, the cluster mounts an ext3 file system from the DRBD device /dev/drbd0. This DRBD device sits on top of the LVM logical volume work in the volume group nfs.
6.1 DRBD Primitive and Promotable Clone Resource #
To configure these resources, run the following commands from the crm shell:
crm(live)# configure
crm(live)configure# primitive drbd_nfs \
  ocf:linbit:drbd \
  params drbd_resource="nfs" \
  op monitor interval="15" role="Master" \
  op monitor interval="30" role="Slave"
crm(live)configure# ms ms-drbd_nfs drbd_nfs \
  meta master-max="1" master-node-max="1" clone-max="2" \
  clone-node-max="1" notify="true"
crm(live)configure# commit
This will create a Pacemaker promotable clone resource corresponding to the DRBD resource nfs. Pacemaker should now activate your DRBD resource on both nodes and promote it to the master role on one of them.

Check the state of the cluster with the crm status command, or run drbdadm status.
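For example, the following commands (run on either node) should show the promotable clone ms-drbd_nfs with one instance in the master role:

# crm status
# drbdadm status nfs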
6.2 NFS Kernel Server Resource #
In the crm shell, the resource for the NFS server daemons must be configured as a clone of a systemd resource type.

crm(live)configure# primitive nfsserver \
  systemd:nfs-server \
  op monitor interval="30s"
crm(live)configure# clone cl-nfsserver nfsserver \
  meta interleave=true
crm(live)configure# commit
After you have committed this configuration, Pacemaker should start the NFS Kernel server processes on both nodes.
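To verify, you can check the state of the NFS server service on each node:

# systemctl status nfs-server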
6.3 File System Resource #
Configure the file system type resource as follows (but do not commit this configuration yet):
crm(live)configure# primitive fs_work \
  ocf:heartbeat:Filesystem \
  params device=/dev/drbd0 \
  directory=/srv/nfs/work \
  fstype=ext3 \
  op monitor interval="10s"

Combine these resources into a Pacemaker resource group:

crm(live)configure# group g-nfs fs_work

Add the following constraints to make sure that the group is started on the same node on which the DRBD promotable clone resource is in the master role:

crm(live)configure# order o-drbd_before_nfs Mandatory: \
  ms-drbd_nfs:promote g-nfs:start
crm(live)configure# colocation col-nfs_on_drbd inf: \
  g-nfs ms-drbd_nfs:Master

Commit this configuration:

crm(live)configure# commit
After these changes have been committed, Pacemaker mounts the DRBD device to /srv/nfs/work on the node where the DRBD resource is in the master role. Confirm this with mount (or by looking at /proc/mounts).
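For example, on the active node:

# mount | grep drbd0
# grep drbd0 /proc/mounts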
6.4 NFS Export Resources #
When your DRBD, LVM, and file system resources are working properly, continue with the resources managing your NFS exports. To create highly available NFS export resources, use the exportfs resource type.

To export the /srv/nfs/work directory to clients, create the export with the following commands:
crm(live)configure# primitive exportfs_work \
  ocf:heartbeat:exportfs \
  params directory="/srv/nfs/work" \
  options="rw,mountpoint" \
  clientspec="192.168.1.0/24" \
  wait_for_leasetime_on_stop=true \
  fsid=100 \
  op monitor interval="30s"

After you have created this resource, append it to the existing g-nfs resource group:

crm(live)configure# modgroup g-nfs add exportfs_work

Commit this configuration:

crm(live)configure# commit
Pacemaker now exports the /srv/nfs/work directory to NFS clients.
Confirm that the NFS exports are set up properly:
# exportfs -v
/srv/nfs/work   IP_ADDRESS_OF_CLIENT(OPTIONS)
6.5 Virtual IP Address for NFS Exports #
The initial installation creates an administrative virtual IP address for Hawk2. Although you could use this IP address for your NFS exports too, create another one exclusively for NFS exports. This makes it easier to apply security restrictions later. Use the following commands in the crm shell:
crm(live)configure# primitive vip_nfs IPaddr2 \
  params ip=192.168.1.11 cidr_netmask=24 \
  op monitor interval=10 timeout=20
crm(live)configure# modgroup g-nfs add vip_nfs
crm(live)configure# commit
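As a quick client-side check, from a machine in the 192.168.1.0/24 subnet that has the NFS client utilities installed, you can query the exports through the virtual IP address:

# showmount -e 192.168.1.11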
7 Using the NFS Service #
This section outlines how to use the highly available NFS service from an NFS client.
To connect to the NFS service, make sure to use the virtual IP address to connect to the cluster rather than a physical IP configured on one of the cluster nodes' network interfaces. For compatibility reasons, use the full path of the NFS export on the server.
In its simplest form, the command to mount the NFS export looks like this:
# mount -t nfs 192.168.1.11:/srv/nfs/work /home/work
To configure a specific transport protocol (proto) and maximum read and write request sizes (rsize and wsize), use:

# mount -o rsize=32768,wsize=32768 \
  192.168.1.11:/srv/nfs/work /home/work
If you need to be compatible with NFS version 3, add the option vers=3 to the -o option list.
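To mount the export automatically at client boot time, you can add an entry to /etc/fstab on the client. This is a minimal sketch assuming the same /home/work mount point as in the examples above:

# /etc/fstab on the NFS client
192.168.1.11:/srv/nfs/work   /home/work   nfs   defaults   0 0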
For further NFS mount options, consult the nfs man page.