Highly Available NFS Storage with DRBD and Pacemaker #
This document describes how to set up highly available NFS storage in a two-node cluster, using the following components: DRBD* (Distributed Replicated Block Device), LVM (Logical Volume Manager), and Pacemaker as cluster resource manager.
Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) Version 1.3; with the copyright notice and this license unchanged. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this document has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.
1 Usage scenario #
This document helps you set up a highly available NFS server. The cluster used for the highly available NFS storage has the following properties:
Two nodes: alice (IP: 192.168.1.1) and bob (IP: 192.168.1.2), connected to each other via network.
Two floating, virtual IP addresses (192.168.1.10 and 192.168.1.11), allowing clients to connect to a service no matter which physical node it is running on. One IP address is used for cluster administration with Hawk2, and the other IP address is used exclusively for the NFS exports.
SBD used as a STONITH fencing device to avoid split-brain scenarios. STONITH is mandatory for the HA cluster.
Failover of resources from one node to the other if the active host breaks down (active/passive setup).
Local storage on each node. The data is synchronized between the nodes using DRBD on top of LVM.
A file system exported through NFS and a separate file system used to track the NFS client states.
After installing and setting up the basic two-node cluster, and extending it with storage and cluster resources for NFS, you will have a highly available NFS storage server.
2 Preparing a two-node cluster #
Before you can set up highly available NFS storage, you must prepare a High Availability cluster:
Install and set up a basic two-node cluster as described in Installation and Setup Quick Start.
On both nodes, install the package nfs-kernel-server:
# zypper install nfs-kernel-server
3 Creating LVM devices #
LVM (Logical Volume Manager) enables flexible distribution of storage space across several file systems.
Use crm cluster run to run these commands on both nodes at once.
Create an LVM physical volume, replacing /dev/disk/by-id/DEVICE_ID with your corresponding device for LVM:
# crm cluster run "pvcreate /dev/disk/by-id/DEVICE_ID"
Create an LVM volume group nfs that includes this physical volume:
# crm cluster run "vgcreate nfs /dev/disk/by-id/DEVICE_ID"
Create a logical volume named share in the volume group nfs:
# crm cluster run "lvcreate -n share -L 20G nfs"
This volume is for the NFS exports.
Create a logical volume named state in the volume group nfs:
# crm cluster run "lvcreate -n state -L 8G nfs"
This volume is for the NFS client states. The 8 GB volume size used in this example should support several thousand concurrent NFS clients.
Activate the volume group:
# crm cluster run "vgchange -ay nfs"
You should now see the following devices on the system: /dev/nfs/share and /dev/nfs/state.
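As an optional check (not required for the setup), you can list the new volume group and logical volumes on both nodes with the standard LVM reporting commands:
# crm cluster run "vgs nfs"
# crm cluster run "lvs nfs"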
4 Creating DRBD devices #
This section describes how to set up DRBD devices on top of LVM. Using LVM as a back-end of DRBD has the following benefits:
Easier setup than with LVM on top of DRBD.
Easier administration in case the LVM disks need to be resized or more disks are added to the volume group.
The following procedures result in two DRBD devices: one device for the NFS exports, and a second device to track the NFS client states.
4.1 Creating the DRBD configuration #
DRBD configuration files are kept in the /etc/drbd.d/ directory and must end with a .res extension. In this procedure, the configuration file is named /etc/drbd.d/nfs.res.
Create the file /etc/drbd.d/nfs.res with the following contents (the numbered comments refer to the explanations below):
resource nfs {
   volume 0 {                       # 1
      device /dev/drbd0;            # 2
      disk /dev/nfs/state;          # 3
      meta-disk internal;           # 4
   }
   volume 1 {
      device /dev/drbd1;
      disk /dev/nfs/share;
      meta-disk internal;
   }
   net {
      protocol C;                   # 5
      fencing resource-and-stonith; # 6
   }
   handlers {                       # 7
      fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.9.sh";
   }
   connection-mesh {                # 8
      hosts alice bob;
   }
   on alice {                       # 9
      address 192.168.1.1:7790;
      node-id 0;
   }
   on bob {                         # 9
      address 192.168.1.2:7790;
      node-id 1;
   }
}
1. The volume number for each DRBD device you want to create.
2. The DRBD device that applications will access.
3. The lower-level block device used by DRBD to store the actual data. This is the LVM device that was created in Section 3, “Creating LVM devices”.
4. Where the metadata is stored. Using internal, the metadata is stored together with the user data on the same device. See the man page for further information.
5. The protocol to use for this connection. Protocol C is the default option. It provides better data availability and does not consider a write to be complete until it has reached all local and remote disks.
6. Specifies the fencing policy resource-and-stonith at the DRBD level. This policy immediately suspends active I/O operations until STONITH completes.
7. Enables resource-level fencing to prevent Pacemaker from starting a service with outdated data. If the DRBD replication link becomes disconnected, the crm-fence-peer.9.sh script stops the DRBD resource from being promoted to another node until the replication link becomes connected again and DRBD completes its synchronization process.
8. Defines all nodes of a mesh. The hosts parameter contains all host names that share the same DRBD setup.
9. Contains the IP address and a unique identifier for each node.
Open /etc/csync2/csync2.cfg and check whether the following two lines exist:
include /etc/drbd.conf;
include /etc/drbd.d;
If not, add them to the file.
Copy the file to the other nodes:
# csync2 -xv
For information about Csync2, see Section 4.5, “Transferring the configuration to all nodes”.
4.2 Activating the DRBD devices #
After preparing the DRBD configuration, activate the devices:
If you use a firewall in the cluster, open port 7790 in the firewall configuration.
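For example, if the nodes use firewalld (an assumption; adapt this to whatever firewall solution you run), the following commands open the DRBD replication port on both nodes:
# crm cluster run "firewall-cmd --permanent --add-port=7790/tcp"
# crm cluster run "firewall-cmd --reload"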
Initialize the metadata storage:
# crm cluster run "drbdadm create-md nfs"
Create the DRBD devices:
# crm cluster run "drbdadm up nfs"
The devices do not have data yet, so you can run these commands to skip the initial synchronization:
# drbdadm new-current-uuid --clear-bitmap nfs/0
# drbdadm new-current-uuid --clear-bitmap nfs/1
Make alice primary:
# drbdadm primary --force nfs
Check the DRBD status of nfs:
# drbdadm status nfs
This returns the following message:
nfs role:Primary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  bob role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
You can access the DRBD resources on the block devices /dev/drbd0 and /dev/drbd1.
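As an optional sanity check, you can list the two block devices, for example:
# lsblk /dev/drbd0 /dev/drbd1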
4.3 Creating the file systems #
After activating the DRBD devices, create file systems on them:
Create an ext4 file system on /dev/drbd0:
# mkfs.ext4 /dev/drbd0
Create an ext4 file system on /dev/drbd1:
# mkfs.ext4 /dev/drbd1
5 Creating cluster resources #
The following procedures describe how to configure the resources required for a highly available NFS cluster.
- DRBD primitive and promotable clone resources
These resources are used to replicate data. The promotable clone resource is switched to and from the primary and secondary roles as deemed necessary by the cluster resource manager.
- File system resources
These resources manage the file system that will be exported, and the file system that will track NFS client states.
- NFS kernel server resource
This resource manages the NFS server daemon.
- NFS exports
This resource is used to export the directory /srv/nfs/share to clients.
- Virtual IP address
The initial installation creates an administrative virtual IP address for Hawk2. Create another virtual IP address exclusively for NFS exports. This makes it easier to apply security restrictions later.
The following configuration examples assume that 192.168.1.11 is the virtual IP address to use for an NFS server which serves clients in the 192.168.1.x/24 subnet.
The service exports data served from /srv/nfs/share.
Into this export directory, the cluster mounts an ext4 file system from the DRBD device /dev/drbd1. This DRBD device sits on top of an LVM logical volume named /dev/nfs/share.
The DRBD device /dev/drbd0 is used to share the NFS client states from /var/lib/nfs. This DRBD device sits on top of an LVM logical volume named /dev/nfs/state.
5.1 Creating DRBD primitive and promotable clone resources #
Create a cluster resource to manage the DRBD devices, and a promotable clone to allow this resource to run on both nodes:
Start the crm interactive shell:
# crm configure
Create a primitive for the DRBD configuration nfs:
crm(live)configure# primitive drbd-nfs ocf:linbit:drbd \
   params drbd_resource="nfs" \
   op monitor interval=15 role=Promoted \
   op monitor interval=30 role=Unpromoted
Create a promotable clone for the drbd-nfs primitive:
crm(live)configure# clone cl-drbd-nfs drbd-nfs \
   meta promotable="true" promoted-max="1" promoted-node-max="1" \
   clone-max="2" clone-node-max="1" notify="true" interleave=true
Commit this configuration:
crm(live)configure# commit
Pacemaker activates the DRBD resources on both nodes and promotes them to the primary role on one of the nodes. Check the state of the cluster with the crm status command, or run drbdadm status.
5.2 Creating file system resources #
Create cluster resources to manage the file systems for export and state tracking:
Create a primitive for the NFS client states on /dev/drbd0:
crm(live)configure# primitive fs-nfs-state Filesystem \
   params device=/dev/drbd0 directory=/var/lib/nfs fstype=ext4
Create a primitive for the file system to be exported on /dev/drbd1:
crm(live)configure# primitive fs-nfs-share Filesystem \
   params device=/dev/drbd1 directory=/srv/nfs/share fstype=ext4
Do not commit this configuration until after you add the colocation and order constraints.
Add both of these resources to a resource group named g-nfs:
crm(live)configure# group g-nfs fs-nfs-state fs-nfs-share
Resources start in the order they are added to the group and stop in reverse order.
Add a colocation constraint to make sure that the resource group always starts on the node where the DRBD promotable clone is in the primary role:
crm(live)configure# colocation col-nfs-on-drbd inf: g-nfs cl-drbd-nfs:Promoted
Add an order constraint to make sure the DRBD promotable clone always starts before the resource group:
crm(live)configure# order o-drbd-before-nfs Mandatory: cl-drbd-nfs:promote g-nfs:start
Commit this configuration:
crm(live)configure# commit
Pacemaker mounts /dev/drbd0 to /var/lib/nfs, and /dev/drbd1 to /srv/nfs/share. Confirm this with mount, or by looking at /proc/mounts.
5.3 Creating an NFS kernel server resource #
Create a cluster resource to manage the NFS server daemon:
Create a primitive to manage the NFS server daemon:
crm(live)configure# primitive nfsserver nfsserver \
   params nfs_server_scope=SUSE nfs_shared_infodir="/var/lib/nfs"
The nfs_server_scope must be the same on all nodes in the cluster that run the NFS server, but this is not set by default. All clusters using SUSE software can use the same scope, so we recommend setting the value to SUSE.
Warning: Low lease time can cause loss of file state
NFS clients regularly renew their state with the NFS server. If the lease time is too low, system or network delays can cause the timer to expire before the renewal is complete. This can lead to I/O errors and loss of file state.
NFSV4LEASETIME is set on the NFS server in the file /etc/sysconfig/nfs. The default is 90 seconds. If lowering the lease time is necessary, we recommend a value of 60 or higher. We strongly discourage values lower than 30.
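For illustration only: if you do decide to lower the lease time to 60 seconds, the corresponding setting in /etc/sysconfig/nfs on the cluster nodes would look like this (the value is an example, not a recommendation to change the default):
NFSV4LEASETIME="60"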
Append this resource to the existing g-nfs resource group:
crm(live)configure# modgroup g-nfs add nfsserver
Commit this configuration:
crm(live)configure# commit
5.4 Creating an NFS export resource #
Create a cluster resource to manage the NFS exports:
Create a primitive for the NFS exports:
crm(live)configure# primitive exportfs-nfs exportfs \
   params directory="/srv/nfs/share" \
   options="rw,mountpoint" clientspec="192.168.1.0/24" fsid=101 \
   op monitor interval=30s timeout=90s
The fsid must be unique for each NFS export resource.
The value of op monitor timeout must be higher than the value of stonith-timeout. To find the stonith-timeout value, run crm configure show and look under the property section.
Important: Do not set wait_for_leasetime_on_stop=true
Setting this option to true in a highly available NFS setup can cause unnecessary delays and loss of locks.
The default value for wait_for_leasetime_on_stop is false. There is no need to set it to true when /var/lib/nfs and nfsserver are configured as described in this guide.
Append this resource to the existing g-nfs resource group:
crm(live)configure# modgroup g-nfs add exportfs-nfs
Commit this configuration:
crm(live)configure# commit
Confirm that the NFS exports are set up properly:
# exportfs -v
/srv/nfs/share IP_ADDRESS_OF_CLIENT(OPTIONS)
5.5 Creating a virtual IP address for NFS exports #
Create a cluster resource to manage the virtual IP address for the NFS exports:
Create a primitive for the virtual IP address:
crm(live)configure# primitive vip-nfs IPaddr2 params ip=192.168.1.11
Append this resource to the existing g-nfs resource group:
crm(live)configure# modgroup g-nfs add vip-nfs
Commit this configuration:
crm(live)configure# commit
Leave the crm interactive shell:
crm(live)configure# quit
Check the status of the cluster. The resources in the g-nfs group should appear in the following order:
# crm status
[...]
Full List of Resources
[...]
  * Resource Group: g-nfs:
    * fs-nfs-state  (ocf:heartbeat:Filesystem):  Started alice
    * fs-nfs-share  (ocf:heartbeat:Filesystem):  Started alice
    * nfsserver     (ocf:heartbeat:nfsserver):   Started alice
    * exportfs-nfs  (ocf:heartbeat:exportfs):    Started alice
    * vip-nfs       (ocf:heartbeat:IPaddr2):     Started alice
6 Using the NFS service #
This section outlines how to use the highly available NFS service from an NFS client.
To connect to the NFS service, make sure to use the virtual IP address to connect to the cluster rather than a physical IP configured on one of the cluster nodes' network interfaces. For compatibility reasons, use the full path of the NFS export on the server.
The command to mount the NFS export looks like this:
# mount 192.168.1.11:/srv/nfs/share /home/share
If you need to configure other mount options, such as a specific transport protocol (proto), maximum read and write request sizes (rsize and wsize), or a specific NFS version (vers), use the -o option. For example:
# mount -o proto=tcp,rsize=32768,wsize=32768,vers=3 \
   192.168.1.11:/srv/nfs/share /home/share
For further NFS mount options, see the nfs man page.
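If clients should mount the export persistently across reboots, an /etc/fstab entry on the client could look like the following sketch (the mount point and options are assumptions; adapt them to your environment):
192.168.1.11:/srv/nfs/share  /home/share  nfs  defaults,_netdev  0 0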
Loopback mounts are only supported for NFS version 3, not NFS version 4. For more information, see https://www.suse.com/support/kb/doc/?id=000018709.
7 Adding more NFS shares to the cluster #
If you need to increase the available storage, you can add more NFS shares to the cluster.
In this example, a new DRBD device named /dev/drbd2 sits on top of an LVM logical volume named /dev/nfs/share2.
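The following is only a sketch of the additional steps, assuming the new share should behave like the existing one (the volume size and all names are examples). First, create the new logical volume and add a third volume to the existing DRBD resource in /etc/drbd.d/nfs.res:
# crm cluster run "lvcreate -n share2 -L 20G nfs"
volume 2 {
   device /dev/drbd2;
   disk /dev/nfs/share2;
   meta-disk internal;
}
Copy the changed configuration to the other node, create the metadata for the new volume and apply the configuration (the exact drbdadm invocations can differ depending on your DRBD version):
# csync2 -xv
# crm cluster run "drbdadm create-md nfs/2"
# crm cluster run "drbdadm adjust nfs"
After the new device is up and synchronized, create a file system on /dev/drbd2 and add Filesystem and exportfs resources for it to the g-nfs group, following the same pattern as in Section 5 and using a unique fsid for the new export.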
8 For more information #
For more details about the steps in this guide, see https://www.suse.com/support/kb/doc/?id=000020396.
For more information about NFS and LVM, see Storage Administration Guide for SUSE Linux Enterprise Server.
For more information about DRBD, see Chapter 21, “DRBD”.
For more information about cluster resources, see Section 6.3, “Cluster resources”.