Applies to SUSE Linux Enterprise High Availability 12 SP5

17 GFS2

Global File System 2 (GFS2) is a shared disk file system for Linux computer clusters. GFS2 allows all nodes to have direct concurrent access to the same shared block storage. GFS2 has no disconnected operating mode, and no client or server roles; all nodes in a GFS2 cluster function as peers. GFS2 supports up to 32 cluster nodes. Using GFS2 in a cluster requires hardware that allows access to the shared storage, and a lock manager to control access to the storage.

SUSE recommends OCFS2 over GFS2 for your cluster environments if performance is one of your major requirements. Our tests have shown that OCFS2 performs better than GFS2 in such settings.

Important
Important: GFS2 support

SUSE only supports GFS2 in read-only mode. Write operations are not supported.

17.1 GFS2 Packages and Management Utilities

To use GFS2, make sure gfs2-utils and a matching gfs2-kmp-* package for your Kernel are installed on each node of the cluster.
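For example, the packages can be installed with Zypper. The gfs2-kmp-default package name below assumes the default Kernel flavor; adjust it if your nodes run a different Kernel flavor:

    # zypper install gfs2-utils gfs2-kmp-default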

The gfs2-utils package provides the following utilities for management of GFS2 volumes. For syntax information, see their man pages.

Table 17.1: GFS2 Utilities

fsck.gfs2: Checks the file system for errors and optionally repairs errors.

gfs2_jadd: Adds additional journals to a GFS2 file system.

gfs2_grow: Grows a GFS2 file system.

mkfs.gfs2: Creates a GFS2 file system on a device, usually a shared device or partition.

tunegfs2: Allows viewing and manipulating GFS2 file system parameters such as UUID, label, lockproto and locktable.
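For example, to view the current parameters of an existing GFS2 file system, you can run tunegfs2 with the -l option. The device path below is a placeholder; use the path of your GFS2 device:

    # tunegfs2 -l /dev/disk/by-id/DEVICE_ID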

17.2 Configuring GFS2 Services and a STONITH Resource

Before you can create GFS2 volumes, you must configure DLM and a STONITH resource.

Procedure 17.1: Configuring a STONITH Resource
Note
Note: STONITH Device Needed

You need to configure a fencing device. Without a STONITH mechanism (like external/sbd) in place, the configuration will fail.

  1. Start a shell and log in as root or equivalent.

  2. Create an SBD partition as described in Procedure 10.3, “Initializing the SBD Devices”.

  3. Run crm configure.

  4. Configure external/sbd as the fencing device:

    crm(live)configure# primitive sbd_stonith stonith:external/sbd \
        params pcmk_delay_max=30 meta target-role="Started"
  5. Review your changes with show.

  6. If everything is correct, submit your changes with commit and leave the crm live configuration with exit.

For details on configuring the resource group for DLM, see Procedure 15.1, “Configuring a Base Group for DLM”.
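As a minimal sketch (the resource name and operation timeouts are only examples; refer to Procedure 15.1 for the authoritative configuration), the DLM primitive is typically based on the ocf:pacemaker:controld resource agent:

    crm(live)configure# primitive dlm ocf:pacemaker:controld \
        op monitor interval="60" timeout="60"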

17.3 Creating GFS2 Volumes

After you have configured DLM as a cluster resource as described in Section 17.2, “Configuring GFS2 Services and a STONITH Resource”, configure your system to use GFS2 and create GFS2 volumes.

Note
Note: GFS2 Volumes for Application and Data Files

We recommend that you generally store application files and data files on different GFS2 volumes. If your application volumes and data volumes have different requirements for mounting, it is mandatory to store them on different volumes.

Before you begin, prepare the block devices you plan to use for your GFS2 volumes. Leave the devices as free space.

Then create and format the GFS2 volume with the mkfs.gfs2 utility as described in Procedure 17.2, “Creating and Formatting a GFS2 Volume”. The most important parameters for the command are listed in Table 17.2, “Important GFS2 Parameters”. For more information and the command syntax, refer to the mkfs.gfs2 man page.

Table 17.2: Important GFS2 Parameters

Lock Protocol Name (-p): The name of the locking protocol to use. Acceptable locking protocols are lock_dlm (for shared storage) and lock_nolock (for using GFS2 as a local file system on a single node only). If this option is not specified, the lock_dlm protocol is assumed.

Lock Table Name (-t): The lock table field appropriate to the lock module you are using, in the form clustername:fsname. clustername must match the cluster name in the cluster configuration file, /etc/corosync/corosync.conf. Only members of this cluster are permitted to use this file system. fsname is a unique file system name (1 to 16 characters) used to distinguish this GFS2 file system from others.

Number of Journals (-j): The number of journals for mkfs.gfs2 to create. You need at least one journal per machine that will mount the file system. If this option is not specified, one journal is created.
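If you need more journals later, for example after adding nodes to the cluster, you can add them to a mounted GFS2 file system with gfs2_jadd. The following is a minimal sketch; it assumes the file system is mounted at /mnt/shared as in the examples below and adds two journals:

    # gfs2_jadd -j 2 /mnt/shared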

Procedure 17.2: Creating and Formatting a GFS2 Volume

Execute the following steps only on one of the cluster nodes.

  1. Open a terminal window and log in as root.

  2. Check if the cluster is online with the command crm status.

  3. Create and format the volume using the mkfs.gfs2 utility. For information about the syntax for this command, refer to the mkfs.gfs2 man page.

    For example, to create a new GFS2 file system that supports up to 32 cluster nodes, use the following command:

    # mkfs.gfs2 -t hacluster:mygfs2 -p lock_dlm -j 32 /dev/disk/by-id/DEVICE_ID

    The hacluster name must match the cluster_name entry in the file /etc/corosync/corosync.conf (hacluster is the default).
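    For example, the relevant totem section of /etc/corosync/corosync.conf may look like the following; the cluster name hacluster is the default and may differ in your setup:

    totem {
        # Other totem options omitted; only cluster_name is relevant here
        cluster_name: hacluster
    }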

17.4 Mounting GFS2 Volumes

You can either mount a GFS2 volume manually or with the cluster manager, as described in Procedure 17.4, “Mounting a GFS2 Volume with the Cluster Manager”.

Procedure 17.3: Manually Mounting a GFS2 Volume
  1. Open a terminal window and log in as root.

  2. Check if the cluster is online with the command crm status.

  3. Mount the volume from the command line, using the mount command.
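    For example, assuming the device and mount point used elsewhere in this chapter (adjust both to your setup):

    # mount -t gfs2 /dev/disk/by-id/DEVICE_ID /mnt/shared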

Warning
Warning: Manually Mounted GFS2 Devices

If you mount the GFS2 file system manually for testing purposes, make sure to unmount it again before you start using it by means of cluster resources.
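For example, to unmount a manually mounted volume again (assuming the mount point from the example above):

    # umount /mnt/shared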

Procedure 17.4: Mounting a GFS2 Volume with the Cluster Manager

To mount a GFS2 volume with the High Availability software, configure an OCF file system resource in the cluster. The following procedure uses the crm shell to configure the cluster resources. Alternatively, you can use Hawk2 to configure the resources.

  1. Start a shell and log in as root or equivalent.

  2. Run crm configure.

  3. Configure Pacemaker to mount the GFS2 file system on every node in the cluster:

    crm(live)configure# primitive gfs2-1 ocf:heartbeat:Filesystem \
      params device="/dev/disk/by-id/DEVICE_ID" directory="/mnt/shared" fstype="gfs2" \
      op monitor interval="20" timeout="40" \
      op start timeout="60" op stop timeout="60" \
      meta target-role="Stopped"
  4. Create a base group that consists of the dlm primitive you created in Procedure 15.1, “Configuring a Base Group for DLM” and the gfs2-1 primitive. Clone the group:

    crm(live)configure# group g-storage dlm gfs2-1
    crm(live)configure# clone cl-storage g-storage \
         meta interleave="true"

    Because of the base group's internal colocation and ordering, Pacemaker will only start the gfs2-1 resource on nodes that also have a dlm resource already running.

  5. Review your changes with show.

  6. If everything is correct, submit your changes with commit and leave the crm live configuration with exit.
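The gfs2-1 primitive above is created with target-role="Stopped", so the file system is not mounted yet. When you are ready to mount the GFS2 volume on the cluster nodes, start the resource, for example:

    # crm resource start gfs2-1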