Applies to SUSE Linux Enterprise High Availability 15 SP7

26 GFS2

Global File System 2 or GFS2 is a shared disk file system for Linux computer clusters. GFS2 allows all nodes to have direct concurrent access to the same shared block storage. GFS2 has no disconnected operating mode, and no client or server roles. All nodes in a GFS2 cluster function as peers. GFS2 supports up to 32 cluster nodes. Using GFS2 in a cluster requires hardware to allow access to the shared storage, and a lock manager to control access to the storage.

26.1 GFS2 packages and management utilities

To use GFS2, make sure gfs2-utils and a matching gfs2-kmp-* package for your kernel are installed on each node of the cluster.
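
For example, you can install both packages with zypper. The command below assumes the default kernel flavor (gfs2-kmp-default); if you run a different kernel, adjust the package name accordingly:

  # zypper install gfs2-utils gfs2-kmp-default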

The gfs2-utils package provides the following utilities for management of GFS2 volumes. For syntax information, see their man pages.

fsck.gfs2

Checks the file system for errors and optionally repairs errors.

gfs2_jadd

Adds additional journals to a GFS2 file system.

gfs2_grow

Grows a GFS2 file system.

mkfs.gfs2

Creates a GFS2 file system on a device, usually a shared device or partition.

tunegfs2

Allows viewing and manipulating the GFS2 file system parameters such as UUID, label, lockproto and locktable.
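
For example, to view the current superblock parameters of an existing GFS2 volume, you can run tunegfs2 in list mode (replace the device path with the path to your own device):

  # tunegfs2 -l /dev/disk/by-id/DEVICE_ID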

26.2 Configuring GFS2 services and a STONITH resource

Before you can create GFS2 volumes, you must configure DLM and a STONITH resource.

Procedure 26.1: Configuring a STONITH resource
Note
Note: STONITH device needed

You need to configure a fencing device. Without a STONITH mechanism (like external/sbd) in place, the configuration fails.

  1. Start a shell and log in as root or equivalent.

  2. Create an SBD partition as described in Procedure 17.3, “Initializing the SBD devices”.

  3. Run crm configure.

  4. Configure external/sbd as the fencing device:

    crm(live)configure# primitive sbd_stonith stonith:external/sbd \
        params pcmk_delay_max=30 meta target-role="Started"
  5. Review your changes with show.

  6. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.

For details on configuring the resource for DLM, see Section 24.2, “Configuring DLM cluster resources”.
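
For reference, the following is a minimal sketch of such a DLM configuration, using the resource names dlm and g-storage that Procedure 26.4 below refers to. The clone name cl-storage is only illustrative; your actual configuration from Procedure 24.1 might differ in details:

  crm(live)configure# primitive dlm ocf:pacemaker:controld \
      op monitor interval=60 timeout=60
  crm(live)configure# group g-storage dlm
  crm(live)configure# clone cl-storage g-storage \
      meta interleave=true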

26.3 Creating GFS2 volumes

After you have configured DLM and a STONITH resource as described in Section 26.2, “Configuring GFS2 services and a STONITH resource”, configure your system to use GFS2 and create GFS2 volumes.

Note
Note: GFS2 volumes for application and data files

We recommend that you generally store application files and data files on different GFS2 volumes. If your application volumes and data volumes have different requirements for mounting, it is mandatory to store them on different volumes.

Before you begin, prepare the block devices you plan to use for your GFS2 volumes. Leave the devices as free space.

Then create and format the GFS2 volume with the mkfs.gfs2 command, as described in Procedure 26.2, “Creating and formatting a GFS2 volume”. The most important parameters for the command are listed below. For more information and the command syntax, refer to the mkfs.gfs2 man page.

Lock Protocol Name (-p)

The name of the locking protocol to use. Use lock_dlm for shared storage. If you are using GFS2 as a local file system (one node only), you can specify the lock_nolock protocol instead. If this option is not specified, the lock_dlm protocol is assumed.

Lock Table Name (-t)

The lock table field appropriate to the lock module you are using, in the format clustername:fsname. The clustername value must match the one in the cluster configuration file, /etc/corosync/corosync.conf. Only members of this cluster are permitted to use this file system. The fsname value is a unique file system name (1 to 16 characters) used to distinguish this GFS2 file system from others.

Number of Journals (-j)

The number of journals for mkfs.gfs2 to create. You need at least one journal per machine that will mount the file system. If this option is not specified, one journal is created.
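
If you need more journals later, for example after adding nodes to the cluster, you can add them to a mounted file system with gfs2_jadd. A minimal sketch, assuming the volume is mounted at /mnt/shared:

  # gfs2_jadd -j 2 /mnt/shared

This adds two more journals to the mounted file system.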

Procedure 26.2: Creating and formatting a GFS2 volume

Execute the following steps only on one of the cluster nodes.

  1. Open a terminal window and log in as root.

  2. Check if the cluster is online with the command crm status.

  3. Create and format the volume using the mkfs.gfs2 utility. For information about the syntax for this command, refer to the mkfs.gfs2 man page.

    For example, to create a new GFS2 file system that supports up to 32 cluster nodes, use the following command:

    # mkfs.gfs2 -t CLUSTERNAME:FSNAME -p lock_dlm -j 32 /dev/disk/by-id/DEVICE_ID

    CLUSTERNAME must be the same as the entry cluster_name in the file /etc/corosync/corosync.conf. The default is hacluster.
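
    To check the cluster name that is configured for your cluster, you can inspect this file. For example:

    # grep cluster_name /etc/corosync/corosync.conf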

    FSNAME is used to identify this file system and must therefore be unique.

    Always use a stable device name for devices shared between cluster nodes.

26.4 Mounting GFS2 volumes

You can mount a GFS2 volume either manually or with the cluster manager, as described in Procedure 26.4, “Mounting a GFS2 volume with the cluster manager”.

To mount multiple GFS2 volumes, see Procedure 26.5, “Mounting multiple GFS2 volumes with the cluster resource manager”.

Procedure 26.3: Manually mounting a GFS2 volume
  1. Open a terminal window and log in as root.

  2. Check if the cluster is online with the command crm status.

  3. Mount the volume from the command line, using the mount command.
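
    For example, a minimal sketch assuming the GFS2 volume was created on /dev/disk/by-id/DEVICE_ID and the mount point /mnt/shared already exists on this node:

    # mount -t gfs2 /dev/disk/by-id/DEVICE_ID /mnt/shared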

Warning
Warning: Manually mounted GFS2 devices

If you mount the GFS2 file system manually for testing purposes, make sure to unmount it again before starting to use it via cluster resources.

Procedure 26.4: Mounting a GFS2 volume with the cluster manager

To mount a GFS2 volume with the High Availability software, configure a file system resource in the cluster. The following procedure uses the crm shell to configure the cluster resources. Alternatively, you can use Hawk2 to configure the resources as described in Section 26.5, “Configuring GFS2 resources with Hawk2”.

  1. Start a shell and log in as root or equivalent.

  2. Run crm configure.

  3. Configure Pacemaker to mount the GFS2 file system on every node in the cluster:

    crm(live)configure# primitive gfs2-1 ocf:heartbeat:Filesystem \
      params device="/dev/disk/by-id/DEVICE_ID" directory="/mnt/shared" fstype="gfs2" \
      op monitor interval="20" timeout="40" \
      op start timeout="60" op stop timeout="60" \
      meta target-role="Started"
  4. Add the gfs2-1 primitive to the g-storage group you created in Procedure 24.1, “Configuring a base group for DLM”.

    crm(live)configure# modgroup g-storage add gfs2-1

    Because of the base group's internal colocation and ordering, the gfs2-1 resource can only start on nodes that also have a dlm resource already running.

    Important
    Important: Do not use a group for multiple GFS2 resources

    Adding multiple GFS2 resources to a group creates a dependency between the GFS2 volumes. For example, if you created a group with crm configure group g-storage dlm gfs2-1 gfs2-2, then stopping gfs2-1 also stops gfs2-2, and starting gfs2-2 also starts gfs2-1.

    To use multiple GFS2 resources in the cluster, use colocation and order constraints as described in Procedure 26.5, “Mounting multiple GFS2 volumes with the cluster resource manager”.

  5. Review your changes with show.

  6. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.
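
After the changes are committed, you can verify on each node that the volume is mounted at the configured directory. For example, assuming the mount point /mnt/shared used above:

  # df -h /mnt/shared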

Procedure 26.5: Mounting multiple GFS2 volumes with the cluster resource manager

To mount multiple GFS2 volumes in the cluster, configure a file system resource for each volume, and colocate them with the dlm resource you created in Procedure 24.2, “Configuring an independent DLM resource”.

Important

Do not add multiple GFS2 resources to a group with DLM. This creates a dependency between the GFS2 volumes. For example, if gfs2-1 and gfs2-2 are in the same group, then stopping gfs2-1 also stops gfs2-2.

  1. Log in to a node as root or equivalent.

  2. Run crm configure.

  3. Create the primitive for the first GFS2 volume:

    crm(live)configure# primitive gfs2-1 Filesystem \
      params directory="/srv/gfs2-1" fstype=gfs2 device="/dev/disk/by-id/DEVICE_ID1" \
      op monitor interval=20 timeout=40 \
      op start timeout=60 interval=0 \
      op stop timeout=60 interval=0
  4. Create the primitive for the second GFS2 volume:

    crm(live)configure# primitive gfs2-2 Filesystem \
      params directory="/srv/gfs2-2" fstype=gfs2 device="/dev/disk/by-id/DEVICE_ID2" \
      op monitor interval=20 timeout=40 \
      op start timeout=60 interval=0 \
      op stop timeout=60 interval=0
  5. Clone the GFS2 resources so that they can run on all nodes:

    crm(live)configure# clone cl-gfs2-1 gfs2-1 meta interleave=true
    crm(live)configure# clone cl-gfs2-2 gfs2-2 meta interleave=true
  6. Add a colocation constraint for both GFS2 resources so that they can only run on nodes where DLM is also running:

    crm(live)configure# colocation col-gfs2-with-dlm inf: ( cl-gfs2-1 cl-gfs2-2 ) cl-dlm
  7. Add an order constraint for both GFS2 resources so that they can only start after DLM is already running:

    crm(live)configure# order o-dlm-before-gfs2 Mandatory: cl-dlm ( cl-gfs2-1 cl-gfs2-2 )
  8. Review your changes with show.

  9. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.
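
After committing the changes, you can check that both clones are running on all online nodes, for example with:

  # crm status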

26.5 Configuring GFS2 resources with Hawk2

Instead of configuring the DLM and the file system resource for GFS2 manually with the crm shell, you can also use the GFS2 template in Hawk2's Setup Wizard.

Important
Important: Differences between manual configuration and Hawk2

The GFS2 template in the Setup Wizard does not include the configuration of a STONITH resource. If you use the wizard, you still need to create an SBD device on the shared storage and configure a STONITH resource as described in Procedure 26.1, “Configuring a STONITH resource”.

Using the GFS2 template in the Hawk2 Setup Wizard also leads to a slightly different resource configuration than the manual configuration described in Procedure 24.1, “Configuring a base group for DLM” and Procedure 26.4, “Mounting a GFS2 volume with the cluster manager”.

Procedure 26.6: Configuring GFS2 resources with Hawk2's Wizard
  1. Log in to Hawk2:

    https://HAWKSERVER:7630/
  2. In the left navigation bar, select Configuration › Wizards.

  3. Expand the File System category and select GFS2 File System (Cloned).

  4. Follow the instructions on the screen. If you need information about an option, click it to display a short help text in Hawk2. After the last configuration step, Verify the values you have entered.

    The wizard displays the configuration snippet that will be applied to the CIB and any additional changes, if required.

    Figure 26.1: Hawk2 summary screen of GFS2 CIB changes
  5. Check the proposed changes. If everything is according to your wishes, apply the changes.

    A message on the screen shows if the action has been successful.

26.6 Migrating from OCFS2 to GFS2

OCFS2 is deprecated in SUSE Linux Enterprise High Availability 15 SP7. It will not be supported in future releases.

Note
Note: No reflink support in GFS2

Unlike OCFS2, GFS2 does not support the reflink feature.

This procedure shows one method of migrating from OCFS2 to GFS2. It assumes you have a single OCFS2 volume that is part of the g-storage group.

The steps for preparing new block storage and backing up data depend on your specific setup. See the relevant documentation if you need more details.

You only need to perform this procedure on one of the cluster nodes.

Warning
Warning: Test this procedure first

Thoroughly test this procedure in a test environment before performing it in a production environment.

Procedure 26.7: Migrating from OCFS2 to GFS2
  1. Prepare a block device for the GFS2 volume.

    To check the disk space required, run df -h on the OCFS2 mount point. For example:

    # df -h /mnt/shared/
    Filesystem     Size  Used Avail Use% Mounted on
    /dev/sdb        10G  2.3G  7.8G  23% /mnt/shared

    Make a note of the disk name under Filesystem. This will be useful later to help check if the migration worked.

    Note
    Note: OCFS2 disk usage

    Because some OCFS2 system files can hold disk space instead of returning it to the global bitmap file, the actual disk usage might be less than the amount shown in the df -h output.

  2. Install the GFS2 packages on all nodes in the cluster. You can do this on all nodes at once with the following command:

    # crm cluster run "zypper install -y gfs2-utils gfs2-kmp-default"
  3. Create and format the GFS2 volume using the mkfs.gfs2 utility. For information about the syntax for this command, refer to the mkfs.gfs2 man page.

    Tip
    Tip: Key differences between mkfs.ocfs2 and mkfs.gfs2
    • OCFS2 uses -C to specify the cluster size and -b to specify the block size. GFS2 also specifies the block size with -b, but has no cluster size setting, so it does not use -C.

    • OCFS2 specifies the number of node slots with -N. GFS2 specifies the number of journals with -j (at least one journal per node that mounts the file system).

    For example, to create a new GFS2 file system that supports up to 32 cluster nodes, use the following command:

    # mkfs.gfs2 -t CLUSTERNAME:FSNAME -p lock_dlm -j 32 /dev/disk/by-id/DEVICE_ID

    CLUSTERNAME must be the same as the entry cluster_name in the file /etc/corosync/corosync.conf. The default is hacluster.

    FSNAME is used to identify this file system and must therefore be unique.

    Always use a stable device name for devices shared between cluster nodes.

  4. Put the cluster into maintenance mode:

    # crm maintenance on
  5. Back up the OCFS2 volume's data.
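
    For example, a minimal sketch using rsync, assuming the OCFS2 volume is mounted at /mnt/shared and /backup/ocfs2-data is a hypothetical local directory with enough free space (adapt this to your own backup procedure):

    # rsync -a /mnt/shared/ /backup/ocfs2-data/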

  6. Start the crm shell in interactive mode:

    # crm configure
  7. Delete the OCFS2 resource:

    crm(live)configure# delete ocfs2-1
  8. Configure Pacemaker to mount the GFS2 file system on every node in the cluster, using the same mount point that was used for the OCFS2 file system:

    crm(live)configure# primitive gfs2-1 ocf:heartbeat:Filesystem \
    params device="/dev/disk/by-id/DEVICE_ID" directory="/mnt/shared" fstype="gfs2" \
    op monitor interval="20" timeout="40" \
    op start timeout="60" op stop timeout="60" \
    meta target-role="Started"
  9. Add the GFS2 primitive to the g-storage group:

    crm(live)configure# modgroup g-storage add gfs2-1
  10. Review your changes with show.

  11. If everything is correct, submit your changes with commit and leave the crm live configuration with quit.

  12. Take the cluster out of maintenance mode:

    # crm maintenance off
  13. Check the status of the cluster, with expanded details about the group g-storage:

    # crm status detail

    The group should now include the primitive resource gfs2-1.

  14. Run df -h on the mount point to make sure the disk name changed:

    # df -h /mnt/shared/
    Filesystem     Size  Used Avail Use% Mounted on
    /dev/sdc        10G  290M  9.8G   3% /mnt/shared

    If the output shows the wrong disk, the new gfs2-1 resource might be restarting. This issue should resolve itself if you wait a short time and then run the command again.

  15. Restore the data from the backup to the GFS2 volume.
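
    For example, if you backed up the data with rsync as sketched in step 5, a minimal restore could look like this (hypothetical paths):

    # rsync -a /backup/ocfs2-data/ /mnt/shared/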

    Note
    Note: GFS2 disk usage

    Even after restoring the data, the GFS2 volume might not use as much disk space as the OCFS2 volume.

  16. To make sure that the data appears correctly, check the contents of the mount point. For example:

    # ls -l /mnt/shared/

    You can also run this command on other nodes to make sure the data is being shared correctly.

  17. If required, you can now remove the OCFS2 disk.
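
    If you intend to reuse the old device, you can clear its OCFS2 signature first, for example with wipefs. This is only a sketch with a placeholder device path, and it irreversibly erases the file system signatures on that device, so double-check the path before running it:

    # wipefs -a /dev/disk/by-id/OLD_DEVICE_ID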
