Applies to SUSE Enterprise Storage 5.5 (SES 5 & SES 5.5)

12 Installation of NFS Ganesha

NFS Ganesha provides NFS access to either the Object Gateway or CephFS. In SUSE Enterprise Storage 5.5, NFS versions 3 and 4 are supported. NFS Ganesha runs in user space instead of kernel space and interacts directly with the Object Gateway or CephFS.

Warning: Cross Protocol Access

Native CephFS and NFS clients are not restricted by file locks obtained via Samba, and vice versa. Applications that rely on cross-protocol file locking may experience data corruption if CephFS-backed Samba share paths are accessed via other means.

12.1 Preparation

12.1.1 General Information

To successfully deploy NFS Ganesha, you need to add a role-ganesha to your /srv/pillar/ceph/proposals/policy.cfg. For details, see Section 4.5.1, “The policy.cfg File”. NFS Ganesha also needs either a role-rgw or a role-mds present in the policy.cfg.
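
For example, the relevant lines in policy.cfg could look like the following sketch. The node name globs (ganesha*, mds*, rgw*) are placeholders for the minion names in your environment:

role-ganesha/cluster/ganesha*.sls
role-mds/cluster/mds*.sls
role-rgw/cluster/rgw*.sls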

Although it is possible to install and run the NFS Ganesha server on an already existing Ceph node, we recommend running it on a dedicated host with access to the Ceph cluster. The client hosts are typically not part of the cluster, but they need to have network access to the NFS Ganesha server.

To enable the NFS Ganesha server at any point after the initial installation, add the role-ganesha to the policy.cfg and re-run at least DeepSea stages 2 and 4. For details, see Section 4.3, “Cluster Deployment”.

NFS Ganesha is configured via the file /etc/ganesha/ganesha.conf that exists on the NFS Ganesha node. However, this file is overwritten each time DeepSea stage 4 is executed. Therefore, we recommend editing the template used by Salt, which is the file /srv/salt/ceph/ganesha/files/ganesha.conf.j2 on the Salt master. For details about the configuration file, see Section 16.2, “Configuration”.
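
As an illustration only, a minimal CephFS EXPORT block in the rendered ganesha.conf could look like the following sketch. The concrete values (Export_Id, Pseudo path, access settings) are assumptions; the actual blocks are generated from the ganesha.conf.j2 template by DeepSea, so consult Section 16.2, “Configuration” before changing them:

EXPORT
{
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL
    {
        Name = CEPH;
    }
}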

12.1.2 Summary of Requirements

The following requirements need to be met before DeepSea stages 2 and 4 can be executed to install NFS Ganesha:

  • At least one node needs to be assigned the role-ganesha.

  • You can define only one role-ganesha per minion.

  • NFS Ganesha needs either an Object Gateway or CephFS to work.

  • If NFS Ganesha is supposed to use the Object Gateway to interface with the cluster, the /srv/pillar/ceph/rgw.sls on the Salt master needs to be populated.

  • Kernel-based NFS needs to be disabled on minions with the role-ganesha role (see the example command after this list).
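
A minimal sketch of checking and disabling kernel-based NFS on such a minion follows; note that the service unit is named nfsserver on SUSE Linux Enterprise, while other distributions use nfs-server:

root # systemctl status nfsserver
root # systemctl disable nfsserver
root # systemctl stop nfsserver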

12.2 Example Installation

This procedure provides an example installation that uses both the Object Gateway and CephFS File System Abstraction Layers (FSALs) of NFS Ganesha.

  1. If you have not done so, execute DeepSea stages 0 and 1 before continuing with this procedure.

    root@master # salt-run state.orch ceph.stage.0
    root@master # salt-run state.orch ceph.stage.1
  2. After having executed stage 1 of DeepSea, edit the /srv/pillar/ceph/proposals/policy.cfg and add the line

    role-ganesha/cluster/NODENAME

    Replace NODENAME with the name of a node in your cluster.

    Also make sure that a role-mds and a role-rgw are assigned.

  3. Create a file with the .yml extension in the /srv/salt/ceph/rgw/users/users.d directory and insert the following content:

    - { uid: "demo", name: "Demo", email: "demo@demo.nil" }
    - { uid: "demo1", name: "Demo1", email: "demo1@demo.nil" }

    These users are later created as Object Gateway users, and API keys are generated. On the Object Gateway node, you can later run radosgw-admin user list to list all created users and radosgw-admin user info --uid=demo to obtain details about a single user (see the example at the end of this step).

    DeepSea makes sure that Object Gateway and NFS Ganesha both receive the credentials of all users listed in the rgw section of the rgw.sls.

    The NFS export uses these user names on the first level of the file system. In this example, the paths /demo and /demo1 would be exported.
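
    For illustration, running the commands on the Object Gateway node might produce output similar to the following; the exact fields and formatting depend on your Ceph release:

    root # radosgw-admin user list
    [
        "demo",
        "demo1"
    ]
    root # radosgw-admin user info --uid=demo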

  4. Execute at least stages 2 and 4 of DeepSea. Running stage 3 in between is recommended.

    root@master # salt-run state.orch ceph.stage.2
    root@master # salt-run state.orch ceph.stage.3 # optional but recommended
    root@master # salt-run state.orch ceph.stage.4
  5. Verify that NFS Ganesha is working by mounting the NFS share from a client node:

    root # mount -o sync -t nfs GANESHA_NODE:/ /mnt
    root # ls /mnt
    cephfs  demo  demo1

    /mnt should contain all exported paths. Directories for CephFS and both Object Gateway users should exist. For each bucket a user owns, a path /mnt/USERNAME/BUCKETNAME would be exported.
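
    For example, if the demo user owned a bucket named mybucket (a hypothetical bucket created earlier via the S3 API), it would appear as follows:

    root # ls /mnt/demo
    mybucket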

12.3 High Availability Active-Passive Configuration

This section provides an example of how to set up a two-node active-passive configuration of NFS Ganesha servers. The setup requires the SUSE Linux Enterprise High Availability Extension. The two nodes are called earth and mars.

For details about SUSE Linux Enterprise High Availability Extension, see https://documentation.suse.com/sle-ha/12-SP5/.

12.3.1 Basic Installation

In this setup earth has the IP address 192.168.1.1 and mars has the address 192.168.1.2.

Additionally, two floating virtual IP addresses are used, allowing clients to connect to the service independent of which physical node it is running on. 192.168.1.10 is used for cluster administration with Hawk2 and 192.168.2.1 is used exclusively for the NFS exports. This makes it easier to apply security restrictions later.

The following procedure describes the example installation. More details can be found at https://documentation.suse.com/sle-ha/12-SP5/single-html/SLE-HA-install-quick/.

  1. Prepare the NFS Ganesha nodes on the Salt master:

    1. Run DeepSea stages 0 and 1.

      root@master # salt-run state.orch ceph.stage.0
      root@master # salt-run state.orch ceph.stage.1
    2. Assign the nodes earth and mars the role-ganesha in the /srv/pillar/ceph/proposals/policy.cfg:

      role-ganesha/cluster/earth*.sls
      role-ganesha/cluster/mars*.sls
    3. Run DeepSea stages 2 to 4.

      root@master # salt-run state.orch ceph.stage.2
      root@master # salt-run state.orch ceph.stage.3
      root@master # salt-run state.orch ceph.stage.4
  2. Register the SUSE Linux Enterprise High Availability Extension on earth and mars.

    root # SUSEConnect -r ACTIVATION_CODE -e E_MAIL
  3. Install ha-cluster-bootstrap on both nodes:

    root # zypper in ha-cluster-bootstrap
    1. Initialize the cluster on earth:

      root@earth # ha-cluster-init
    2. Let mars join the cluster:

      root@mars # ha-cluster-join -c earth
  4. Check the status of the cluster. You should see two nodes added to the cluster:

    root@earth # crm status
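
    The output should resemble the following sketch; details such as version strings and timestamps will differ in your cluster:

        2 nodes configured
        0 resources configured

        Online: [ earth mars ]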
  5. On both nodes, disable the automatic start of the NFS Ganesha service at boot time:

    root # systemctl disable nfs-ganesha
  6. Start the crm shell on earth:

    root@earth # crm configure

    The next commands are executed in the crm shell.

  7. On earth, execute the following commands in the crm shell to configure the resource for the NFS Ganesha daemons as a clone of the systemd resource type:

    crm(live)configure# primitive nfs-ganesha-server systemd:nfs-ganesha \
    op monitor interval=30s
    crm(live)configure# clone nfs-ganesha-clone nfs-ganesha-server meta interleave=true
    crm(live)configure# commit
    crm(live)configure# status
        2 nodes configured
        2 resources configured
    
        Online: [ earth mars ]
    
        Full list of resources:
             Clone Set: nfs-ganesha-clone [nfs-ganesha-server]
             Started:  [ earth mars ]
  8. Create an IPaddr2 primitive with the crm shell:

    crm(live)configure# primitive ganesha-ip IPaddr2 \
    params ip=192.168.2.1 cidr_netmask=24 nic=eth0 \
    op monitor interval=10 timeout=20
    
    crm(live)# status
    Online: [ earth mars  ]
    Full list of resources:
     Clone Set: nfs-ganesha-clone [nfs-ganesha-server]
         Started: [ earth mars ]
     ganesha-ip    (ocf::heartbeat:IPaddr2):    Started earth
  9. To set up a relationship between the NFS Ganesha server and the floating virtual IP, we use colocation and ordering constraints.

    crm(live)configure# colocation ganesha-ip-with-nfs-ganesha-server inf: ganesha-ip nfs-ganesha-clone
    crm(live)configure# order ganesha-ip-after-nfs-ganesha-server Mandatory: nfs-ganesha-clone ganesha-ip
  10. Use the mount command from the client to verify that the cluster setup is complete:

    root # mount -t nfs -v -o sync,nfsvers=4 192.168.2.1:/ /mnt

12.3.2 Clean Up Resources

In the event of an NFS Ganesha failure on one of the nodes, for example earth, fix the issue and clean up the resource. Only after the resource has been cleaned up can it fail back to earth in case NFS Ganesha fails on mars.

To clean up the resource:

root@earth # crm resource cleanup nfs-ganesha-clone earth
root@earth # crm resource cleanup ganesha-ip earth

12.3.3 Setting Up Ping Resource

It may happen that the server is unable to reach the client because of a network issue. A ping resource can detect and mitigate this problem. Configuring this resource is optional.

  1. Define the ping resource:

    crm(live)configure# primitive ganesha-ping ocf:pacemaker:ping \
            params name=ping dampen=3s multiplier=100 host_list="CLIENT1 CLIENT2" \
            op monitor interval=60 timeout=60 \
            op start interval=0 timeout=60 \
            op stop interval=0 timeout=60

    host_list is a list of IP addresses separated by space characters. The IP addresses will be pinged regularly to check for network outages. If a client must always have access to the NFS server, add it to host_list.
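
    For example, with two hypothetical client addresses 192.168.1.101 and 192.168.1.102, the definition from the previous command would read:

    crm(live)configure# primitive ganesha-ping ocf:pacemaker:ping \
            params name=ping dampen=3s multiplier=100 \
            host_list="192.168.1.101 192.168.1.102" \
            op monitor interval=60 timeout=60 \
            op start interval=0 timeout=60 \
            op stop interval=0 timeout=60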

  2. Create a clone:

    crm(live)configure# clone ganesha-ping-clone ganesha-ping \
            meta interleave=true
  3. The following command creates a constraint for the NFS Ganesha service. It forces the service to move to another node when host_list is unreachable.

    crm(live)configure# location nfs-ganesha-server-with-ganesha-ping \
            nfs-ganesha-clone \
            rule -inf: not_defined ping or ping lte 0

12.3.4 Setting Up PortBlock Resource

When a service goes down, the TCP connection used by NFS Ganesha needs to be closed, otherwise it stays open until a system-specific timeout occurs. This timeout can take upwards of 3 minutes.

To shorten this timeout, the TCP connection needs to be reset. We recommend configuring portblock to reset stale TCP connections.

You can use portblock with or without the tickle_dir parameter, which helps unblock and reconnect clients to the new service faster. We recommend placing tickle_dir on a CephFS mount shared between the two HA nodes (where the NFS Ganesha services are running).

Note

Configuring the following resource is optional.

  1. On earth, start the crm shell to configure the portblock resources for the NFS Ganesha daemons:

    root@earth # crm configure
  2. Configure the block action for portblock and omit the tickle_dir option if you have not configured a shared directory:

    crm(live)configure# primitive nfs-ganesha-block ocf:heartbeat:portblock \
    params protocol=tcp portno=2049 action=block ip=192.168.2.1 \
    tickle_dir="/tmp/ganesha/tickle/" \
    op monitor depth="0" timeout="10" interval="10"
  3. Configure the unblock action for portblock and omit the reset_local_on_unblock_stop option if you have not configured a shared directory:

    crm(live)configure# primitive nfs-ganesha-unblock ocf:heartbeat:portblock \
    params protocol=tcp portno=2049 action=unblock ip=192.168.2.1 \
    reset_local_on_unblock_stop=true tickle_dir="/tmp/ganesha/tickle/" \
    op monitor depth="0" timeout="10" interval="10"
  4. Configure the IPaddr2 resource to work with portblock. The edit command opens the existing order constraint in an editor; update it to the line shown after the command:

    crm(live)configure#  colocation ganesha-portblock inf: ganesha-ip nfs-ganesha-block nfs-ganesha-unblock
    crm(live)configure#  edit ganesha-ip-after-nfs-ganesha-server
    order ganesha-ip-after-nfs-ganesha-server Mandatory: nfs-ganesha-block nfs-ganesha-clone ganesha-ip nfs-ganesha-unblock
  5. Save your changes:

    crm(live)configure#  commit
  6. Your configuration should look like this:

    crm(live)configure#  show
    "
    node 1084782956: nfs1
    node 1084783048: nfs2
    primitive ganesha-ip IPaddr2 \
            params ip=192.168.2.1 cidr_netmask=24 nic=eth0 \
            op monitor interval=10 timeout=20
    primitive nfs-ganesha-block portblock \
            params protocol=tcp portno=2049 action=block ip=192.168.2.1 \
            tickle_dir="/tmp/ganesha/tickle/" op monitor timeout=10 interval=10 depth=0
    primitive nfs-ganesha-server systemd:nfs-ganesha \
            op monitor interval=30s
    primitive nfs-ganesha-unblock portblock \
            params protocol=tcp portno=2049 action=unblock ip=192.168.2.1 \
            reset_local_on_unblock_stop=true tickle_dir="/tmp/ganesha/tickle/" \
            op monitor timeout=10 interval=10 depth=0
    clone nfs-ganesha-clone nfs-ganesha-server \
            meta interleave=true
    location cli-prefer-ganesha-ip ganesha-ip role=Started inf: nfs1
    order ganesha-ip-after-nfs-ganesha-server Mandatory: nfs-ganesha-block nfs-ganesha-clone ganesha-ip nfs-ganesha-unblock
    colocation ganesha-ip-with-nfs-ganesha-server inf: ganesha-ip nfs-ganesha-clone
    colocation ganesha-portblock inf: ganesha-ip nfs-ganesha-block nfs-ganesha-unblock
    property cib-bootstrap-options: \
            have-watchdog=false \
            dc-version=1.1.16-6.5.1-77ea74d \
            cluster-infrastructure=corosync \
            cluster-name=hacluster \
            stonith-enabled=false \
            placement-strategy=balanced \
            last-lrm-refresh=1544793779
    rsc_defaults rsc-options: \
            resource-stickiness=1 \
            migration-threshold=3
    op_defaults op-options: \
            timeout=600 \
            record-pending=true
    "

    In this example /tmp/ganesha/ is the CephFS mount on both nodes (nfs1 and nfs2):

    172.16.1.11:6789:/ganesha on /tmp/ganesha type ceph (rw,relatime,name=admin,secret=...hidden...,acl,wsize=16777216)

    The tickle directory was created in advance, for example as sketched below.
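
    Assuming the CephFS mount shown above is already in place on both nodes, creating the tickle directory could look like this; because the mount is shared, running it on one node is enough:

    root@nfs1 # mkdir -p /tmp/ganesha/tickle/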

12.3.5 NFS Ganesha HA and DeepSea

DeepSea does not support configuring NFS Ganesha HA. To prevent DeepSea from failing after NFS Ganesha HA has been configured, exclude starting and stopping the NFS Ganesha service from DeepSea stage 4:

  1. Copy /srv/salt/ceph/ganesha/default.sls to /srv/salt/ceph/ganesha/ha.sls.

  2. Remove the .service entry from /srv/salt/ceph/ganesha/ha.sls so that it looks as follows:

    include:
    - .keyring
    - .install
    - .configure
  3. Add the following line to /srv/pillar/ceph/stack/global.yml:

    ganesha_init: ha

To prevent DeepSea from restarting the NFS Ganesha service during stage 4:

  1. Copy /srv/salt/ceph/stage/ganesha/default.sls to /srv/salt/ceph/stage/ganesha/ha.sls.

  2. Remove the line - ...restart.ganesha.lax from /srv/salt/ceph/stage/ganesha/ha.sls so that it looks as follows:

    include:
      - .migrate
      - .core
  3. Add the following line to /srv/pillar/ceph/stack/global.yml:

    stage_ganesha: ha

12.4 More Information

More information can be found in Chapter 16, NFS Ganesha: Export Ceph Data via NFS.
