Applies to SUSE Enterprise Storage 5.5 (SES 5 & SES 5.5)

16 NFS Ganesha: Export Ceph Data via NFS

NFS Ganesha is an NFS server (refer to Sharing File Systems with NFS) that runs in a user address space instead of as part of the operating system kernel. With NFS Ganesha, you can plug in your own storage mechanism, such as Ceph, and access it from any NFS client.

S3 buckets are exported to NFS on a per-user basis, for example via the path GANESHA_NODE:/USERNAME/BUCKETNAME.

A CephFS is exported by default via the path GANESHA_NODE:/cephfs.
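
For example, on a client the two kinds of exports could be mounted as follows. This is a minimal sketch: ganesha.example.com, demo, and bucket1 are hypothetical names for the gateway host, an Object Gateway user, and one of that user's buckets, and the mount points are assumed to exist.

root # mount -t nfs ganesha.example.com:/cephfs /mnt/cephfs
root # mount -t nfs ganesha.example.com:/demo/bucket1 /mnt/bucket1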

Note: NFS Ganesha Performance

Due to increased protocol overhead and additional latency caused by extra network hops between the client and the storage, accessing Ceph via an NFS Gateway may significantly reduce application performance when compared to native CephFS or Object Gateway clients.

16.1 Installation

For installation instructions, see Chapter 12, Installation of NFS Ganesha.

16.2 Configuration

For a list of all parameters available within the configuration file, see:

  • man ganesha-config

  • man ganesha-ceph-config for CephFS File System Abstraction Layer (FSAL) options.

  • man ganesha-rgw-config for Object Gateway FSAL options.

This section includes information to help you configure the NFS Ganesha server to export the cluster data accessible via Object Gateway and CephFS.

NFS Ganesha configuration is controlled by /etc/ganesha/ganesha.conf. Note that changes to this file are overwritten when DeepSea Stage 4 is executed. To persistently change the settings, edit the file /srv/salt/ceph/ganesha/files/ganesha.conf.j2 located on the Salt master.
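
After editing the template, rerun the deployment stage so that the change is propagated to the NFS Ganesha nodes. A minimal sketch, using the standard DeepSea orchestration on the Salt master:

root # salt-run state.orch ceph.stage.4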

16.2.1 Export Section

This section describes how to configure the EXPORT sections of the ganesha.conf file.

EXPORT
{
  Export_Id = 1;
  Path = "/";
  Pseudo = "/";
  Access_Type = RW;
  Squash = No_Root_Squash;
  [...]
  FSAL {
    Name = CEPH;
  }
}

16.2.1.1 Export Main Section

Export_Id

Each export needs to have a unique 'Export_Id' (mandatory).

Path

Export path in the related CephFS pool (mandatory). This allows subdirectories to be exported from the CephFS.

Pseudo

Target NFS export path (mandatory for NFSv4). It defines under which NFS export path the exported data is available.

Example: with the value /cephfs/ and after executing

root # mount GANESHA_IP:/cephfs/ /mnt/

the CephFS data is available in the directory /mnt/cephfs/ on the client.

Access_Type

'RO' for read-only access, 'RW' for read-write access, and 'None' for no access.

Tip: Limit Access to Clients

If you leave Access_Type = RW in the main EXPORT section and limit access to a specific client in the CLIENT section, other clients will still be able to connect. To disable access for all clients and enable access for specific clients only, set Access_Type = None in the EXPORT section and then specify a less restrictive access mode for one or more clients in the CLIENT section:

EXPORT {
  Access_Type = None;
  [...]

  FSAL {
    [...]
  }

  CLIENT {
    Clients = 192.168.124.9;
    Access_Type = RW;
    [...]
  }
}
Squash

NFS squash option.

FSAL

The 'File System Abstraction Layer' (FSAL) used for the export. See Section 16.2.1.2, “FSAL Subsection”.

16.2.1.2 FSAL Subsection

EXPORT
{
  [...]
  FSAL {
    Name = CEPH;
  }
}
Name

Defines which back-end NFS Ganesha uses. Allowed values are CEPH for CephFS or RGW for Object Gateway. Depending on the choice, a role-mds or role-rgw must be defined in the policy.cfg.
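
For comparison, an Object Gateway export differs mainly in the FSAL name. The following is a sketch that reuses the options introduced above with hypothetical values for Export_Id, Path, and Pseudo; additional Object Gateway specific FSAL options are described in man ganesha-rgw-config (see Section 16.2).

EXPORT
{
  Export_Id = 2;
  Path = "/";
  Pseudo = "/rgw";
  Access_Type = RW;
  Squash = No_Root_Squash;
  FSAL {
    Name = RGW;
  }
}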

16.2.2 RGW Section

RGW {
  ceph_conf = "/etc/ceph/ceph.conf";
  name = "name";
  cluster = "ceph";
}
ceph_conf

Points to the ceph.conf file. When deploying with DeepSea, it is not necessary to change this value.

name

The name of the Ceph client user used by NFS Ganesha.

cluster

Name of the Ceph cluster. SUSE Enterprise Storage 5.5 currently only supports one cluster name, which is ceph by default.

16.2.3 Changing Default NFS Ganesha Ports

By default, NFS Ganesha uses port 2049 for NFS and port 875 for rquota support. To change the default port numbers, use the NFS_Port and RQUOTA_Port options inside the NFS_CORE_PARAM section, for example:

NFS_CORE_PARAM
{
 NFS_Port = 2060;
 RQUOTA_Port = 876;
}
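
Restart the nfs-ganesha service for the new port numbers to take effect (see Section 16.4, “Starting or Restarting NFS Ganesha”). Afterward, you can check which ports the server is listening on, for example with:

root # ss -tlnp | grep ganesha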

16.3 Custom NFS Ganesha Roles

Custom NFS Ganesha roles for cluster nodes can be defined. These roles are then assigned to nodes in the policy.cfg. The roles allow for:

  • Separated NFS Ganesha nodes for accessing Object Gateway and CephFS.

  • Assigning different Object Gateway users to NFS Ganesha nodes.

Having different Object Gateway users enables NFS Ganesha nodes to access different S3 buckets. S3 buckets can be used for access control. Note: S3 buckets are not to be confused with Ceph buckets used in the CRUSH Map.

16.3.1 Different Object Gateway Users for NFS Ganesha

The following example procedure for the Salt master shows how to create two NFS Ganesha roles with different Object Gateway users. In this example, the roles gold and silver are used, for which DeepSea already provides example configuration files.

  1. Open the file /srv/pillar/ceph/stack/global.yml with the editor of your choice. Create the file if it does not exist.

  2. The file needs to contain the following lines:

    rgw_configurations:
      - rgw
      - silver
      - gold
    ganesha_configurations:
      - silver
      - gold

    These roles can later be assigned in the policy.cfg.

  3. Create a file /srv/salt/ceph/rgw/users/users.d/gold.yml and add the following content:

    - { uid: "gold1", name: "gold1", email: "gold1@demo.nil" }

    Create a file /srv/salt/ceph/rgw/users/users.d/silver.yml and add the following content:

    - { uid: "silver1", name: "silver1", email: "silver1@demo.nil" }
  4. Now, templates for ganesha.conf need to be created for each role. The original DeepSea template is a good starting point. Create two copies:

    root # cd /srv/salt/ceph/ganesha/files/
    root # cp ganesha.conf.j2 silver.conf.j2
    root # cp ganesha.conf.j2 gold.conf.j2
  5. The new roles require keyrings to access the cluster. To provide access, copy the ganesha.j2:

    root # cp ganesha.j2 silver.j2
    root # cp ganesha.j2 gold.j2
  6. Copy the keyring for the Object Gateway:

    root # cd /srv/salt/ceph/rgw/files/
    root # cp rgw.j2 silver.j2
    root # cp rgw.j2 gold.j2
  7. Object Gateway also needs the configuration for the different roles:

    root # cd /srv/salt/ceph/configuration/files/
    root # cp ceph.conf.rgw silver.conf
    root # cp ceph.conf.rgw gold.conf
  8. Assign the newly created roles to cluster nodes in the /srv/pillar/ceph/proposals/policy.cfg:

    role-silver/cluster/NODE1.sls
    role-gold/cluster/NODE2.sls

    Replace NODE1 and NODE2 with the names of the nodes to which you want to assign the roles.

  9. Execute DeepSea Stages 0 to 4.
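
    For example, using the standard DeepSea orchestration, the stages can be run one after the other from the Salt master:

    root # salt-run state.orch ceph.stage.0
    root # salt-run state.orch ceph.stage.1
    root # salt-run state.orch ceph.stage.2
    root # salt-run state.orch ceph.stage.3
    root # salt-run state.orch ceph.stage.4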

16.3.2 Separating CephFS and Object Gateway FSAL

The following example procedure for the Salt master shows how to create two new roles that use CephFS and Object Gateway separately:

  1. Open the file /srv/pillar/ceph/rgw.sls with the editor of your choice. Create the file if it does not exist.

  2. The file needs to contain the following lines:

    rgw_configurations:
      ganesha_cfs:
        users:
          - { uid: "demo", name: "Demo", email: "demo@demo.nil" }
      ganesha_rgw:
        users:
          - { uid: "demo", name: "Demo", email: "demo@demo.nil" }
    
    ganesha_configurations:
      - ganesha_cfs
      - ganesha_rgw

    These roles can later be assigned in the policy.cfg.

  3. Now, templates for ganesha.conf need to be created for each role. The original DeepSea template is a good starting point. Create two copies:

    root # cd /srv/salt/ceph/ganesha/files/
    root # cp ganesha.conf.j2 ganesha_rgw.conf.j2
    root # cp ganesha.conf.j2 ganesha_cfs.conf.j2
  4. Edit the ganesha_rgw.conf.j2 and remove the section:

    {% if salt.saltutil.runner('select.minions', cluster='ceph', roles='mds') != [] %}
            [...]
    {% endif %}
  5. Edit the ganesha_cfs.conf.j2 and remove the section:

    {% if salt.saltutil.runner('select.minions', cluster='ceph', roles=role) != [] %}
            [...]
    {% endif %}
  6. The new roles require keyrings to access the cluster. To provide access, copy the ganesha.j2:

    root # cp ganesha.j2 ganesha_rgw.j2
    root # cp ganesha.j2 ganesha_cfs.j2

    The line caps mds = "allow *" can be removed from the ganesha_rgw.j2.

  7. Copy the keyring for the Object Gateway:

    root # cp /srv/salt/ceph/rgw/files/rgw.j2 \
    /srv/salt/ceph/rgw/files/ganesha_rgw.j2
  8. Object Gateway needs the configuration for the new role:

    root # cp /srv/salt/ceph/configuration/files/ceph.conf.rgw \
    /srv/salt/ceph/configuration/files/ceph.conf.ganesha_rgw
  9. Assign the newly created roles to cluster nodes in the /srv/pillar/ceph/proposals/policy.cfg:

    role-ganesha_rgw/cluster/NODE1.sls
    role-ganesha_cfs/cluster/NODE2.sls

    Replace NODE1 and NODE2 with the names of the nodes to which you want to assign the roles.

  10. Execute DeepSea Stages 0 to 4.
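
    After the stages have completed, you can check from the Salt master whether the NFS Ganesha service is running on an assigned node. A minimal sketch using the Salt service module and the NODE1 placeholder from the previous step:

    root # salt 'NODE1*' service.status nfs-ganesha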

16.3.3 Supported Operations

The RGW NFS interface supports most operations on files and directories, with the following restrictions:

  • Links including symbolic links are not supported.

  • NFS access control lists (ACLs) are not supported. Unix user and group ownership and permissions are supported.

  • Directories may not be moved or renamed. You may move files between directories.

  • Only full, sequential write I/O is supported. Therefore, write operations are effectively forced to be whole-object uploads. Many typical I/O operations, such as editing files in place, necessarily fail because they perform non-sequential stores. Some file utilities appear to write sequentially (for example, some versions of GNU tar) but may still fail due to infrequent non-sequential stores. When mounting via NFS, an application's sequential I/O can generally be forced into sequential writes to the NFS server by mounting synchronously (the -o sync option). NFS clients that cannot mount synchronously (for example, Microsoft Windows*) will not be able to upload files.

  • NFS RGW supports read-write operations only for block sizes smaller than 4 MB.
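
For example, copying a whole file onto a synchronously mounted Object Gateway export is a sequential upload and can succeed, whereas editing a file in place on the export fails. A sketch with hypothetical host, user, and bucket names:

root # mount -t nfs -o rw,sync ganesha.example.com:/demo/bucket1 /mnt/bucket1
root # cp /tmp/report.pdf /mnt/bucket1/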

16.4 Starting or Restarting NFS Ganesha

To enable and start the NFS Ganesha service, run:

root # systemctl enable nfs-ganesha
root # systemctl start nfs-ganesha

Restart NFS Ganesha with:

root # systemctl restart nfs-ganesha
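
To check the current state of the service, run:

root # systemctl status nfs-ganesha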

When NFS Ganesha is started or restarted, it applies a grace timeout of 90 seconds for NFS v4. During the grace period, new requests from clients are actively rejected. Hence, clients may experience slow responses while NFS is in the grace state.

16.5 Setting the Log Level

You can change the default debug level NIV_EVENT by editing the file /etc/sysconfig/nfs-ganesha. Replace NIV_EVENT with NIV_DEBUG or NIV_FULL_DEBUG. Note that increasing the log verbosity can produce large amounts of data in the log files.

OPTIONS="-L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT"
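
For example, to raise the verbosity to NIV_DEBUG, the line would read:

OPTIONS="-L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_DEBUG"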

A restart of the service is required when changing the log level.

Note

NFS Ganesha uses Ceph client libraries to connect to the Ceph cluster. By default, client libraries do not log errors or any other output. To see more details about NFS Ganesha interacting with the Ceph cluster (for example, details about connection issues), logging needs to be explicitly enabled in the ceph.conf configuration file under the [client] section. For example:

[client]
	log_file = "/var/log/ceph/ceph-client.log"

16.6 Verifying the Exported NFS Share

When using NFS v3, you can verify whether the NFS shares are exported on the NFS Ganesha server node:

root # showmount -e
/ (everything)

16.7 Mounting the Exported NFS Share

To mount the exported NFS share (as configured in Section 16.2, “Configuration”) on a client host, run:

root # mount -t nfs -o rw,noatime,sync \
 nfs_ganesha_server_hostname:/ /path/to/local/mountpoint
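
To confirm that the share is mounted, you can for example inspect the mount point with findmnt:

root # findmnt /path/to/local/mountpoint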

16.8 Additional Resources

The original NFS Ganesha documentation can be found at https://github.com/nfs-ganesha/nfs-ganesha/wiki/Docs.
