Applies to SUSE Enterprise Storage 6

19 NFS Ganesha: Export Ceph Data via NFS

NFS Ganesha is an NFS server (refer to Sharing File Systems with NFS) that runs in a user address space instead of as part of the operating system kernel. With NFS Ganesha, you can plug in your own storage mechanism—such as Ceph—and access it from any NFS client.

S3 buckets are exported to NFS on a per-user basis, for example via the path GANESHA_NODE:/USERNAME/BUCKETNAME.

A CephFS is exported by default via the path GANESHA_NODE:/cephfs.

Note
Note: NFS Ganesha Performance

Because of increased protocol overhead and additional latency caused by extra network hops between the client and the storage, accessing Ceph via an NFS Gateway may significantly reduce application performance when compared to native CephFS or Object Gateway clients.

19.1 Installation

For installation instructions, see Chapter 12, Installation of NFS Ganesha.

19.2 Configuration

For a list of all parameters available within the configuration file, see:

  • man ganesha-config

  • man ganesha-ceph-config for CephFS File System Abstraction Layer (FSAL) options.

  • man ganesha-rgw-config for Object Gateway FSAL options.

This section includes information to help you configure the NFS Ganesha server to export the cluster data accessible via Object Gateway and CephFS.

Important
Important: Restart the NFS Ganesha Service

When you configure NFS Ganesha via the Ceph Dashboard, the nfs-ganesha.service service is restarted automatically for the changes to take effect.

When you configure NFS Ganesha manually, you need to restart the nfs-ganesha.service service on the NFS Ganesha node to re-read the new configuration:

root@minion > systemctl restart nfs-ganesha.service

NFS Ganesha configuration consists of two parts: service configuration and exports configuration. The service configuration is controlled by /etc/ganesha/ganesha.conf. Note that changes to this file are overwritten when DeepSea stage 4 is executed. To persistently change the settings, edit the file /srv/salt/ceph/ganesha/files/ganesha.conf.j2 located on the Salt master. The exports configuration is stored in the Ceph cluster as RADOS objects.

19.2.1 Service Configuration

The service configuration is stored in /etc/ganesha/ganesha.conf and controls all NFS Ganesha daemon settings, including where the exports configuration is stored in the Ceph cluster. Note that changes to this file are overwritten when DeepSea stage 4 is executed. To persistently change the settings, edit the file /srv/salt/ceph/ganesha/files/ganesha.conf.j2 located on the Salt master.

19.2.1.1 RADOS_URLS Section

The RADOS_URLS section configures the Ceph cluster access for reading NFS Ganesha configuration from RADOS objects.

RADOS_URLS {
  Ceph_Conf = /etc/ceph/ceph.conf;

  UserId = "ganesha.MINION_ID";
  watch_url = "rados://RADOS_POOL/ganesha/conf-MINION_ID";
}
Ceph_Conf

Ceph configuration file path location.

UserId

The cephx user ID.

watch_url

The RADOS object URL to watch for reload notifications.

19.2.1.2 RGW Section

RGW {
  ceph_conf = "/etc/ceph/ceph.conf";
  name = "name";
  cluster = "ceph";
}
ceph_conf

Points to the ceph.conf file. When deploying with DeepSea, it is not necessary to change this value.

name

The name of the Ceph client user used by NFS Ganesha.

cluster

The name of the Ceph cluster. SUSE Enterprise Storage 6 currently only supports one cluster name, which is ceph by default.

19.2.1.3 RADOS Object URL

%url rados://RADOS_POOL/ganesha/conf-MINION_ID

NFS Ganesha supports reading the configuration from a RADOS object. The %url directive allows you to specify a RADOS URL that identifies the location of the RADOS object.

A RADOS URL can be of two forms: rados://<POOL>/<OBJECT> or rados://<POOL>/<NAMESPACE>/<OBJECT>, where POOL is the RADOS pool where the object is stored, NAMESPACE the pool namespace where the object is stored, and OBJECT the object name.

To support the Ceph Dashboard's NFS Ganesha management capabilities, you need to follow a convention on the name of the RADOS object for each service daemon. The name of the object must be of the form conf-MINION_ID where MINION_ID corresponds to the Salt minion ID of the node where this service is running.

DeepSea already takes care of correctly generating this URL, and you do not need to make any change.
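Taken together, a generated service configuration typically resembles the following minimal sketch. The pool name cephfs_data, the namespace ganesha, the minion ID osd-node1, and the client name client.rgw.osd-node1 are placeholder values used for illustration only; DeepSea fills in the values that match your deployment.

RADOS_URLS {
  Ceph_Conf = /etc/ceph/ceph.conf;
  UserId = "ganesha.osd-node1";
  watch_url = "rados://cephfs_data/ganesha/conf-osd-node1";
}

RGW {
  ceph_conf = "/etc/ceph/ceph.conf";
  name = "client.rgw.osd-node1";
  cluster = "ceph";
}

%url rados://cephfs_data/ganesha/conf-osd-node1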

19.2.1.4 Changing Default NFS Ganesha Ports

By default, NFS Ganesha uses port 2049 for NFS and port 875 for rquota support. To change the default port numbers, use the NFS_Port and RQUOTA_Port options inside the NFS_CORE_PARAM section, for example:

NFS_CORE_PARAM
{
  NFS_Port = 2060;
  RQUOTA_Port = 876;
}

19.2.2 Exports Configuration

Exports configuration is stored as RADOS objects in the Ceph cluster. Each export block is stored in its own RADOS object named export-ID, where ID must match the Export_ID attribute of the export configuration. The association between exports and NFS Ganesha services is done through the conf-MINION_ID objects: each service object contains a list of RADOS URLs, one for each export served by that service. An export block looks like the following:

EXPORT
{
  Export_Id = 1;
  Path = "/";
  Pseudo = "/";
  Access_Type = RW;
  Squash = No_Root_Squash;
  [...]
  FSAL {
    Name = CEPH;
  }
}

To create the RADOS object for the above export block, first save the export block to a file. Then use the rados CLI tool to store the contents of that file in a RADOS object:

cephadm@adm > rados -p POOL -N NAMESPACE put export-EXPORT_ID EXPORT_FILE

After creating the export object, we can associate the export with a service instance by adding the corresponding RADOS URL of the export object to the service object. The following sections describe how to configure an export block.
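For illustration, the following commands create a second export object from a file /tmp/export-2 and add its URL to the service object of one node. The pool cephfs_data, the namespace ganesha, the export ID 2, and the minion ID osd-node1 are placeholder values only:

cephadm@adm > rados -p cephfs_data -N ganesha put export-2 /tmp/export-2
cephadm@adm > rados -p cephfs_data -N ganesha get conf-osd-node1 /tmp/conf-osd-node1
cephadm@adm > echo '%url "rados://cephfs_data/ganesha/export-2"' >> /tmp/conf-osd-node1
cephadm@adm > rados -p cephfs_data -N ganesha put conf-osd-node1 /tmp/conf-osd-node1

Afterward, restart the nfs-ganesha.service service on the affected node so that it re-reads the new configuration.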

19.2.2.1 Export Main Section

Export_Id

Each export needs to have a unique 'Export_Id' (mandatory).

Path

Export path in the related CephFS pool (mandatory). This allows subdirectories to be exported from the CephFS.

Pseudo

Target NFS export path (mandatory for NFSv4). It defines under which NFS export path the exported data is available.

Example: with the value /cephfs/ and after executing

root # mount GANESHA_IP:/cephfs/ /mnt/

The CephFS data is available in the directory /mnt/cephfs/ on the client.

Access_Type

'RO' for read-only access, 'RW' for read-write access, and 'None' for no access.

Tip
Tip: Limit Access to Clients

If you leave Access_Type = RW in the main EXPORT section and limit access to a specific client in the CLIENT section, other clients will still be able to connect. To disable access for all clients and enable access for specific clients only, set Access_Type = None in the EXPORT section and then specify a less restrictive access mode for one or more clients in the CLIENT section:

EXPORT {
  access_type = "None";
  [...]

  FSAL {
    [...]
  }

  CLIENT {
    clients = 192.168.124.9;
    access_type = "RW";
    [...]
  }
}
Squash

NFS squash option.

FSAL

Exporting 'File System Abstraction Layer'. See Section 19.2.2.2, “FSAL Subsection”.

19.2.2.2 FSAL Subsection

EXPORT
{
  [...]
  FSAL {
    Name = CEPH;
  }
}
Name

Defines which back-end NFS Ganesha uses. Allowed values are CEPH for CephFS or RGW for Object Gateway. Depending on the choice, a role-mds or role-rgw must be defined in the policy.cfg.
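For comparison, an export block that uses the Object Gateway back-end might look like the following sketch. The bucket name demo-bucket, the user demo, and the key values are placeholders for illustration only:

EXPORT
{
  Export_Id = 2;
  Path = "demo-bucket";
  Pseudo = "/demo-bucket";
  Access_Type = RW;
  Squash = No_Root_Squash;
  FSAL {
    Name = RGW;
    User_Id = "demo";
    Access_Key_Id = "ACCESS_KEY";
    Secret_Access_Key = "SECRET_KEY";
  }
}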

19.2.3 Obtaining Exports Configuration

To obtain existing exports configuration, follow these steps:

  1. Find the RADOS pool name and namespace for NFS Ganesha exports. The following command outputs a string of the POOL_NAME/NAMESPACE form.

    cephadm@adm > ceph dashboard get-ganesha-clusters-rados-pool-namespace
    cephfs_data/ganesha
  2. By using the obtained pool name and namespace, list the RADOS objects available on that pool:

    cephadm@adm > rados -p cephfs_data -N ganesha ls
    conf-osd-node1
    export-1
    conf-osd-node2
    Tip
    Tip

    To see how each node is configured, fetch and view the content of its configuration object using the following commands:

    cephadm@adm > rados -p cephfs_data -N ganesha get conf-osd-node1 /tmp/conf-osd-node1
    cephadm@adm > cat /tmp/conf-osd-node1
    %url "rados://cephfs_data/ganesha/export-1"
    cephadm@adm > rados -p cephfs_data -N ganesha get conf-osd-node2 /tmp/conf-osd-node2
    cephadm@adm > cat /tmp/conf-osd-node2
    %url "rados://cephfs_data/ganesha/export-1"

    In this case, both nodes will use rados://cephfs_data/ganesha/export-1. If there are multiple configurations, each node can use a different configuration.

  3. Each export configuration is stored in a single object with the name export-ID. Use the following command to obtain the contents of the object and save it to /tmp/export-1:

    cephadm@adm > rados -p cephfs_data -N ganesha get export-1 /tmp/export-1
    cephadm@adm > cat /tmp/export-1
    EXPORT {
        export_id = 1;
        path = "/";
        pseudo = "/cephfs";
        access_type = "RW";
        squash = "no_root_squash";
        protocols = 3, 4;
        transports = "UDP", "TCP";
        FSAL {
            name = "CEPH";
            user_id = "admin";
            filesystem = "cephfs";
            secret_access_key = "SECRET_KEY";
        }
    
        CLIENT {
            clients = 192.168.3.105;
            access_type = "RW";
            squash = "no_root_squash";
        }
    }
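If you edit the downloaded file, for example to change the access type or the client list, write it back to the same object and restart the NFS Ganesha service on the affected nodes so that the change takes effect; the pool, namespace, and object names below match the example above:

cephadm@adm > rados -p cephfs_data -N ganesha put export-1 /tmp/export-1
root@minion > systemctl restart nfs-ganesha.service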

19.3 Custom NFS Ganesha Roles

Custom NFS Ganesha roles for cluster nodes can be defined. These roles are then assigned to nodes in the policy.cfg. The roles allow for:

  • Separated NFS Ganesha nodes for accessing Object Gateway and CephFS.

  • Assigning different Object Gateway users to NFS Ganesha nodes.

Having different Object Gateway users enables NFS Ganesha nodes to access different S3 buckets, which can be used for access control. Note that S3 buckets are not to be confused with the Ceph buckets used in the CRUSH Map.

19.3.1 Different Object Gateway Users for NFS Ganesha

The following example procedure for the Salt master shows how to create two NFS Ganesha roles with different Object Gateway users. In this example, the roles gold and silver are used, for which DeepSea already provides example configuration files.

  1. Open the file /srv/pillar/ceph/stack/global.yml with the editor of your choice. Create the file if it does not exist.

  2. The file needs to contain the following lines:

    rgw_configurations:
      - rgw
      - silver
      - gold
    ganesha_configurations:
      - silver
      - gold

    These roles can later be assigned in the policy.cfg.

  3. Create a file /srv/salt/ceph/rgw/users/users.d/gold.yml and add the following content:

    - { uid: "gold1", name: "gold1", email: "gold1@demo.nil" }

    Create a file /srv/salt/ceph/rgw/users/users.d/silver.yml and add the following content:

    - { uid: "silver1", name: "silver1", email: "silver1@demo.nil" }
  4. Now, templates for ganesha.conf need to be created for each role. The original DeepSea template is a good starting point. Create two copies:

    root@master # cd /srv/salt/ceph/ganesha/files/
    root@master # cp ganesha.conf.j2 silver.conf.j2
    root@master # cp ganesha.conf.j2 gold.conf.j2
  5. The new roles require keyrings to access the cluster. To provide access, copy the ganesha.j2:

    root@master # cp ganesha.j2 silver.j2
    root@master # cp ganesha.j2 gold.j2
  6. Copy the keyring for the Object Gateway:

    root@master # cd /srv/salt/ceph/rgw/files/
    root@master # cp rgw.j2 silver.j2
    root@master # cp rgw.j2 gold.j2
  7. Object Gateway also needs the configuration for the different roles:

    root@master # cd /srv/salt/ceph/configuration/files/
    root@master # cp ceph.conf.rgw silver.conf
    root@master # cp ceph.conf.rgw gold.conf
  8. Assign the newly created roles to cluster nodes in the /srv/pillar/ceph/proposals/policy.cfg:

    role-silver/cluster/NODE1.sls
    role-gold/cluster/NODE2.sls

    Replace NODE1 and NODE2 with the names of the nodes to which you want to assign the roles.

  9. Execute DeepSea Stages 0 to 4.
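    For example, run the stages from the Salt master with the standard DeepSea orchestration commands:

    root@master # salt-run state.orch ceph.stage.0
    root@master # salt-run state.orch ceph.stage.1
    root@master # salt-run state.orch ceph.stage.2
    root@master # salt-run state.orch ceph.stage.3
    root@master # salt-run state.orch ceph.stage.4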

19.3.2 Separating CephFS and Object Gateway FSAL

The following example procedure for the Salt master shows how to create two new roles that use CephFS and Object Gateway:

  1. Open the file /srv/pillar/ceph/rgw.sls with the editor of your choice. Create the file if it does not exist.

  2. The file needs to contain the following lines:

    rgw_configurations:
      ganesha_cfs:
        users:
          - { uid: "demo", name: "Demo", email: "demo@demo.nil" }
      ganesha_rgw:
        users:
          - { uid: "demo", name: "Demo", email: "demo@demo.nil" }
    
    ganesha_configurations:
      - ganesha_cfs
      - ganesha_rgw

    These roles can later be assigned in the policy.cfg.

  3. Now, templates for ganesha.conf need to be created for each role. The original DeepSea template is a good starting point. Create two copies:

    root@master # cd /srv/salt/ceph/ganesha/files/
    root@master # cp ganesha.conf.j2 ganesha_rgw.conf.j2
    root@master # cp ganesha.conf.j2 ganesha_cfs.conf.j2
  4. Edit the ganesha_rgw.conf.j2 and remove the section:

    {% if salt.saltutil.runner('select.minions', cluster='ceph', roles='mds') != [] %}
            [...]
    {% endif %}
  5. Edit the ganesha_cfs.conf.j2 and remove the section:

    {% if salt.saltutil.runner('select.minions', cluster='ceph', roles=role) != [] %}
            [...]
    {% endif %}
  6. The new roles require keyrings to access the cluster. To provide access, copy the ganesha.j2:

    root@master # cp ganesha.j2 ganesha_rgw.j2
    root@master # cp ganesha.j2 ganesha_cfs.j2

    The line caps mds = "allow *" can be removed from the ganesha_rgw.j2.

  7. Copy the keyring for the Object Gateway:

    root@master # cp /srv/salt/ceph/rgw/files/rgw.j2 \
    /srv/salt/ceph/rgw/files/ganesha_rgw.j2
  8. Object Gateway needs the configuration for the new role:

    root@master # cp /srv/salt/ceph/configuration/files/ceph.conf.rgw \
    /srv/salt/ceph/configuration/files/ceph.conf.ganesha_rgw
  9. Assign the newly created roles to cluster nodes in the /srv/pillar/ceph/proposals/policy.cfg:

    role-ganesha_rgw/cluster/NODE1.sls
    role-ganesha_cfs/cluster/NODE2.sls

    Replace NODE1 and NODE2 with the names of the nodes to which you want to assign the roles.

  10. Execute DeepSea Stages 0 to 4.

19.3.3 Supported Operations

The RGW NFS interface supports most operations on files and directories, with the following restrictions:

  • Links, including symbolic links, are not supported.

  • NFS access control lists (ACLs) are not supported. Unix user and group ownership and permissions are supported.

  • Directories may not be moved or renamed. You may move files between directories.

  • Only full, sequential write I/O is supported. Therefore, write operations are forced to be uploads. Many typical I/O operations, such as editing files in place, necessarily fail because they perform non-sequential stores. Some file utilities that apparently write sequentially (for example, some versions of GNU tar) may still fail because of infrequent non-sequential stores. When mounting via NFS, an application's sequential I/O can generally be forced to perform sequential writes to the NFS server by mounting synchronously (the -o sync option). NFS clients that cannot mount synchronously (for example, Microsoft Windows*) will not be able to upload files.

  • NFS RGW supports read-write operations only for block sizes smaller than 4 MB.

19.4 Starting or Restarting NFS Ganesha

To enable and start the NFS Ganesha service, run:

root@minion > systemctl enable nfs-ganesha
root@minion > systemctl start nfs-ganesha

Restart NFS Ganesha with:

root@minion > systemctl restart nfs-ganesha

When NFS Ganesha is started or restarted, it has a grace timeout of 90 seconds for NFS v4. During the grace period, new requests from clients are actively rejected. Hence, clients may face a slowdown of requests when NFS is in grace state.

19.5 Setting the Log Level

You can change the default debug level NIV_EVENT by editing the file /etc/sysconfig/ganesha. Replace NIV_EVENT with NIV_DEBUG or NIV_FULL_DEBUG. Note that increasing the log verbosity can produce large amounts of data in the log files.

OPTIONS="-L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT"

A restart of the service is required when changing the log level.

Note
Note

NFS Ganesha uses Ceph client libraries to connect to the Ceph cluster. By default, the client libraries do not log errors or any other output. To see more details about how NFS Ganesha interacts with the Ceph cluster (for example, details about connection issues), logging needs to be explicitly enabled in the ceph.conf configuration file under the [client] section. For example:

[client]
	log_file = "/var/log/ceph/ceph-client.log"

19.6 Verifying the Exported NFS Share

When using NFS v3, you can verify whether the NFS shares are exported on the NFS Ganesha server node:

root@minion > showmount -e
/ (everything)

19.7 Mounting the Exported NFS Share

To mount the exported NFS share (as configured in Section 19.2, “Configuration”) on a client host, run:

root # mount -t nfs -o rw,noatime,sync \
 nfs_ganesha_server_hostname:/ /path/to/local/mountpoint
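To mount the share persistently across reboots, you can add a corresponding entry to /etc/fstab on the client; the host name, export path, and mount point below are placeholders matching the command above:

nfs_ganesha_server_hostname:/  /path/to/local/mountpoint  nfs  rw,noatime,sync  0 0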