Applies to SUSE Linux Enterprise Desktop 15 SP1

32 Sharing File Systems with NFS

The Network File System (NFS) is a protocol that allows access to files on a server in a manner similar to accessing local files.

SUSE Linux Enterprise Desktop installs NFS v4.2, which introduces support for sparse files, file pre-allocation, server-side clone and copy, application data block (ADB), and labeled NFS for mandatory access control (MAC) (requires MAC on both client and server).

32.1 Overview

The Network File System (NFS) is a standardized, well-proven and widely supported network protocol that allows files to be shared between separate hosts.

The Network Information Service (NIS) can be used to have a centralized user management in the network. Combining NFS and NIS allows using file and directory permissions for access control in the network. NFS with NIS makes a network transparent to the user.

In the default configuration, NFS completely trusts the network and thus any machine that is connected to a trusted network. Any user with administrator privileges on any computer with physical access to any network the NFS server trusts can access any files that the server makes available.

Often, this level of security is perfectly satisfactory, such as when the network that is trusted is truly private, often localized to a single cabinet or machine room, and no unauthorized access is possible. In other cases the need to trust a whole subnet as a unit is restrictive and there is a need for more fine-grained trust. To meet the need in these cases, NFS supports various security levels using the Kerberos infrastructure. Kerberos requires NFSv4, which is used by default. For details, see Chapter 7, Network Authentication with Kerberos.

The following terms are used in the YaST module.

Exported File System

A directory exported by an NFS server, which clients can integrate into their systems.

NFS Client

The NFS client is a system that uses NFS services from an NFS server over the Network File System protocol. The TCP/IP protocol is already integrated into the Linux kernel; there is no need to install any additional software.

NFS Server

The NFS server provides NFS services to clients. A running server depends on the following daemons: nfsd (worker), idmapd (ID-to-name mapping for NFSv4, needed for certain scenarios only), statd (file locking), and mountd (mount requests).


NFSv3

NFSv3 is the version 3 implementation, the old stateless NFS that supports client authentication.


NFSv4

NFSv4 is the new version 4 implementation that supports secure user authentication via Kerberos. NFSv4 requires only a single port and thus is better suited for environments behind a firewall than NFSv3.

The protocol is specified in RFC 3530 (http://tools.ietf.org/html/rfc3530).


pNFS

Parallel NFS, a protocol extension of NFSv4. pNFS clients can access the data on an NFS server directly.

32.2 Installing NFS Server

For installing and configuring an NFS server, see the SUSE Linux Enterprise Server documentation.

32.3 Configuring Clients

To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default.

32.3.1 Importing File Systems with YaST

Authorized users can mount NFS directories from an NFS server into the local file tree using the YaST NFS client module. Proceed as follows:

Procedure 32.1: Importing NFS Directories
  1. Start the YaST NFS client module.

  2. Click Add in the NFS Shares tab. Enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.

  3. When using NFSv4, select Enable NFSv4 in the NFS Settings tab. Additionally, the NFSv4 Domain Name must contain the same value as used by the NFSv4 server. The default domain is localdomain.

  4. To use Kerberos authentication for NFS, GSS security must be enabled. Select Enable GSS Security.

  5. Enable Open Port in Firewall in the NFS Settings tab if you use a firewall and want to allow access to the service from remote computers. The firewall status is displayed next to the check box.

  6. Click OK to save your changes.

The configuration is written to /etc/fstab and the specified file systems are mounted. When you start the YaST configuration client at a later time, it also reads the existing configuration from this file.
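The NFSv4 Domain Name entered in the module typically corresponds to the Domain setting in /etc/idmapd.conf on the client. As a rough illustration, the configured domain can be read as in the following sketch; a sample file is generated here so the snippet runs anywhere, whereas on a real client you would read /etc/idmapd.conf itself:

```shell
#!/bin/sh
# Sketch: extract the NFSv4 domain from an idmapd.conf-style file.
# A temporary sample file stands in for /etc/idmapd.conf here.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[General]
Verbosity = 0
Domain = localdomain
EOF
# Strip the key and the whitespace around '=' and keep only the value.
domain=$(sed -n 's/^Domain[[:space:]]*=[[:space:]]*//p' "$conf")
echo "NFSv4 domain: $domain"
rm -f "$conf"
```

If the value printed here differs from the domain configured on the NFSv4 server, ID mapping will not work and files may appear owned by nobody.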

Tip: NFS as a Root File System

On (diskless) systems, where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.

When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems: the root partition cannot be cleanly unmounted because the network connection to the NFS share has already been deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in “Activating the Network Device” and choose On NFSroot in the Device Activation pane.

32.3.2 Importing File Systems Manually

The prerequisite for importing file systems manually from an NFS server is a running RPC port mapper. The nfs service takes care of starting it; start the service by entering systemctl start nfs as root. Afterward, remote file systems can be mounted in the file system like local partitions using mount:


To import user directories from the nfs.example.com machine, for example, use:

tux > sudo mount nfs.example.com:/home /home

32.3.2.1 Using the Automount Service

The autofs daemon can be used to mount remote file systems automatically. Add the following entry to the /etc/auto.master file:

/nfsmounts /etc/auto.nfs

Now the /nfsmounts directory acts as the root for all the NFS mounts on the client if the auto.nfs file is filled appropriately. The name auto.nfs is chosen for the sake of convenience—you can choose any name. In auto.nfs add entries for all the NFS mounts as follows:

localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/

Activate the settings with systemctl start autofs as root. In this example, /nfsmounts/localdata, the /data directory of server1, is mounted with NFS and /nfsmounts/nfs4mount from server2 is mounted with NFSv4.
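The mapping between map keys and mount points described above can be illustrated with a short sketch. A temporary sample map is created here so the snippet runs anywhere; on a real client the map would be /etc/auto.nfs (or whatever name /etc/auto.master references):

```shell
#!/bin/sh
# Sketch: show where each entry of an auto.nfs-style map would appear
# under the /nfsmounts base directory configured in /etc/auto.master.
map=$(mktemp)
cat > "$map" <<'EOF'
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
EOF
# Field 1 is the key (subdirectory under /nfsmounts), field 2 the
# mount options, and field 3 the remote source.
result=$(awk '{ printf "/nfsmounts/%s -> %s\n", $1, $3 }' "$map")
echo "$result"
rm -f "$map"
```

For the example map above, this prints /nfsmounts/localdata -> server1:/data and /nfsmounts/nfs4mount -> server2:/, matching the mounts autofs would perform on first access.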

If the /etc/auto.master file is edited while the service autofs is running, the automounter must be restarted with systemctl restart autofs for the changes to take effect.

32.3.2.2 Manually Editing /etc/fstab

A typical NFSv3 mount entry in /etc/fstab looks like this:

nfs.example.com:/data /local/path nfs rw,noauto 0 0

For NFSv4 mounts, use nfs4 instead of nfs in the third column:

nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0

The noauto option prevents the file system from being mounted automatically at start-up. If you want to mount the respective file system manually, it is possible to shorten the mount command by specifying the mount point only:

tux > sudo mount /local/path
Note: Mounting at Start-Up

If you do not enter the noauto option, the init scripts of the system will handle the mount of those file systems at start-up.
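The fstab fields used above (type in the third column, options in the fourth) can be parsed mechanically. The following sketch lists the NFS entries carrying the noauto option and prints the shortened mount command for each; a sample file stands in for /etc/fstab so the snippet runs anywhere:

```shell
#!/bin/sh
# Sketch: find NFS entries with the noauto option in an fstab-style
# file and print the shortened mount command for each of them.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
nfs.example.com:/data /local/path nfs rw,noauto 0 0
nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0
/dev/sda2 / ext4 defaults 0 1
EOF
# Field 3 is the file system type (nfs or nfs4), field 4 the
# comma-separated option list.
cmds=$(awk '$3 ~ /^nfs4?$/ && $4 ~ /(^|,)noauto(,|$)/ \
            { print "mount " $2 }' "$fstab")
echo "$cmds"
rm -f "$fstab"
```

For the sample file this prints mount /local/path and mount /local/pathv4; the local ext4 entry is skipped.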

32.3.3 Parallel NFS (pNFS)

NFS is one of the oldest protocols, developed in the 1980s. As such, NFS is usually sufficient for sharing small files. However, when you want to transfer big files or many clients want to access data, an NFS server becomes a bottleneck and significantly impacts system performance. This is because files are quickly getting bigger, while the relative speed of Ethernet has not fully kept up.

When you request a file from a regular NFS server, the server looks up the file metadata, collects all the data and transfers it over the network to your client. However, the performance bottleneck becomes apparent no matter how small or big the files are:

  • With small files most of the time is spent collecting the metadata.

  • With big files most of the time is spent on transferring the data from server to client.

pNFS, or parallel NFS, overcomes this limitation as it separates the file system metadata from the location of the data. As such, pNFS requires two types of servers:

  • A metadata or control server that handles all the non-data traffic

  • One or more storage servers that hold the data

The metadata and the storage servers form a single, logical NFS server. When a client wants to read or write, the metadata server tells the NFSv4 client which storage server to use to access the file chunks. The client can access the data directly on the server.

SUSE Linux Enterprise Desktop supports pNFS on the client side only.

32.3.3.1 Configuring pNFS Client with YaST

Proceed as described in Procedure 32.1, “Importing NFS Directories”, but click the pNFS (v4.2) check box and optionally NFSv4 share. YaST will do all the necessary steps and write all the required options to the file /etc/fstab.

32.3.3.2 Configuring pNFS Client Manually

Refer to Section 32.3.2, “Importing File Systems Manually” to start. Most of the configuration is done by the NFSv4 server. For pNFS, the only difference is to add the minorversion option and the metadata server MDS_SERVER to your mount command:

tux > sudo mount -t nfs4 -o minorversion=1 MDS_SERVER MOUNTPOINT
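As a sketch of the command above, the following helper assembles (but does not run) the pNFS mount invocation; the server and mount point used here are hypothetical placeholders, not hosts from this manual:

```shell
#!/bin/sh
# Sketch: build the pNFS mount command from a metadata server export
# and a local mount point, without executing it.
pnfs_mount_cmd() {
    # $1 = metadata server export (MDS_SERVER), $2 = mount point
    echo "mount -t nfs4 -o minorversion=1 $1 $2"
}
# Hypothetical example values:
cmd=$(pnfs_mount_cmd "mds.example.com:/" "/mnt/pnfs")
echo "$cmd"
```

Running the printed command as root would contact the metadata server, which then directs the client to the storage servers holding the file data.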

To help with debugging, change the values in the /proc file system. Note that sudo echo 32767 > FILE would fail, because the redirection is performed by the unprivileged shell; use tee instead:

tux > echo 32767 | sudo tee /proc/sys/sunrpc/nfsd_debug
tux > echo 32767 | sudo tee /proc/sys/sunrpc/nfs_debug

32.4 For More Information

In addition to the man pages of exports, nfs, and mount, information about configuring an NFS server and client is available in /usr/share/doc/packages/nfsidmap/README.
