19 Sharing file systems with NFS #
The Network File System (NFS) is a protocol that allows access to files on a server in a manner similar to accessing local files.
SUSE Linux Enterprise Server installs NFS v4.2, which introduces support for sparse files, file pre-allocation, server-side clone and copy, application data block (ADB), and labeled NFS for mandatory access control (MAC) (requires MAC on both client and server).
19.1 Overview #
The Network File System (NFS) is a standardized, well-proven and widely supported network protocol that allows sharing files between separate hosts.
The Network Information Service (NIS) can be used to have centralized user management in the network. Combining NFS and NIS allows using file and directory permissions for access control in the network. NFS with NIS makes a network transparent to the user.
In the default configuration, NFS completely trusts the network and thus any machine that is connected to a trusted network. Any user with administrator privileges on any computer with physical access to any network the NFS server trusts can access any files that the server makes available.
Often, this level of security is perfectly satisfactory, such as when the network that is trusted is truly private, often localized to a single cabinet or machine room, and no unauthorized access is possible. In other cases, the need to trust a whole subnet as a unit is restrictive, and there is a need for more fine-grained trust. To meet the need in these cases, NFS supports various security levels using the Kerberos infrastructure. Kerberos requires NFSv4, which is used by default. For details, see Chapter 6, Network authentication with Kerberos.
The following are terms used in the YaST module.
- Exports
A directory exported by an NFS server, which clients can integrate into their systems.
- NFS client
The NFS client is a system that uses NFS services from an NFS server over the Network File System protocol. The TCP/IP protocol is already integrated into the Linux kernel; there is no need to install any additional software.
- NFS server
The NFS server provides NFS services to clients. A running server depends on the following daemons: nfsd (worker), idmapd (ID-to-name mapping for NFSv4, needed for certain scenarios only), statd (file locking), and mountd (mount requests).
- NFSv3
NFSv3 is the version 3 implementation, the “old” stateless NFS that supports client authentication.
- NFSv4
NFSv4 is the new version 4 implementation that supports secure user authentication via Kerberos. NFSv4 requires one single port only and thus is better suited for environments behind a firewall than NFSv3.
The protocol is specified in https://datatracker.ietf.org/doc/rfc7531/.
- pNFS
Parallel NFS, a protocol extension of NFSv4. pNFS clients can access the data on an NFS server directly.
In principle, all exports can be made using IP addresses only. To avoid timeouts, you need a working DNS system. DNS is necessary at least for logging purposes, because the mountd daemon does reverse lookups.
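Before relying on host names in /etc/exports, it can be worth verifying that forward and reverse DNS lookups agree for a client. A minimal check, assuming the host utility from the bind-utils package is installed; the client address 192.168.1.2 matches the examples later in this chapter, and client.example.com is a placeholder name:

> host 192.168.1.2          # reverse lookup: should return the client's host name
> host client.example.com   # forward lookup: should return the address again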
19.2 Installing NFS server #
The NFS server is not part of the default installation. To install the NFS server using YaST, choose Software › Software Management, select the File Server pattern, and click Accept to install the required packages.

The File Server pattern does not include the YaST module for the NFS server. After the pattern installation is complete, install the module by running:
> sudo zypper in yast2-nfs-server
Like NIS, NFS is a client/server system. However, a machine can be both—it can supply file systems over the network (export) and mount file systems from other hosts (import).
Mounting NFS volumes locally on the exporting server is not supported on SUSE Linux Enterprise Server.
19.3 Configuring NFS server #
Configuring an NFS server can be done either through YaST or manually. For authentication, NFS can also be combined with Kerberos.
19.3.1 Exporting file systems with YaST #
With YaST, turn a host in your network into an NFS server—a server that exports directories and files to all hosts granted access to it or to all members of a group. Thus, the server can also provide applications without installing the applications locally on every host.
To set up such a server, proceed as follows:
Start YaST and select Network Services › NFS Server; see Figure 19.1, “NFS server configuration tool”. You may be prompted to install additional software.

Figure 19.1: NFS server configuration tool #

Click the Start radio button.

If firewalld is active on your system, configure it separately for NFS (see Section 19.5, “Operating an NFS server and clients behind a firewall”). YaST does not yet have complete support for firewalld, so ignore the “Firewall not configurable” message and continue.

Check whether you want to Enable NFSv4. If you deactivate NFSv4, YaST will only support NFSv3. For information about enabling NFSv2, see Note: NFSv2.
. If you deactivate NFSv4, YaST will only support NFSv3. For information about enabling NFSv2, seeIf NFSv4 is selected, additionally enter the appropriate NFSv4 domain name. This parameter is used by the
idmapd
daemon that is required for Kerberos setups or if clients cannot work with numeric user names. Leave it aslocaldomain
(the default) if you do not runidmapd
or do not have any special requirements. For more information on theidmapd
daemon, see/etc/idmapd.conf
.Important: NFSv4 Domain NameNote that the domain name needs to be configured on all NFSv4 clients as well. Only clients that share the same domain name as the server can access the server. The default domain name for server and clients is
localdomain
.
Click Enable GSS Security if you need secure access to the server. A prerequisite for this is to have Kerberos installed on your domain and to have both the server and the clients kerberized. Click Next to proceed with the next configuration dialog.

Click Add Directory in the upper half of the dialog to export your directory.

If you have not configured the allowed hosts already, another dialog for entering the client information and options pops up automatically. Enter the host wild card (usually you can leave the default settings as they are).

There are four possible types of host wild cards that can be set for each host: a single host (name or IP address), netgroups, wild cards (such as *, indicating that all machines can access the server), and IP networks.

For more information about these options, see the exports man page.

Click Finish to complete the configuration.
19.3.2 Exporting file systems manually #
The configuration files for the NFS export service are /etc/exports and /etc/sysconfig/nfs. In addition to these files, /etc/idmapd.conf is needed for the NFSv4 server configuration with kerberized NFS or if the clients cannot work with numeric user names.
To start or restart the services, run the command systemctl restart nfs-server. This also restarts the RPC port mapper that is required by the NFS server.

To make sure the NFS server always starts at boot time, run sudo systemctl enable nfs-server.
NFSv4 is the latest version of the NFS protocol available on SUSE Linux Enterprise Server. Configuring directories for export with NFSv4 is now the same as with NFSv3.

On SUSE Linux Enterprise Server 11, the bind mount in /etc/exports was mandatory. It is still supported, but now deprecated.
/etc/exports

The /etc/exports file contains a list of entries. Each entry indicates a directory that is shared and how it is shared. A typical entry in /etc/exports consists of:

/SHARED/DIRECTORY HOST(OPTION_LIST)

For example:

/nfs_exports/public *(rw,sync,root_squash,wdelay)
/nfs_exports/department1 *.department1.example.com(rw,sync,root_squash,wdelay)
/nfs_exports/team1 192.168.1.0/24(rw,sync,root_squash,wdelay)
/nfs_exports/tux 192.168.1.2(rw,sync,root_squash)
In this example, the following values for HOST are used:

*: exports to all clients on the network

*.department1.example.com: exports only to clients in the department1.example.com domain

192.168.1.0/24: exports only to clients with IP addresses in the range 192.168.1.0/24

192.168.1.2: exports only to the machine with the IP address 192.168.1.2
In addition to the examples above, you can also restrict exports to netgroups (@my-hosts) defined in /etc/netgroup. For a detailed explanation of all options and their meanings, refer to the exports man page (man exports).

In case you have modified /etc/exports while the NFS server was running, you need to restart it for the changes to become active: sudo systemctl restart nfs-server.
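If you only changed the export list, you can usually avoid a full server restart by re-exporting with exportfs instead; the same tool also shows the currently active exports:

> sudo exportfs -ra   # re-read /etc/exports and re-export all directories
> sudo exportfs -v    # list the active exports together with their options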
/etc/sysconfig/nfs
The /etc/sysconfig/nfs file contains a few parameters that determine NFSv4 server daemon behavior. It is important to set the parameter NFS4_SUPPORT to yes (the default). NFS4_SUPPORT determines whether the NFS server supports NFSv4 exports and clients.

In case you have modified /etc/sysconfig/nfs while the NFS server was running, you need to restart it for the changes to become active: sudo systemctl restart nfs-server.
Tip: Mount options

On SUSE Linux Enterprise Server 11, the --bind mount in /etc/exports was mandatory. It is still supported, but now deprecated. Configuring directories for export with NFSv4 is now the same as with NFSv3.

Note: NFSv2

If NFS clients still depend on NFSv2, enable it on the server in /etc/sysconfig/nfs by setting:

NFSD_OPTIONS="-V2"
MOUNTD_OPTIONS="-V2"

After restarting the service, check whether version 2 is available with the command:

> cat /proc/fs/nfsd/versions
+2 +3 +4 +4.1 +4.2

/etc/idmapd.conf
The idmapd daemon is only required if Kerberos authentication is used or if clients cannot work with numeric user names. Linux clients can work with numeric user names since Linux kernel 2.6.39. The idmapd daemon does the name-to-ID mapping for NFSv4 requests to the server and replies to the client.

If required, idmapd needs to run on the NFSv4 server. Name-to-ID mapping on the client will be done by nfsidmap, provided by the package nfs-client.

Make sure that there is a uniform way in which user names and IDs (UIDs) are assigned to users across machines that might be sharing file systems using NFS. This can be achieved by using NIS, LDAP, or any uniform domain authentication mechanism in your domain.

The parameter Domain must be set in /etc/idmapd.conf. It must be the same for the server and all NFSv4 clients that access this server. Clients in a different NFSv4 domain cannot access the server. Sticking with the default domain localdomain is recommended. If you need to choose a different name, you may want to go with the FQDN of the host, minus the host name. A sample configuration file looks like the following:

[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
To start the idmapd daemon, run systemctl start nfs-idmapd. In case you have modified /etc/idmapd.conf while the daemon was running, you need to restart it for the changes to become active: systemctl restart nfs-idmapd.

For more information, see the man pages of idmapd and idmapd.conf (man idmapd and man idmapd.conf).
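To double-check the NFSv4 domain that is in effect on a client, you can query it with nfsidmap. This is a minimal sketch, assuming a current nfs-client package whose nfsidmap supports the -d switch:

> nfsidmap -d   # print the system's effective NFSv4 domain name
localdomain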
19.3.3 NFS with Kerberos #
To use Kerberos authentication for NFS, Generic Security Services (GSS) must be enabled. Select Enable GSS Security in the initial YaST NFS Server dialog. You must have a working Kerberos server to use this feature. YaST does not set up the server but only uses the provided functionality. To use Kerberos authentication in addition to the YaST configuration, complete at least the following steps before running the NFS configuration:

Make sure that both the server and the client are in the same Kerberos domain. They must access the same KDC (Key Distribution Center) server and share their krb5.keytab file (the default location on any machine is /etc/krb5.keytab). For more information about Kerberos, see Chapter 6, Network authentication with Kerberos.

Start the gssd service on the client with systemctl start rpc-gssd.service.

Start the svcgssd service on the server with systemctl start rpc-svcgssd.service.
Kerberos authentication also requires the idmapd daemon to run on the server. For more information, refer to /etc/idmapd.conf.
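With both services running, a kerberized export can be mounted by requesting a Kerberos security flavor via the sec mount option. This is a sketch, assuming the hypothetical server nfs.example.com and that its NFS service principal is present in the keytab:

> sudo klist -k /etc/krb5.keytab   # verify the keytab contains the nfs/ service principal
> sudo mount -t nfs4 -o sec=krb5 nfs.example.com:/data /mnt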
For more information about configuring kerberized NFS, refer to the links in Section 19.7, “More information”.
19.4 Configuring clients #
To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default.
19.4.1 Importing file systems with YaST #
Authorized users can mount NFS directories from an NFS server into the local file tree using the YaST NFS client module. Proceed as follows:
Start the YaST NFS client module.
Click Add in the NFS Shares tab. Enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.

When using NFSv4, select Enable NFSv4 in the NFS Settings tab. Additionally, the NFSv4 Domain Name must contain the same value as used by the NFSv4 server. The default domain is localdomain.

To use Kerberos authentication for NFS, GSS security must be enabled. Select Enable GSS Security.

If firewalld is active on your system, configure it separately for NFS (see Section 19.5, “Operating an NFS server and clients behind a firewall”). YaST does not yet have complete support for firewalld, so ignore the “Firewall not configurable” message and continue.

Click OK to save your changes.
The configuration is written to /etc/fstab and the specified file systems are mounted. When you start the YaST configuration client at a later time, it also reads the existing configuration from this file.
On (diskless) systems where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.
When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems, as the root partition cannot be cleanly unmounted while the network connection to the NFS share is already deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 23.4.1.2.5, “Activating the network device” and choose On NFSroot in the Device Activation pane.
19.4.2 Importing file systems manually #
The prerequisite for importing file systems manually from an NFS server is a running RPC port mapper. The nfs service takes care of starting it properly; thus, start it by entering systemctl start nfs as root. Then remote file systems can be mounted in the file system just like local partitions, using the mount command:

> sudo mount HOST:REMOTE-PATH LOCAL-PATH
To import user directories from the nfs.example.com machine, for example, use:

> sudo mount nfs.example.com:/home /home
To define the number of TCP connections that the client makes to the NFS server, you can use the nconnect option of the mount command. You can specify any number between 1 and 16, where 1 is the default value if the mount option has not been specified.
The nconnect setting is applied only during the first mount process to the particular NFS server. If the same client executes the mount command to the same NFS server, all already established connections will be shared—no new connection will be established. To change the nconnect setting, you have to unmount all client connections to the particular NFS server. Then you can define a new value for the nconnect option.
You can find the value of nconnect currently in effect in the output of the mount command, or in the file /proc/mounts. If there is no value for the mount option, then the option has not been used during mounting and the default value of 1 is in use.
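As an illustration of this workflow, assuming the hypothetical server nfs.example.com and a client kernel that supports nconnect:

> sudo mount -o nconnect=4 nfs.example.com:/data /mnt   # first mount: opens four TCP connections
> grep nconnect /proc/mounts                            # verify which value is in effect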
Note: nconnect

As you can close and open connections after the first mount, the actual count of connections does not necessarily have to be the same as the value of nconnect.
19.4.2.1 Using the automount service #
The autofs daemon can be used to mount remote file systems automatically. Add the following entry to the /etc/auto.master file:
/nfsmounts /etc/auto.nfs
Now the /nfsmounts directory acts as the root for all the NFS mounts on the client if the auto.nfs file is filled appropriately. The name auto.nfs is chosen for the sake of convenience—you can choose any name. In auto.nfs, add entries for all the NFS mounts as follows:
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
Activate the settings with systemctl start autofs as root. In this example, /nfsmounts/localdata, the /data directory of server1, is mounted with NFS and /nfsmounts/nfs4mount from server2 is mounted with NFSv4.
If the /etc/auto.master file is edited while the service autofs is running, the automounter must be restarted for the changes to take effect with systemctl restart autofs.
19.4.2.2 Manually editing /etc/fstab #

A typical NFSv3 mount entry in /etc/fstab looks like this:
nfs.example.com:/data /local/path nfs rw,noauto 0 0
For NFSv4 mounts, use nfs4 instead of nfs in the third column:
nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0
The noauto option prevents the file system from being mounted automatically at start-up. If you want to mount the respective file system manually, it is possible to shorten the mount command by specifying the mount point only:

> sudo mount /local/path
If you do not enter the noauto option, the init scripts of the system will handle the mount of those file systems at start-up. In that case, you may consider adding the option _netdev, which prevents scripts from trying to mount the share before the network is available.
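For example, an entry that is mounted automatically at boot, but only after the network is up, could look like this (reusing the hypothetical server from above):

nfs.example.com:/data /local/path nfs rw,_netdev 0 0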
19.4.3 Parallel NFS (pNFS) #
NFS is one of the oldest protocols, developed in the 1980s. As such, NFS is usually sufficient if you want to share small files. However, when you want to transfer big files or many clients want to access data, an NFS server becomes a bottleneck and has a significant impact on the system performance. This is because files are quickly getting bigger, whereas the relative speed of Ethernet has not fully kept pace.
When you request a file from a regular NFS server, the server looks up the file metadata, collects all the data, and transfers it over the network to your client. However, the performance bottleneck becomes apparent no matter how small or big the files are:
With small files, most of the time is spent collecting the metadata.
With big files, most of the time is spent on transferring the data from server to client.
pNFS, or parallel NFS, overcomes this limitation as it separates the file system metadata from the location of the data. As such, pNFS requires two types of servers:
A metadata or control server that handles all the non-data traffic
One or more storage server(s) that hold(s) the data
The metadata and the storage servers form a single, logical NFS server. When a client wants to read or write, the metadata server tells the NFSv4 client which storage server to use to access the file chunks. The client can access the data directly on the storage servers.
SUSE Linux Enterprise Server supports pNFS on the client side only.
19.4.3.1 Configuring pNFS client with YaST #
Proceed as described in Procedure 19.2, “Importing NFS directories”, but click the NFSv4 Share check box and, optionally, pNFS (v4.2). YaST will do all the necessary steps and will write all the required options to the file /etc/fstab.
19.4.3.2 Configuring pNFS client manually #
Refer to Section 19.4.2, “Importing file systems manually” to start. Most of the configuration is done by the NFSv4 server. For pNFS, the only difference is to add the nfsvers option and the metadata server MDS_SERVER to your mount command:

> sudo mount -t nfs4 -o nfsvers=4.2 MDS_SERVER MOUNTPOINT
To help with debugging, change the value in the /proc file system:

> sudo echo 32767 > /proc/sys/sunrpc/nfsd_debug
> sudo echo 32767 > /proc/sys/sunrpc/nfs_debug
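To silence the debug output again after troubleshooting, write 0 back to the same files:

> sudo echo 0 > /proc/sys/sunrpc/nfsd_debug
> sudo echo 0 > /proc/sys/sunrpc/nfs_debug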
19.5 Operating an NFS server and clients behind a firewall #
Communication between an NFS server and its clients happens via Remote Procedure Calls (RPC). Several RPC services, such as the mount daemon or the file locking service, are part of the Linux NFS implementation. If the server and the clients run behind a firewall, these services and the firewall(s) need to be configured to not block the client-server communication.
An NFS 4 server is backwards-compatible with NFS version 3, and firewall configurations vary for both versions. If any of your clients use NFS 3 to mount shares, configure your firewall to allow both NFS 4 and NFS 3.
19.5.1 NFS 4.x #
NFS 4 requires TCP port 2049 to be open on the server side only. To open this port on the firewall, enable the nfs service in firewalld on the NFS server:

> sudo firewall-cmd --permanent --add-service=nfs --zone=ACTIVE_ZONE
> sudo firewall-cmd --reload
Replace ACTIVE_ZONE with the firewall zone used on the NFS server.
No additional firewall configuration on the client side is needed when using NFSv4. By default, mount uses the highest supported NFS version, so if your client supports NFSv4, shares will automatically be mounted as version 4.2.
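To confirm that the rule is part of the permanent configuration, list the services enabled in the zone:

> sudo firewall-cmd --permanent --zone=ACTIVE_ZONE --list-services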
19.5.2 NFS 3 #
NFS 3 requires the following services:
portmapper
nfsd
mountd
lockd
statd
These services are operated by rpcbind, which, by default, dynamically assigns ports. To allow access to these services behind a firewall, they need to be configured to run on a static port first. These ports need to be opened in the firewall(s) afterwards.
portmapper

On SUSE Linux Enterprise Server, portmapper is already configured to run on a static port.

Port: 111
Protocol(s): TCP, UDP
Runs on: Client, Server

> sudo firewall-cmd --add-service=rpc-bind --permanent --zone=ACTIVE_ZONE

nfsd

On SUSE Linux Enterprise Server, nfsd is already configured to run on a static port.

Port: 2049
Protocol(s): TCP, UDP
Runs on: Server

> sudo firewall-cmd --add-service=nfs3 --permanent --zone=ACTIVE_ZONE

mountd

On SUSE Linux Enterprise Server, mountd is already configured to run on a static port.

Port: 20048
Protocol(s): TCP, UDP
Runs on: Server

> sudo firewall-cmd --add-service=mountd --permanent --zone=ACTIVE_ZONE

lockd

To set a static port for lockd, edit /etc/sysconfig/nfs on the server and find and set

LOCKD_TCPPORT=NNNNN
LOCKD_UDPPORT=NNNNN

Replace NNNNN with an unused port of your choice. Use the same port for both protocols.

Restart the NFS server:

> sudo systemctl restart nfs-server

Port: NNNNN
Protocol(s): TCP, UDP
Runs on: Client, Server

> sudo firewall-cmd --add-port=NNNNN/{tcp,udp} --permanent --zone=ACTIVE_ZONE

statd

To set a static port for statd, edit /etc/sysconfig/nfs on the server and find and set

STATD_PORT=NNNNN

Replace NNNNN with an unused port of your choice.

Restart the NFS server:

> sudo systemctl restart nfs-server

Port: NNNNN
Protocol(s): TCP, UDP
Runs on: Client, Server

> sudo firewall-cmd --add-port=NNNNN/{tcp,udp} --permanent --zone=ACTIVE_ZONE
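Once the services have been restarted on their static ports, you can verify the port assignments as seen by clients with rpcinfo:

> sudo rpcinfo -p   # lists every registered RPC service with its protocol and port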
Note: firewalld configuration

Whenever you change the firewalld configuration, you need to reload the daemon to activate the changes:

> sudo firewall-cmd --reload
Make sure to replace ACTIVE_ZONE with the firewall zone used on the respective machine. Note that, depending on the firewall configuration, the active zone can differ from machine to machine.
19.6 Managing Access Control Lists over NFSv4 #
There is no single standard for Access Control Lists (ACLs) in Linux beyond the simple read, write, and execute (rwx) flags for user, group, and others (ugo). One option for finer control is the Draft POSIX ACLs, which were never formally standardized by POSIX. Another is the NFSv4 ACLs, which were designed to be part of the NFSv4 network file system with the goal of providing reasonable compatibility between POSIX systems on Linux and WIN32 systems on Microsoft Windows.
NFSv4 ACLs are not sufficient to correctly implement Draft POSIX ACLs, so no attempt has been made to map ACL accesses on an NFSv4 client (such as using setfacl).
When using NFSv4, Draft POSIX ACLs cannot be used even in emulation, and NFSv4 ACLs need to be used directly; that means while setfacl can work on NFSv3, it cannot work on NFSv4. To allow NFSv4 ACLs to be used on an NFSv4 file system, SUSE Linux Enterprise Server provides the nfs4-acl-tools package, which contains the following:

nfs4-getfacl

nfs4-setfacl

nfs4-editacl
These operate in a generally similar way to getfacl and setfacl for examining and modifying NFSv4 ACLs. These commands are effective only if the file system on the NFS server provides full support for NFSv4 ACLs. Any limitation imposed by the server will affect programs running on the client in that some particular combinations of Access Control Entries (ACEs) might not be possible.
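As an illustration of the tool syntax, here is a sketch, assuming the binaries are installed as nfs4_getfacl and nfs4_setfacl, an NFSv4 share is mounted under /mnt, and a hypothetical user tux exists in the localdomain NFSv4 domain:

> nfs4_getfacl /mnt/file                               # print the NFSv4 ACL of the file
> nfs4_setfacl -a "A::tux@localdomain:rwx" /mnt/file   # add an Allow ACE granting read, write, and execute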
It is not supported to mount NFS volumes locally on the exporting NFS server.
Additional Information #

For more information, see Introduction to NFSv4 ACLs at https://wiki.linux-nfs.org/wiki/index.php/ACLs#Introduction_to_NFSv4_ACLs.
19.7 More information #
In addition to the man pages of exports, nfs, and mount, information about configuring an NFS server and client is available in /usr/share/doc/packages/nfsidmap/README. For further documentation online, refer to the following Web sites:
For general information about network security, refer to Chapter 23, Masquerading and firewalls.
Refer to Section 21.4, “Auto-mounting an NFS share” if you need to automatically mount NFS exports.
For more details about configuring NFS by using AutoYaST, refer to Section 4.21, “NFS client and server”.
For instructions about securing NFS exports with Kerberos, refer to Section 6.6, “Kerberos and NFS”.
Find the detailed technical documentation online at SourceForge.
19.8 Gathering information for NFS troubleshooting #
19.8.1 Common troubleshooting #
In some cases, you can understand the problem in your NFS setup by reading the error messages produced and looking into the /var/log/messages file. However, in many cases, the information provided by the error messages and in /var/log/messages is not detailed enough. In these cases, most NFS problems can be best understood through capturing network packets while reproducing the problem.
Clearly define the problem. Examine the problem by testing the system in a variety of ways and determining when the problem occurs. Isolate the simplest steps that lead to the problem. Then try to reproduce the problem as described in the procedure below.
Capture network packets. On Linux, you can use the tcpdump command, which is supplied by the tcpdump package.

An example of tcpdump syntax follows:

tcpdump -s0 -i eth0 -w /tmp/nfs-demo.cap host x.x.x.x
Where:
- s0

Prevents packet truncation.

- eth0

Should be replaced with the name of the local interface through which the packets will pass. You can use the any value to capture all interfaces at the same time, but usage of this attribute often results in inferior data as well as confusion in analysis.

- w

Designates the name of the capture file to write.

- x.x.x.x

Should be replaced with the IP address of the other end of the NFS connection. For example, when taking a tcpdump at the NFS client side, specify the IP address of the NFS server, and vice versa.
Note: In some cases, capturing the data at either the NFS client or NFS server is sufficient. However, in cases where end-to-end network integrity is in doubt, it is often necessary to capture data at both ends.
Do not shut down the tcpdump process and proceed to the next step.

(Optional) If the problem occurs during execution of the nfs mount command itself, you can try to use the high-verbosity option (-vvv) of the nfs mount command to get more output.

(Optional) Get an strace of the reproduction method. An strace of reproduction steps records exactly what system calls were made at exactly what time. This information can be used to further determine on which events in the tcpdump you should focus.

For example, if you found out that executing the command mycommand --param was failing on an NFS mount, then you could strace the command with:
In case you do not get any
strace
of the reproduction step, note the time when the problem was reproduced. Check the/var/log/messages
log file to isolate the problem.Once the problem has been reproduced, stop
tcpdump
running in your terminal by pressing CTRL–c. If thestrace
command resulted in a hang, also terminate thestrace
command.An administrator with experience in analyzing packet traces and
strace
data can now inspect data in/tmp/nfs-demo.cap
and/tmp/nfs-strace.out
.
19.8.2 Advanced NFS debugging #
Please bear in mind that the following section is intended only for skilled NFS administrators who understand the NFS code. Therefore, perform the first steps described in Section 19.8.1, “Common troubleshooting” to help narrow down the problem and to inform an expert about which areas of debug code (if any) might be needed to learn deeper details.
There are various areas of debug code that can be enabled to gather additional NFS-related information. However, the debug messages are quite cryptic and the volume of them can be so large that the use of debug code can affect system performance. It may even impact the system enough to prevent the problem from occurring. In the majority of cases, the debug code output is not needed, nor is it typically useful to anyone who is not highly familiar with the NFS code.
19.8.2.1 Activating debugging with rpcdebug #

The rpcdebug tool allows you to set and clear NFS client and server debug flags. In case the rpcdebug tool is not available in your SUSE Linux Enterprise Server installation, you can install it from the package nfs-client or, for the NFS server, nfs-kernel-server.
To set debug flags, run:
rpcdebug -m module -s flags
To clear the debug flags, run:
rpcdebug -m module -c flags
where module can be:
- nfsd
Debug for the NFS server code
- nfs
Debug for the NFS client code
- nlm
Debug for the NFS Lock Manager, at either the NFS client or NFS server. This only applies to NFS v2/v3.
- rpc
Debug for the Remote Procedure Call module, at either the NFS client or NFS server.
For information on detailed usage of the rpcdebug command, refer to the manual page:
man 8 rpcdebug
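For example, a minimal debugging session for the NFS client code could look as follows; the debug messages appear in the kernel log:

> sudo rpcdebug -m nfs -s all   # set all NFS client debug flags
> sudo journalctl -kf           # follow the kernel log while reproducing the problem
> sudo rpcdebug -m nfs -c all   # clear the flags again afterwards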
19.8.2.2 Activating debug for other code upon which NFS depends #
NFS activities may depend on other related services, such as the NFS mount daemon—rpc.mountd. You can set options for related services within /etc/sysconfig/nfs.

For example, /etc/sysconfig/nfs contains the parameter:
MOUNTD_OPTIONS=""
To enable the debug mode, you have to use the -d option followed by any of the values: all, auth, call, general, or parse.
For example, the following code enables all forms of rpc.mountd logging:
MOUNTD_OPTIONS="-d all"
For all available options, refer to the manual page:
man 8 rpc.mountd
After changing /etc/sysconfig/nfs, services need to be restarted:

systemctl restart nfs-server   # for NFS server related changes
systemctl restart nfs          # for NFS client related changes