A Network File System (NFS) allows remote
hosts to mount file systems over a network and interact with those file systems
as though they are mounted locally. This enables system administrators to
consolidate resources onto centralized servers on the network.
This chapter focuses on fundamental NFS concepts and
supplemental information.
9.1. How It Works
Currently, there are three versions of NFS. NFS version 2
(NFSv2) is older and widely supported. NFS version 3 (NFSv3) supports safe
asynchronous writes and is more robust at error handling than NFSv2; it also
supports 64-bit file sizes and offsets, allowing clients to access more than
2 GB of file data.
NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer
requires an rpcbind service, supports ACLs, and utilizes stateful
operations. Red Hat Enterprise
Linux 6 supports NFSv2, NFSv3, and NFSv4 clients. When mounting a file
system via NFS, Red Hat Enterprise Linux uses NFSv4 by default, if the server
supports it.
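Where the default NFSv4 negotiation needs to be overridden (for example, against a server that only speaks NFSv3), the version can be forced with a mount option. A sketch using placeholder server and path names:

```shell
# Mount with whatever version the client and server negotiate
# (NFSv4 preferred on Red Hat Enterprise Linux 6).
# server1.example.com and /export are hypothetical names.
mount -t nfs server1.example.com:/export /mnt/nfs

# Force NFSv3 explicitly, e.g. against a server without NFSv4 support.
mount -t nfs -o nfsvers=3 server1.example.com:/export /mnt/nfs
```

These commands require root privileges and a reachable NFS server, so they are shown here only as illustrative fragments.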
All versions of NFS can use Transmission Control
Protocol (TCP) running over an IP network, with NFSv4 requiring
it. NFSv2 and NFSv3 can use the User Datagram Protocol (UDP) running
over an IP network to provide a stateless network connection between the client
and server.
When using NFSv2 or NFSv3 with UDP, the stateless UDP
connection (under normal conditions) has less protocol overhead than TCP. This
can translate into better performance on very clean, non-congested networks.
However, because UDP is stateless, if the server goes down unexpectedly, UDP
clients continue to saturate the network with requests for the server. In
addition, when a frame is lost with UDP, the entire RPC request must be
retransmitted; with TCP, only the lost frame needs to be resent. For these
reasons, TCP is the preferred protocol when connecting to an NFS server.
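The transport can also be selected per mount with the proto option; the hostnames below are placeholders:

```shell
# TCP is the default and preferred transport.
mount -t nfs -o proto=tcp server1.example.com:/export /mnt/nfs

# UDP can still be requested for NFSv2/NFSv3 on clean, uncongested networks.
mount -t nfs -o nfsvers=3,proto=udp server1.example.com:/export /mnt/nfs
```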
The mounting and locking protocols have been incorporated
into the NFSv4 protocol. The server also listens on the well-known TCP port
2049. As such, NFSv4 does not need to interact with
the rpcbind, lockd, and rpc.statd daemons. The rpc.mountd daemon is still required on
the NFS server to set up the exports, but is not involved in any over-the-wire
operations.
TCP is the default transport protocol for NFS versions 2 and
3 under Red Hat Enterprise Linux. UDP can be used for compatibility purposes as
needed, but is not recommended for wide usage. NFSv4 requires TCP.
All the RPC/NFS daemons have a '-p' command-line option that can set the
port, making firewall configuration easier.
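On Red Hat Enterprise Linux 6 these ports are conventionally pinned in /etc/sysconfig/nfs rather than passed on the command line each time. A sketch, with example port numbers that can be replaced by any free ports:

```shell
# /etc/sysconfig/nfs -- fix the otherwise dynamically assigned RPC ports
# so firewall rules can reference them (port numbers are examples only)
RQUOTAD_PORT=875
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
STATD_PORT=662
```

The NFS services must be restarted for the settings to take effect.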
After TCP wrappers grant access to the client, the NFS
server refers to the
/etc/exports
configuration file to determine whether the client is allowed to access any
exported file systems. Once verified, all file and directory operations are
available to the user.
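A minimal /etc/exports entry takes the form directory, client, options; the path and client name below are placeholders:

```shell
# /etc/exports -- one export per line: directory  client(options)
/export/data  client.example.com(rw,sync)
```

After editing the file, `exportfs -r` re-exports all directories so the change takes effect without restarting the NFS service.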
In order for NFS to work with a default installation of Red
Hat Enterprise Linux with a firewall enabled, configure IPTables with the
default TCP port 2049. Without proper IPTables configuration, NFS will not
function properly.
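Under the assumption that only NFSv4 is in use, opening the single well-known port is enough; the rules look roughly like:

```shell
# Allow NFS traffic on the well-known port (TCP is required for NFSv4).
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
# NFSv2/NFSv3 over UDP would additionally need:
iptables -A INPUT -p udp --dport 2049 -j ACCEPT
# Persist the rules across reboots on Red Hat Enterprise Linux 6:
service iptables save
```

For NFSv2 and NFSv3, the rpcbind port (111) and any ports fixed in /etc/sysconfig/nfs must be opened as well.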
The NFS initialization script and
rpc.nfsd
process now allow binding to
any specified port during system start up. However, this can be error-prone if
the port is unavailable, or if it conflicts with another daemon.
9.1.1. Required Services
Red Hat Enterprise Linux uses a combination of kernel-level
support and daemon processes to provide NFS file sharing. All NFS versions rely
on Remote Procedure Calls (RPC) between clients and servers.
RPC services under Red Hat Enterprise Linux 6 are controlled by the
rpcbind
service. To share or mount NFS
file systems, the following services work together depending on which version
of NFS is implemented:
nfs
service nfs start starts the NFS server and the appropriate RPC processes
to service requests for shared NFS file systems.
nfslock
service nfslock start activates a mandatory service that starts the
appropriate RPC processes allowing NFS clients to lock files on the server.
rpcbind
rpcbind accepts port reservations from local RPC services. These ports are
then made available (or advertised) so the corresponding remote RPC services
can access them. rpcbind responds to requests for RPC services and sets up
connections to the requested RPC service. This is not used with NFSv4.
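Which RPC services have registered with rpcbind on a host can be inspected with rpcinfo, and the services are started in the usual order with the service command; for example:

```shell
# List all RPC programs, versions, and ports registered with rpcbind.
rpcinfo -p

# Start the NFS-related services on Red Hat Enterprise Linux 6
# and enable the NFS server at boot (requires root).
service rpcbind start
service nfs start
service nfslock start
chkconfig nfs on
```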
The following RPC processes facilitate NFS services:
rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv2
and NFSv3 clients. It checks that the requested NFS share is currently
exported by the NFS server, and that the client is allowed to access it. If
the mount request is allowed, the rpc.mountd server replies with a Success
status and provides the File-Handle for this NFS share back to the NFS
client.
rpc.nfsd
rpc.nfsd allows the explicit NFS versions and protocols the server advertises
to be defined. It works with the Linux kernel to meet the dynamic demands of
NFS clients, such as providing server threads each time an NFS client
connects. This process corresponds to the nfs service.
lockd
lockd is a kernel thread which runs on both clients and servers. It
implements the Network
Lock Manager (NLM) protocol, which allows NFSv2 and NFSv3 clients to lock
files on the server. It is started automatically whenever the NFS server is run
and whenever an NFS file system is mounted.
rpc.statd
This process implements the Network Status Monitor (NSM) RPC protocol, which
notifies NFS clients when an NFS server is restarted without being gracefully
brought down. rpc.statd is started automatically by the nfslock service, and
does not require user configuration. This is not used with NFSv4.
rpc.rquotad
This process provides user quota information for remote users. rpc.rquotad
is started automatically by the nfs service and does not require user
configuration.
rpc.idmapd
rpc.idmapd provides NFSv4 client and server upcalls, which map between
on-the-wire NFSv4 names (which are strings in the form of user@domain) and
local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf
file must be configured. This service is required for use with NFSv4, although
it is not needed when all hosts share the same DNS domain name.
9.2. pNFS
Support for Parallel NFS (pNFS) as part of the NFS v4.1
standard is available as of Red Hat Enterprise Linux 6.4. The pNFS
architecture improves the scalability of NFS, with possible improvements to
performance. That is, when a server implements pNFS as well, a client is able
to access data through multiple servers concurrently. It supports three storage
protocols or layouts: files, objects, and blocks.
To enable this functionality, use one of the following mount
options on mounts from a pNFS-enabled server:
-o minorversion=1
or
-o v4.1
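In practice the option is passed like any other mount option; the server name and paths below are placeholders:

```shell
# Request NFSv4.1 so the client can use pNFS if the server offers it.
mount -t nfs -o minorversion=1 server1.example.com:/export /mnt/pnfs
```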
After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module
is automatically loaded on the first mount. Use the following command to
verify the module was loaded:
$ lsmod | grep nfs_layout_nfsv41_files
Another way to verify a successful NFSv4.1 mount is with the mount command.
The mount entry in the output should contain minorversion=1.
The protocol allows for three possible pNFS layout types:
files, objects, and blocks. However, the Red Hat Enterprise Linux 6.4
client only supports the files layout type, so it will use pNFS only when the
server also supports the files layout type.
For more information on pNFS, refer to the pNFS project website.