There are two ways to configure an NFS server:
- Manually editing the NFS configuration file, that is, /etc/exports
- Through the command line, that is, by using the command exportfs
The /etc/exports Configuration File
The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It observes the following syntax rules:
- Blank lines are ignored.
- To add a comment, start a line with the hash mark (#).
- You can wrap long lines with a backslash (\).
- Each exported file system should be on its own individual line.
- Any lists of authorized hosts placed after an exported file system must be separated by space characters.
- Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.
Each entry for an exported file system has the following structure:
export host(options)
The aforementioned structure uses the following variables:
export
The directory being exported
host
The host or network to which the export is being shared
options
The options to be used for host
It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each hostname followed by its respective options (in parentheses), as in:
export host1(options1) host2(options2) host3(options3)
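For instance, a hypothetical entry of this form exporting one directory to two clients with different options (the hostnames and path are illustrative only) could read:
/exported/directory bob.example.com(rw) alice.example.com(ro,async)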
For information on different methods for specifying hostnames, see the Hostname Formats section below.
In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example:
Example 9.6. The /etc/exports file
/exported/directory bob.example.com
Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no options are specified in this example, NFS will use default settings.
The default settings are:
ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file system. To allow hosts to make changes to the file system (that is, read/write), specify the rw option.
sync
The NFS server will not reply to requests before changes made by previous requests are written to disk. To enable asynchronous writes instead, specify the option async.
wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can improve performance as it reduces the number of times the disk must be accessed by separate write commands, thereby reducing write overhead. To disable this, specify the no_wdelay option. no_wdelay is only available if the default sync option is also specified.
root_squash
This prevents root users connected remotely (as opposed to locally) from having root privileges; instead, the NFS server will assign them the user ID nfsnobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify no_root_squash.
To squash every remote user (including root), use all_squash. To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the anonuid and anongid options, respectively, as in:
export host(anonuid=uid,anongid=gid)
Here, uid and gid are the user ID number and group ID number, respectively. The anonuid and anongid options allow you to create a special user and group account for remote NFS users to share.
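For example, a hypothetical entry that squashes all remote users from a single host onto a dedicated local account (the ID value 5000 is an arbitrary choice for illustration) could read:
/exported/directory bob.example.com(rw,all_squash,anonuid=5000,anongid=5000)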
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system.
Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options:
/another/exported/directory 192.168.0.3(rw,async)
In this example, 192.168.0.3 can mount /another/exported/directory/ read/write and all writes to disk are asynchronous. For more information on exporting options, refer to man exportfs.
Other options are available where no default value is specified. These include the ability to disable sub-tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early NFS client implementations). Refer to man exports for details on these less-used options.
The exportfs Command
Every file system being exported to remote users with NFS, as well as the access level for those file systems, is listed in the /etc/exports file. When the nfs service starts, the /usr/sbin/exportfs command launches and reads this file, passes control to rpc.mountd (if NFSv2 or NFSv3) for the actual mounting process, then to rpc.nfsd, where the file systems are then made available to remote users.
When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the /usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab.
Since rpc.mountd refers to the xtab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately.
The following is a list of commonly-used options available for /usr/sbin/exportfs:
-r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/xtab. This option effectively refreshes the export list with any changes made to /etc/exports.
-a
Causes all directories to be exported or unexported, depending on what other options are passed to /usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file systems specified in /etc/exports.
-o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with additional file systems to be exported. These file systems must be formatted in the same way they are specified in /etc/exports. This option is often used to test an exported file system before adding it permanently to the list of file systems to be exported.
-i
Ignores /etc/exports; only options given from the command line are used to define exported file systems.
-u
Unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r.
-v
Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the exportfs command is executed.
If no options are passed to the exportfs command, it displays a list of currently exported file systems. For more information about the exportfs command, refer to man exportfs.
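As a brief illustration of these options, the following hypothetical session temporarily exports a test directory to a single client, lists the current exports verbosely, and then unexports that entry again (the hostname and path are placeholders):
# exportfs -o rw,sync bob.example.com:/srv/test
# exportfs -v
# exportfs -u bob.example.com:/srv/test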
Using exportfs with NFSv4
In Red Hat Enterprise Linux 6, no extra steps are
required to configure NFSv4 exports as any filesystems mentioned are
automatically available to NFSv2, NFSv3, and NFSv4 clients using the same path.
This was not the case in previous versions.
To prevent clients from using NFSv4, turn it off by setting RPCNFSDARGS= -N 4 in /etc/sysconfig/nfs.
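A minimal sketch of the relevant line in /etc/sysconfig/nfs (the quoting shown is one common way of passing the flag; adapt it to the existing RPCNFSDARGS entry in your file):
RPCNFSDARGS="-N 4"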
Running NFS Behind a Firewall
NFS requires rpcbind, which dynamically assigns ports for RPC services and can cause problems for configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the /etc/sysconfig/nfs configuration file to control which ports the required RPC services run on.
The /etc/sysconfig/nfs file may not exist by default on all systems. If it does not exist, create it and add the following variables, replacing port with an unused port number (alternatively, if the file exists, un-comment and change the default entries as required):
MOUNTD_PORT=port
Controls which TCP and UDP port mountd (rpc.mountd) uses.
STATD_PORT=port
Controls which TCP and UDP port status (rpc.statd) uses.
LOCKD_TCPPORT=port
Controls which TCP port nlockmgr (lockd) uses.
LOCKD_UDPPORT=port
Controls which UDP port nlockmgr (lockd) uses.
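For illustration, a hypothetical /etc/sysconfig/nfs could pin the ports as follows (the port numbers are arbitrary examples; any unused ports will do):
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769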
If NFS fails to start, check /var/log/messages. Normally, NFS will fail to start if you specify a port number that is already in use. After editing /etc/sysconfig/nfs, restart the NFS service using service nfs restart. Run the rpcinfo -p command to confirm the changes.
To configure a firewall to allow NFS, perform the following steps (a sketch of matching iptables rules follows this procedure):
Procedure Configure a firewall to allow NFS
- Allow TCP and UDP port 2049 for NFS.
- Allow TCP and UDP port 111 (rpcbind/sunrpc).
- Allow the TCP and UDP port specified with MOUNTD_PORT="port".
- Allow the TCP and UDP port specified with STATD_PORT="port".
- Allow the TCP port specified with LOCKD_TCPPORT="port".
- Allow the UDP port specified with LOCKD_UDPPORT="port".
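A minimal sketch of such rules using iptables, assuming the hypothetical port assignments from the /etc/sysconfig/nfs example above (892 for mountd, 662 for statd, 32803/32769 for lockd); adapt the ports and rule placement to your own firewall configuration:
# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -p udp --dport 2049 -j ACCEPT
# iptables -A INPUT -p tcp --dport 111 -j ACCEPT
# iptables -A INPUT -p udp --dport 111 -j ACCEPT
# iptables -A INPUT -p tcp --dport 892 -j ACCEPT
# iptables -A INPUT -p udp --dport 892 -j ACCEPT
# iptables -A INPUT -p tcp --dport 662 -j ACCEPT
# iptables -A INPUT -p udp --dport 662 -j ACCEPT
# iptables -A INPUT -p tcp --dport 32803 -j ACCEPT
# iptables -A INPUT -p udp --dport 32769 -j ACCEPT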
To allow NFSv4.0 callbacks to pass through firewalls, set /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client.
This process is not needed for NFSv4.1 or higher, and the other ports for mountd, statd, and lockd are not required in a pure NFSv4 environment.
Discovering NFS exports
There are two ways to discover which file systems an NFS server exports.
First, on any server that supports NFSv2 or NFSv3, use the showmount command:
$ showmount -e myserver
Export list for myserver
/exports/foo
/exports/bar
Second, on any server that supports NFSv4, mount / and look around:
# mount myserver:/ /mnt/
# cd /mnt/
# ls
exports
# ls exports
foo
bar
On servers that support both NFSv4 and either NFSv2 or
NFSv3, both methods will work and give the same results.
Hostname Formats
The host(s) can be in the following forms:
Single machine
A fully-qualified domain name (that
can be resolved by the server), hostname (that can be resolved by the server),
or an IP address.
Series of machines specified with wildcards
Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com includes one.example.com but does not include one.two.example.com.
IP networks
Use a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask (for example, 192.168.0.0/24). Another acceptable format is a.b.c.d/netmask, where a.b.c.d is the network and netmask is the netmask (for example, 192.168.100.8/255.255.255.0).
Netgroups
Use the format @group-name, where group-name is the NIS netgroup name.
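Putting these forms together, a hypothetical /etc/exports entry (the path, domain, network, and netgroup name are placeholders) could combine several of them on one line:
/exported/directory one.example.com(rw) *.example.com(ro) 192.168.0.0/24(ro) @trusted-hosts(rw)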
NFS over RDMA
To enable the RDMA transport in the Linux kernel NFS server, use the following procedure:
Procedure Enable RDMA from server
- Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:
# yum install rdma; chkconfig --level 2345 rdma on
- Ensure the package that provides the nfs-rdma service is installed and the service is enabled with the following command:
# yum install rdma; chkconfig --level 345 nfs-rdma on
- Ensure that the RDMA port is set to the preferred port (default for Red Hat Enterprise Linux 6 is 2050). To do so, edit the /etc/rdma/rdma.conf file to set NFSoRDMA_LOAD=yes and NFSoRDMA_PORT to the desired port (a sketch of the relevant lines follows this procedure).
- Set up the exported filesystem as normal for NFS mounts.
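The relevant lines in /etc/rdma/rdma.conf would then look something like the following (2050 is the Red Hat Enterprise Linux 6 default mentioned above; adjust the port as needed):
NFSoRDMA_LOAD=yes
NFSoRDMA_PORT=2050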
On the client side, use the following procedure:
Procedure Enable RDMA from client
- Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:
# yum install rdma; chkconfig --level 2345 rdma on
- Mount the NFS exported partition using the RDMA option on the mount call. The port option can optionally be added to the call.
# mount -t nfs -o rdma,port=port_number
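For example, a complete invocation could look like the following sketch, where the server name, export path, mount point, and port 2050 are placeholders for your own values:
# mount -t nfs -o rdma,port=2050 myserver:/exported/directory /mnt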
Securing NFS
NFS is well-suited for sharing entire file systems with a
large number of known hosts in a transparent manner. However, with ease-of-use
comes a variety of potential security problems. Consider the following sections
when exporting NFS file systems on a server or mounting them on a client. Doing
so minimizes NFS security risks and better protects data on the server.
NFS Security with AUTH_SYS and export controls
Traditionally, NFS has given two options in order to control
access to exported files.
First, the server restricts which hosts are allowed to mount
which filesystems either by IP address or by host name.
Second, the server enforces file system permissions for users on NFS clients in the same way it does for local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX), which relies on the client to state the UID and GIDs of the user. Be aware that this means a malicious or misconfigured client can easily get this wrong and allow a user access to files that it should not.
To limit the potential risks, administrators often allow
read-only access or squash user permissions to a common user and group ID.
Unfortunately, these solutions prevent the NFS share from being used in the way
it was originally intended.
Additionally, if an attacker gains control of the DNS server
used by the system exporting the NFS file system, the system associated with a
particular hostname or fully qualified domain name can be pointed to an
unauthorized machine. At this point, the unauthorized machine is the
system permitted to mount the NFS share, since no username or password
information is exchanged to provide additional security for the NFS mount.
Wildcards should be used sparingly when exporting
directories through NFS, as it is possible for the scope of the wildcard to
encompass more systems than intended.
It is also possible to restrict access to the rpcbind service with TCP wrappers. Creating rules with iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.
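As a sketch, TCP wrappers rules restricting rpcbind to a single trusted subnet (the subnet shown is an arbitrary example) could look like this:
In /etc/hosts.allow:
rpcbind : 192.168.0.0/255.255.255.0
In /etc/hosts.deny:
rpcbind : ALL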
For more information on securing NFS and rpcbind, refer to man iptables.
NFS security with AUTH_GSS
The release of NFSv4 brought a revolution to NFS security by
mandating the implementation of RPCSEC_GSS and the Kerberos version 5 GSS-API
mechanism. However, RPCSEC_GSS and the Kerberos mechanism are also available
for all versions of NFS.
With the RPCSEC_GSS Kerberos mechanism, the server no longer
depends on the client to correctly represent which user is accessing the file,
as is the case with AUTH_SYS. Instead, it uses cryptography to authenticate
users to the server, preventing a malicious client from impersonating a user
without having that user's Kerberos credentials.
Note
It is assumed that a Kerberos ticket-granting server (KDC)
is installed and configured correctly, prior to configuring an NFSv4 server.
Kerberos is a network authentication system which allows clients and servers to
authenticate to each other through use of symmetric encryption and a trusted
third party, the KDC. For more information on Kerberos see Red Hat's Identity
Management Guide.
To set up RPCSEC_GSS, use the following procedure (a sketch of the corresponding commands follows the procedure):
Set up RPCSEC_GSS
- Create nfs/client.mydomain@MYREALM and nfs/server.mydomain@MYREALM principals.
- Add the corresponding keys to keytabs for the client and server.
- On the server side, add sec=krb5,krb5i,krb5p to the export. To continue allowing AUTH_SYS, add sec=sys,krb5,krb5i,krb5p instead.
- On the client side, add sec=krb5 (or sec=krb5i, or sec=krb5p depending on the setup) to the mount options.
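A minimal sketch of steps 1, 2, and 4, assuming an MIT Kerberos KDC and the hypothetical names used above (server.mydomain, client.mydomain, MYREALM; the export path and mount point are also placeholders):
# kadmin
kadmin: addprinc -randkey nfs/server.mydomain@MYREALM
kadmin: addprinc -randkey nfs/client.mydomain@MYREALM
kadmin: ktadd -k /etc/krb5.keytab nfs/server.mydomain@MYREALM    (run on the server)
kadmin: ktadd -k /etc/krb5.keytab nfs/client.mydomain@MYREALM    (run on the client)
# mount -t nfs -o sec=krb5 server.mydomain:/export /mnt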
For more information, such as the difference between krb5, krb5i, and krb5p, refer to the exports and nfs man pages.
NFS security with NFSv4
NFSv4 includes ACL support based on the Microsoft Windows NT
model, not the POSIX model, because of the former's features and wide
deployment.
Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles.
File Permissions
Once the NFS file system is mounted read/write by a remote host, the only protection each shared file has is its permissions. If two users that share the same user ID value mount the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to access any files within the NFS share.
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat recommends that this feature be kept enabled.
By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone accessing the NFS share as the root user on their local machine to nobody. Root squashing is controlled by the default option root_squash; for more information about this option, see the description of the default export settings above. If possible, never disable root squashing.
When exporting an NFS share as read-only, consider using the all_squash option. This option makes every user accessing the exported file system take the user ID of the nfsnobody user.
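For example, a hypothetical read-only export using this option (the path and network are placeholders) could read:
/exported/directory 192.168.0.0/24(ro,all_squash)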
NFS and rpcbind
Note
The following section only applies to NFSv2 or NFSv3 implementations that require the rpcbind service for backward compatibility.
The rpcbind utility maps RPC services to the ports on which they listen. RPC processes notify rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts rpcbind on the server with a particular RPC program number. The rpcbind service redirects the client to the proper port number so it can communicate with the requested service.
Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start.
The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the precise syntax for these rules.
Troubleshooting NFS and rpcbind
Because rpcbind provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).
To make sure the proper NFS RPC-based services are enabled for rpcbind, issue the following command:
# rpcinfo -p
Example 9.7. rpcinfo -p command output
The following is sample output from this command:
program vers proto port service
100021 1 udp 32774 nlockmgr
100021 3 udp 32774 nlockmgr
100021 4 udp 32774 nlockmgr
100021 1 tcp 34437 nlockmgr
100021 3 tcp 34437 nlockmgr
100021 4 tcp 34437 nlockmgr
100011 1 udp 819 rquotad
100011 2 udp 819 rquotad
100011 1 tcp 822 rquotad
100011 2 tcp 822 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100005 1 udp 836 mountd
100005 1 tcp 839 mountd
100005 2 udp 836 mountd
100005 2 tcp 839 mountd
100005 3 udp 836 mountd
100005 3 tcp 839 mountd
If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working.
For more information and a list of options on rpcinfo, refer to its man page.