Sunday, 27 April 2014

How to install Nagios in RHEL/CentOS/Fedora

Nagios is the industry standard in enterprise-class monitoring for good reason. It allows you to gain insight into your network and fix problems before customers know they even exist. It’s stable, scalable, supported, and extensible.

Install Nagios

First step:

Let me show you how to install the Nagios monitoring tool.
This installation has been tested by the unixmen team on Fedora/CentOS/RHEL.
1- First, install the required packages: httpd, php, gcc, glibc, glibc-common, gd and gd-devel
yum install httpd php
yum install gcc
yum install glibc glibc-common
yum install gd gd-devel
 
2- Create the nagios user and set a password:
# /usr/sbin/useradd -m nagios
# passwd nagios
3- Add the nagcmd group, then add the nagios and apache users to it:
/usr/sbin/groupadd nagcmd
/usr/sbin/usermod -a -G nagcmd nagios
/usr/sbin/usermod -a -G nagcmd apache
4- Now go to http://www.nagios.org and download the following files:
nagios-3.1.0.tar.gz  nagios-plugins-1.4.13.tar.gz  nrpe-2.12.tar.gz
tar -zxvf nagios-3.1.0.tar.gz
cd nagios-3.1.0
./configure --with-command-group=nagcmd
# make all; make install; make install-init; make install-config; make install-commandmode; make install-webconf
5- Edit the admin email address in the contacts file:
vi /usr/local/nagios/etc/objects/contacts.cfg
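The default contacts.cfg ships with a nagiosadmin contact; change its email directive to your own address (the definition below follows the stock template, with a placeholder address):

define contact{
        contact_name    nagiosadmin
        use             generic-contact
        alias           Nagios Admin
        email           you@yourdomain.com    ; replace with your address
        }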

6- Create a nagiosadmin account for logging into the Nagios web interface and assign it a password; you'll need it later:
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
Enter the password when prompted.
7- Restart the httpd server:
# service httpd restart

The second step: Extract and install the plugins

1- Go to the directory where you downloaded the Nagios tools and extract the plugins:
tar -zxvf nagios-plugins-1.4.13.tar.gz
2- cd nagios-plugins-1.4.13
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make; make install
3- Now add nagios to chkconfig so it starts at boot:
chkconfig --add nagios
chkconfig nagios on
4- Verify that your Nagios configuration is valid with the command:
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
5- If no errors are displayed, start Nagios with the command:
service nagios start


 Modify SELinux Settings

Fedora ships with SELinux (Security Enhanced Linux) installed and in Enforcing mode by default. This can result in "Internal Server Error" messages when you attempt to access the Nagios CGIs.


See if SELinux is in Enforcing mode.


getenforce

Put SELinux into Permissive mode.


setenforce 0

To make this change permanent, you'll have to modify the settings in /etc/selinux/config and reboot.
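That means setting the SELINUX line in /etc/selinux/config:

SELINUX=permissive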


Instead of disabling SELinux or setting it to permissive mode, you can use the following command to run the CGIs under SELinux enforcing/targeted mode:


chcon -R -t httpd_sys_content_t /usr/local/nagios/sbin/

chcon -R -t httpd_sys_content_t /usr/local/nagios/share/


Now open your browser and go to http://localhost/nagios or http://ip/nagios.


Saturday, 26 April 2014

Install VM-Workstation on a Linux Host



You run the Linux bundle installer to install Workstation on a Linux host system. On most Linux distributions, the Linux bundle installer launches a GUI wizard. On some Linux distributions, including Red Hat Enterprise Linux 5.1, the bundle installer launches a command-line wizard instead of a GUI wizard. You can run the installer with the --console option to install Workstation in a terminal window.
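For example, a console-mode installation would be started like this (the xxxx-xxxx version placeholder follows the naming convention shown in the procedure below):

sh VMware-Workstation-xxxx-xxxx.x86_64.bundle --console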
Remote connections and virtual machine sharing are enabled by default when you install Workstation. With remote connections, you can connect to remote hosts and run remote virtual machines. With virtual machine sharing, you can create virtual machines that other instances of Workstation can access remotely.
Shared virtual machines are stored in the shared virtual machines directory, where VMware Workstation Server (vmware-workstation-server) manages them. Remote users connect to VMware Workstation Server through HTTPS port 443 on the host system.
To change the shared virtual machines directory or select a different port during the installation process, you must specify the --custom option. You can also change the shared virtual machines directory, select a different port, and disable remote connections and virtual machine sharing after Workstation is installed by modifying the Shared VMs Workstation preference setting. See Using VMware Workstation for more information.



Prerequisites
Verify that the host system meets the host system requirements. See Host System Requirements.
Verify that no incompatible VMware products are installed on the host system. See Installing Workstation with Other VMware Products.
Obtain the Workstation software and license key. See Obtaining the Workstation Software and License Key.
If you plan to use the Integrated Virtual Debugger for Eclipse, install it on the host system. See Installing the Integrated Virtual Debuggers for Eclipse and Visual Studio.
Compile the real-time clock function into the Linux kernel.
Verify that the parallel port PC-style hardware option (CONFIG_PARPORT_PC) is built and loaded as a kernel module and that it is set to m when the kernel is compiled.
Familiarize yourself with the Linux command-line installation options. You must use the --custom option to specify certain configuration settings. See Linux Command Line Installation Options.
Verify that you have root access on the host system.



Procedure
1
Log in to the host system with the user name that you plan to use when you run Workstation.
2
Become root.
For example: su root
The command that you use depends on your Linux distribution and configuration.
3
If you are installing Workstation from the installation media, mount the Workstation installation media.
4
Change directories to the directory that contains the Workstation installer file.
  • If you are installing the software from a CD: the installer file is in the Linux directory.
  • If you downloaded the software: the installer file is in the download directory.
5
Run the appropriate Workstation installer for the host system.
For example: sh VMware-Workstation-xxxx-xxxx.architecture.bundle [--option]
xxxx-xxxx is the version and build numbers, architecture is i386 or x86_64, and option is a command line option.
6
Accept the Open Virtualization Format (OVF) Tool license agreement.
If you are using the --console option or installing Workstation on a host system that does not support the GUI wizard, press Enter to scroll through and read the license agreement or type q to skip to the [yes/no] prompt.
7
Follow the prompts to finish the installation.

After Workstation is installed, vmware-workstation-server starts on the host system. vmware-workstation-server starts whenever you restart the host system.

Monday, 14 April 2014

NFS Client Configuration in Linux




The mount command mounts NFS shares on the client side. Its format is as follows:
# mount -t nfs -o options host:/remote/export /local/directory
This command uses the following variables:
options
A comma-delimited list of mount options
server
The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount
/remote/export
The file system or directory being exported from the server, that is, the directory you wish to mount
/local/directory
The client location where /remote/export is mounted
The NFS protocol version used in Red Hat Enterprise Linux 6 is identified by the mount options nfsvers or vers. By default, mount will use NFSv4 with mount -t nfs. If the server does not support NFSv4, the client will automatically step down to a version supported by the server. If the nfsvers/vers option is used to pass a particular version not supported by the server, the mount will fail. The file system type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o nfsvers=4 host:/remote/export /local/directory.
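For example, to force an NFSv3 mount from a hypothetical server (hostname and paths are illustrative):

# mount -t nfs -o nfsvers=3 server.example.com:/export/project /mnt/project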
Refer to man mount for more details.
If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the /etc/fstab file and the autofs service.

Mounting NFS File Systems using /etc/fstab

An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file.
Example /etc/fstab syntax
The general syntax for the line in /etc/fstab is as follows:
server:/usr/local/pub    /pub   nfs    defaults 0 0

The mount point /pub must exist on the client machine before this command can be executed. After adding this line to /etc/fstab on the client system, use the command mount /pub, and the mount point /pub is mounted from the server.
The /etc/fstab file is referenced by the netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the mount command during the boot process.
A valid /etc/fstab entry to mount an NFS export should contain the following information:
server:/remote/export /local/directory nfs options 0 0
The variables server, /remote/export, /local/directory, and options are the same ones used when manually mounting an NFS share.
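For instance, an fstab entry that overrides the defaults with explicit options might look like this (hostname and paths are illustrative):

server.example.com:/export/home    /mnt/home    nfs    rw,hard,intr    0 0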

 

autofs

One drawback to using /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to /etc/fstab is to use the kernel-based automount utility. An automounter consists of two components:
  • a kernel module that implements a file system, and
  • a user-space daemon that performs all of the other functions.
The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems.
Important
The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File System Client' groups. As such, it is no longer installed by default with the Base group. Ensure that nfs-utils is installed on the system before attempting to automount an NFS share.
autofs is also part of the 'Network File System Client' group.
autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration (in /etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An instance of the autofs version 4 daemon was run for each mount point configured in the master map and so it could be run manually from the command line for any given mount point. This is not possible with autofs version 5, because it uses a single daemon to manage all configured mount points; as such, all automounts must be configured in the master map. This is in line with the usual requirements of other industry standard automounters. Mount point, hostname, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host.

Improvements in autofs Version 5 over Version 4

autofs version 5 features the following enhancements over version 4:
Direct map support
Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries in a direct map contain an absolute path name as a key (instead of the relative path names used in indirect maps).
Lazy mount and unmount support
Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of this is the -hosts map, commonly used for automounting all exports from a host under /net/host as a multi-mount map entry. When using the -hosts map, an ls of /net/host will mount autofs trigger mounts for each export from host. These will then mount and expire them as they are accessed. This can greatly reduce the number of active mounts needed when accessing a server with a large number of exports.
Enhanced LDAP support
The autofs configuration file (/etc/sysconfig/autofs) provides a mechanism to specify the autofs schema that a site implements, thus precluding the need to determine this via trial and error in the application itself. In addition, authenticated binds to the LDAP server are now supported, using most mechanisms supported by the common LDAP server implementations. A new configuration file has been added for this support: /etc/autofs_ldap_auth.conf. The default configuration file is self-documenting, and uses an XML format.
Proper use of the Name Service Switch (nsswitch) configuration
The Name Service Switch configuration file exists to provide a means of determining from where specific configuration data comes. The reason for this configuration is to allow administrators the flexibility of using the back-end database of choice, while maintaining a uniform software interface to access the data. While the version 4 automounter is becoming increasingly better at handling the NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete implementation.
Refer to man nsswitch.conf for more information on the supported syntax of this file. Not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp, nis, nisplus, ldap, and hesiod.
Multiple master map entries per autofs mount point
One thing that is frequently used but not yet mentioned is the handling of multiple master map entries for the direct mount point /-. The map keys for each entry are merged and behave as one map.
Example Multiple master map entries per autofs mount point
An example is seen in the connectathon test maps for the direct mounts below:
/- /tmp/auto_dcthon
/- /tmp/auto_test3_direct
/- /tmp/auto_test4_direct

 

autofs Configuration

The primary configuration file for the automounter is /etc/auto.master, also referred to as the master map. The master map lists autofs-controlled mount points on the system, and their corresponding configuration files or network sources known as automount maps. The format of the master map is as follows:
mount-point map-name options
The variables used in this format are:
mount-point
The autofs mount point, /home, for example.
map-name
The name of a map source which contains a list of mount points, and the file system location from which those mount points should be mounted. The syntax for a map entry is described below.
options
If supplied, these will apply to all entries in the given map provided they don't themselves have options specified. This behavior is different from autofs version 4 where options were cumulative. This has been changed to implement mixed environment compatibility.
Example /etc/auto.master file
The following is a sample line from /etc/auto.master file (displayed with cat /etc/auto.master):
/home /etc/auto.misc
The general format of maps is similar to the master map; however, the "options" appear between the mount point and the location instead of at the end of the entry as in the master map:
mount-point   [options]   location

The variables used in this format are:
mount-point
This refers to the autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-point above) may be followed by a space separated list of offset directories (sub directory names each beginning with a "/") making them what is known as a multi-mount entry.
options
Whenever supplied, these are the mount options for the map entries that do not specify their own options.
location
This refers to the file system location such as a local file system path (preceded with the Sun map format escape character ":" for map names beginning with "/"), an NFS file system or other valid file system location.
The following is a sample of contents from a map file (for example, /etc/auto.misc):
payroll -fstype=nfs personnel:/dev/hda3
sales -fstype=ext3 :/dev/hda4
The first column in a map file indicates the autofs mount point (sales and payroll from the server called personnel). The second column indicates the options for the autofs mount while the third column indicates the source of the mount. Following the above configuration, the autofs mount points will be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not needed for correct operation.
The automounter will create the directories if they do not exist. If the directories exist before the automounter was started, the automounter will not remove them when it exits. You can start or restart the automount daemon by issuing either of the following two commands:
  • service autofs start (if the automount daemon has stopped)
  • service autofs restart
Using the above configuration, if a process requires access to an autofs unmounted directory such as /home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. If a timeout is specified, the directory will automatically be unmounted if the directory is not accessed for the timeout period.
You can view the status of the automount daemon by issuing the following command:
#  service autofs status
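As a quick sanity check with the sample maps above in place, accessing a path under an autofs mount point should trigger the mount on demand; a hypothetical session:

# service autofs restart
# ls /home/payroll          (first access triggers the automount)
# mount | grep payroll      (the NFS mount should now be listed)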

Overriding or Augmenting Site Configuration Files

It can be useful to override site defaults for a specific mount point on a client system. For example, consider the following conditions:
  • Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following directive:
automount:    files nis
  • The auto.master file contains the following
+auto.master
  • The NIS auto.master map file contains the following:
/home auto.home
  • The NIS auto.home map contains the following:
beth        fileserver.example.com:/export/home/beth
joe         fileserver.example.com:/export/home/joe
*           fileserver.example.com:/export/home/&
  • The file map /etc/auto.home does not exist.
Given these conditions, let's assume that the client system needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client will need to use the following /etc/auto.master map:
/home /etc/auto.home
+auto.master
The /etc/auto.home map contains the entry:
*    labserver.example.com:/export/home/&
Because the automounter only processes the first occurrence of a mount point, /home will contain the contents of /etc/auto.home instead of the NIS auto.home map.
Alternatively, to augment the site-wide auto.home map with just a few entries, create an /etc/auto.home file map, and in it put the new entries. At the end, include the NIS auto.home map. Then the /etc/auto.home file map will look similar to:
mydir someserver:/export/mydir
+auto.home
Given the NIS auto.home map listed above, ls /home would now output:
beth joe mydir
This last example works as expected because autofs does not include the contents of a file map of the same name as the one it is reading. As such, autofs moves on to the next map source in the nsswitch configuration.

Using LDAP to Store Automounter Maps

LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a dependency of the automounter. To configure LDAP access, modify /etc/openldap/ldap.conf. Ensure that BASE, URI, and schema are set appropriately for your site.
The most recently established schema for storing automount maps in LDAP is described by rfc2307bis. To use this schema it is necessary to set it in the autofs configuration (/etc/sysconfig/autofs) by removing the comment characters from the schema definition. For example:
Example Setting autofs configuration
DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"

Ensure that these are the only schema entries not commented in the configuration. The automountKey replaces the cn attribute in the rfc2307bis schema. An LDIF of a sample configuration is described below:
Example LDIF configuration
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#
 
# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master
 
# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.master,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#
 
# /home, auto.master, example.com
dn: automountKey=/home,automountMapName=auto.master,dc=example,dc=com
objectClass: automount
automountKey: /home
automountInformation: auto.home
 
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#
 
# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home
 
# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.home,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#
 
# foo, auto.home, example.com
dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: foo
automountInformation: filer.example.com:/export/foo
 
# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&
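To confirm the maps are resolvable from the directory, a simple anonymous search can be run against the base DN used above:

# ldapsearch -x -b "dc=example,dc=com" "(objectClass=automountMap)"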

Common NFS Mount Options

Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at mount time to make the mounted share easier to use. These options can be used with manual mount commands, /etc/fstab settings, and autofs.
The following are options commonly used for NFS mounts:
intr
Allows NFS requests to be interrupted if the server goes down or cannot be reached.
lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all, none, or pos/positive.
nfsvers=version
Specifies which version of the NFS protocol to use, where version is 2, 3, or 4. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the kernel and mount command.
The option vers is identical to nfsvers, and is included in this release for compatibility reasons.
noacl
Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible with older systems.
nolock
Disables file locking. This setting is occasionally required when connecting to older NFS servers.
noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries.
nosuid
Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
port=num
Specifies the numeric value of the NFS server port. If num is 0 (the default), then mount queries the remote host's rpcbind service for the port number to use. If the remote host's NFS daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead.
rsize=num and wsize=num
These settings speed up NFS communication for reads (rsize) and writes (wsize) by setting a larger data block size (num, in bytes) to be transferred at one time. Be careful when changing these values; some older Linux kernels and network cards do not work well with larger block sizes. For NFSv2 or NFSv3, the default value for both parameters is 8192. For NFSv4, the default value for both parameters is 32768.
sec=mode
Specifies the type of security to utilize when authenticating an NFS connection. Its default setting is sec=sys, which uses local UNIX UIDs and GIDs by using AUTH_SYS to authenticate NFS operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.
sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead.
tcp
Instructs the NFS mount to use the TCP protocol.
udp
Instructs the NFS mount to use the UDP protocol.
For a complete list of options and more detailed information on each one, refer to man mount and man nfs.
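Several of these options are typically combined on one mount line; for example (hostname and paths are illustrative):

# mount -t nfs -o nfsvers=3,tcp,rsize=32768,wsize=32768,nosuid server.example.com:/export /mnt/export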

Starting and Stopping NFS

To run an NFS server, the rpcbind service must be running. To verify that rpcbind is active, use the following command:
    # service rpcbind status
If the rpcbind service is running, then the nfs service can be started. To start an NFS server, use the following command:
    # service nfs start
nfslock must also be started for both the NFS client and server to function properly. To start NFS locking, use the following command:
    # service nfslock start
If NFS is set to start at boot, ensure that nfslock also starts by running chkconfig --list nfslock. If nfslock is not set to on, you will need to run service nfslock start manually each time the computer starts. To set nfslock to start automatically on boot, use chkconfig nfslock on.
nfslock is only needed for NFSv2 and NFSv3.
To stop the server, use:
    # service nfs stop
The restart option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. To restart the server type:
    # service nfs restart
The condrestart (conditional restart) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. To conditionally restart the server type:
    # service nfs condrestart
To reload the NFS server configuration file without restarting the service type:
    # service nfs reload
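To have the full stack come up automatically at boot on a SysV init system such as this, the usual approach is:

# chkconfig rpcbind on
# chkconfig nfs on
# chkconfig nfslock on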

NFS Server Configuration in Linux




There are two ways to configure an NFS server:
  • Manually editing the NFS configuration file, that is, /etc/exports, and
  • through the command line, that is, by using the command exportfs

The /etc/exports Configuration File

The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows the following syntax rules:
  • Blank lines are ignored.
  • To add a comment, start a line with the hash mark (#).
  • You can wrap long lines with a backslash (\).
  • Each exported file system should be on its own individual line.
  • Any lists of authorized hosts placed after an exported file system must be separated by space characters.
  • Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.
Each entry for an exported file system has the following structure:
export host(options)
The aforementioned structure uses the following variables:
export
The directory being exported
host
The host or network to which the export is being shared
options
The options to be used for host
It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each hostname followed by its respective options (in parentheses), as in:
export host1(options1) host2(options2) host3(options3)
For information on different methods for specifying hostnames, refer to the Hostname Formats section below.
In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example:
Example The /etc/exports file
/exported/directory bob.example.com
Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no options are specified in this example, NFS will use default settings.

The default settings are:
ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file system. To allow hosts to make changes to the file system (that is, read/write), specify the rw option.
sync
The NFS server will not reply to requests before changes made by previous requests are written to disk. To enable asynchronous writes instead, specify the option async.
wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can improve performance as it reduces the number of times the disk must be accessed by separate write commands, thereby reducing write overhead. To disable this, specify the no_wdelay option; no_wdelay is only available if the default sync option is also specified.
root_squash
This prevents root users connected remotely (as opposed to locally) from having root privileges; instead, the NFS server will assign them the user ID nfsnobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify no_root_squash.
To squash every remote user (including root), use all_squash. To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the anonuid and anongid options, respectively, as in:
export host(anonuid=uid,anongid=gid)
Here, uid and gid are user ID number and group ID number, respectively. The anonuid and anongid options allow you to create a special user and group account for remote NFS users to share.
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system.
Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options:
/another/exported/directory 192.168.0.3(rw,async)
In this example 192.168.0.3 can mount /another/exported/directory/ read/write and all writes to disk are asynchronous. For more information on exporting options, refer to man exportfs.
Other options are available where no default value is specified. These include the ability to disable sub-tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early NFS client implementations). Refer to man exports for details on these less-used options.
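For example, an export using a couple of these less-used options might look like this (hostname and path are illustrative):

/export/legacy legacy-client.example.com(rw,insecure,no_subtree_check)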

The exportfs Command

Every file system being exported to remote users with NFS, as well as the access level for those file systems, are listed in the /etc/exports file. When the nfs service starts, the /usr/sbin/exportfs command launches and reads this file, passes control to rpc.mountd (if NFSv2 or NFSv3) for the actual mounting process, then to rpc.nfsd where the file systems are then available to remote users.
When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the /usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab. Since rpc.mountd refers to the xtab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately.
The following is a list of commonly-used options available for /usr/sbin/exportfs:
-r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/xtab. This option effectively refreshes the export list with any changes made to /etc/exports.
-a
Causes all directories to be exported or unexported, depending on what other options are passed to /usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file systems specified in /etc/exports.
-o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with additional file systems to be exported. These file systems must be formatted in the same way they are specified in /etc/exports. This option is often used to test an exported file system before adding it permanently to the list of file systems to be exported.
-i
Ignores /etc/exports; only options given from the command line are used to define exported file systems.
-u
Unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r.
-v
Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the exportfs command is executed.
If no options are passed to the exportfs command, it displays a list of currently exported file systems. For more information about the exportfs command, refer to man exportfs.
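A common sequence after editing /etc/exports, using the options above:

# exportfs -ra        (refresh the export list)
# exportfs -v         (display the current exports in detail)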

Using exportfs with NFSv4

In Red Hat Enterprise Linux 6, no extra steps are required to configure NFSv4 exports as any filesystems mentioned are automatically available to NFSv2, NFSv3, and NFSv4 clients using the same path. This was not the case in previous versions.
To prevent clients from using NFSv4, turn it off by setting RPCNFSDARGS="-N 4" in /etc/sysconfig/nfs.

Running NFS Behind a Firewall

NFS requires rpcbind, which dynamically assigns ports for RPC services and can cause problems for configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the /etc/sysconfig/nfs configuration file to control which ports the required RPC services run on.
The /etc/sysconfig/nfs may not exist by default on all systems. If it does not exist, create it and add the following variables, replacing port with an unused port number (alternatively, if the file exists, un-comment and change the default entries as required):
MOUNTD_PORT=port
Controls which TCP and UDP port mountd (rpc.mountd) uses.
STATD_PORT=port
Controls which TCP and UDP port status (rpc.statd) uses.
LOCKD_TCPPORT=port
Controls which TCP port nlockmgr (lockd) uses.
LOCKD_UDPPORT=port
Controls which UDP port nlockmgr (lockd) uses.
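For example, /etc/sysconfig/nfs might pin the ports as follows (these particular numbers are just commonly used examples; any unused ports will do):

MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769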
If NFS fails to start, check /var/log/messages. Normally, NFS will fail to start if you specify a port number that is already in use. After editing /etc/sysconfig/nfs, restart the NFS service using service nfs restart. Run the rpcinfo -p command to confirm the changes.
To configure a firewall to allow NFS, perform the following steps (a matching iptables sketch follows the list):
Procedure  Configure a firewall to allow NFS
  1. Allow TCP and UDP port 2049 for NFS.
  2. Allow TCP and UDP port 111 (rpcbind/sunrpc).
  3. Allow the TCP and UDP port specified with MOUNTD_PORT="port"
  4. Allow the TCP and UDP port specified with STATD_PORT="port"
  5. Allow the TCP port specified with LOCKD_TCPPORT="port"
  6. Allow the UDP port specified with LOCKD_UDPPORT="port"
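A minimal iptables sketch of the steps above, assuming the example ports pinned in the /etc/sysconfig/nfs snippet earlier:

# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -p udp --dport 2049 -j ACCEPT
# iptables -A INPUT -p tcp --dport 111 -j ACCEPT
# iptables -A INPUT -p udp --dport 111 -j ACCEPT
# iptables -A INPUT -p tcp -m multiport --dports 892,662,32803 -j ACCEPT
# iptables -A INPUT -p udp -m multiport --dports 892,662,32769 -j ACCEPT
# service iptables save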
Note
To allow NFSv4.0 callbacks to pass through firewalls set /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client.
This process is not needed for NFSv4.1 or higher, and the other ports for mountd, statd, and lockd are not required in a pure NFSv4 environment.

Discovering NFS exports

There are two ways to discover which file systems an NFS server exports.
First, on any server that supports NFSv2 or NFSv3, use the showmount command:
$ showmount -e myserver
Export list for myserver
/exports/foo
/exports/bar
Second, on any server that supports NFSv4, mount / and look around.
# mount myserver:/ /mnt/
# cd /mnt/
exports
# ls exports
foo
bar
On servers that support both NFSv4 and either NFSv2 or NFSv3, both methods will work and give the same results.

Hostname Formats

The host(s) can be in the following forms:
Single machine
A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved by the server), or an IP address.
Series of machines specified with wildcards
Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com includes one.example.com but does not include one.two.example.com.
IP networks
Use a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask (for example 192.168.0.0/24). Another acceptable format is a.b.c.d/netmask, where a.b.c.d is the network and netmask is the netmask (for example, 192.168.100.8/255.255.255.0).
Netgroups
Use the format @group-name, where group-name is the NIS netgroup name.

NFS over RDMA

To enable the RDMA transport in the linux kernel NFS server, use the following procedure:
Procedure Enable RDMA from server
  1. Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:
# yum install rdma; chkconfig --level 2345 rdma on
  2. Ensure the package that provides the nfs-rdma service is installed and the service is enabled with the following command:
# yum install rdma; chkconfig --level 345 nfs-rdma on
  3. Ensure that the RDMA port is set to the preferred port (default for Red Hat Enterprise Linux 6 is 2050). To do so, edit the /etc/rdma/rdma.conf file to set NFSoRDMA_LOAD=yes and NFSoRDMA_PORT to the desired port.
  4. Set up the exported filesystem as normal for NFS mounts.
On the client side, use the following procedure:
Procedure Enable RDMA from client
  1. Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:
# yum install rdma; chkconfig --level 2345 rdma on
  2. Mount the NFS exported partition using the rdma option on the mount call. The port option can optionally be added to the call.
# mount -t nfs -o rdma,port=port_number host:/remote/export /local/directory

Securing NFS

NFS is well-suited for sharing entire file systems with a large number of known hosts in a transparent manner. However, with ease-of-use comes a variety of potential security problems. Consider the following sections when exporting NFS file systems on a server or mounting them on a client. Doing so minimizes NFS security risks and better protects data on the server.

NFS Security with AUTH_SYS and export controls

Traditionally, NFS has given two options in order to control access to exported files.
First, the server restricts which hosts are allowed to mount which filesystems either by IP address or by host name.
Second, the server enforces file system permissions for users on NFS clients in the same way it does for local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX), which relies on the client to state the UID and GIDs of the user. Be aware that this means a malicious or misconfigured client can easily get this wrong and allow a user access to files that they should not have.
To limit the potential risks, administrators often allow read-only access or squash user permissions to a common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the way it was originally intended.
Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file system, the system associated with a particular hostname or fully qualified domain name can be pointed to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount.
Wildcards should be used sparingly when exporting directories through NFS, as it is possible for the scope of the wildcard to encompass more systems than intended.
It is also possible to restrict access to the rpcbind service with TCP wrappers. Creating rules with iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.
For more information on securing NFS and rpcbind, refer to man iptables.

NFS security with AUTH_GSS

The release of NFSv4 brought a revolution to NFS security by mandating the implementation of RPCSEC_GSS and the Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are also available for all versions of NFS.
With the RPCSEC_GSS Kerberos mechanism, the server no longer depends on the client to correctly represent which user is accessing the file, as is the case with AUTH_SYS. Instead, it uses cryptography to authenticate users to the server, preventing a malicious client from impersonating a user without having that user's kerberos credentials.
Note
It is assumed that a Kerberos ticket-granting server (KDC) is installed and configured correctly, prior to configuring an NFSv4 server. Kerberos is a network authentication system which allows clients and servers to authenticate to each other through use of symmetric encryption and a trusted third party, the KDC. For more information on Kerberos see Red Hat's Identity Management Guide.
To set up RPCSEC_GSS, use the following procedure:
Set up RPCSEC_GSS
  1. Create nfs/client.mydomain@MYREALM and nfs/server.mydomain@MYREALM principals.
  2. Add the corresponding keys to keytabs for the client and server.
  3. On the server side, add sec=krb5:krb5i:krb5p to the export. To continue allowing AUTH_SYS, add sec=sys:krb5:krb5i:krb5p instead.
  4. On the client side, add sec=krb5 (or sec=krb5i, or sec=krb5p depending on the setup) to the mount options.
For more information, such as the difference between krb5, krb5i, and krb5p, refer to the exports and nfs man pages.
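Putting steps 3 and 4 together, the server-side export entry and the client-side mount might look like this (hostnames and paths are illustrative; note that in /etc/exports the security flavors are colon-separated):

/export    *.example.com(sec=krb5:krb5i:krb5p,rw,sync)

# mount -t nfs -o sec=krb5 server.example.com:/export /mnt/secure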

NFS security with NFSv4

NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model, because of the former's features and wide deployment.
Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles.

File Permissions

Once the NFS file system is mounted read/write by a remote host, the only protection each shared file has is its permissions. If two users that share the same user ID value mount the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to access any files via the NFS share.
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat recommends that this feature is kept enabled.
By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone accessing the NFS share as the root user on their local machine to nobody. Root squashing is controlled by the default option root_squash, described in the /etc/exports section above. If possible, never disable root squashing.
When exporting an NFS share as read-only, consider using the all_squash option. This option makes every user accessing the exported file system take the user ID of the nfsnobody user.

NFS and rpcbind

Note
The following section only applies to NFSv2 or NFSv3 implementations that require the rpcbind service for backward compatibility.
The rpcbind utility maps RPC services to the ports on which they listen. RPC processes notify rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts rpcbind on the server with a particular RPC program number. The rpcbind service redirects the client to the proper port number so it can communicate with the requested service.
Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start.
The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the precise syntax for these rules.
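As a sketch of the TCP wrappers approach, rpcbind access could be limited to a local subnet (the subnet is illustrative). In /etc/hosts.allow:

rpcbind : 192.168.0.0/255.255.255.0

And in /etc/hosts.deny:

rpcbind : ALL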

Troubleshooting NFS and rpcbind

Because rpcbind provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).
To make sure the proper NFS RPC-based services are enabled for rpcbind, issue the following command:
# rpcinfo -p
Example rpcinfo -p command output
The following is sample output from this command:
program vers proto  port service
      100021    1   udp  32774  nlockmgr
      100021    3   udp  32774  nlockmgr
      100021    4   udp  32774  nlockmgr
      100021    1   tcp  34437  nlockmgr
      100021    3   tcp  34437  nlockmgr
      100021    4   tcp  34437  nlockmgr
      100011    1   udp    819  rquotad
      100011    2   udp    819  rquotad
      100011    1   tcp    822  rquotad
      100011    2   tcp    822  rquotad
      100003    2   udp   2049  nfs
      100003    3   udp   2049  nfs
      100003    2   tcp   2049  nfs
      100003    3   tcp   2049  nfs
      100005    1   udp    836  mountd
      100005    1   tcp    839  mountd
      100005    2   udp    836  mountd
      100005    2   tcp    839  mountd
      100005    3   udp    836  mountd
      100005    3   tcp    839  mountd

If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working.
For more information and a list of options on rpcinfo, refer to its man page.