Setting Up an NFS Server
1. Introduction to Server Setup
It is assumed that you will be
setting up both a server and a client. If you are just setting up a
client to work off of somebody else's server (say in your
department), you can skip to “Setting up an NFS Client”. However, every client that is set up
requires modifications on the server to authorize that client (unless
the server setup is done in a very insecure way), so even if you are
not setting up a server you may wish to read this section to get an
idea what kinds of authorization problems to look out for.
Setting up the server will be done in two
steps: Setting up the configuration files for NFS, and then starting
the NFS services.
2. Setting up the Configuration Files
There are three main configuration files you will need to edit to set
up an NFS server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny.
Strictly speaking, you only need to edit /etc/exports to get NFS to
work, but you would be left with an extremely insecure setup. You may
also need to edit your startup scripts; see “Getting the Services
Started” below.
2.1. /etc/exports
This file contains a list of
entries; each entry indicates a volume that is shared and how it is
shared. Check the man pages (man
exports) for a complete description
of all the setup options for the file, although the description here
will probably satisfy most people's needs.
An entry in
/etc/exports
will typically look like this:
directory machine1(option11,option12)
machine2(option21,option22)
where
- directory
- the directory that you want to share. It may be an entire volume though it need not be. If you share a directory, then all directories under it within the same file system will be shared as well.
- machine1 and machine2
- client machines that will have access to the directory. The machines may be listed by their DNS name or their IP address (e.g., machine.company.com or 192.168.0.8). Using IP addresses is more reliable and more secure. If you need to use DNS names and they do not seem to be resolving to the right machine, check your name resolution setup (for example, /etc/hosts and your DNS configuration) before troubleshooting NFS itself.
- optionxx
- the option listing for each machine will describe what kind of access that machine will have. Important options are:
- ro: The directory is shared read only; the client machine will not be able to write to it. This is the default.
- rw: The client machine will have read and write access to the directory.
- no_root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.
- no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
- sync: By default, all but the most recent version (version 1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete - that is, has been written to stable storage - when NFS has finished handing the write over to the filesystem. This behavior may cause data corruption if the server reboots, and the sync option prevents this.
Suppose we have two client machines,
slave1 and
slave2, that
have IP addresses 192.168.0.1
and 192.168.0.2,
respectively. We wish to share our software binaries and home
directories with these machines. A typical setup for
/etc/exports
might look like this:
/usr/local 192.168.0.1(ro) 192.168.0.2(ro)
/home 192.168.0.1(rw) 192.168.0.2(rw)
Here we are sharing /usr/local
read-only to slave1
and slave2,
because it probably contains our software and there may not be
benefits to allowing slave1
and slave2 to
write to it that outweigh security concerns. On the other hand, home
directories need to be exported read-write if users are to save their
work on them.
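To see how these options combine in practice, here is a sketch that extends the entries above with sync and no_subtree_check; the option choices are only illustrative and assume that /usr/local is exported as a whole filesystem:
/usr/local 192.168.0.1(ro,no_subtree_check) 192.168.0.2(ro,no_subtree_check)
/home 192.168.0.1(rw,sync) 192.168.0.2(rw,sync)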
If you have a large installation,
you may find that you have a bunch of computers all on the same local
network that require access to your server. There are a few ways of
simplifying references to large numbers of machines. First, you can
give access to a range of machines at once by specifying a network
and a netmask. For example, if you wanted to allow access to all the
machines with IP addresses between 192.168.0.0
and 192.168.0.255
then you could have the entries:
/usr/local 192.168.0.0/255.255.255.0(ro)
/home 192.168.0.0/255.255.255.0(rw)
You may also wish to look at the man pages for init and hosts.allow.
Second, you can use NIS netgroups in
your entry. To specify a netgroup in your exports file, simply
prepend the name of the netgroup with an "@".
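For example, assuming a netgroup named slaves has been defined in NIS (the name is just an illustration), the entries above could be written as:
/usr/local @slaves(ro)
/home @slaves(rw)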
Third, you can use wildcards such as
*.foo.com or
192.168.
instead of hostnames. There were problems with wildcard
implementation in the 2.2 kernel series that were fixed in kernel
2.2.19.
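For example, an entry along these lines would share /usr/local read-only with every machine in the foo.com domain (a sketch only; whether you trust every such machine is another matter):
/usr/local *.foo.com(ro)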
However, you should keep in mind that any
of these simplifications could cause a security risk if there are
machines in your netgroup or local network that you do not trust
completely.
A few cautions are in order about
what cannot (or should not) be exported. First, if a directory is
exported, its parent and child directories cannot be exported if they
are in the same filesystem. However, exporting both should not be
necessary because listing the parent directory in the
/etc/exports
file will cause all underlying directories within that file system to
be exported.
Second, it is a poor idea to export a FAT
or VFAT (i.e., MS-DOS or Windows 95/98) filesystem with NFS. FAT is
not designed for use on a multi-user machine, and as a result,
operations that depend on permissions will not work well. Moreover,
some of the underlying filesystem design is reported to work poorly
with NFS's expectations.
Third, device or other special files
may not export correctly to non-Linux clients.
2.2. /etc/hosts.allow and /etc/hosts.deny
These two files specify which computers on
the network can use services on your machine. Each line of the file
contains a single entry listing a service and a set of machines. When
the server gets a request from a machine, it does the following:
- It first checks hosts.allow to see if the machine matches a rule listed there. If it does, then the machine is allowed access.
- If the machine does not match an entry in hosts.allow, the server then checks hosts.deny to see if the machine matches a rule listed there. If it does, then the machine is denied access.
- If the machine matches no listings in either file, then it is allowed access.
In addition to controlling access to
services handled by inetd
(such as telnet and FTP), these files can also control access to NFS by
restricting connections to the daemons that provide NFS services.
Restrictions are done on a per-service basis.
The first daemon to restrict access
to is the portmapper. This daemon essentially just tells requesting
clients how to find all the NFS services on the system. Restricting
access to the portmapper is the best defense against someone breaking
into your system through NFS because completely unauthorized clients
won't know where to find the NFS daemons. However, there are two
things to watch out for. First, restricting portmapper isn't enough
if the intruder already knows for some reason how to find those
daemons. And second, if you are running NIS, restricting portmapper
will also restrict requests to NIS. That should usually be harmless
since you usually want to restrict NFS and NIS in a similar way, but
just be cautioned. (Running NIS is generally a good idea if you are
running NFS, because the client machines need a way of knowing who
owns what files on the exported volumes. Of course there are other
ways of doing this, such as syncing password files.)
In general it is a good idea with NFS (as
with most internet services) to explicitly deny access to IP
addresses that you don't need to allow access to.
The first step in doing this is to
add the following entry to /etc/hosts.deny:
portmap:ALL
Starting with nfs-utils 0.2.0, you can be a bit more
careful by controlling access to individual daemons. It's a good
precaution since an intruder will often be able to weasel around the
portmapper. If you have a newer version of nfs-utils, add entries for
each of the NFS daemons (see the next section to find out what these
daemons are; for now just put entries for them in hosts.deny):
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL
Even if you have an older version of nfs-utils,
adding these entries is at worst harmless (since they will just be
ignored) and at best will save you some trouble when you upgrade.
Some sys admins choose to put the entry ALL:ALL in the file
/etc/hosts.deny, which causes any service that looks at these files to
deny access to all hosts unless it is explicitly allowed. While this is
more secure behavior, it may also get you in trouble later, when you
install a new service, forget that you put the entry there, and can't
figure out for the life of you why it won't work.
Next, we need to add an entry to
hosts.allow
to give access to any hosts that we want to have access. (If we just
leave the above lines in hosts.deny
then nobody will have access to NFS.) Entries in hosts.allow
follow the format:
service: host [or network/netmask] , host [or network/netmask]
Here, host is the IP address of a potential client; it may be possible in
some versions to use the DNS name of the host, but this is strongly
discouraged.
Suppose we have the setup above and
we just want to allow access to slave1.foo.com
and slave2.foo.com,
and suppose that the IP addresses of these machines are 192.168.0.1
and 192.168.0.2,
respectively. We could add the following entry to /etc/hosts.allow:
portmap: 192.168.0.1 , 192.168.0.2
For recent nfs-utils versions, we would also add the following
(again, these entries are harmless even if they are not supported):
lockd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2
If you intend to run NFS on a large number of
machines in a local network,
/etc/hosts.allow
also allows for network/netmask style entries in the same manner as
/etc/exports
above.
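For example, a sketch of the entries above extended to the whole 192.168.0.0 network (using the same netmask as in the /etc/exports example) might look like this:
portmap: 192.168.0.0/255.255.255.0
lockd: 192.168.0.0/255.255.255.0
mountd: 192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0
statd: 192.168.0.0/255.255.255.0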
3. Getting the Services Started
3.1. Prerequisites
The NFS server should now be
configured and we can start it running. First, you will need to have
the appropriate packages installed. This consists mainly of a new
enough kernel and a new enough version of the nfs-utils package.
Next, before you can start NFS, you will
need to have TCP/IP networking functioning correctly on your machine.
If you can use telnet, FTP, and so on, then chances are your TCP
networking is fine.
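As a rough sketch, you can check the kernel version with uname and test basic connectivity with ping; the command for checking the installed nfs-utils version depends on your distribution (the rpm query below assumes an RPM-based system):
uname -r              # kernel version
ping -c 1 192.168.0.1 # can we reach a client at all?
rpm -q nfs-utils      # nfs-utils version on an RPM-based system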
That said, with most recent Linux
distributions you may be able to get NFS up and running simply by
rebooting your machine, and the startup scripts should detect that
you have set up your
/etc/exports
file and will start up NFS correctly. If you try this, see “Settingup an NFS Client” Verifying that NFS is running. If this does
not work, or if you are not in a position to reboot your machine,
then the following section will tell you which daemons need to be
started in order to run NFS services. If for some reason nfsd
was already running when you edited your configuration files above,
you will have to flush your configuration; see “Settingup an NFS Client” for details.
3.2. Starting the Portmapper
NFS depends on the portmapper
daemon, either called portmap
or rpc.portmap.
It will need to be started first. It should be located in /sbin but is
sometimes in /usr/sbin.
Most recent Linux distributions start this daemon in the boot
scripts, but it is worth making sure that it is running before you
begin working with NFS (just type ps
aux | grep portmap).
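A quick sketch of checking for the portmapper, and starting it by hand if it is not running (the path may differ on your system, as noted above):
ps aux | grep portmap # is it already running?
/sbin/portmap         # if not, start it (may be /usr/sbin/portmap or rpc.portmap)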
3.3. The Daemons
NFS serving is taken care of by five
daemons: rpc.nfsd,
which does most of the work; rpc.lockd
and rpc.statd,
which handle file locking; rpc.mountd,
which handles the initial mount requests; and rpc.rquotad,
which handles user file quotas on exported volumes. Starting with
kernel 2.2.18, lockd
is called by nfsd
upon demand, so you do not need to worry about starting it yourself.
statd
will need to be started separately. Most recent Linux distributions
will have startup scripts for these daemons.
The daemons are all part of the
nfs-utils package, and may be either in the
/sbin
directory or the /usr/sbin
directory.
If your distribution does not include them
in the startup scripts, then you should add them, configured to
start in the following order:
rpc.portmap, rpc.mountd, rpc.nfsd, rpc.statd, rpc.lockd (if necessary), and rpc.rquotad
The nfs-utils package has sample startup scripts for RedHat and
Debian. If you are using a different distribution, in general you can
just copy the RedHat script, but you will probably have to take out
the line that says:
. ../init.d/functions
to avoid getting error messages.
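If you need to start the daemons by hand rather than from a startup script, the order above translates into something like the following sketch; the paths are typical but may differ on your system, and lockd is omitted since nfsd will normally start it on demand as noted above:
/sbin/portmap
/usr/sbin/rpc.mountd
/usr/sbin/rpc.nfsd
/usr/sbin/rpc.statd
/usr/sbin/rpc.rquotad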
3.4. Verifying that NFS is running
To do this, query the portmapper
with the command rpcinfo -p
to find out what services it is providing. You should get something
like this:
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100011 1 udp 749 rquotad
100011 2 udp 749 rquotad
100005 1 udp 759 mountd
100005 1 tcp 761 mountd
100005 2 udp 764 mountd
100005 2 tcp 766 mountd
100005 3 udp 769 mountd
100005 3 tcp 771 mountd
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
300019 1 tcp 830 amd
300019 1 udp 831 amd
100024 1 udp 944 status
100024 1 tcp 946 status
100021 1 udp 1042 nlockmgr
100021 3 udp 1042 nlockmgr
100021 4 udp 1042 nlockmgr
100021 1 tcp 1629 nlockmgr
100021 3 tcp 1629 nlockmgr
100021 4 tcp 1629 nlockmgr
This says that we have NFS versions 2 and 3, rpc.statd version 1,
network lock manager (the service name for rpc.lockd) versions 1, 3,
and 4. There are also different service listings depending on whether
NFS is travelling over TCP or UDP. Linux systems use UDP by default
unless TCP is explicitly requested; however other OSes such as
Solaris default to TCP.
If you do not at least see a line
that says portmapper,
a line that says nfs,
and a line that says mountd
then you will need to backtrack and try again to start up the daemons.
If you do see these services listed, then
you should be ready to set up NFS clients to access files from your
server.
3.5. Making Changes to /etc/exports later on
If you come back and change your
/etc/exports
file, the changes you make may not take effect immediately. You
should run the command exportfs -ra
to force nfsd
to re-read the /etc/exports
file. If you can't find the exportfs
command, then you can kill nfsd
with the -HUP
flag (see the man pages for kill for details).
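For example, a quick sketch (the pidof lookup is only one way of finding the process ID, and the process may be listed as nfsd or rpc.nfsd depending on your system):
exportfs -ra               # force nfsd to re-read /etc/exports
kill -HUP `pidof rpc.nfsd` # if exportfs is not available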
If that still doesn't work, don't
forget to check
hosts.allow
to make sure you haven't forgotten to list any new client machines
there. Also check the host listings on any firewalls you may have set
up.