NFS automount evolves

** NOTE: This version is obsolete! The latest version can be found here.

I’ve updated the NFS automount script that provides “self-healing” NFS mounts. The script now allows a mount to be defined as read-write (R/W) or read-only (R/O) and then monitors that the share stays mounted in the requested mode (of course, it can’t mount as R/W a share that the server exports as R/O). Both Linux (tested on CentOS 6.1) and FreeBSD versions are provided.

Since various systems can provide cross-mounts via NFS, and they may be started or rebooted at the same time, any given share may or may not be available at each system’s boot time. With this script the mounts become available soon after the respective share does (simply adjust the run frequency in crontab to the needs of your specific application). Also, by keeping the NFS mount points out of fstab, the boot process is not delayed by a share that is not [yet] available.
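
For reference, the script assumes the remote system actually exports the share (showmount -e is used below to verify that). A minimal sketch of a matching server-side export, using the example share path from later in this post and a hypothetical client network, in Linux-style /etc/exports syntax (FreeBSD’s exports format differs):

# /etc/exports on the NFS server (hypothetical network)
/nfs4shares/conduit  192.168.1.0/24(rw,sync,no_subtree_check)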

First for CentOS/Linux:

#!/bin/sh

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# set mount/remount request flags
mount=false
remount=false

# remote system name
remotesystem="$1"

# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi

# remote share name
remoteshare="$3"

# local mount point
mountpoint="$4"

# file to indicate local mount status
testfile="${mountpoint}/$5"

# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile

# command locations
pingcmd=/bin/ping
showmountcmd=/usr/sbin/showmount
grepcmd=/bin/grep
mountcmd=/bin/mount
umountcmd=/bin/umount
statcmd=/usr/bin/stat
touchcmd=/bin/touch
rmcmd=/bin/rm

# --- end variables ---

# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${statresult}" != "" ]; then
   # result not empty: the mountpoint is stale; force-unmount it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
${pingcmd} -w2 -c1 -q ${remotesystem} > /dev/null 2>&1

# make sure the remote system is reachable
if [ "$?" -eq "0" ]; then

   # query the availability of the remote share; a non-empty result indicates OK
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then

      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then

         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`

         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then

            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then

               # all set to go; request mount
               mount=true
            fi
         
         else 
            
            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then

               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1

               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then

                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi

                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}

               # handle RO mounted shares:
               else

                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi

# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi

# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi

exit 0
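
Before scheduling the script, you can run it by hand and verify the result. A quick check along these lines, reusing the example arguments from the runfile section below (hypothetical host, share, and mount point):

# one-off run (arguments: remote system, rw/ro, remote share, mount point, test file name)
/usr/local/sbin/nfs_enforcer dbsysvm rw /nfs4shares/conduit /mnt/dbsys_conduit .conduit@dbsysvm

# confirm the share is now mounted with the requested mode
mount | grep /mnt/dbsys_conduit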

Then for FreeBSD/UNIX:

#!/bin/sh

SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

# set mount/remount request flags
mount=false
remount=false

# remote system name
remotesystem="$1"

# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi

# remote share name
remoteshare="$3"

# local mount point
mountpoint="$4"

# file to indicate local mount status
testfile="${mountpoint}/$5"

# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile

# command locations
pingcmd=/sbin/ping
showmountcmd=/usr/bin/showmount
grepcmd=/usr/bin/grep
mountcmd=/sbin/mount
umountcmd=/sbin/umount
statcmd=/usr/bin/stat
touchcmd=/usr/bin/touch
rmcmd=/bin/rm

# --- end variables ---

# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${statresult}" != "" ]; then
   # result not empty: the mountpoint is stale; force-unmount it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
remoteping=`${pingcmd} -c1 -o -q -t2 ${remotesystem} | grep " 0.0%"`

# make sure the remote system is reachable
if [ "${remoteping}" != "" ] ; then
   
   # query the availability of the remote share; a non-empty result indicates OK
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then
   
      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then

         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`
      
         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then
        
            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then

               # all set to go; request mount
               mount=true
            fi
               
         else

            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then

               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1

               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then

                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi

                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}

               # handle RO mounted shares:
               else

                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi

# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi

# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi

exit 0
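
The share test file (the script’s fifth argument) is a file that lives on the share itself, so its visibility at the mount point indicates an active mount. A minimal sketch of creating one on the server, assuming the example share path used in this post; making it immutable is what keeps it from being deleted accidentally:

# on the NFS server, inside the exported directory (hypothetical path)
touch /nfs4shares/conduit/.conduit@dbsysvm
chattr +i /nfs4shares/conduit/.conduit@dbsysvm      # Linux (ext2/3/4)
# chflags schg /nfs4shares/conduit/.conduit@dbsysvm # FreeBSD equivalent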

You should run the automount script from a runfile, like so:

#!/bin/sh

NFS_ENFORCE=/usr/local/sbin/nfs_enforcer

# Separate the following parameters with spaces:
#
# - nfs enforcer command (set above)
# - remote system name (must be resolvable)
# - read/write (rw) or read-only (ro); NOTE: share may be read-only regardless of how this is set
# - remote share name (from remote's /etc/exports)
# - local mount point (existing local directory)
# - share test file (an immutable file on the share)

# e.g.
# $NFS_ENFORCE dbsysvm rw /nfs4shares/conduit /mnt/dbsys_conduit .conduit@dbsysvm
# or (for local remount read-only)
# $NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost

$NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost

exit 0

..and call the above runfile from crontab:

*/10  *  *  *  *  root  /usr/local/sbin/nfs_enforcer.batch > /dev/null
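
The entry above uses the system crontab format (with a user field) and runs the check every ten minutes; adjust the interval to your application’s needs. Assuming the paths used in this post, remember to make both scripts executable:

chmod +x /usr/local/sbin/nfs_enforcer /usr/local/sbin/nfs_enforcer.batch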