NFS automount evolves

** NOTE: This version is obsolete! The latest version can be found here.

I’ve updated the NFS automount script that provides “self-healing” NFS mounts. The script now allows a mount to be defined as read-write or read-only, and then monitors that the share remains mounted R/W or R/O as requested (of course, it can’t mount a share that has been exported read-only as read-write). Both Linux (tested on CentOS 6.1) and FreeBSD versions are provided.

Since various systems can provide cross-mounts via NFS, and they may be started or rebooted at the same time, any given share may or may not be available when each system boots. With this script the mounts become available soon after the respective share does (simply adjust the run frequency in crontab to the needs of your specific application). Also, by not adding the NFS mount points to fstab, the boot process is not delayed by a share that is not [yet] available.
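For comparison, this is roughly the kind of static fstab entry the script replaces (the host, share, and mount point names are taken from the example invocation later in this post, not from any particular system):

dbsysvm:/nfs4shares/conduit  /mnt/dbsys_conduit  nfs  rw  0  0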

First for CentOS/Linux:

#!/bin/sh

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# set mount/remount request flags
mount=false
remount=false

# remote system name
remotesystem="$1"

# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi

# remote share name
remoteshare="$3"

# local mount point
mountpoint="$4"

# file to indicate local mount status
testfile=${mountpoint}/"$5" 

# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile

# command locations
pingcmd=/bin/ping
showmountcmd=/usr/sbin/showmount
grepcmd=/bin/grep
mountcmd=/bin/mount
umountcmd=/bin/umount
statcmd=/usr/bin/stat
touchcmd=/bin/touch
rmcmd=/bin/rm

# --- end variables ---

# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${statresult}" != "" ]; then
   #result not empty: mountpoint is stale; force-unmount it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
${pingcmd} -w2 -c1 -q ${remotesystem} > /dev/null 2>&1

# make sure the remote system is reachable
if [ "$?" -eq "0" ]; then

   # query the availability of the remote share; not empty result indicates OK   
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then

      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then

         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`

         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then

            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then

               # all set to go; request mount
               mount=true
            fi
         
         else 
            
            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then

               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1

               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then

                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi

                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}

               # handle RO mounted shares:
               else

                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi

# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi

# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi

exit 0

Then for FreeBSD/UNIX:

#!/bin/sh

SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

# set mount/remount request flags
mount=false
remount=false

# remote system name
remotesystem="$1"

# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi

# remote share name
remoteshare="$3"

# local mount point
mountpoint="$4"

# file to indicate local mount status
testfile=${mountpoint}/"$5" 

# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile

# command locations
pingcmd=/sbin/ping
showmountcmd=/usr/bin/showmount
grepcmd=/usr/bin/grep
mountcmd=/sbin/mount
umountcmd=/sbin/umount
statcmd=/usr/bin/stat
touchcmd=/usr/bin/touch
rmcmd=/bin/rm

# --- end variables ---

# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${statresult}" != "" ]; then
   #result not empty: mountpoint is stale; force-unmount it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
remoteping=`${pingcmd} -c1 -o -q -t2 ${remotesystem} | grep " 0.0%"`

# make sure the remote system is reachable
if [ "${remoteping}" != "" ] ; then
   
   # query the availability of the remote share; not empty result indicates OK   
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then
   
      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then

         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`
      
         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then
        
            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then

               # all set to go; request mount
               mount=true
            fi
               
         else

            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then

               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1

               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then

                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi

                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}

               # handle RO mounted shares:
               else

                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi

# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi

# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi

exit 0

You should run the automount script from a runfile, like so:

#!/bin/sh

NFS_ENFORCE=/usr/local/sbin/nfs_enforcer

# Separate the following parameters with spaces:
#
# - nfs enforcer command (set above)
# - remote system name (must be resolvable)
# - read/write (rw) or read-only (ro); NOTE: share may be read-only regardless of how this is set
# - remote share name (from remote's /etc/exports)
# - local mount point (existing local directory)
# - share test file (an immutable file on the share)

# e.g.
# $NFS_ENFORCE dbsysvm rw /nfs4shares/conduit /mnt/dbsys_conduit .conduit@dbsysvm
# or (for local remount read-only)
# $NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost

$NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost

exit 0

..and call the above runfile from crontab:

*/10  *  *  *  *  root  /usr/local/sbin/nfs_enforcer.batch > /dev/null
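Since the crontab line above includes a user field (root), it belongs in the system crontab (/etc/crontab on Linux and FreeBSD) or, on Linux, in a file under /etc/cron.d, rather than in a per-user crontab edited with crontab -e. A minimal installation sketch follows; the file names are assumptions based on the paths used above:

#!/bin/sh
# install the enforcer script and the runfile, and make them executable
cp nfs_enforcer /usr/local/sbin/nfs_enforcer
cp nfs_enforcer.batch /usr/local/sbin/nfs_enforcer.batch
chmod 755 /usr/local/sbin/nfs_enforcer /usr/local/sbin/nfs_enforcer.batch

# schedule the runfile every 10 minutes via the system crontab
echo '*/10 * * * * root /usr/local/sbin/nfs_enforcer.batch > /dev/null' >> /etc/crontab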

Things I didn’t know about ESXi

I’m setting up a development server on a VMware ESXi host, running CentOS 5.5 x64 and FreeBSD 8.0 x64 as guests. Currently, the second installation pass is in progress. Being new to ESX/ESXi, there were a couple of things I didn’t realize:

First (the reason for the reinstall): if there is plenty of hard drive space available, it’s a good idea not to deplete it all for the system installations. I split a 1.3 TB RAID 5 array between the two operating systems before I realized that 1) you can’t shrink VMFS partitions and 2) by consuming all hard drive space one limits the flexibility of the system down the line. Let’s say you want to install a newer version of an operating system and decide to do a fresh install. You need space for it, while you want to keep the old version around at least long enough to migrate settings and data over.

Second, while I was aware that ESXi doesn’t offer console access beyond the “yellow and grey” terminal, I didn’t realize you have no access to the VM consoles, either. So, with CentOS or FreeBSD installed, the only way to access their consoles is via the vSphere client (someone correct me if I’m wrong; I wish I were, as I’d like to have local console access to the guest OSes).

Finally, VMware Go “doesn’t currently support ESXi servers with multiple datastores”. So if you have, say, a 3ware/LSI/AMCC RAID controller that isn’t currently supported under ESXi as a boot device but that you likely still want to use as a datastore, you’ll end up with at least two datastores. So vSphere is really the only way to go for VM management for this reason as well (since LSI provides a VMware-specific driver, one may also be able to direct-connect the LSI RAID array to the VM without it being an ESXi datastore, but that’s not the configuration I’m looking for: the boot device is small and houses just ESXi, while the VMs and their associated datastores are located on the array).

In the end everything’s working quite well. I like the flexibility virtualization offers.. and consolidation is useful even in a small environment (one dev machine is less than two or three dev machines :)).

Explorations in the World of Linux

I’ve been a FreeBSD admin for the past decade, and during this time have become quite familiar with the *BSD system. It has its quirks, but overall it’s very clean and easy to maintain.

From time to time – usually when I’ve been getting ready to upgrade to the next major revision of FreeBSD – I’ve taken some time to research the current pros and cons of FreeBSD vs. some Linux distro. Always, in the end, FreeBSD has won. However, a development project I’m starting to work on will utilize Zend Server, which is only supported on a handful of common Linux distros and on Windows (which is not an option, as I strongly maintain that Windows is not suitable as a web server platform). There is, of course, a Linux compatibility layer in FreeBSD, but as Zend doesn’t currently support it as a platform for Zend Server, I wouldn’t feel comfortable using it in a production environment.

So even though I find FreeBSD superior to Linux in many ways, I’ve now spent some time getting acquainted with Linux. I first started with Red Hat, then moved to CentOS which is the Linux distribution I’m currently testing. Now it’s not bad, per se, but I frequently come back to the thought: “Why would someone, anyone prefer THIS over a BSD system?!” The package management with yum, rpm, and the GUI overlays is easy enough, but it’s chaotic! Having to enable and disable repos, set their priorities, etc. seems unnecessarily complicated. On the FreeBSD side there is the ports collection which provides most of the software that one can imagine ever needing. The odd few items that either aren’t available in ports, or whose configuration is somehow not complete enough through ports can be easily compiled from the source tarball. Everything’s quite easy to keep track of, and to duplicate if one’s building a new system.
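To illustrate the difference, here is a rough sketch of the same task on each side (the package and repository names are just examples, not from any specific setup):

# CentOS: install a package, temporarily enabling a third-party repo
yum --enablerepo=epel install some-package

# FreeBSD: build and install the equivalent port from the ports tree
cd /usr/ports/sysutils/some-package && make install clean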

I’m sure some of this feeling stems from the fact that I have been using a BSD system for so long, and from the fact that I probably don’t yet know Linux well enough (say, to build the system from scratch..). But as far as I can tell, package management is done with yum and rpm (on CentOS, say), by adjusting repository priorities and enabling/disabling repositories. That is messy!

Well, I now have a functional development server running Zend Server with Apache, Subversion, and MySQL, and as the vendor (Zend) dictates the rules, I must continue development on Linux. Perhaps in six months’ time I’ll have more favorable comments about it as compared to FreeBSD… but I sort of doubt it. My guess is I’ll just learn to live with it, every now and then wistfully glancing in the direction of the BSD server.