NFS automount evolves

** NOTE: This version is obsolete! The latest version can be found here.

I’ve updated the NFS automount script that provides “self-healing” NFS mounts. A mount can now be defined as read-write or read-only, and the script then monitors that the share stays mounted as R/W or R/O (of course, it can’t mount as R/W a share that the remote has exported read-only). Both Linux (tested on CentOS 6.1) and FreeBSD versions are provided.

Since systems can cross-mount shares from each other via NFS, and they may be started or rebooted at the same time, any given share may or may not be available at a system’s boot time. With this script the mounts become available soon after the respective share does (simply adjust the run frequency in crontab to the needs of your application). Also, because the NFS mount points are not listed in fstab, the boot process is not delayed by a share that is not [yet] available.

First for CentOS/Linux:

#!/bin/sh

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# set mount/remount request flags
mount=false
remount=false

# remote system name
remotesystem="$1"

# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi

# remote share name
remoteshare="$3"

# local mount point
mountpoint="$4"

# file to indicate local mount status
testfile=${mountpoint}/"$5" 

# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile

# command locations
pingcmd=/bin/ping
showmountcmd=/usr/sbin/showmount
grepcmd=/bin/grep
mountcmd=/bin/mount
umountcmd=/bin/umount
statcmd=/usr/bin/stat
touchcmd=/bin/touch
rmcmd=/bin/rm

# --- end variables ---

# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${statresult}" != "" ]; then
   #result not empty: mountpoint is stale; remove it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
${pingcmd} -w2 -c1 -q ${remotesystem} > /dev/null 2>&1

# make sure the remote system is reachable
if [ "$?" -eq "0" ]; then

   # query the availability of the remote share; not empty result indicates OK   
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then

      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then

         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`

         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then

            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then

               # all set to go; request mount
               mount=true
            fi
         
         else 
            
            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then

               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1

               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then

                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi

                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}

               # handle RO mounted shares:
               else

                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi

# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi

# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi

exit 0

Then for FreeBSD/UNIX:

#!/bin/sh

SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

# set mount/remount request flags
mount=false
remount=false

# remote system name
remotesystem="$1"

# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi

# remote share name
remoteshare="$3"

# local mount point
mountpoint="$4"

# file to indicate local mount status
testfile=${mountpoint}/"$5" 

# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile

# command locations
pingcmd=/sbin/ping
showmountcmd=/usr/bin/showmount
grepcmd=/usr/bin/grep
mountcmd=/sbin/mount
umountcmd=/sbin/umount
statcmd=stat
touchcmd=/usr/bin/touch
rmcmd=/bin/rm

# --- end variables ---

# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${statresult}" != "" ]; then
   #result not empty: mountpoint is stale; remove it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
remoteping=`${pingcmd} -c1 -o -q -t2 ${remotesystem} | ${grepcmd} " 0.0%"`

# make sure the remote system is reachable
if [ "${remoteping}" != "" ] ; then
   
   # query the availability of the remote share; not empty result indicates OK   
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then
   
      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then

         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`
      
         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then
        
            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then

               # all set to go; request mount
               mount=true
            fi
               
         else

            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then

               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1

               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then

                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi

                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}

               # handle RO mounted shares:
               else

                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi

# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi

# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi

exit 0

You should run the automount script from a runfile, like so:

#!/bin/sh

NFS_ENFORCE=/usr/local/sbin/nfs_enforcer

# Separate the following parameters with spaces:
#
# - nfs enforcer command (set above)
# - remote system name (must be resolvable)
# - read/write (rw) or read-only (ro); NOTE: share may be read-only regardless of how this is set
# - remote share name (from remote's /etc/exports)
# - local mount point (existing local directory)
# - share test file (an immutable file on the share)

# e.g.
# $NFS_ENFORCE dbsysvm rw /nfs4shares/conduit /mnt/dbsys_conduit .conduit@dbsysvm
# or (for local remount read-only)
# $NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost

$NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost

exit 0

..and call the above runfile from crontab:

*/10  *  *  *  *  root  /usr/local/sbin/nfs_enforcer.batch > /dev/null
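
Note that the entry above includes a user field (root), so it belongs in the system-wide crontab (/etc/crontab) or, on Linux, in a file under /etc/cron.d, not in a per-user crontab edited with crontab -e. As a sketch (the path and the 10-minute frequency simply repeat the values above; adjust to your needs):

# /etc/cron.d/nfs_enforcer on Linux, or appended to /etc/crontab on FreeBSD
*/10  *  *  *  *  root  /usr/local/sbin/nfs_enforcer.batch > /dev/null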

NFS automount, Linux (CentOS) version

** NOTE: This version is obsolete! The latest version can be found here.

Last summer I posted a script that repeatedly (via cron) checks the availability and status of an NFS mount, and attempts to keep it mounted if possible. That script was written for (Free)BSD. Below is a slightly modified version that runs on Linux (in this case, CentOS).

#!/bin/sh

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# remote system name
remotesystem=sunrise.externalized.net

# remote share name
remoteshare=/nfs4exports/minecraft-backups

# local mount point
mountpoint=/bak/remote

# file to indicate local mount status
testfile=$mountpoint/.minecraftbackups

# command locations
pingcmd=/bin/ping
showmountcmd=/usr/sbin/showmount
grepcmd=/bin/grep
mountcmd=/bin/mount
umountcmd=/bin/umount
statcmd=/usr/bin/stat

# --- end variables ---

# make sure the mountpoint is not stale
testvar=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${testvar}" != "" ]; then
   #result not empty: mountpoint is stale; remove it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
${pingcmd} -w2 -c1 -q ${remotesystem} > /dev/null 2>&1

if [ "$?" -eq "0" ]; then
   
   # server is available so query availability of the remote share; not empty is OK
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`

   # make sure the local mountpoint is not active
   localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`

   if [ "${offsiteshare}" != "" ] ; then
      if [ ! -e ${testfile} ] ; then
         if [ "${localmount}" = "" ] ; then
            ${mountcmd} -w -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
         fi
      fi
   fi
fi

exit 0

Marvell 88E8056 and ESXi 4.1

So I have an older development/experimental server that runs a couple of VMs on ESXi 4.1. The server’s motherboard (ASUS P5BV-C/4L) is from an old workstation, and it has four integrated NICs which would be nice to be able to use.. except that the default build of ESXi 4.1 doesn’t see them (even though ESXi 4.1 technically supports the Marvell 88E8056 NIC they use).

There are several pages that discuss the issue extensively and have a lot of good information. Yet another page has a quick lowdown on how to get the driver properly installed.

However, not having worked on the ESXi CLI for some time, I had forgotten, for example, that the BusyBox environment ESXi uses discards changes to the root filesystem on every reboot. After a while I recalled (from an old note) that to save changes to /etc/vmware/simple.map I would need to execute /sbin/backup.sh 0 /bootbank/ after making the edits. But even that was unnecessary.
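
For reference, that persistence step looks roughly like this (a sketch, assuming an ESXi 4.1 Tech Support Mode shell and the paths mentioned above):

# edit the PCI ID -> driver mapping (the exact entry depends on the NIC)
vi /etc/vmware/simple.map

# copy the changed configuration into the boot bank so it survives a reboot
/sbin/backup.sh 0 /bootbank/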

One sentence on the brief recap page would have saved me a couple of hours tonight. So here it is: “Just upload the attached oem.tgz into the /bootbank folder with scp, then reboot, and you’re done!” And when you do that, you are done – the pre-prepared oem.tgz works perfectly!
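
In other words, something along these lines (a sketch; “esxi-host” is a placeholder for your host’s name or IP, and SSH/Tech Support Mode must be enabled on it):

scp oem.tgz root@esxi-host:/bootbank/
# then reboot the host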

Yes, had I known, I would’ve known, but I didn’t. 🙂 Hopefully this saves time for someone else!

Expanding VMware Workstation VM partition

A few days ago I set up CentOS 5.5 on VMware Workstation 7.1 for PHP debugging. During the installation I shrunk the suggested VM disk size from the default 20GB to 10GB, thinking there would be plenty of space (being more familiar with FreeBSD systems that generally have a rather small footprint). But once I had completed the installation the root partition had just a couple of hundred megabytes of free space remaining. Argh!

After looking around for a solution for a few moments I downloaded the free Parted Magic Live CD, which includes GParted, Clonezilla and a number of other utilities in an easy-to-use package. In the end extending the CentOS partition was a snap. After shutting down the CentOS VM, I first expanded the virtual disk from 10GB to 20GB in Workstation 7.1 under VM settings > Hard Disk > Utilities > Expand.
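
If you prefer the command line, the same expansion should also be possible with the vmware-vdiskmanager utility that ships with Workstation. A sketch, with a placeholder .vmdk path, assuming the VM is powered off and has no snapshots:

vmware-vdiskmanager -x 20GB /path/to/CentOS.vmdk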

Then I edited the VM’s .vmx file by adding the following statement:

bios.bootDelay = "10000"

This slows down the VM’s boot sequence by adding a 10-second delay, making it easier to give the VM screen focus (with a mouse click) and hit F2 before the VM startup commences. Note that you need to click the area of the VM screen where the POST (boot) info is displayed to give it focus; the similarly colored (black) area closer to the edges of the VM display is at this point (during the boot) actually outside of the VM “screen”, so clicking it will not focus the VM, and without focus pressing F2 does nothing. The other alternative (to ensure that the VM enters its virtual BIOS settings) is to add the statement:

bios.forceSetupOnce = "TRUE"

.. in the .vmx file.

Once in the VM BIOS settings I changed the boot order so that the CD/DVD drive is the first boot device. I then popped the Parted Magic CD in the drive and rebooted the VM. With Parted Magic up and running I started the Partition Editor (GParted) and moved the 1.4GB linux-swap partition to the end of the newly expanded 20GB disk space. Next I expanded the third partition (“/home”) to a total of 6.8GB and moved it also to the right, back to back with the swap partition. Finally I gave the root partition (“/”) the rest of the free space, for a total size of 11.7GB. Once the operations had been applied (the requested changes written to disk) I exited GParted and shut down Parted Magic, choosing “reboot system” on exit.

Once CentOS finished booting, I checked the partition sizes with ‘df -h’ to confirm that the root and home partitions reflected the extra space assigned to them. You might also want to restore the hard drive as the primary boot device in the VM BIOS settings so that a CD/DVD you might later forget in the drive won’t try to boot instead of the VM.