NFS automount, Linux (CentOS) version

** NOTE: This version is obsolete! The latest version can be found here.

Last summer I posted a script that would repeatedly (via cron) check the availability and status of an NFS mount, and attempt to keep it mounted if possible. That script was written for (Free)BSD. Below is a slightly modified version that runs on Linux (in this case, CentOS).
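To run the check periodically, a crontab entry along these lines could be used (the script path /usr/local/sbin/nfs_automount.sh and the five-minute interval are illustrative assumptions, not from the original post):

```
# check the NFS mount every five minutes (hypothetical path and schedule)
*/5 * * * * /usr/local/sbin/nfs_automount.sh > /dev/null 2>&1
```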



#!/bin/sh
#
# Keep an NFS share mounted; intended to be run periodically from cron.
# NOTE: the variable values below are illustrative placeholders — the
# original assignments were lost — so adjust them for your environment.

# remote system name
remotesystem="remote.example.com"

# remote share name
remoteshare="/exports/share"

# local mount point
mountpoint="/mnt/share"

# file to indicate local mount status
testfile="${mountpoint}/.mounted"

# command locations (CentOS defaults)
pingcmd="/bin/ping"
statcmd="/usr/bin/stat"
grepcmd="/bin/grep"
mountcmd="/bin/mount"
umountcmd="/bin/umount"
showmountcmd="/usr/sbin/showmount"

# --- end variables ---

# make sure the mountpoint is not stale
testvar=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`

if [ "${testvar}" != "" ]; then
   # result not empty: mountpoint is stale; remove it
   ${umountcmd} -f ${mountpoint}
fi

# ping the remote system (2 sec timeout)
${pingcmd} -w2 -c1 -q ${remotesystem} > /dev/null 2>&1

if [ "$?" -eq "0" ]; then
   # server is available so query availability of the remote share; not empty is OK
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`

   # make sure the local mountpoint is not active
   localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`

   if [ "${offsiteshare}" != "" ] ; then
      if [ ! -e ${testfile} ] ; then
         if [ "${localmount}" = "" ] ; then
            ${mountcmd} -w -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
         fi
      fi
   fi
fi

exit 0

Marvell 88E8056 and ESXi 4.1

So I have an older development/experimental server that runs a couple of VMs on ESXi 4.1. The server’s motherboard (ASUS P5BV-C/4L) is from an old workstation, and it has integrated quad NICs which it would be nice to be able to use, except that the default build of ESXi 4.1 doesn’t see them (even though ESXi 4.1 technically supports Marvell 88E8056 NICs).

There are several pages that discuss the issue extensively and have a lot of good information on them. Yet another page has a quick lowdown on how to get the driver properly installed.

However, having not worked in the ESXi CLI for some time, I had forgotten, for example, that the BusyBox environment ESXi uses wipes changes to the root files on every reboot. After a while I recalled (from an old note) that to save changes to the /etc/vmware/ I would need to execute /sbin/ 0 /bootbank/ after making the edits. But even that was unnecessary.

One sentence on the brief recap page would have saved me a couple of hours tonight. So here it is: “Just upload the attached oem.tgz into /bootbank folder with scp, then reboot, and you’re done!” And when you do that, you are done: the pre-prepared oem.tgz works perfectly!
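In command form, the whole procedure amounts to something like this (the host name esxi-host is a placeholder, and oem.tgz is the file attached to that page):

```
# copy the pre-built driver package into the ESXi boot bank (host name is a placeholder)
scp oem.tgz root@esxi-host:/bootbank/oem.tgz

# reboot the host so the new package is picked up
ssh root@esxi-host reboot
```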

Yes, had I known, I would’ve known, but I didn’t. 🙂 Hopefully this saves time for someone else!