Mounting an NFS share after boot, and checking up on it periodically…

** NOTE: This version is obsolete! The latest version can be found here.

I needed to automatically mount an NFS share after reboot. But the availability of that share could not be guaranteed – the system on the LAN offering the share might be down for maintenance when the system mounting the share is being rebooted. In such a case there would be a lengthy wait during the boot sequence until the mount attempt timed out.

So I wrote a short script to handle the situation. When run at boot time through init.d or rc.d, it first attempts to mount the share but times out in two seconds (this is a LAN NFS share, so if the system offering the share is up there should not be a longer delay than that), so the boot sequence is not slowed down terribly (see update below). Once boot is complete, the script is run via cron every five minutes. Depending on the criticality of the share you may want to make that interval shorter or longer; in this case it is a backup share which is not critical for the system's functioning.

This technique would handle circular mounts (two systems mounting shares from each other), too, but obviously you would run into trouble if the mounts were required for a successful system boot.

For this to work successfully, add a marker file (such as ".myremoteservertransfers" in my example script below) to the share folder on the system exporting the share. I usually set the undeletable flag on the file to make sure it doesn't get deleted by accident.
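On the exporting system that could look something like the following (the share path is just the one used in the example script; chflags uunlnk is the FreeBSD way to set the user-undeletable flag, while on Linux chattr +i would make the file immutable instead):

touch /nfsexports/backupshare/.myremoteservertransfers
chflags uunlnk /nfsexports/backupshare/.myremoteservertransfers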

Update: Even with this code the boot sequence appears to hang until portmap times out (which takes quite a while) if the NFS share is not available at boot time. I removed the rc.d mount attempt and just shortened the cron poll period to 1 minute. That way the share will be up very quickly once it becomes available, yet the overhead caused by the periodic ping is minimal (both servers are on the local LAN).
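The cron side is just a one-liner; assuming the script is saved as, say, /usr/local/sbin/nfsmountcheck.sh (the path and name are arbitrary), a root crontab entry for the one-minute poll would look like this:

* * * * * /usr/local/sbin/nfsmountcheck.sh > /dev/null 2>&1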

#!/bin/sh

SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

# remote system name
remotesystem=myremoteserver

# remote share name
remoteshare=/nfsexports/backupshare

# local mount point
mountpoint=/localbackups/TRANSFERS/${remotesystem}

# file to indicate local mount status
testfile=$mountpoint/.myremoteservertransfers

# --- end variables ---

# ping result to the remote system (2 sec timeout); not empty is OK
remoteping=`ping -c1 -o -q -t2 ${remotesystem} | grep " 0.0%"`

if [ "${remoteping}" != "" ] ; then

   # server is available so query availability of the remote share; not empty is OK
   offsiteshare=`showmount -e ${remotesystem} | grep "${remoteshare}"`

   # make sure the local mountpoint is not active; not empty is OK
   localmount=`/sbin/mount | /usr/bin/grep "${mountpoint}"`

   if [ "${offsiteshare}" != "" ] ; then
      if [ ! -e ${testfile} ] ; then
         if [ "${localmount}" = "" ] ; then
            mount -r -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
         fi
      fi
   fi
fi

exit 0

Caching inbound email on LAN with Postfix (and restricting reception of external mail only to the external mail provider)

Externalizing email reception offers many benefits: for one, it's more worry-free than servicing email internally, especially in smaller organizations where there may not be an email administrator on call 24/7. Or think of a situation where there is an "IT guy" who manages internal email. Then he goes on vacation and email goes down. Now what? And even when the IT guy is present, the budget may not allow for good redundancy in email reception. What if the email server melts down? Perhaps there is a backup plan, but without a stand-by server or a virtualization option, getting mail reception back online could take a day, which could be a big hindrance to business.

However, outsourced email is not without its pitfalls. Even with a reasonably fast network connection there is noticeable latency when accessing a remote email server as opposed to a LAN-based solution. Then there is the issue of outsourced service quality vs. cost. Some services, like Fusemail or Rackspace, offer reasonable quality and fairly customizable features, but when something does go wrong you're dependent on their response time. You've essentially handed away control of your email reception, for better or for worse.

Reception uptime is generally good with well-run outsourced mail services; the issues that more commonly crop up are related to latency and, in some cases (like with Fusemail from time to time), apparent capacity problems. And if you access Fusemail with Outlook 2010's IMAP client, you may have noticed message IDs changing spontaneously, which repeatedly pops up a notification in Outlook.

The client-side issues are easy to remedy by caching inbound email on your local server. It gives you the best of both worlds: quick access to email and the safety of someone monitoring mail reception 24/7 with multiple redundancies. If your local caching mail server goes down, even for an extended amount of time, all you need to do is repoint your clients to the external provider's IMAP or POP server and you're back in business. You may also opt to use your own outbound SMTP service (assuming you have a static IP in use), which makes it possible to limit your domains' SPF records to the IPs you own (as opposed to allowing anyone with an account at, for example, Fusemail to spoof mail from your domains without an SPF penalty). And if you use Fusemail, your own SMTP server will give you peace of mind that your outbound emails won't trigger suspension of your account, as happened to me soon after I first signed up with them (see Fusemail auto-suspends spam-suspect accounts!). Perhaps they've fixed that issue since then.
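As a side note, restricting a domain's SPF record to your own sending IP is just a DNS TXT record along these lines (the domain and address below are placeholders; you might prefer a softfail ~all while testing):

mydomain.com.   IN   TXT   "v=spf1 ip4:203.0.113.25 -all"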

Setting up a caching mail service on your LAN is fairly easy with Postfix. The following tutorial assumes you already have a functioning Postfix/Dovecot setup where you’re able to send and receive email based on your requirements.

To start with, configure and test the local users that you would like to correspond to the outsourced email service's mailboxes. They do not need to have the same login names, and you can also consolidate multiple external accounts into one local account. In smaller setups it's easiest to simply create a flat file for Dovecot user password lookups.
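For reference, a minimal passwd-file setup in Dovecot might look like the following (Dovecot 2.x syntax assumed; the path, user, and password scheme are only examples, and the userdb side depends on your existing configuration). First the passdb definition:

passdb {
  driver = passwd-file
  args = scheme=SHA512-CRYPT username_format=%u /usr/local/etc/dovecot/users
}

And then the flat file itself, one user per line, with the password hash generated e.g. with 'doveadm pw -s SHA512-CRYPT':

user1:{SHA512-CRYPT}<hash from doveadm pw>:1001:1001::/home/user1::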

Assuming you want to receive all email through the outsourced service (which, if you use an outsourced service, is the preferred option), you will want to restrict mail reception from the outside world to the sending mail servers of the external mail provider of your choice. To accomplish this, some restrictions are added to the local cache server's main.cf file. The following is the configuration I use; I've tried to make the restrictions strict enough to cut off connections that would not result in a successful or desired mail transit, yet not so constrictive that they unnecessarily block legitimate connections.

smtpd_helo_restrictions =
        permit_mynetworks
        reject_invalid_helo_hostname
        permit_sasl_authenticated
        reject_non_fqdn_helo_hostname

smtpd_client_restrictions =
        permit_mynetworks
        permit_sasl_authenticated
        check_client_access hash:$config_directory/tables/smtpd_client_access
        check_client_access cidr:$config_directory/tables/smtpd_client_access.cidr
        reject

smtpd_etrn_restrictions =
        permit_mynetworks
        reject

smtpd_sender_restrictions =
        reject_non_fqdn_sender
        reject_unknown_sender_domain

smtpd_recipient_restrictions =
        reject_non_fqdn_recipient
        reject_unknown_recipient_domain
        permit_mynetworks
        reject_unlisted_recipient
        permit_sasl_authenticated
        check_recipient_access hash:$config_directory/tables/smtpd_recipient_access
        #the following also permits mynetworks!
        check_recipient_access pcre:$config_directory/tables/smtpd_recipient_access.pcre
        reject_unauth_destination

smtpd_data_restrictions =
        reject_multi_recipient_bounce
        reject_unauth_pipelining

You will notice external hash and PCRE lookup tables “smtpd_client_access”, “smtpd_client_access.cidr”, “smtpd_recipient_access”, and “smtpd_recipient_access.pcre”. Let’s look at them next.

smtpd_client_access (hash) and smtpd_client_access.cidr (example below) list the external IP addresses you allow to connect and hence relay mail to your cache server. If the external IPs are not on this list, the connection is terminated.

Here’s an example smtpd_client_access (hash, so it’s converted to smtpd_client_access.db with postmap):

# some individual external server I want to allow to connect
100.200.100.200 PERMIT
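After editing the hash table, remember to run postmap on it so that the .db file gets regenerated (the path below assumes the same tables directory used in the SPF script further down):

postmap hash:/usr/local/etc/postfix/tables/smtpd_client_access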

And here’s an example smtpd_client_access.cidr:

1.2.3.4/24      OK
10.20.30.40      OK
100.200.201.0/21      OK

While the sending servers of an outsourced service don't change often, they may change at any time without warning, so maintaining the above list manually would be a frustrating task. To automate the process, you can cull this information from the outsourced mail service's SPF records with a cron-scheduled shell script (note that the paths are for FreeBSD; if you run Linux, adjust them to taste/requirements):

#!/bin/sh

ORIGINAL=/usr/local/etc/postfix/tables/smtpd_client_access.cidr
NEW=/tmp/postfix_clients.tmp

dig +short fusemail.net TXT | grep 'v=spf1' | egrep -o 'ip4:[0-9./]+' | sed 's/^ip4://' | sed 's/$/      OK/' > $NEW

ORIGINAL_CK=`cksum $ORIGINAL | awk '{print $1}'`
NEW_CK=`cksum $NEW | awk '{print $1}'`

if [ -s $NEW ] ; then
  if [ "$ORIGINAL_CK" != "$NEW_CK" ] ; then
    cp -f $NEW $ORIGINAL
    postfix reload > /dev/null 2>&1
  fi
fi

rm $NEW

exit 0

The script above is obviously for Fusemail, but you can modify it for other providers simply by replacing the provider domain name on the dig line. (Note that the dig one-liner only picks up ip4: entries listed directly in the provider's TXT record; if a provider publishes its ranges via include: mechanisms, you would need to resolve those separately.)
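To keep the list current, schedule the script with cron; once a day is usually plenty (the script path is again just an example):

0 4 * * * /usr/local/sbin/update_smtpd_client_access.sh > /dev/null 2>&1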

In my configuration, smtpd_recipient_access (hash) simply lists the nullroute address that is often needed, such as in the php.ini mail configuration where you might put:

sendmail_path = /usr/sbin/sendmail -t -i -f nullroute@mydomain.com

So in smtpd_recipient_access (hash) I list:

nullroute@mydomain.com  PERMIT

Meanwhile, smtpd_recipient_access.pcre lists the users who are allowed to receive mail externally, i.e. from the IPs you defined in smtpd_client_access/smtpd_client_access.cidr:

if !/^(nullroute|abuse|postmaster|user1|user2|user3)@mydomain.com$/
/^/ internal
endif

Again, the above is sufficient for small configurations, but if you have dozens or more users whose external email you're caching, you may be better off storing the recipient access list on a database server.
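As a rough sketch of how that could look (the database, table, and credentials here are hypothetical, and Postfix must be built with MySQL support), you would create a mysql lookup definition, e.g. $config_directory/tables/smtpd_recipient_access.cf:

hosts = 127.0.0.1
user = postfix
password = examplepassword
dbname = mail
query = SELECT action FROM recipient_access WHERE address = '%s'

and reference it from smtpd_recipient_restrictions with check_recipient_access mysql:$config_directory/tables/smtpd_recipient_access.cf. Unlike the pcre file, a database lookup only returns a result for addresses that are actually listed, so each cached user would be stored with the action you want (for example 'internal' or 'PERMIT'), and unlisted addresses would fall through to the remaining restrictions.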

Finally, since the 'internal' keyword above gets attached to any recipient address not matched by the list in smtpd_recipient_access.pcre, you need to define the corresponding restriction classes in main.cf:

smtpd_restriction_classes = internal, public
internal = permit_mynetworks, reject
public = permit

With these in place you will now receive email only from your external mail provider (and perhaps some other authorized servers, if you defined any in the smtpd_client_access hash table). This is important because you're likely relying on your outsourced mail provider's spam filtering, and you don't want spammers contacting your cache mail server directly.

With the local server configured (and hopefully sufficiently tested 🙂 ) you can then go ahead and create mail forwarding rules at the external provider of your choice. You would simply copy any arriving email to the corresponding email address at your local cache server. I have additionally created a rule at the external provider which prunes the mailbox after a given amount of time, since the users will not go there to delete read email.

You may want to allow authenticated client access for your local users so that they can access their email via IMAP or POP remotely, and perhaps over the web (for example via a locally installed Squirrelmail). Also keep in mind that email at the external provider is not synced back: if users delete email from their locally cached mailbox, it will not be deleted from the mailbox at the external provider, so under normal circumstances your caching email server should be the only access point for mail. But in the event of a server melt-down it is a minor inconvenience that the external provider's mailbox contains older emails (that perhaps were deleted locally); at least the users can continue to access email while you're getting the caching server back online!

** NOTE: I’m using the current GA/Stable version of Postfix (2.7.0). If you’re using an older version, double-check that the configuration options I propose above are available before using them! This is one reason for why I prefer to use FreeBSD for mail; the ports version of Postfix is kept well up-to-date while CentOS/RHEL Postfix package is — as you would expect from an Enterprise Linux — currently at 2.3.3. You could compile it yourself, I suppose, but I’m not up to the task since I’m not a full-time Postfix admin.