Deleting all network interface aliases

I recently needed to move a bunch of aliased IPs from one FreeBSD server to another. Adding aliases to /etc/rc.conf and then running ./netstart while in /etc adds the new multiplexed IPs to the system all right, but if you need to remove aliased IPs, running /etc/netstart won't remove them even after the aliases have been removed from /etc/rc.conf. Perhaps there is some easy single command that culls the active alias IPs down to those specified in /etc/rc.conf, but I'm not aware of one. The following command can be used to quickly delete all aliased IPs for a specific interface (here "em0"):

ifconfig em0 | grep "0xffffffff" | awk '{ print $2 }' | xargs -n 1 ifconfig em0 delete

For this to work, the netmasks of the aliases and the master IP for the interface must differ. The netmasks of the aliases are usually set to 255.255.255.255 (hence "0xffffffff") while the netmask of the master IP is usually something different, specific to your network, e.g. 255.255.255.128 ("0xffffff80").
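The hex form ifconfig displays is just the dotted-quad netmask written in hexadecimal. A quick sketch of the conversion using plain printf (the mask value below is only an example):

```shell
# Convert a dotted-quad netmask to the hex form ifconfig displays.
# 255.255.255.128 is an example mask; substitute your own.
mask=255.255.255.128
printf '0x%02x%02x%02x%02x\n' $(echo "$mask" | tr '.' ' ')
# prints 0xffffff80
```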

Once the above command has been run, /etc/netstart can then be executed to load the remaining or reconfigured aliases (if any) from /etc/rc.conf.
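The grep/awk stage of the deletion command simply extracts the address field from each matching inet line. A safe way to sanity-check it before running it against a live interface is to feed it simulated ifconfig output (the addresses below are made up):

```shell
# Simulated 'ifconfig em0' output: one master IP plus one /32 alias
sample='	inet 10.0.0.5 netmask 0xffffff80 broadcast 10.0.0.127
	inet 10.0.0.20 netmask 0xffffffff broadcast 10.0.0.20'

# The same filter the deletion command uses; only the alias survives
printf '%s\n' "$sample" | grep "0xffffffff" | awk '{ print $2 }'
# prints 10.0.0.20
```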

FreeBSD Full / Incremental Filesystem Dump Shell Script

A FreeBSD shell script to dump filesystems to a given directory location, with full backups and automatically incremented incremental backups.

I wanted to automate filesystem dumps on my servers running FreeBSD 7.2. After some searching I came across Vivek Gite's FreeBSD Full / Incremental Tape Backup Shell Script, which gave me a lot of ideas. Since I'm not using tape as the backup target, I wanted a script written specifically for dumping to a directory. At the same time I wanted to improve the handling of some error conditions (most importantly, checking for a missing level 0 dump before proceeding with an incremental dump) and to add some new features, such as auto-incrementing the dump level so that it is not tied to a specific day of the week.

Here’s my version of the script. While it bears some resemblance to Vivek’s script, it is largely rewritten. Read the script header for more information.

NOTE! In his comment James pointed out a possible bug in the script. The displayed script indeed had a problem: it was missing a backslash in front of the first dollar sign in this line:

eval "local fspath=\$${fsname}path"

This was caused by the script display plugin in WordPress, which treated the backslash as an escape character (this has now been fixed). To be on the safe side, please download the script as a tarball. To validate the integrity of the tarball, it should produce an MD5 hash of 732ac44f11ba4484be4568e84929bb6a.
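For the curious, the backslash matters because the line uses eval for indirect variable expansion: the escaped dollar sign must survive the first round of expansion so that eval can perform the second. A minimal demo with hypothetical variable names:

```shell
# Indirect variable lookup via eval, as used in the script's mk_auto_dump():
fsname=usr
usrpath=/usr

# The first pass expands ${fsname} -> usr, yielding the string: fspath=$usrpath
# eval then executes that, so fspath receives the value of $usrpath.
eval "fspath=\$${fsname}path"
echo "$fspath"
# prints /usr
```

Without the backslash the first pass instead produces `fspath=$usrpath` prefixed by the literal shell PID (from `$$`), which is not what was intended.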

#!/bin/sh

# Autodump 1.5a (released 01 August 2009)
# Copyright (c) 2009 Ville Walveranta 
#
# A FreeBSD shell script to dump filesystems with full, and automatically 
# incremented incremental backups to a given directory location; this script
# was written with the intent of saving the filesystem dumps not onto a tape
# device but on another hard drive such as a different filesystem on the same 
# computer. The resulting dump files can be copied offsite with a separate 
# cron job.
#
# This script creates the necessary directory structure below the defined 
# 'BASEDIR' as well as the necessary log file. This script also ensures that
# the level 0 dump exists before creating an incremental dump; if it doesn't
# the script automatically erases the incremental files for the current week 
# (if any exist) and starts over with a level 0 dump. This way you can start 
# using the script on any day of the week and a level 0 dump is automatically 
# created on the first run.
#
# When run daily (such as from a cron job), the script creates a level 0 dump
# on every Monday (beginning of the ISO week) or Sunday (beginning of the U.S. 
# week) and an incremental dump on all the other days of each week. The dumps 
# are compressed with gzip and saved below the 'BASEDIR' to an automatically 
# created directory whose name is derived from the list given in 'FSNAMES'. 
# Each week's dumps are organized into subfolders with name YYYY-WW ('WW' 
# being the current week). By default the three most recent weekly dumps 
# (level 0 + incrementals) are retained.
#
# The script maintains each weekly folder's date at the _beginning_ date
# of the dump (i.e. Monday or Sunday of the current week) at 00:00, not 
# at the most recent incremental's date/time.
#
# By default the root (/) and usr (/usr) filesystems are dumped. To add more  
# add a "friendly name" to the 'FSNAMES' list (it is used for the weekly folder
# names, for dump filenames, and to reference the corresponding mount point
# variable); then add the corresponding mount point variable (i.e. if you 
# add "var" to 'FSNAMES', then add a variable varpath=/var). The "path" 
# ending of the mount point variable name is required. 
#
# Since the number of incremental dumps is limited to nine (level 0 +
# incremental levels 1-9), the script allows a maximum of one dump 
# to be created per day. However, since the level incrementing is dynamic,
# you can start the script on any day of the week, and run it on any
# number of days during the rest of the week, and you'll always get
# level 0 plus the incremental dumps in sequential order. However, the 
# new weekly folder is always created on Monday or Sunday (as chosen by
# you). Note that the script determines whether "today's" dump exists 
# based on the modification date stamp of the most recent dump. Hence 
# it is a good idea to run this script in the early hours of each day 
# rather than at the very end of each day. Running the script, for 
# example, at 23:50 has the potential to push longer dump processes 
# past midnight and thus potentially cause the next day's dump to 
# be skipped.
#
# Written for FreeBSD 7.2 but should work on most BSD and *NIX systems with
# minor modifications.
# -------------------------------------------------------------------------
# Copyright (c) 2009 Ville Walveranta 
# 
# This script is licensed under GNU GPL version 2.0 or above, and is provided
# 'as-is' with no warranty which is to say that I'm not liable if it wipes out
# your hard drive clean or doesn't back up your precious data. However, to the 
# best of my knowledge it is working as expected -- I'm using it myself. :-)
# -------------------------------------------------------------------------
# This script was inspired by 
# FreeBSD Full / Incremental Tape Backup Shell Script
# by nixCraft project / Vivek Gite
# at 
# -------------------------------------------------------------------------


#### GLOBAL VARIABLES ###############################################

WEEKSTARTS=Mon      # Accepted values are "Mon" (ISO standard) or "Sun" (U.S.)
KEEPDUMPS=30        # in days; this is evaluated on the weekly level per start
                    # of the week, so '30' keeps 3-4 weekly dumps
BASEDIR=/bak/dumps
GLOBALDUMPOPTS=Lua  # add 'n' for wall notifications
LOGFILE=/var/log/dump.log

# to add more filesystems to be dumped add the dump name in 'FSNAMES'
# and add the corresponding mount point variable (dumpname+path=mountpoint)
FSNAMES="root usr"  # this is used for dump directory name 
                    # and to ID the path from a variable below
rootpath=/
usrpath=/usr

#####################################################################

DUMP=/sbin/dump
GZIP=/usr/bin/gzip
LOGGER=/usr/bin/logger

WEEKDAY=$(date +"%a")
DATE=$(date +"%Y%m%d")
HUMANDATE=$(date +"%d-%b-%Y")
HUMANDATE=`echo $HUMANDATE | tr '[:lower:]' '[:upper:]'`
HUMANTIME=$(date +"%H:%M (%Z)")
TODAYYR=$(date +"%Y")
TODAYMO=$(date +"%m")
TODAYDT=$(date +"%d")

# datestamp at midnight today
TODAYSTARTSTAMP=$(date -j +%s "${TODAYYR}${TODAYMO}${TODAYDT}0000")

# default lastdump to midnight today; it will be checked
# and adjusted later
LASTDUMP=$TODAYSTARTSTAMP

# do not create world-readable dumps!
umask 117

# make sure the logfile exists
if [ ! -e $LOGFILE ] ; then
   touch $LOGFILE
   chmod 660 $LOGFILE
fi

# make sure that the entire week's incremental dumps are deposited
# in the same directory, even when a week spans new year
# NOTE: When the ending year has a partial 53rd week, there
# won't be a dump folder for the first week of the new year.
# The incremental dumps instead complete the 53rd week folder,
# even when the 1st week of the new year begins mid-week. 
# However, the dates of the incremental dump files in the 
# 53rd week folder correctly reflect the dates of the 
# beginning year.
adjust_date(){
   local dateoffset=$1
   local epochnow=$(date +%s)
   local offsetsecs=`expr $dateoffset "*" 86400`
   local newepoch=`expr $epochnow "-" $offsetsecs`
   local year=`date -r $newepoch +"%Y"`
   
   if [ "$WEEKSTARTS" = "Mon" ] ; then
      local week=`date -r $newepoch +"%W"`
   else
      local week=`date -r $newepoch +"%U"`
   fi
   NEWEPOCHISO=`date -r $newepoch +"%Y%m%d0000"`

   #system week starts from `0', there is no calendar week `0'
   week=`expr $week "+" 1`
   YWEEK=${year}-${week}
}

# determines the 'distance' from the level 0 dump in days
if [ "$WEEKSTARTS" = "Mon" ] ; then
   case $WEEKDAY in
      Mon) adjust_date 0;;
      Tue) adjust_date 1;;
      Wed) adjust_date 2;;
      Thu) adjust_date 3;;
      Fri) adjust_date 4;;
      Sat) adjust_date 5;;
      Sun) adjust_date 6;;
      *) ;;
   esac
else 
   case $WEEKDAY in
      Sun) adjust_date 0;;
      Mon) adjust_date 1;;
      Tue) adjust_date 2;;
      Wed) adjust_date 3;;
      Thu) adjust_date 4;;
      Fri) adjust_date 5;;
      Sat) adjust_date 6;;
      *) ;;
   esac
fi

mk_auto_dump(){

   local fsname=$1
   
   # get the current filesystem's path
   # as defined in the corresponding variable
   eval "local fspath=\$${fsname}path"

   # composite the dump path
   local dumppath=${BASEDIR}/${fsname}/${YWEEK}

   # make sure the dump directory for this week exists;
   # this automatically creates a new dump directory on 
   # every Monday or Sunday (as selected by 'WEEKSTARTS')
   [ ! -d $dumppath ] && mkdir -p $dumppath

   # get name of the last file in the current dump directory
   local lastfile=`ls -ltr $dumppath | grep -v "^d" | tail -n 1 | awk '{ print $9 }'`

   # assume that the 'lastfile', if it exists, was not created today
   local dumped_today=false
   
   # if a file exists, check its modification date; 
   # if it is at or after 00:00 today, set a flag to skip the dump
   if [ "$lastfile" != "" ] ; then
      local fq_lastfile=${dumppath}/$lastfile
      if [ -e $fq_lastfile ] ; then
         # get the last modification time for the most recently created dumpfile
         LASTDUMP=`stat -f %m $fq_lastfile`
         if [ $LASTDUMP -ge $TODAYSTARTSTAMP ] ; then
            local dumped_today=true
         fi
      fi
      
      # get the first and the last dump level for this directory
      local levelcommand="ls $dumppath | sed -e 's/^[[:digit:]]*_//' | sed -e 's/\..*$//'"
      local firstlevel=`eval $levelcommand | head -n 1`
      local lastlevel=`eval $levelcommand | tail -n 1`

      # make sure level zero dump exists;
      # if it doesn't, start over
      if [ "$firstlevel" != "0" ] ; then
         # it doesn't matter if a previous dump exists from today
         # since we're starting over as level 0 dump is missing
         local dumped_today=false
         local dumplevel=0
         rm -f $dumppath/*.gz
      else
         # otherwise just increment the dump level
         # for levels 1-6, i.e. normally Tuesday thru Sunday
         local dumplevel=`expr $lastlevel "+" 1`
      fi
   else
      # no dump exists in this week's folder; reset level to '0'
      local dumplevel=0
   fi

   # skip the entire dump process if a dumpfile has
   # already been created for this filesystem today
   if [ "$dumped_today" = "false" ] ; then  
   
      # define the dump filename
      local dumpfn=${DATE}_${dumplevel}

      echo ---------------- >> $LOGFILE
      echo >> $LOGFILE
      echo "BEGINNING LEVEL $dumplevel DUMP OF '$fsname' (${fspath}) FILESYSTEM ON $HUMANDATE AT $HUMANTIME" >> $LOGFILE
      echo >> $LOGFILE
      echo "Creating a snapshot of '$fspath'.." >> $LOGFILE
      # execute the dump
      $DUMP -$dumplevel -$GLOBALDUMPOPTS -f ${dumppath}/${dumpfn} $fspath >> $LOGFILE 2>&1
      local dumpresult=$?
   
      if [ "$dumpresult" != "0" ] ; then
         # log the dump result to syslog
         $LOGGER "$DUMP LEVEL $dumplevel DUMP OF $fsname (${fspath}) FAILED!"

         echo "*** DUMP FAILED - LEVEL $dumplevel DUMP of $fsname (${fspath}) ***" >> $LOGFILE
         echo >> $LOGFILE
      else
         # log the dump result to syslog
         $LOGGER "LEVEL $dumplevel DUMP of $fsname (${fspath}) COMPLETED SUCCESSFULLY!"

         echo >> $LOGFILE
         # compress the dump
         echo "Compressing the dumpfile '${dumpfn}'.." >> $LOGFILE
         $GZIP -v ${dumppath}/${dumpfn} >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE

         # make sure dumps are not world readable (security risk!)
         echo "Updating dumpfile '${dumpfn}.gz' permissions.." >> $LOGFILE
         chmod -v -v 440 ${dumppath}/${dumpfn}.gz >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE
   
         # reset current dump dir's timestamp to that of the level 0 dump
         touch -t ${NEWEPOCHISO} ${dumppath}

         # delete old dumps
         echo "Deleting old '$fsname' dumpfiles.." >> $LOGFILE
         find $BASEDIR/$fsname -mtime +$KEEPDUMPS -maxdepth 1 -print -exec rm -rf {} \; >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE
      fi
   else
      local lastdump_readable=`date -j -r $LASTDUMP +"%H:%M"`
      local lastdump_readableZ=`date -j -r $LASTDUMP +"%Z"`
      local lastdumpmsg="Autodump for filesystem '$fsname' ($fspath) has already been executed today at $lastdump_readable ($lastdump_readableZ)."
      echo "$lastdumpmsg"
      $LOGGER "$lastdumpmsg"
   fi
}


# Dump filesystems defined in 'FSNAMES'
#
# Monday or Sunday (as selected by 'WEEKSTARTS') starts with 
# the level 0 dump, with incrementals created through the rest of 
# the week (autoincremented). If the level 0 dump is missing in 
# the current week's folder for filesystem currently being backed 
# up, it is created automatically instead of an incremental dump, 
# no matter what day of the week it is.
for f in $FSNAMES
do
   mk_auto_dump $f
done
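To run the script daily, an /etc/crontab entry along these lines works; the script path and the 02:30 run time are just examples, with an early-morning run chosen to avoid the midnight-rollover issue mentioned in the script header:

```shell
# /etc/crontab entry (example path and schedule)
30	2	*	*	*	root	/usr/local/sbin/autodump.sh
```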

FreeBSD vs the world

As I upgraded a few FreeBSD installations to FreeBSD 7.2 over the last couple of days, I took the customary stroll to see how FreeBSD continues to stack up against the Linux distributions. And once again I determined that it does so very well. I've been a devout FreeBSD user for almost a decade, and every time I take a look at the Linux world I come back to the same conclusion: I like the fact that there is just one FreeBSD. It's very well managed and its QA is excellent (not to mention that its TCP stack is famed for being exceptionally stable, and its ports collection rivals anything offered by Linux).

Here are a couple of useful sites for those wondering which OS to choose:

Polishlinux.org – Compare distros: FreeBSD vs. Debian – Comparison data is up to date and there are a lot of good user comments to sift through. You can also choose other distros to compare to.

Wikipedia – Comparison of BSD operating systems

And lastly, a good example of why the sheer number of Linux distros is disorienting: DistroWatch lists at least a few hundred Linux distros (plus a couple of BSD derivatives).

Installing bcron on FreeBSD 7.0

bcron is a better cron (though the "b" in the name probably comes from the first name of its author, Bruce Guenter). It was created with security in mind, and is especially well suited for multi-user systems where individual users need to be given access to their respective crontabs. With bcron this can be accomplished without compromising system security. Here's a quote from the bcron page:

This is bcron, a new cron system designed with secure operations in mind. To do this, the system is divided into several separate programs, each responsible for a separate task, with strictly controlled communications between them. The user interface is a drop-in replacement for similar systems (such as vixie-cron), but the internals differ greatly.

As of this writing, bcron cannot be found in the FreeBSD 7.0 ports system. Fortunately its installation is fairly straightforward. However, the included documentation is rather spartan, so I provide a more complete outline below.

  1. Install the latest bglibs if not yet installed. bglibs is best installed from a downloaded tarball rather than from the ports: while the ports version installs the libs in a more logical location (/usr/local/lib/bglibs/), the programs that utilize the library (bcron, ucspi-unix, etc.) have difficulty locating it there.

    A few symlinks are required (these refer to the locations bglibs installs itself in when compiled from the tarball rather than from the ports):

    /usr/local/bglibs -> /usr/local/lib/bglibs
    /usr/local/bglibs/lib/libbg-sysdeps.so.2 -> /usr/local/lib/libbg-sysdeps.so.2
    /usr/local/bglibs/lib/libbg.so.2 -> /usr/local/lib/libbg.so.2

  2. Install ucspi-unix if not yet installed, as the bcron components communicate via UNIX sockets. It requires bglibs and also compiles and installs well from a downloaded tarball (it's also available in ports at /usr/ports/sysutils/ucspi-unix, but I prefer to compile it from the downloaded tarball).
  3. Make sure /var has been moved off the root to /usr/var before proceeding. See an older post for details.
  4. Make sure daemontools (and hence supervise) has been installed and is operational as bcron will be started with it.
  5. Create a system user "cron" (for example by using the vipw command) and a group "cron" (by editing /etc/group). This user/group will own all the crontab files (though not /etc/crontab, as it's the system crontab and needs to be owned by root:wheel).

    user:

    cron:*:50:50::0:0:BCron Sandbox:/nonexistent:/usr/sbin/nologin

    group:
    cron:*:50:

  6. Create the spool & tmp directories:

    mkdir -p /var/spool/cron/crontabs /var/spool/cron/tmp
    mkfifo /var/spool/cron/trigger
    sh    # switch to a Bourne shell first if your login shell is (t)csh
    for i in crontabs tmp trigger; do
    chown cron:cron /var/spool/cron/$i
    chmod go-rwx /var/spool/cron/$i
    done
  7. Create the configuration directory /usr/local/etc/bcron:

    mkdir -p /usr/local/etc/bcron

    You can put any common configuration settings into this directory (it is an "ENVDIR"), like alternate spool directories in BCRON_SPOOL.
  8. Create the bcron service directories (there are three services) and add the scripts below it:

    mkdir -p /var/bcron/supervise/bcron-sched/log
    mkdir /var/bcron/supervise/bcron-spool
    mkdir /var/bcron/supervise/bcron-update

    Set their permissions to 1750 for security purposes (no world access, sticky bit):

    chmod 1750 /var/bcron/supervise/bcron-sched
    chmod 1750 /var/bcron/supervise/bcron-spool
    chmod 1750 /var/bcron/supervise/bcron-update

    Make all the run and log/run scripts executable by root, readable by group:

    chmod 740 /var/bcron/supervise/bcron-sched/run
    chmod 740 /var/bcron/supervise/bcron-sched/log/run
    chmod 740 /var/bcron/supervise/bcron-spool/run
    chmod 740 /var/bcron/supervise/bcron-update/run

    and make log bcron-sched subdir accessible by root, group:

    chmod 750 /var/bcron/supervise/bcron-sched/log

    RUN SCRIPTS:
    /var/bcron/supervise/bcron-sched/run:

    #!/bin/sh
    exec 2>&1
    exec envdir /usr/local/etc/bcron bcron-start | multilog t /var/log/bcron

    /var/bcron/supervise/bcron-sched/log/run:

    #!/bin/sh
    exec >/dev/null 2>&1
    exec multilog t /var/log/bcron

    /var/bcron/supervise/bcron-spool/run:

    #!/bin/sh
    exec >/dev/null 2>&1
    exec \
    envdir /usr/local/etc/bcron \
    envuidgid cron \
    sh -c '
    exec \
    unixserver -U ${BCRON_SOCKET:-/var/run/bcron-spool} \
    bcron-spool'

    /var/bcron/supervise/bcron-update/run:

    #!/bin/sh
    exec >/dev/null 2>&1
    exec bcron-update /etc/crontab

  9. Kill the default cron daemon and add the following to rc.conf so it won't restart on reboot:

    #disable default cron; bcron is used instead (started by supervise)
    cron_enable="NO"

  10. Symlink the bcron services' primary supervise directories under /var/service to start the bcron services (you can also use the svc-add command if you have installed supervise-scripts):
    ln -s /var/bcron/supervise/bcron-sched /var/service/bcron-sched
    ln -s /var/bcron/supervise/bcron-spool /var/service/bcron-spool
    ln -s /var/bcron/supervise/bcron-update /var/service/bcron-update
  11. Set /etc/crontab permissions to 600, and make sure it's owned by root.
    chmod 600 /etc/crontab
    chown root:wheel /etc/crontab

    For other users, the owner of the crontab file in their respective home folders would be cron:cron.

  12. Edit /etc/crontab and test that it gets updated. Note that there is a brief delay, perhaps a minute or so, after you save the crontab before the change becomes effective. Also note that the default shell for the crontab is /bin/sh. You might want to change it to something more powerful that you're familiar with, like the C shell (/bin/csh) or bash (/usr/local/bin/bash). You may also want to augment the default path, for example by including /usr/local/bin for user-installed commands.
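To verify the setup end to end, a throwaway entry in /etc/crontab works well; the log path below is arbitrary, and the entry should be removed once the timestamps start appearing:

```shell
# /etc/crontab test entry: append a timestamp every minute
*	*	*	*	*	root	date >> /var/log/bcron-test.log
```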