FreeBSD Full / Incremental Filesystem Dump Shell Script

A FreeBSD shell script that dumps filesystems to a given directory location, creating weekly full backups and automatically incremented incremental backups.

I wanted to automate filesystem dumps on my servers running FreeBSD 7.2. After some searching I came across Vivek Gite’s FreeBSD Full / Incremental Tape Backup Shell Script, which gave me a lot of ideas. Since I’m not using tape as the backup target, I wanted to make a script specifically for that purpose while at the same time improving the handling of some error conditions (most importantly, checking for a missing level 0 dump before proceeding with an incremental dump) and adding some new features, such as auto-incrementing the dump level so that it is not tied to a specific day of the week.

Here’s my version of the script. While it bears some resemblance to Vivek’s script, it is largely rewritten. Read the script header for more information.

NOTE! In his comment below, James pointed out a possible bug in the script. The displayed script indeed had a problem: it was missing a backslash in front of the first dollar sign in:

eval "local fspath=\$${fsname}path"

This was caused by the script display plugin in WordPress, which treated the backslash as an escape character (this has now been fixed). To be on the safe side, please download the script as a tarball. To further validate the integrity of the tarball, it should produce an MD5 hash of 732ac44f11ba4484be4568e84929bb6a.
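For reference, the comparison can be scripted; this is a generic sketch rather than part of Autodump, and since the real tarball isn't available here, the snippet hashes an empty stream (whose MD5 is well known) so it is self-contained. On FreeBSD `md5 -q <file>` prints just the hash; `md5sum <file>` is the GNU equivalent used below:

```shell
#!/bin/sh
# Sketch of an integrity check. Against the real tarball you would
# hash the downloaded file itself and compare the result to the
# published value 732ac44f11ba4484be4568e84929bb6a.
expected="d41d8cd98f00b204e9800998ecf8427e"     # MD5 of the empty string
actual=$(printf '' | md5sum | awk '{ print $1 }')
if [ "$actual" = "$expected" ] ; then
   echo "hash matches"
else
   echo "hash MISMATCH" >&2
   exit 1
fi
```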


# Autodump 1.5a (released 01 August 2009)
# Copyright (c) 2009 Ville Walveranta 
# A FreeBSD shell script to dump filesystems with full and automatically 
# incremented incremental backups to a given directory location; this script
# was written with the intent of saving the filesystem dumps not onto a tape
# device but on another hard drive such as a different filesystem on the same 
# computer. The resulting dump files can be copied offsite with a separate 
# cron job.
# This script creates the necessary directory structure below the defined 
# 'BASEDIR' as well as the necessary log file. This script also ensures that
# the level 0 dump exists before creating an incremental dump; if it doesn't,
# the script automatically erases the incremental files for the current week 
# (if any exist) and starts over with a level 0 dump. This way you can start 
# using the script on any day of the week, and the level 0 dump is 
# automatically created on the first run.
# When run daily (such as from a cron job), the script creates a level 0 dump
# every Monday (beginning of the ISO week) or Sunday (beginning of the U.S. 
# week) and an incremental dump on all the other days of each week. The dumps 
# are compressed with gzip and saved below the 'BASEDIR' to an automatically 
# created directory whose name is derived from the list given in 'FSNAMES'. 
# Each week's dumps are organized into subfolders named YYYY-WW ('WW' 
# being the current week's number). By default the three most recent weekly 
# dumps (level 0 + incrementals) are retained.
# The script maintains each weekly folder's date at the _beginning_ date
# of the dump (i.e. Monday or Sunday of the current week) at 00:00, not 
# at the most recent incremental's date/time.
# By default the root (/) and usr (/usr) filesystems are dumped. To add more,
# add a "friendly name" to the 'FSNAMES' list (it is used for the weekly folder
# names, for dump filenames, and to reference the corresponding mount point
# variable); then add the corresponding mount point variable (e.g. if you
# add "var" to 'FSNAMES', then add a variable varpath=/var). The "path"
# ending of the mount point variable name is required.
# Since the number of incremental dumps is limited to nine (level 0 +
# incremental levels 1-9), the script allows a maximum of one dump 
# to be created per day. However, since the level incrementing is dynamic
# you can start the script on any day of the week, and run it on any
# number of days during the rest of the week, and you'll always get
# level 0 plus the incremental dumps in sequential order. However, the 
# new weekly folder is always created on Monday or Sunday (as chosen by
# you). Note that the script determines whether "today's" dump exists 
# based on the modification date stamp of the most recent dump. Hence 
# it is a good idea to run this script in the early hours of each day 
# rather than at the very end of each day. Running the script, for 
# example, at 23:50 has the potential to push longer dump processes 
# past midnight and so potentially cause the next day's dump to 
# be skipped.
# Written for FreeBSD 7.2 but should work on most BSD and *NIX systems with
# minor modifications.
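# For example, a typical /etc/crontab entry to run the script daily in
# the early morning hours (the installed script path below is an 
# assumption; adjust it to wherever you place the script):
#
#    15   3   *   *   *   root   /usr/local/sbin/autodump.sh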
# -------------------------------------------------------------------------
# Copyright (c) 2009 Ville Walveranta 
# This script is licensed under GNU GPL version 2.0 or above, and is provided
# 'as-is' with no warranty, which is to say that I'm not liable if it wipes
# your hard drive clean or doesn't back up your precious data. However, to the 
# best of my knowledge it is working as expected -- I'm using it myself. :-)
# -------------------------------------------------------------------------
# This script was inspired by 
# FreeBSD Full / Incremental Tape Backup Shell Script
# by nixCraft project / Vivek Gite
# at 
# -------------------------------------------------------------------------

#### GLOBAL VARIABLES ###############################################

WEEKSTARTS=Mon      # Accepted values are "Mon" (ISO standard) or "Sun" (U.S.)
KEEPDUMPS=30        # in days; this is evaluated on the weekly level per start
                    # of the week, so '30' keeps 3-4 weekly dumps
GLOBALDUMPOPTS=Lua  # add 'n' for wall notifications

# to add more filesystems to be dumped add the dump name in 'FSNAMES'
# and add the corresponding mount point variable (dumpname+path=mountpoint)
FSNAMES="root usr"  # this is used for dump directory name 
                    # and to ID the path from a variable below

# the mount point variables referenced via 'FSNAMES' (dumpname+path)
rootpath=/
usrpath=/usr

# locations and tools; adjust these to your environment
BASEDIR=/bak/dumps
LOGFILE=${BASEDIR}/autodump.log
DUMP=/sbin/dump
GZIP=/usr/bin/gzip
LOGGER=/usr/bin/logger
WEEKDAY=$(date +"%a")
DATE=$(date +"%Y%m%d")
HUMANDATE=$(date +"%d-%b-%Y")
HUMANDATE=`echo $HUMANDATE | tr '[:lower:]' '[:upper:]'`
HUMANTIME=$(date +"%H:%M (%Z)")
TODAYYR=$(date +"%Y")
TODAYMO=$(date +"%m")
TODAYDT=$(date +"%d")

# datestamp at midnight today
TODAYSTARTSTAMP=`date -j -f "%Y%m%d%H%M%S" "${DATE}000000" +"%s"`

# default lastdump to midnight today; it will be checked
# and adjusted later
LASTDUMP=$TODAYSTARTSTAMP

# do not create world-readable dumps!
umask 117

# make sure the dump base directory and the logfile exist
[ ! -d $BASEDIR ] && mkdir -p $BASEDIR
if [ ! -e $LOGFILE ] ; then
   touch $LOGFILE
   chmod 660 $LOGFILE
fi

# make sure that the entire week's incremental dumps are deposited
# in the same directory, even when a week spans the new year
# NOTE: When the ending year has a partial 53rd week, there
# won't be a dump folder for the first week of the new year.
# The incremental dumps instead complete the 53rd week folder,
# even when the 1st week of the new year begins mid-week. 
# However, the dates of the incremental dump files in the 
# 53rd week folder correctly reflect the dates of the 
# beginning year.
adjust_date() {
   local dateoffset=$1
   local epochnow=$(date +%s)
   local offsetsecs=`expr $dateoffset "*" 86400`
   local newepoch=`expr $epochnow "-" $offsetsecs`
   local year=`date -r $newepoch +"%Y"`
   if [ "$WEEKSTARTS" = "Mon" ] ; then
      local week=`date -r $newepoch +"%W"`
   else
      local week=`date -r $newepoch +"%U"`
   fi
   NEWEPOCHISO=`date -r $newepoch +"%Y%m%d0000"`

   # system week starts from '0'; there is no calendar week '0'
   week=`expr $week "+" 1`

   # the year-week ID used for this week's dump folder names (YYYY-WW)
   YWEEK=${year}-${week}
}

# determine the 'distance' from the level 0 dump in days
if [ "$WEEKSTARTS" = "Mon" ] ; then
   case $WEEKDAY in
      Mon) adjust_date 0;;
      Tue) adjust_date 1;;
      Wed) adjust_date 2;;
      Thu) adjust_date 3;;
      Fri) adjust_date 4;;
      Sat) adjust_date 5;;
      Sun) adjust_date 6;;
      *) ;;
   esac
else
   case $WEEKDAY in
      Sun) adjust_date 0;;
      Mon) adjust_date 1;;
      Tue) adjust_date 2;;
      Wed) adjust_date 3;;
      Thu) adjust_date 4;;
      Fri) adjust_date 5;;
      Sat) adjust_date 6;;
      *) ;;
   esac
fi


mk_auto_dump() {
   local fsname=$1
   # get the current filesystem's path
   # as defined in the corresponding variable
   eval "local fspath=\$${fsname}path"

   # composite the dump path
   local dumppath=${BASEDIR}/${fsname}/${YWEEK}

   # make sure the dump directory for this week exists;
   # this automatically creates a new dump directory on 
   # every Monday or Sunday (as selected by 'WEEKSTARTS')
   [ ! -d $dumppath ] && mkdir -p $dumppath

   # get name of the last file in the current dump directory
   local lastfile=`ls -ltr $dumppath | grep -v "^d" | tail -n 1 | awk '{ print $9 }'`

   # assume that the 'lastfile', if it exists, was not created today
   local dumped_today=false
   # if a file exists, check its modification date; 
   # if it is at or after 00:00 today, set a flag to skip the dump
   if [ "$lastfile" != "" ] ; then
      local fq_lastfile=${dumppath}/$lastfile
      if [ -e $fq_lastfile ] ; then
         # get the last modification time for the most recently created dumpfile
         LASTDUMP=`stat -f %m $fq_lastfile`
         if [ $LASTDUMP -ge $TODAYSTARTSTAMP ] ; then
            local dumped_today=true
         fi
      fi

      # get the first and the last dump level for this directory
      # (strip the datestamp prefix and the '.gz' extension)
      local levelcommand="ls $dumppath | sed -e 's/^[[:digit:]]*_//' | sed -e 's/\..*$//'"
      local firstlevel=`eval $levelcommand | head -n 1`
      local lastlevel=`eval $levelcommand | tail -n 1`

      # make sure the level zero dump exists;
      # if it doesn't, start over
      if [ "$firstlevel" != "0" ] ; then
         # it doesn't matter if a previous dump exists from today
         # since we're starting over as the level 0 dump is missing
         local dumped_today=false
         local dumplevel=0
         rm -f $dumppath/*.gz
      else
         # otherwise just increment the dump level
         # for levels 1-6, i.e. normally Tuesday thru Sunday
         local dumplevel=`expr $lastlevel "+" 1`
      fi
   else
      # no dump exists in this week's folder; reset level to '0'
      local dumplevel=0
   fi

   # skip the entire dump process if a dumpfile has
   # already been created for this filesystem today
   if [ "$dumped_today" = "false" ] ; then  
      # define the dump filename
      local dumpfn=${DATE}_${dumplevel}

      echo ---------------- >> $LOGFILE
      echo >> $LOGFILE
      echo "BEGINNING LEVEL $dumplevel DUMP OF '$fsname' (${fspath}) FILESYSTEM ON $HUMANDATE AT $HUMANTIME" >> $LOGFILE
      echo >> $LOGFILE
      echo "Creating a snapshot of '$fspath'.." >> $LOGFILE
      # execute the dump
      $DUMP -$dumplevel -$GLOBALDUMPOPTS -f ${dumppath}/${dumpfn} $fspath >> $LOGFILE 2>&1
      local dumpresult=$?
      if [ "$dumpresult" != "0" ] ; then
         # log the dump result to syslog
         $LOGGER "$DUMP LEVEL $dumplevel DUMP OF $fsname (${fspath}) FAILED!"

         echo "*** DUMP FAILED - LEVEL $dumplevel DUMP of $fsname (${fspath}) ***" >> $LOGFILE
         echo >> $LOGFILE
      else
         # log the dump result to syslog
         $LOGGER "LEVEL $dumplevel DUMP of $fsname (${fspath}) COMPLETED SUCCESSFULLY!"

         echo >> $LOGFILE
         # compress the dump
         echo "Compressing the dumpfile '${dumpfn}'.." >> $LOGFILE
         $GZIP -v ${dumppath}/${dumpfn} >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE

         # make sure dumps are not world readable (security risk!)
         echo "Updating dumpfile '${dumpfn}.gz' permissions.." >> $LOGFILE
         chmod -v -v 440 ${dumppath}/${dumpfn}.gz >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE
         # reset current dump dir's timestamp to that of the level 0 dump
         touch -t ${NEWEPOCHISO} ${dumppath}

         # delete old dumps
         echo "Deleting old '$fsname' dumpfiles.." >> $LOGFILE
         find $BASEDIR/$fsname -mtime +$KEEPDUMPS -maxdepth 1 -print -exec rm -rf {} \; >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE
      fi
   else
      local lastdump_readable=`date -j -r $LASTDUMP +"%H:%M"`
      local lastdump_readableZ=`date -j -r $LASTDUMP +"%Z"`
      local lastdumpmsg="Autodump for filesystem '$fsname' ($fspath) has already been executed today at $lastdump_readable ($lastdump_readableZ)."
      echo "$lastdumpmsg"
      $LOGGER "$lastdumpmsg"
   fi
}

# Dump filesystems defined in 'FSNAMES'
# Monday or Sunday (as selected by 'WEEKSTARTS') starts with 
# the level 0 dump, with incrementals created through the rest of 
# the week (autoincremented). If the level 0 dump is missing in 
# the current week's folder for filesystem currently being backed 
# up, it is created automatically instead of an incremental dump, 
# no matter what day of the week it is.
for f in $FSNAMES
do
   mk_auto_dump $f
done
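As an aside, the filename-to-dump-level extraction used inside mk_auto_dump can be exercised on its own. This is a standalone sketch with made-up filenames; note the backslash before the dot in the second sed expression, which the blog display tends to strip:

```shell
#!/bin/sh
# derive dump levels from filenames of the form YYYYMMDD_LEVEL.gz
for f in 20090727_0.gz 20090728_1.gz 20090729_2.gz ; do
   echo "$f"
done | sed -e 's/^[[:digit:]]*_//' -e 's/\..*$//'
# prints the levels 0, 1 and 2, one per line
```

The script takes `head -n 1` and `tail -n 1` of this list to find the first and last level present in the week's folder.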

12 thoughts on “FreeBSD Full / Incremental Filesystem Dump Shell Script”

  1. Note that this is a Bourne shell script, as indicated by the first line of the script, even though WordPress highlights it as “bash”.

  2. I have posted an update of the script that fixes one rather significant bug: the level 0 dump would not be created at the start of the week (it would, however, have been created on the first “incremental” day).

    The only functional correction was to remove one extra space (ECMA languages stick hard..) from “local dumplevel = 0” to “local dumplevel=0”. I also made various logging enhancements. The most recent version of the script has been posted.

  3. I have posted yet another update of the script that streamlines the code further. It introduces a configuration variable to select between the start of week either on Sunday or Monday.

    Further changes have been made to logging, and processes such as attempting to compress the dump file or cull the old dump files are now not executed if the dump itself does not succeed.

    I also realized that the dump command allows a maximum of nine incremental dumps. For that reason it is not meaningful to allow unlimited runs of the program. Now, instead, the program allows one dump – level 0 or an incremental dump – to be created per day.

    I’m now satisfied with the script myself, and unless I discover bugs in it, 1.5 is likely the final version.

  4. Hi,
    Thanks for the nice script – however, I keep getting an error “local: 0: bad variable name”. It seems your script is having a difficult time evaluating “fspath” from eval “local fspath=$${fsname}path”. Any idea?


  5. Hello,

    Indeed, the actual code should read:

    eval "local fspath=\$${fsname}path"

    In other words, there should be a backslash in front of the first dollar sign. The WordPress code display script wasn’t displaying it correctly; I’ve fixed it now. Thanks for pointing it out!
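    A standalone illustration of the difference (plain assignment stands in for `local`, which only works inside a function): with the backslash, the first expansion pass turns the string into `fspath=$usrpath`, which eval then evaluates; without it, `$$` expands to the shell's PID before eval ever runs, producing the “bad variable name” error above.

```shell
#!/bin/sh
usrpath=/usr
fsname=usr

# \$ survives the first expansion, so eval sees: fspath=$usrpath
eval "fspath=\$${fsname}path"
echo "$fspath"    # prints: /usr
```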

    Later today I’ll post the script in a tarball to make sure nothing is modified by the blog code display.

  6. The tarball is now available for download (see the note above the displayed script).

    I also clarified a few points in the script’s documentation.

  7. You’re welcome!

    I found a post by Jerry McAllister to be very enlightening regarding restoring dumps. His post is specifically about reducing a partition size, but because the only way to shrink a partition is to make a dump of it and then restore it (to a smaller partition), he outlines the process very well.

  8. You could create a cron job that copies the resulting dump file to another system after the dump completes. Or, dump over SSH could be integrated into the script. Here are some notes about dump over SSH I have that might be useful:

    push dump of /usr:

    dump -0Luanf - /usr | bzip2 | ssh root@targetsystem dd of=/bak/dumps/usrdump_20100309.bz2


    or the same with gzip:

    dump -0Luanf - /usr | gzip | ssh root@targetsystem dd of=/bak/dumps/usrdump_20100309.gz

    To pull:

    ssh backupuser@sourcesystem /sbin/dump -0uanLf - /usr | bzip2 | dd of=/bak/dumps/usrdump_20100309.bz2

    And to restore..

    bzcat usrdump_20100309.bz2 | restore -i -f -
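    The shape of these one-liners (producer | compressor | dd) can be tried locally without dump or ssh; in this sketch echo stands in for dump, a /tmp file for the remote target, and gunzip -c for the bzcat that feeds restore:

```shell
#!/bin/sh
# local stand-in for: dump -0Luanf - /usr | gzip | ssh ... dd of=...
echo "payload" | gzip | dd of=/tmp/demo_dump.gz 2>/dev/null

# reverse direction, like: bzcat dumpfile | restore -rf -
gunzip -c /tmp/demo_dump.gz    # prints: payload
```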
