AWS CLI Key Rotation Script for IAM Users Revisited

In April of this year I published a Bash script for rotating the default AWS API keys configured in the `~/.aws/credentials` file. I have now improved on the original script, adding the following functionality:

  • The script is now fully interactive, and supports multiple profiles. The original was also interactive in the sense that it could require user input during its run, and as such was intended for manual execution. The new version goes further: it presents a list of the configured profiles, from which you can choose the profile whose keys you want to rotate.
  • The script now works with Multi-Factor Authentication. At work we have been moving increasingly to MFA/2FA, and have also started enforcing MFA use for our users (this policy works great for that purpose). Since MFA enforcement for the AWS console and for the CLI API cannot, for the most part, be separated for a given IAM user (the console is just a GUI for the API), a relatively convenient way to use MFA on the command line was needed. My second script (more detail in the next post) offers that capability, and the key rotation script is now aware of it. If you’re rotating the keys of an IAM account for which MFA use is enforced, the script detects an existing MFA profile (created by my CLI MFA script), and can use it to authorize the key rotation for the base profile, which might otherwise not be permitted to execute the key rotation operation.
  • The listing of the configured profiles includes the current keys (two concurrent keys being the maximum allowed by policy), the ages of the keys, the actual IAM username (since the profile name is arbitrary, and as such can be set to anything), and the access status of the profile (‘OK’, or ‘LIMITED’; the latter is displayed when the profile doesn’t appear to have normal access during the query process, for example as a result of MFA enforcement).
  • Many AWS errors are also masked and translated into more user-friendly output. If a profile doesn’t have a valid key, the script handles it gracefully and displays “CHECK CREDENTIALS!” next to that profile (and obviously it cannot give more detail about such a profile, or offer the option to rotate its keys).
  • The script was originally written for macOS, but it has now been tested on Ubuntu, and portability fixes have been made (so it likely works on other Linux distros as well).
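As an aside, one recurring portability wrinkle between macOS and Ubuntu is the `date` command: BSD `date` and GNU `date` take entirely different arguments for parsing timestamps, which matters when computing key ages from the ISO 8601 creation times the AWS CLI returns. Here is a minimal sketch of a portable conversion (the function name and the GNU-detection trick are my own illustration, not the script’s actual code):

```shell
# Convert an ISO 8601 UTC timestamp (the format the AWS CLI uses for key
# creation dates) to epoch seconds on both GNU (Ubuntu) and BSD (macOS) date.
to_epoch() {
    ts=$1                                               # e.g. "2017-10-28T12:00:00Z"
    if date --version >/dev/null 2>&1 ; then
        date -u -d "$ts" +%s                            # GNU date
    else
        date -j -u -f "%Y-%m-%dT%H:%M:%SZ" "$ts" +%s    # BSD date
    fi
}

to_epoch "2017-10-28T12:00:00Z"    # prints 1509192000
```

The key age in days then falls out of a simple subtraction from `date +%s`.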

The latest version of the script is available from the same repository as before.

A Convenient AWS CLI Key Rotation Script for IAM Users

It’s a good practice to rotate your AWS CLI keys periodically. Recently I wrote a key rotation shell script to match a company policy that allows an IAM user a maximum of two concurrent keys. If both “slots” are taken when the script is triggered, it looks at the creation dates/times of the keys, at which key is currently active (or whether both are), and at which one is currently configured in the user’s `~/.aws/credentials` file (and hence is being used for the rotation operation), and then allows the user to delete the key that is either older or not currently in use, thus making room for a new key.

Once the new key is generated, the script activates the key, tests that it works, and then removes the key that the new key replaces.
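The deletion choice described above can be sketched roughly as follows (a simplified illustration with hypothetical key IDs, not the script’s actual code): prefer deleting the key that is not in use; if that test is inconclusive, fall back to the older key.

```shell
# Arguments: the in-use key ID, then two "keyid epoch" pairs.
# Prints the key ID that should be deleted to make room for a new key.
pick_key_to_delete() {
    in_use=$1
    key1=$2 ; epoch1=$3
    key2=$4 ; epoch2=$5
    if [ "$key1" = "$in_use" ] ; then
        echo "$key2"                  # key1 is in use; free up key2
    elif [ "$key2" = "$in_use" ] ; then
        echo "$key1"                  # key2 is in use; free up key1
    elif [ "$epoch1" -le "$epoch2" ] ; then
        echo "$key1"                  # neither matches; delete the older key
    else
        echo "$key2"
    fi
}

# the in-use key (first argument) is kept; the unused key is chosen
pick_key_to_delete AKIAEXAMPLEAAAA AKIAEXAMPLEAAAA 1500000000 AKIAEXAMPLEBBBB 1400000000
```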

The script was created and tested for use on macOS, but it will likely work on Linux as well (I will soon test it on Linux and make any portability changes if needed).

You can find the script on GitHub.

Update 28 October 2017: An improved version of the script has been published.
See the details here!

FreeBSD Full / Incremental Filesystem Dump Shell Script

A FreeBSD shell script to dump filesystems with full, and automatically incremented incremental backups to a given directory location.

I wanted to automate filesystem dumps on my servers running FreeBSD 7.2. After some searching I came across Vivek Gite’s FreeBSD Full / Incremental Tape Backup Shell Script, which gave me a lot of ideas. Since I’m not using tape as the backup target, I wanted to make a script specifically for that purpose, while at the same time improving the handling of some error conditions (most importantly, checking for a missing level 0 dump before proceeding with an incremental dump) and adding some new features, such as auto-incrementing the dump level so that it is not tied to a specific day of the week.

Here’s my version of the script. While it bears some resemblance to Vivek’s script, it is largely rewritten. Read the script header for more information.

NOTE! In his comment James pointed out a possible bug in the script. The displayed script indeed had a problem: it was missing a backslash in front of the first dollar sign in:

eval "local fspath=$${fsname}path"

This was caused by the script display plugin in WordPress, which treated the backslash as an escape character (this has now been fixed). To be on the safe side, please download the script as a tarball. To further validate the integrity of the tarball, it should produce an MD5 hash of 732ac44f11ba4484be4568e84929bb6a.


#!/bin/sh
# Autodump 1.5a (released 01 August 2009)
# Copyright (c) 2009 Ville Walveranta 
# A FreeBSD shell script to dump filesystems with full, and automatically 
# incremented incremental backups to a given directory location; this script
# was written with the intent of saving the filesystem dumps not onto a tape
# device but on another hard drive such as a different filesystem on the same 
# computer. The resulting dump files can be copied offsite with a separate 
# cron job.
# This script creates the necessary directory structure below the defined 
# 'BASEDIR' as well as the necessary log file. This script also ensures that
# the level 0 dump exists before creating an incremental dump; if it doesn't
# the script automatically erases the incremental files for the current week 
# (if any exist) and starts over with a level 0 dump. This way you can start 
# using the script on any day of the week and level 0 dump is automatically 
# created on the first run.
# When run daily (such as from a cron job), the script creates a level 0 dump
# on every Monday (beginning the ISO week), or Sunday (beginning of the U.S. 
# week) and an incremental dump on all the other days of each week. The dumps 
# are compressed with gzip and saved below the 'BASEDIR' to an automatically 
# created directory whose name is derived from the list given in 'FSNAMES'. 
# Each week's dumps are organized into subfolders with name YYYY-WW ('WW' 
# being the current week). By default the three most recent weekly dumps 
# (level 0 + incrementals) are retained.
# The script maintains each weekly folder's date at the _beginning_ date
# of the dump (i.e. Monday or Sunday of the current week) at 00:00, not 
# at the most recent incremental's date/time.
# By default the root (/) and usr (/usr) filesystems are dumped. To add more  
# add a "friendly name" to the 'FSNAMES' list (it is used for the weekly folder
# names, for dump filenames, and to reference the corresponding mount point
# variable); then add the corresponding mount point variable (i.e. if you 
# add "var" to 'FSNAMES', then add a variable varpath=/var). The "path" 
# ending of the mount point variable name is required. 
# Since the number of incremental dumps is limited to nine (level 0 +
# incremental levels 1-9), the script will allow a maximum of one dump 
# to be created per day. However, since the level incrementing is dynamic
# you can start the script on any day of the week, and run it on any
# number of days during the rest of the week, and you'll always get
# level 0 plus the incremental dumps in sequential order. The 
# new weekly folder is always created on Monday or Sunday (as chosen by
# you). Note that the script determines whether "today's" dump exists 
# based on the modification date stamp of the most recent dump. Hence 
# it is a good idea to run this script in the early hours of each day 
# rather than at the very end of each day. Running the script, for 
# example, at 23:50 has the potential to push longer dump processes 
# past midnight and so potentially cause the next day's dump to 
# be skipped.
# Written for FreeBSD 7.2 but should work on most BSD and *NIX systems with
# minor modifications.
# -------------------------------------------------------------------------
# Copyright (c) 2009 Ville Walveranta 
# This script is licensed under GNU GPL version 2.0 or above, and is provided
# 'as-is' with no warranty, which is to say that I'm not liable if it wipes
# your hard drive clean or doesn't back up your precious data. However, to the 
# best of my knowledge it is working as expected -- I'm using it myself. :-)
# -------------------------------------------------------------------------
# This script was inspired by 
# FreeBSD Full / Incremental Tape Backup Shell Script
# by nixCraft project / Vivek Gite
# at 
# -------------------------------------------------------------------------

#### GLOBAL VARIABLES ###############################################

WEEKSTARTS=Mon      # Accepted values are "Mon" (ISO standard) or "Sun" (U.S.)
KEEPDUMPS=30        # in days; this is evaluated on the weekly level per start
                    # of the week, so '30' keeps 3-4 weekly dumps
GLOBALDUMPOPTS=Lua  # add 'n' for wall notifications

# to add more filesystems to be dumped add the dump name in 'FSNAMES'
# and add the corresponding mount point variable (dumpname+path=mountpoint)
FSNAMES="root usr"  # this is used for dump directory name 
                    # and to ID the path from a variable below
rootpath=/          # mount point for the 'root' dump
usrpath=/usr        # mount point for the 'usr' dump

# dump target directory and log file; adjust these to your environment
BASEDIR=/backup/dumps
LOGFILE=${BASEDIR}/autodump.log

# utility paths (standard FreeBSD locations)
DUMP=/sbin/dump
GZIP=/usr/bin/gzip
LOGGER=/usr/bin/logger

WEEKDAY=$(date +"%a")
DATE=$(date +"%Y%m%d")
HUMANDATE=$(date +"%d-%b-%Y")
HUMANDATE=$(echo $HUMANDATE | tr '[:lower:]' '[:upper:]')
HUMANTIME=$(date +"%H:%M (%Z)")
TODAYYR=$(date +"%Y")
TODAYMO=$(date +"%m")
TODAYDT=$(date +"%d")

# datestamp at midnight today
TODAYSTARTSTAMP=`date -j -f "%Y%m%d%H%M" "${DATE}0000" +"%s"`

# default lastdump to midnight today; it will be checked
# and adjusted later
LASTDUMP=$TODAYSTARTSTAMP

# do not create world-readable dumps!
umask 117

# make sure the base directory and the logfile exist
[ ! -d $BASEDIR ] && mkdir -p $BASEDIR
if [ ! -e $LOGFILE ] ; then
   touch $LOGFILE
   chmod 660 $LOGFILE
fi

# make sure that entire week's incremental dumps are deposited
# in the same directory, even when a week spans the new year
# NOTE: When the ending year has a partial 53rd week, there
# won't be a dump folder for the first week of the new year.
# The incremental dumps instead complete the 53rd week folder,
# even when the 1st week of the new year begins mid-week. 
# However, the dates of the incremental dump files in the 
# 53rd week folder correctly reflect the dates of the 
# beginning year.
adjust_date() {
   local dateoffset=$1
   local epochnow=$(date +%s)
   local offsetsecs=`expr $dateoffset "*" 86400`
   local newepoch=`expr $epochnow "-" $offsetsecs`
   local year=`date -r $newepoch +"%Y"`
   if [ "$WEEKSTARTS" = "Mon" ] ; then
      local week=`date -r $newepoch +"%W"`
   else
      local week=`date -r $newepoch +"%U"`
   fi
   NEWEPOCHISO=`date -r $newepoch +"%Y%m%d0000"`

   #system week starts from `0', there is no calendar week `0'
   week=`expr $week "+" 1`

   # weekly folder name, e.g. 2009-31
   YWEEK="${year}-${week}"
}

# determines the 'distance' from the level 0 dump in days
if [ "$WEEKSTARTS" = "Mon" ] ; then
   case $WEEKDAY in
      Mon) adjust_date 0;;
      Tue) adjust_date 1;;
      Wed) adjust_date 2;;
      Thu) adjust_date 3;;
      Fri) adjust_date 4;;
      Sat) adjust_date 5;;
      Sun) adjust_date 6;;
      *) ;;
   esac
else
   case $WEEKDAY in
      Sun) adjust_date 0;;
      Mon) adjust_date 1;;
      Tue) adjust_date 2;;
      Wed) adjust_date 3;;
      Thu) adjust_date 4;;
      Fri) adjust_date 5;;
      Sat) adjust_date 6;;
      *) ;;
   esac
fi


mk_auto_dump() {
   local fsname=$1
   # get the current filesystem's path
   # as defined in the corresponding variable
   eval "local fspath=\$${fsname}path"

   # composite the dump path
   local dumppath=${BASEDIR}/${fsname}/${YWEEK}

   # make sure the dump directory for this week exists;
   # this automatically creates a new dump directory on 
   # every Monday or Sunday (as selected by 'WEEKSTARTS')
   [ ! -d $dumppath ] && mkdir -p $dumppath

   # get name of the last file in the current dump directory
   local lastfile=`ls -ltr $dumppath | grep -v "^d" | tail -n 1 | awk '{ print $9 }'`

   # assume that the 'lastfile', if it exists, was not created today
   local dumped_today=false
   # if a file exists, check its modification date; 
   # if it is at or after 00:00 today, set a flag to skip the dump
   if [ "$lastfile" != "" ] ; then
      local fq_lastfile=${dumppath}/$lastfile
      if [ -e $fq_lastfile ] ; then
         # get the last modification time for the most recently created dumpfile
         LASTDUMP=`stat -f %m $fq_lastfile`
         if [ $LASTDUMP -ge $TODAYSTARTSTAMP ] ; then
            local dumped_today=true
         fi
      fi

      # get the first and the last dump level for this directory
      local levelcommand="ls $dumppath | sed -e 's/^[[:digit:]]*_//' | sed -e 's/\..*$//'"
      local firstlevel=`eval $levelcommand | head -n 1`
      local lastlevel=`eval $levelcommand | tail -n 1`

      # make sure level zero dump exists;
      # if it doesn't, start over
      if [ "$firstlevel" != "0" ] ; then
         # it doesn't matter if a previous dump exists from today
         # since we're starting over as level 0 dump is missing
         local dumped_today=false
         local dumplevel=0
         rm -f $dumppath/*.gz
      else
         # otherwise just increment the dump level
         # for levels 1-6, i.e. normally Tuesday thru Sunday
         local dumplevel=`expr $lastlevel "+" 1`
      fi
   else
      # no dump exists in this week's folder; reset level to '0'
      local dumplevel=0
   fi

   # skip the entire dump process if a dumpfile has
   # already been created for this filesystem today
   # skip the entire dump process if a dumpfile has
   # already been created for this filesystem today
   if [ "$dumped_today" = "false" ] ; then
      # define the dump filename
      local dumpfn=${DATE}_${dumplevel}

      echo ---------------- >> $LOGFILE
      echo >> $LOGFILE
      echo "BEGINNING LEVEL $dumplevel DUMP OF '$fsname' (${fspath}) FILESYSTEM ON $HUMANDATE AT $HUMANTIME" >> $LOGFILE
      echo >> $LOGFILE
      echo "Creating a snapshot of '$fspath'.." >> $LOGFILE
      # execute the dump
      $DUMP -$dumplevel -$GLOBALDUMPOPTS -f ${dumppath}/${dumpfn} $fspath >> $LOGFILE 2>&1
      local dumpresult=$?

      if [ "$dumpresult" != "0" ] ; then
         # log the dump result to syslog
         $LOGGER "$DUMP LEVEL $dumplevel DUMP OF $fsname (${fspath}) FAILED!"

         echo "*** DUMP FAILED - LEVEL $dumplevel DUMP of $fsname (${fspath}) ***" >> $LOGFILE
         echo >> $LOGFILE
      else
         # log the dump result to syslog
         $LOGGER "LEVEL $dumplevel DUMP of $fsname (${fspath}) COMPLETED SUCCESSFULLY!"

         echo >> $LOGFILE
         # compress the dump
         echo "Compressing the dumpfile '${dumpfn}'.." >> $LOGFILE
         $GZIP -v ${dumppath}/${dumpfn} >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE

         # make sure dumps are not world readable (security risk!)
         echo "Updating dumpfile '${dumpfn}.gz' permissions.." >> $LOGFILE
         chmod -v -v 440 ${dumppath}/${dumpfn}.gz >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE

         # reset current dump dir's timestamp to that of the level 0 dump
         touch -t ${NEWEPOCHISO} ${dumppath}

         # delete old dumps
         echo "Deleting old '$fsname' dumpfiles.." >> $LOGFILE
         find $BASEDIR/$fsname -mtime +$KEEPDUMPS -maxdepth 1 -print -exec rm -rf {} \; >> $LOGFILE 2>&1
         echo DONE >> $LOGFILE
         echo >> $LOGFILE
      fi
   else
      local lastdump_readable=`date -j -r $LASTDUMP +"%H:%M"`
      local lastdump_readableZ=`date -j -r $LASTDUMP +"%Z"`
      local lastdumpmsg="Autodump for filesystem '$fsname' ($fspath) has already been executed today at $lastdump_readable ($lastdump_readableZ)."
      echo $lastdumpmsg
      $LOGGER "$lastdumpmsg"
   fi
}

# Dump filesystems defined in 'FSNAMES'
# Monday or Sunday (as selected by 'WEEKSTARTS') starts with 
# the level 0 dump, with incrementals created through the rest of 
# the week (autoincremented). If the level 0 dump is missing in 
# the current week's folder for filesystem currently being backed 
# up, it is created automatically instead of an incremental dump, 
# no matter what day of the week it is.
for f in $FSNAMES ; do
   mk_auto_dump $f
done
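Since the script is meant to run daily (ideally in the early hours, per the note in the header), a crontab entry along these lines would drive it; the installation path is a placeholder, not anything the script requires:

```shell
# run autodump daily at 02:30, well clear of midnight
30 2 * * * /usr/local/sbin/autodump.sh
```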