Encrypted Vault in Ubuntu for Your Valuable Data

Recently I set up Bitnami Cloud Tools for AWS to facilitate AWS configuration and use from the command line. After creating an administrative IAM user (so as not to use the main AWS login), and creating and uploading/associating the necessary X.509 credentials for that IAM login, I realized that anyone who gained access to the local dev server would also gain full access to several AWS Virtual Private Cloud configurations. Not a terribly likely occurrence, but would I like to risk it? Say, with the cloud tools configured on Ubuntu on my laptop, someone could conceivably steal the laptop, and with a little technical expertise gain access to the Ubuntu instance (running in a VM), and hence to the AWS VPCs.

At least in this case, keeping the IAM credentials and the X.509 keys on a USB drive would be impractical (and would probably increase the likelihood that the keys would get misplaced and end up in the wrong hands). On Windows it’s a simple task to set up an encrypted vault using one of the many utilities available for the purpose. But how to do that on Linux? After some digging I came across a Wiki entry, Ubuntu: Make a secure vault. It worked fine, but doing it via cut-and-paste seemed rather cumbersome for daily operations. So I set out to write a couple of scripts to make things easier.

First, you need to have the cryptsetup package installed. Then you can make use of the setup-crypt script below. These are quick utility scripts without a separate configuration file, so you may want to edit some of the variables at the top of the script, namely “CRYPT_HOME” (where the encrypted vault file is placed), “CRYPT_MOUNTPOINT” (where it is mounted), and “CRYPT_DISK_SIZE” (the capacity of the encrypted vault in megabytes).
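
If cryptsetup isn’t installed yet, on Ubuntu or Debian installing it is a one-liner:

sudo apt-get update && sudo apt-get install cryptsetup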

#!/bin/bash

CRYPT_HOME=/root/crypto
CRYPT_DISK=cryptdisk
CRYPT_DISK_FQFN=${CRYPT_HOME}/${CRYPT_DISK}
CRYPT_DISK_SIZE=64	# size in megabytes
CRYPT_LABEL=crypt-disk
CRYPT_MOUNTPOINT=/mnt/crypto
LOOPBACK_DEVICE=`losetup -f`

CRYPTSETUP=`which cryptsetup`
if [ $? -ne 0 ] ; then
  echo "ERROR - cryptsetup not found! Install it first with 'apt-get install cryptsetup'."
  exit 1
fi

IAM=`whoami`
if [ ! "${IAM}" = "root" ]; then
  echo "ERROR - Must be root to continue."
  exit 1
fi 

SETUP_INCOMPLETE=true

function cleanup {
  if [ ! "$1" = "called" ] && [ ! "$1" = "nodelete" ]; then
    echo
    echo
    echo "Crypto-disk setup interrupted. Cleaning up."
  fi
  if [ -b /dev/mapper/${CRYPT_LABEL} ]; then
    cryptsetup luksClose /dev/mapper/${CRYPT_LABEL}
  fi
  
  losetup -d ${LOOPBACK_DEVICE} > /dev/null 2>&1
  if [ "$1" = "nodelete" ]; then
    exit 0
  else
    rm -rf ${CRYPT_HOME}
    exit 1
  fi 
}

mkdir ${CRYPT_HOME} > /dev/null 2>&1

# Capture errors
if [ $? -ne 0 ]; then 
  if [ -d ${CRYPT_HOME} ]; then
    REASON="Directory already exists."
  else
    REASON=""
  fi
  echo "ERROR - Could not create directory '${CRYPT_HOME}'. ${REASON}"
  echo "Continuing..."
else
  echo
  echo "OK - '${CRYPT_HOME}' directory created."
fi

cd ${CRYPT_HOME}

if [ -f $CRYPT_DISK_FQFN ]; then
  echo "ERROR - Crypt disk already exists. Cannot continue."
  exit 1
fi

trap cleanup INT

# Create the raw container file, filled with zeros
dd if=/dev/zero of=${CRYPT_DISK} bs=1M count=${CRYPT_DISK_SIZE}

# Capture errors
if [ $? -ne 0 ]; then 
  echo "ERROR - Could not create raw container. Cannot continue."
  cleanup called
  exit 1
else
  echo
  echo "OK - ${CRYPT_DISK_SIZE}MB raw device created."
fi

losetup ${LOOPBACK_DEVICE} ${CRYPT_DISK_FQFN}

# Capture errors
if [ $? -ne 0 ]
then
  echo "ERROR - Loopback device in use. Cannot continue."
  cleanup called
  exit 1
fi

cryptsetup luksFormat ${LOOPBACK_DEVICE}

# Capture errors
if [ $? -ne 0 ]
then
  echo "ERROR - Could not format the raw container. Cannot continue."
  cleanup called
  exit 1
fi

echo
echo "NOTE: Use the same password you set above!"
cryptsetup luksOpen ${LOOPBACK_DEVICE} ${CRYPT_LABEL}

# Capture errors
if [ $? -ne 0 ]; then
  echo "ERROR - Could not open LUKS CryptoFS. Cannot continue."
  cleanup called
  exit 1
else
  echo "OK - LUKS CryptoFS Opened."
fi

mkfs.ext4 /dev/mapper/${CRYPT_LABEL}

# Capture errors
if [ $? -ne 0 ]
then
  echo "ERROR - File system creation failed. Cannot continue."
  cleanup called
else
  echo "OK - Encrypted file system created."
  echo "Closing handles."
  cleanup nodelete
  exit 0
fi

After you save the above script to a file and make the file executable (chmod 500 filename), you’re good to go. If you don’t want the encrypted vault file located at /root/crypto/, or want a vault of a different size than the rather small default of 64MB (I’m only saving a handful of AWS keys, so I didn’t need a larger vault file), edit the variables at the top of the script before running it. Once started, follow the prompts and the encrypted vault file is created for you. If an error occurs during the vault creation process, if the vault file already exists, or if you cancel the script, any changes made up to that point are rolled back.
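
For example, assuming you saved the script as /usr/local/bin/setup-crypt (the name and location are up to you):

chmod 500 /usr/local/bin/setup-crypt
sudo /usr/local/bin/setup-crypt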

To mount and access the vault, save the following two scripts, mount-crypt for mounting and umount-crypt for unmounting:

#!/bin/bash

CRYPT_MOUNTPOINT=/mnt/crypto
CRYPT_DISK_FQFN=/root/crypto/cryptdisk
CRYPT_LABEL=crypt-disk
LOOPBACK_DEVICE=`losetup -f`

if [ ! -f ${CRYPT_DISK_FQFN} ]; then
  echo "Crypt disk '${CRYPT_DISK_FQFN}' missing. Cannot continue."
  exit 1
fi

if [ ! -d ${CRYPT_MOUNTPOINT} ]; then
  echo "Mountpoint '${CRYPT_MOUNTPOINT}' missing. Cannot continue."
  exit 1
fi

function check_mounted {
  if grep -qsE "^[^ ]+ $1" /proc/mounts; then
    _RET=true
  else
    _RET=false
  fi
}

check_mounted $CRYPT_MOUNTPOINT
if ${_RET} ; then
  echo "Mountpoint '${CRYPT_MOUNTPOINT}' already mounted. Cannot continue."
  exit 1
fi

losetup ${LOOPBACK_DEVICE} ${CRYPT_DISK_FQFN} > /dev/null 2>&1

# Capture errors
if [ $? -ne 0 ]; then
  echo "ERROR - Loopback device in use."
  exit 1
else
  echo "OK - Loopback device mapped."
fi

cryptsetup luksOpen ${LOOPBACK_DEVICE} ${CRYPT_LABEL} > /dev/null 2>&1

# Capture errors
if [ $? -ne 0 ]; then
  echo "ERROR Opening LUKS CryptoFS. Removing the loopback device."
  losetup -d ${LOOPBACK_DEVICE}
  exit 1
else
  echo "OK - LUKS CryptoFS Opened."
fi

mount /dev/mapper/${CRYPT_LABEL} ${CRYPT_MOUNTPOINT} > /dev/null 2>&1

# Capture errors
if [ $? -ne 0 ]; then
  echo "ERROR mounting CryptoFS."
  cryptsetup luksClose /dev/mapper/${CRYPT_LABEL}
  losetup -d ${LOOPBACK_DEVICE}
  exit 1
else
  echo "OK - Mounted CryptoFS."
  exit 0
fi

And here is the corresponding umount-crypt script:

#!/bin/bash

CRYPT_MOUNTPOINT=/mnt/crypto
CRYPT_DISK=/root/crypto/cryptdisk
CRYPT_LABEL=crypt-disk

# Find the loopback device currently associated with the crypt disk
# (the sed strips the trailing colon from the losetup -j output)
LOOPBACK_DEVICE=`losetup -j ${CRYPT_DISK} | awk '{print $1}' | sed '$s/.$//'`

CAN_RELEASE=true
if grep -qsE "^[^ ]+ ${CRYPT_MOUNTPOINT}" /proc/mounts; then
  umount ${CRYPT_MOUNTPOINT} > /dev/null 2>&1
  
  if [ $? -ne 0 ]; then
    echo "WARNING - Could not unmount ${CRYPT_MOUNTPOINT}! Device busy."
    CAN_RELEASE=false
  else
    echo "Crypto-disk was unmounted."
  fi  
else 
  echo "Crypto-disk was not mounted."
fi

if $CAN_RELEASE; then
  if [ -b /dev/mapper/${CRYPT_LABEL} ]; then
    cryptsetup luksClose /dev/mapper/${CRYPT_LABEL} > /dev/null 2>&1
  fi

  losetup -d ${LOOPBACK_DEVICE} > /dev/null 2>&1
fi

Similarly, make these scripts executable before running them. If you modified the encrypted vault location/name or the mount point location during the creation process, you’ll want to make corresponding changes to the variables at the top of these scripts.

You can place these utility scripts in /usr/local/bin or another location on your path (or symlink them from a location on your path) to avoid having to type the full path every time.
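
For example, assuming you keep the scripts in /opt/crypto:

ln -s /opt/crypto/setup-crypt /usr/local/bin/setup-crypt
ln -s /opt/crypto/mount-crypt /usr/local/bin/mount-crypt
ln -s /opt/crypto/umount-crypt /usr/local/bin/umount-crypt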

With the encrypted vault created using setup-crypt, you can then mount the vault with mount-crypt, access its contents at /mnt/crypto, and finally unmount the vault with umount-crypt. Since the vault is protected by a single password, be sure to set an appropriately strong password to match the required security level.
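
A typical session then looks something like this (the file being copied is just an example):

sudo mount-crypt
sudo cp ~/aws-credentials.txt /mnt/crypto/
sudo umount-crypt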

To further improve security, you probably want to unmount the vault whenever you’re not logged in. Most likely the contents of a vault such as this are intended for interactive use. You can always unmount, and hence “lock”, the vault with the umount-crypt command, but it is a good idea to run umount-crypt automatically at logout. Depending on your shell, create/edit .zlogout (zsh), .bash_logout (bash), or .logout (tcsh/csh) in the user’s home directory (likely /root, since opening/closing loopback handles can only be done by root), and place the following code in it:

#!/bin/zsh
# NOTE: You need to adjust the path to the login shell above

/opt/crypto/umount-crypt

I also close the vault at system shutdown/reboot by symlinking the following script from /etc/rc6.d/S40umount-crypto:

#!/bin/bash
#
# umount-crypto - Unmounts a crypto-drive if mounted
# -> convenience script to be called in the shutdown/reboot sequence of Ubuntu
#    from /etc/rc6.d, e.g. as "/etc/rc6.d/S40umount-crypto"

start() {
	echo "umount-crypto: nothing to do!"
}

stop() {
	echo "Unmounting LUKS CryptoFS filesystem..."
	umount /mnt/crypto > /dev/null 2>&1
	cryptsetup luksClose /dev/mapper/crypt-disk > /dev/null 2>&1
	# Release whichever loopback device the crypt disk is attached to
	LOOPBACK_DEVICE=`losetup -j /root/crypto/cryptdisk | awk '{print $1}' | sed '$s/.$//'`
	[ -n "${LOOPBACK_DEVICE}" ] && losetup -d ${LOOPBACK_DEVICE} > /dev/null 2>&1
}

status() {
	echo "No status available."
}

restart() {
	echo "restart ..."
	start
}

reload() {
	echo "reload ..."
	start
}

force_reload() {
	echo "force-reload ..."
	start
}

case $1 in
	start)
	start
	;;

	stop)
	stop
	;;

	status)
	status
	;;

	restart)
	restart
	;;

	reload)
	reload
	;;

	force-reload)
	force_reload
	;;

	*)
	echo "This is a non-interactive crypto-disk unmount script."
	;;

esac

exit 0
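
To hook the script into the reboot sequence, make it executable and symlink it into /etc/rc6.d (the /opt/crypto location is just where I keep these scripts; adjust as needed):

chmod 755 /opt/crypto/umount-crypto
ln -s /opt/crypto/umount-crypto /etc/rc6.d/S40umount-crypto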

And that’s all there is to it! With your files safely inside a locked, encrypted vault, only you and the NSA have access to them! 😉

P.S.
To utilize the vault with Bitnami Cloud Tools, I have created folders for each AWS account I want to access under /mnt/crypto/, e.g. /mnt/crypto/aws_account_a, /mnt/crypto/aws_account_b, etc. Each folder contains the same set of files (as found in the bitnami-awstools-x.x-x/config folder), like so:

aws-config.txt
aws-credentials.txt
ec2.crt
ec2.key

To switch from one account to another, I (re-)symlink the contents of the desired account’s folder into bitnami-awstools-x.x-x/config/, for example:

ln -sf /mnt/crypto/aws_account_b/* /opt/bitnami-awstools-x.x-x/config/

This way, once the vault is locked, access to any and all of the AWS accounts via the cloud tools goes away. Switching between the accounts could, of course, easily be scripted as well.
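
A minimal sketch of such a switcher might look like this (the script name “aws-switch” and the paths are just my assumptions; adjust to your setup):

#!/bin/bash
# aws-switch - re-link Bitnami Cloud Tools config to the given AWS account
# Usage: aws-switch aws_account_b

VAULT=/mnt/crypto
CONFIG_DIR=/opt/bitnami-awstools-x.x-x/config

if [ -z "$1" ] || [ ! -d "${VAULT}/$1" ]; then
  echo "ERROR - Account folder '${VAULT}/$1' not found (is the vault mounted?)."
  exit 1
fi

ln -sf ${VAULT}/$1/* ${CONFIG_DIR}/
echo "OK - Cloud tools config now points to '$1'."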

Introducing duplicity-nfs-backup, or How to Use duplicity-backup Safely with NFS/CIFS Shares

After completing the nfs_automount script a bit over a week ago, I soon realized that rdiff-backup, which I had planned to use with the now-nearly-guaranteed-to-be-online NFS shares, would not work. I then turned to my other favorite *NIX server backup solution, duplicity, with the duplicity-backup.sh wrapper script. Duplicity utilizes gzip-based archives, which work much better with NFS/CIFS shares. Besides avoiding the other odd problems rdiff-backup has with NFS, this resolves the more obvious issue of conflicting users/permissions between the client and the NFS share host, since duplicity doesn’t maintain a direct mirrored copy of the files being backed up.

The only problem was that since duplicity creates incremental backups, and I generally like to keep backups around for several months, the incrementals are really never needed beyond a couple of weeks. Beyond that, in my applications the day-by-day backups are overkill and should be pruned. Duplicity provides an option to do so (“remove-all-inc-of-but-n-full”), but duplicity-backup.sh hadn’t implemented it, so I first contributed a patch to zertrin’s project. Then I proceeded to write a wrapper for the wrapper to add the extra pre-backup checks, and duplicity-nfs-backup was born.
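
For reference, the underlying duplicity operation looks roughly like this (the target URL is just an example, and --force is required for duplicity to actually delete anything):

duplicity remove-all-inc-of-but-n-full 3 --force file:///mnt/backup/myhost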

So what is duplicity-nfs-backup? It is a wrapper script designed to ensure that an NFS/CIFS-mounted target directory is indeed accessible before commencing with a backup. While duplicity-backup.sh can be used to back up to a variety of media (ftp, rsync, sftp, local file…), duplicity-nfs-backup is specifically intended to be used with NFS/CIFS shares as backup targets.

The script that was the impetus for writing duplicity-nfs-backup, nfs_automount, attempts to keep the NFS shares online at all times, but the client system can’t always help with such situations. What if the target system becomes unreachable due to a network problem? Or what if a disk or a filesystem mount fails on the target while the share is still available? In any of these cases duplicity-backup/duplicity would back up into an empty mountpoint. duplicity-nfs-backup adds the necessary checks to ensure that this won’t happen, and it also issues log/syslog warnings when a backup fails due to a share that has gone M.I.A.
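
The core idea is a guard along these lines before handing off to duplicity-backup.sh (a simplified sketch only, not the actual duplicity-nfs-backup code; the mountpoint is a placeholder):

BACKUP_MOUNTPOINT=/mnt/backup

# Refuse to run if the NFS/CIFS share is not actually mounted at the target
if ! grep -qsE "^[^ ]+ ${BACKUP_MOUNTPOINT}" /proc/mounts; then
  logger -t duplicity-nfs-backup "WARNING - '${BACKUP_MOUNTPOINT}' is not mounted; skipping backup."
  exit 1
fi

# ...otherwise proceed with the duplicity-backup.sh run as usual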

I mentioned earlier that duplicity-nfs-backup is “a wrapper for the wrapper.” Paraphrasing zertrin, it is important to note that duplicity-nfs-backup IS NEITHER duplicity, NOR is it duplicity-backup! It is only a wrapper script for duplicity-backup, also written in bash.

This means that you will need to install and configure duplicity and duplicity-backup.sh before you can utilize duplicity-nfs-backup. I also recommend making use of nfs_automount, as it significantly improves the chances that the NFS target share will be online when duplicity-nfs-backup attempts to access it.

This script is intended to be run from crontab. duplicity-nfs-backup takes no arguments; simply set the configuration parameters in duplicity-nfs-backup.conf and you’re done!
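
For example, a nightly run from root’s crontab might look like this (the installation path here is just an assumption):

# m h dom mon dow  command
30 2 * * * /usr/local/bin/duplicity-nfs-backup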

Like nfs_automount, duplicity-nfs-backup is also distributed under the MIT license.

Clone or download duplicity-nfs-backup from my GitHub repository, and let me know if you come across any problems (or also if it works fantastically and saves the day! :)). Pull requests are always welcome.

NFS Automount, The Fourth Iteration (the complete rewrite)

** Note: This post was significantly altered on 18 July 2013 from the original, which was posted a few days earlier.

A few days ago I released the fourth iteration of the NFS Automount script, with some minor changes to the previous version from December 2011. The earlier versions were released in May 2011 (the first CentOS Linux version) and in July 2010 (originally written for FreeBSD).

Upon releasing the fourth version I realized the script was becoming brittle, the logic was, well, somewhat illogical, and minor refactoring would not help. Hence this complete rewrite of the script, now called “nfs_automount”, was born. It is conceptually based on the older versions, and I also borrowed some ideas from the AutoNFS script on Ubuntu’s Community Wiki.

Like the earlier version, the goal of this script is to provide static (i.e. /etc/fstab-like) NFS mounts, while at the same time supporting cross-mounts between servers.

The other non-fstab alternative is to lazy-mount NFS shares with autofs (where available), but with autofs the NFS shares are not continually maintained. When a remote share is accessed, it takes a few moments for it to become accessible while autofs mounts the share on demand. And although autofs times out a mounted share after a period of inactivity, it does not unmount the share before the timeout has lapsed if the remote server becomes inaccessible. While on-demand mounting may save some bandwidth, it is not suitable for all applications. Furthermore, when a system has one or more actively mounted shares from a server that goes offline, unexpected behavior is often observed on the client until the now-defunct NFS shares are unmounted or the remote server becomes available once again.

nfs_automount offers a solution:

  • The NFS shares are not statically defined in /etc/fstab so that the system startup is not delayed even when the remote server is not available. As soon as the shares become available they’re automatically mounted. If multiple servers cross-mount NFS shares from each other, and the servers are turned on at the same time, nfs_automount ensures that all mounts are established as soon as the shares become available.
  • The shares are monitored at a frequency you define, for example, every 60 seconds. If a share has become dismounted or stale, or its exporting server has become inaccessible, nfs_automount takes action to correct the situation: it attempts to remount dismounted and stale shares (stale shares are first unmounted), and unmounts shares whose remote NFS service has disappeared to prevent impact on the client system’s stability. Once a remote NFS service comes back online, or the definition of a previously stale share is reinstated, any shares that were unmounted as a result of those conditions are remounted.
  • The script is intended to run as a daemon (an upstart job script is provided for Ubuntu), and it reads its configuration from /etc/nfs-automount.conf, where you can conveniently define the shares to be mounted and monitored along with some other options. You can also set the ‘RUNTYPE’ option to ‘cron’ and run the script from crontab if you so choose.
  • You can define the shares to be mounted either as Read/Write, or Read Only. Of course, a share will be Read Only regardless of this setting if it has been exported as Read Only on the remote server.
  • An option to define a remote check file is provided. If one is defined for a share, the check file’s unreachability can signal a problem on the exporting server, such as a failed filesystem mount, even when the NFS share is otherwise working correctly. You can easily expand this feature to add additional functionality.
  • Provides clear logging, with alerts by default and more informative detail if you set ‘DEBUGLOG’ to ‘true’.
  • Written in bash with modular, clear syntax.
  • Tested on Ubuntu 12.x (should also work on Debian) and CentOS 6.x (should also work on RedHat). The service installation instructions (available on GitHub) have been written for Ubuntu, so if you’re installing the script for CentOS/RedHat, you will need to alter the installation steps somewhat. FreeBSD is no longer explicitly supported, but I believe it should work with minor modifications. I have not tested with Solaris or other *NIX environments. If you try, please post comments here!
  • Can be easily run as a service (upstart script is provided), or from crontab; the script works with crontab with just a single configuration switch change.
  • Distributed under MIT license.

Rather than posting the code (now 400+ lines) here, I have created a repository on GitHub from where it is easy to download or clone.

Enjoy! 🙂

Finding public IP on Linux command line

Here’s a handy command to display the Internet-facing IP address on a Unix/Linux command line. This is particularly useful on systems where lynx is not available, and which may be behind a firewall so that the public IP cannot be discerned from the ifconfig output.

curl -s myip.dk | grep 'IP Address' | egrep -o '[0-9.]{3,}+'
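
If myip.dk is ever unavailable (or changes its page layout, breaking the grep), a few alternative services tend to work just as well, assuming they are reachable from your network:

curl -s ifconfig.me
curl -s icanhazip.com
dig +short myip.opendns.com @resolver1.opendns.com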