macOS: ‘dig’ vs. ‘dscacheutil’ while using split DNS with Viscosity VPN client

If you’re using the Viscosity VPN client on a Mac and have enabled split DNS for the VPN domains, `dig` doesn’t work quite as someone familiar with Linux/*NIX would expect. This is because Apple has replaced the traditional resolver on macOS with something “more advanced”. Granted, it handles split DNS gracefully, but as a result, querying a VPN domain with `dig` without explicitly specifying a DNS server produces no result, even though name resolution otherwise works fine in macOS.

This works:

~ dig +short random.ac a
52.4.179.30

But this does not:

~ dig +short internal.ville.sh a

However, this does:

~ dscacheutil -q host -a name internal.ville.sh
name: internal.ville.sh
ip_address: 10.50.50.10

To make things easier, I have created the following `zsh` alias:

alias dnsquery='dscacheutil -q host -a name'

However, I’ve forgotten this a few times when a while has passed since my last encounter with `dig` on internal domains. Then time goes down the drain trying to figure out if something is wrong with the DNS.. but it’s working all along! So I added a reminder for myself in the form of a `zsh` wrapper function:

dig() { echo && echo -e '\033[0;97m\033[41m Remember, this is a Mac! Use "dnsquery" instead! \033[0m' && /usr/bin/dig "$@" } #macOS

And so:

~ dig +short internal.ville.sh a

 Remember, this is a Mac! Use "dnsquery" instead!

#oh right! :-)

~ dnsquery internal.ville.sh
name: internal.ville.sh
ip_address: 10.50.50.10
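
As an aside, if you want to see which resolver macOS will actually use for the VPN domains (i.e. to confirm that Viscosity’s split-DNS entries have been registered), `scutil --dns` lists the per-domain resolvers. This is a generic macOS check, not specific to Viscosity:

~ scutil --dns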

Protecting Internet-facing SSH access with MFA on Ubuntu 16.04 (while running standard SSH for allowed addresses)

When it comes to secure access to a remote server, such as an AWS EC2 instance, you have a couple of options. The preferred option is to have the instance (or the server in a data center or other similar environment) within a private network (such as a VPC in AWS), accessible over SSH only via a VPN (either your own OpenVPN setup, or the IPsec offering from AWS). However, going without a fall-back SSH connection is not always practical or feasible, even if it is only used to reach a gateway instance that likely serves as a bastion host. This is obviously not applicable to environments where strictly nothing is ever done over SSH and everything is handled through configuration management, but such environments are few and far between.

The following outlines my preferred method of setting up SSH access on the gateway instance. Because the configuration parameters differ between the MFA-protected but IP-unrestricted SSH server and the one that serves connections from the allowed addresses/CIDR ranges, it is best to run two separate SSH daemons.

Before starting this process, make sure that the normal OpenSSH access to your server/instance has been configured, as this article mainly outlines the deltas from the standard SSH setup. So let’s get started!

  1. Install `libpam-google-authenticator`:
    sudo apt-get install libpam-google-authenticator
  2. Make a copy of the existing `sshd_config`:
    sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config_highport
  3. Modify the newly created `sshd_config_highport` (a consolidated sketch of the resulting changes appears right after this list):
    • Select a different port, such as 22222:
      Port 22222
    • List the users who should be allowed to use the MFA-protected, but IP-unrestricted SSH access:
      AllowUsers myusername
    • Set the Google Authenticator-compatible authentication method:
      # require Google Authenticator after pubkey
      AuthenticationMethods publickey,keyboard-interactive
    • Set `ChallengeResponseAuthentication`:
      ChallengeResponseAuthentication yes
    • If you have configured any `Match Address` or `Match User` entries in your primary SSH server’s sshd_config (the file we copied), they don’t belong in the high-port copy. For example, you might have something like this configured for the primary SSH instance:
      Match address 10.10.10.0/24
      PasswordAuthentication yes
      
      Match User root Address 10.10.10.0/24
      PermitRootLogin yes
      
      Match User root Address 100.100.100.50/32
      PermitRootLogin prohibit-password
      

      If you do, remove them from `sshd_config_highport`.

  4. Make a copy of `/etc/pam.d/sshd`, and modify the copy, like so:
    sudo cp /etc/pam.d/sshd /etc/pam.d/sshd2

    Then add this line at the top of the `sshd2` file:

    auth required pam_google_authenticator.so

    .. and comment out the following line in the file:

    @include common-auth

    like so:

    # @include common-auth
  5. Now run `google-authenticator` as the user you want to be able to log in as over the MFA-protected, but IP-unrestricted SSH service. Do not use root; use an unprivileged user account instead! Once you run it, you will be prompted with: `Do you want authentication tokens to be time-based (y/n)`. Answer `y`, and the system will display a QR code. If you don’t yet have a Google Authenticator-compatible app on your smartphone/tablet, install one; I recommend Authy (Android / iOS). Once you have installed it and created an account in it, scan the QR code off the screen, and also write down the five presented “emergency scratch codes” in a safe place. Then answer the remaining questions, all in the affirmative:

    – Do you want me to update your “~/.google_authenticator” file (y/n) y
    – Do you want to disallow multiple uses of the same authentication token? .. y
    – By default, tokens are good for 30 seconds and in order to compensate .. y
    – .. Do you want to enable rate-limiting (y/n) .. y

  6. Create an `sshd2` symlink to the existing `sshd` executable (required for the service autostart):
    sudo ln -s /usr/sbin/sshd /usr/sbin/sshd2
  7. Add the following in `/etc/default/ssh`:
    # same for the highport SSH daemon
    SSHD2_OPTS=
  8. Create a new `systemd` service file for the `sshd2` daemon:
    sudo cp /lib/systemd/system/ssh.service /etc/systemd/system/sshd2.service

    .. then edit the `sshd2.service` file, changing the existing ExecStart and Alias lines like so:

    ExecStart=/usr/sbin/sshd2 -D $SSHD2_OPTS -f /etc/ssh/sshd_config_highport
    Alias=sshd-highport.service

    Note that the Alias name (as set above) must be different from the service’s file name. In other words, since the file name in the example above is `sshd2.service`, the Alias must be set to something else that doesn’t already exist in the `/etc/systemd/system` directory (here, `sshd-highport.service`).

    Then start the service, test it, and finally enable it (to persist reboots):

    sudo systemctl daemon-reload
    sudo systemctl start sshd2
    sudo systemctl status sshd2
    
    sudo systemctl enable sshd2
    sudo systemctl status sshd2
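
For reference, here is a consolidated sketch of the lines that end up differing in `/etc/ssh/sshd_config_highport` (from step 3 above); the port and username are just the example values used earlier, and the rest of the file stays as copied from your primary `sshd_config`:

Port 22222
AllowUsers myusername

# require Google Authenticator after pubkey
AuthenticationMethods publickey,keyboard-interactive
ChallengeResponseAuthentication yes
# UsePAM yes is already present in Ubuntu's default sshd_config and must stay enabled

You can also verify the configuration syntax before starting the service with `sudo /usr/sbin/sshd2 -t -f /etc/ssh/sshd_config_highport`.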

When you enable the service with `sudo systemctl enable sshd2`, a symlink is created with the name of the alias you defined. Following the above example, it would look like this:

`/etc/systemd/system/sshd-highport.service` [symlink] -> `/etc/systemd/system/sshd2.service` [physical file]

You’re all set! Other considerations: if you’re running this in AWS, remember to configure the Security Groups to allow access to the high-port SSH from any source (assuming you want it to be reachable from anywhere), and adjust the network ACLs and iptables/ufw if you use them. Furthermore, since the MFA-protected SSH port will be publicly accessible, it’s a good idea to install sshguard to cut down the constant brute-force connection attempts (besides stressing the system somewhat, they aren’t much of a threat since the port is now protected by the MFA, which also implements rate limiting).
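
For example, if the instance also runs `ufw`, opening the high port (22222 in the example above) is a one-liner, while port 22 remains limited to the allowed sources you have already configured:

sudo ufw allow 22222/tcp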

Finally, since the MFA is time-based, it’s a good idea to make sure `ntp` is running on your server/instance. Additionally, I run `ntpdate` from the system crontab once a day, so that any drift too large for ntp’s maximum slew rate to correct in a reasonable amount of time still gets corrected at least once a day:

sudo apt-get update
sudo apt-get install ntp
sudo systemctl enable ntp

.. and in `/etc/crontab`:

# Force time-sync once a day (in case of a time difference too big for stepped adjustment via ntp)
0       5       *       *       *       root   service ntp stop && ntpdate -u us.pool.ntp.org && service ntp start

(`ntpdate` won’t run while the `ntp` service is active, hence the stop/start)
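
Once `ntp` is running, a quick way to confirm that it has found its peers and is keeping the clock in sync is the standard peers query:

ntpq -p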

And there you have it! Now when you connect to the server/instance over SSH from a random external IP (such as from a hotel), using a public key for your chosen user account, you will be prompted for an MFA code (which you have to enter from the Authy app) before the connection is authorized. And because the primary SSH daemon still serves the default SSH port 22, and because it is restricted to the private networks (plus possibly some strictly limited public IPs), those “known” or “internal” IPs are not presented with an MFA challenge when connecting.
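
From the client’s perspective a connection to the high port then looks something like this (the hostname is a placeholder; the “Verification code” prompt is produced by pam_google_authenticator once the public key has been accepted):

ssh -p 22222 myusername@gateway.example.com
Verification code: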

OpenVPN with FreeRADIUS: How To Use the CN from the User Cert as the Login Name (i.e. the reverse of “username-as-common-name”)

I recently set up a handful of OpenVPN servers to provide access to various LAN and AWS VPC resources. Initially I had just the certificate validation configured, but I felt slightly uneasy about not having a password. Especially in environments where multiple people need access to a resource, in the event one of them should no longer have access (such as when leaving an organization), the only way to block such a user would be to add their cert to the CRL. While that should be done anyway when a user’s privileges need to be revoked, a password would provide a more immediate and easier way to make such changes.

The next step was to install FreeRADIUS, which proved to be a very easy task. I’m initially running it with just the text-based back-end and will later add MySQL, perhaps with the daloRADIUS GUI to make user administration even easier. On Ubuntu/Debian there is a package “openvpn-auth-radius”, which makes it possible to add FreeRADIUS authentication to an OpenVPN server with one simple line:

plugin /usr/lib/openvpn/radiusplugin.so /etc/openvpn/endpoint_server_radiusplugin.conf

Of course, the client side also needs the auth-user-pass statement in their OpenVPN client configuration.
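
For reference, a minimal client configuration with the password prompt enabled might look something like this (the remote address and file names are placeholders):

client
dev tun
proto udp
remote vpn.example.com 1194
ca ca.crt
cert alice.crt
key alice.key
remote-cert-tls server
auth-user-pass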

But there is a problem: the user cert can be that of Bob while the login username/password is that of Alice, and the login would still be valid. Apparently I’m not the only one who has thought about this. While I didn’t want to hack the PAM auth plugin, the post had enough clues to help create a simple bash script that sets the username based on the common name from the validated user’s certificate:

#!/bin/bash

# $1 provides the temp file name passed in by OpenVPN.
# The file has two lines: username and password, as entered by the user.
# We get the username from the user cert's CN (available via an envvar).

export PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin"

if [ -n "${common_name}" ]; then
  username="${common_name}"
else
  username="-"
fi

if [ -n "$1" ] && [ -f "$1" ]; then
  password=$(tail -1 "$1")
else
  password="-"
fi

radius_server=localhost
# shared secret for localhost (or your RADIUS server) from /etc/freeradius/clients.conf
shared_secret="XXXXXXXX"

# 'radclient -s' prints a summary; grab the count off the "approved auths" line
AUTHCHECK=$(cat << EOF | /usr/bin/radclient -s "${radius_server}" auth "${shared_secret}" | grep approved | tr -d '\n' | tail -c 1
User-Name=${username}
User-Password=${password}
EOF
)

if [[ $AUTHCHECK = 1 ]]; then
  exit 0
else
  exit 1
fi

To use this script, simply save it to /etc/openvpn/endpoint_server_radius_auth.sh, make it executable, and edit it to add the shared secret for the RADIUS server from /etc/freeradius/clients.conf. Finally, add the following lines to the OpenVPN server configuration that already authenticates the users by their certificates:

tmp-dir /dev/shm
auth-user-pass-verify /etc/openvpn/endpoint_server_radius_auth.sh via-file
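
If you want to sanity-check the RADIUS side independently of OpenVPN, you can feed `radclient` the same attribute pairs the script sends (the username and password here are hypothetical; the shared secret is the one from your FreeRADIUS clients configuration):

printf 'User-Name=alice\nUser-Password=secret\n' | radclient -s localhost auth XXXXXXXX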

Now the login name for RADIUS authentication is taken from the CommonName (CN) of the user's certificate; in fact, the username that the user enters when prompted for the auth-user-pass username/password is ignored, and only the password is significant.

The bottom line of this script: it utilizes RADIUS to provide server-side password validation for the certificate's CN. A user can always remove the password protection from their private key, so this approach functions as an extra layer of security while making it easier to quickly revoke a user's access to a resource.

Note: for this to work, the CommonName set in user certificates obviously must be a valid RADIUS login name. A user can't modify the CN in their certificate (unless they're the NSA, since they apparently have access to RSA keys, too 🙁 ), so they're locked into using that specific username.

Also note that I wrote this script on Ubuntu and did not pay particular attention to portability, so you may need to modify it some for other platforms. It is primarily intended as an example (although it does work), as finding something like this would have saved me a few hours of work.

Replacing a Firewall/Gateway and Purging the Upstream ARP Cache with arping in Ubuntu

Over the years I have had to replace various firewall devices at co-location racks, and have equally many times been annoyed by the time it has taken to clear the upstream (co-lo) router/gateway of the stale ARP entries that point to the MAC of the retiring device. Since the external IP normally stays the same, the upstream router/gateway becomes confused, and it takes some time, say half an hour, until the upstream device’s cache expires and the traffic starts to flow normally again.

Facing such a replacement once again, this time I had to figure it out, because the traffic of this particular installation could not be interrupted for 30 minutes (or however long it would take for the upstream cache to clear). I then came across Brian O’Neill’s 2012 article Changing of the Guard – Replacing a firewall and gratuitous ARP, which introduces a solution for situations where there is no administrative access to the upstream devices (so that an immediate purge of the ARP cache could otherwise be triggered). Exactly what I was looking for!

In the article Brian temporarily uses a Linux server with a spoofed MAC address of the new firewall appliance to trigger the ARP cache flush with the help of the arping command. In my case I was installing Shorewall on Ubuntu 12.04, so I could use arping from the firewall server itself. I went ahead and installed arping (apt-get install arping), but it turned out the default arping package on Ubuntu does not include the required “-U” switch (‘unsolicited’, or gratuitous ARP). Fortunately an alternative package, “iputils-arping”, implements the unsolicited switch. With iputils-arping installed the command is still “arping”, and so the command Brian offered works as-is:

arping -U -c 5 -I eth1 192.168.1.1

Where “-c” indicates how many times the information is broadcast, “-I” obviously defines the interface connected to the upstream router/gateway, and the IP is the external IP of your firewall/gateway device.
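
If you’re ever unsure which arping variant you ended up with, checking which package owns the binary on your path settles it (a generic dpkg query, not from Brian’s article):

dpkg -S "$(command -v arping)"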