
RANDOM.AC/CESS

Musings of a Web Technologist

Archives for: freebsd

FreeBSD: Installed ports in chronological order

Posted on 20 January 2012 by Ville

An easy way to list the installed ports in FreeBSD in chronological order (most recent first):

Shell
ls -latT /var/db/pkg | less
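The -t flag does the real work here: it sorts by modification time, newest first (and on FreeBSD, -T additionally shows the complete timestamp). The idea can be sanity-checked against any directory; here is a small sketch using a scratch directory and made-up package names as a stand-in for /var/db/pkg:

```shell
# Create a scratch directory standing in for /var/db/pkg,
# with two "package" entries registered at different times.
demo=$(mktemp -d)
mkdir "$demo/oldpkg-1.0"
sleep 1                          # ensure distinct mtimes (1 s fs resolution)
mkdir "$demo/newpkg-2.0"

# -t sorts by modification time, newest first -- the heart of the one-liner.
newest=$(ls -1t "$demo" | head -n 1)
echo "most recently installed: $newest"

rm -rf "$demo"
```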

Filed Under: Technical, UNIX Tagged With: chronologically, freebsd, installed, ports

NFS automount evolves

Posted on 19 December 2011 by Ville

** NOTE: This version is obsolete! The latest version can be found here.

I’ve updated the NFS automount script that provides “self-healing” NFS mounts. The script now allows a mount to be defined as read-write or read-only, and subsequently monitors that the share stays mounted as R/W or R/O (of course, it can’t mount as R/W a share that has been exported as R/O). Both Linux (tested on CentOS 6.1) and FreeBSD versions are provided.

Since various systems can provide cross-mounts via NFS, and they may be started or rebooted at the same time, any given share may or may not be available at each system’s boot time. With this script, the mounts become available soon after the respective share does (simply adjust the run frequency in crontab to the needs of your specific application). Also, by keeping the NFS mount points out of fstab, the boot process is not delayed by a share that is not [yet] available.

First for CentOS/Linux:

Shell
#!/bin/sh
 
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
 
# set mount/remount request flags
mount=false
remount=false
 
# remote system name
remotesystem="$1"
 
# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi
 
# remote share name
remoteshare="$3"
 
# local mount point
mountpoint="$4"
 
# file to indicate local mount status
testfile=${mountpoint}/"$5"
 
# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile
 
# command locations
pingcmd=/bin/ping
showmountcmd=/usr/sbin/showmount
grepcmd=/bin/grep
mountcmd=/bin/mount
umountcmd=/bin/umount
statcmd=/usr/bin/stat
touchcmd=/bin/touch
rmcmd=/bin/rm
 
# --- end variables ---
 
# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`
 
if [ "${statresult}" != "" ]; then
   # result not empty: mountpoint is stale; force-unmount it
   ${umountcmd} -f ${mountpoint}
fi
 
# ping the remote system (2 sec timeout)
${pingcmd} -w2 -c1 -q ${remotesystem} > /dev/null 2>&1
 
# make sure the remote system is reachable
if [ "$?" -eq "0" ]; then
 
   # query the availability of the remote share; a non-empty result indicates OK
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then
 
      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then
 
         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`
 
         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then
 
            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then
 
               # all set to go; request mount
               mount=true
            fi
        
         else
            
            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then
 
               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1
 
               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then
 
                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi
 
                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}
 
               # handle RO mounted shares:
               else
 
                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi
 
# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi
 
# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi
 
exit 0

Then for FreeBSD/UNIX:

Shell
#!/bin/sh
 
SHELL=/bin/sh
PATH=/etc:/bin:/sbin:/bin:/usr/bin:/usr/sbin:/usr/local/bin
 
# set mount/remount request flags
mount=false
remount=false
 
# remote system name
remotesystem="$1"
 
# rw/ro
if [ "$2" = "rw" ]; then
    mountmode="-w"
else
    mountmode="-r"
fi
 
# remote share name
remoteshare="$3"
 
# local mount point
mountpoint="$4"
 
# file to indicate local mount status
testfile=${mountpoint}/"$5"
 
# rw test file
rw_testfile=${mountpoint}/nfs_enforcer_rw_testfile
 
# command locations
pingcmd=/sbin/ping
showmountcmd=/usr/bin/showmount
grepcmd=/usr/bin/grep
mountcmd=/sbin/mount
umountcmd=/sbin/umount
statcmd=stat
touchcmd=/usr/bin/touch
rmcmd=/bin/rm
 
# --- end variables ---
 
# make sure the mountpoint is not stale
statresult=`${statcmd} ${mountpoint} 2>&1 | ${grepcmd} "Stale"`
 
if [ "${statresult}" != "" ]; then
   # result not empty: mountpoint is stale; force-unmount it
   ${umountcmd} -f ${mountpoint}
fi
 
# ping the remote system (2 sec timeout)
remoteping=`${pingcmd} -c1 -o -q -t2 ${remotesystem} | grep " 0.0%"`
 
# make sure the remote system is reachable
if [ "${remoteping}" != "" ] ; then
  
   # query the availability of the remote share; a non-empty result indicates OK
   offsiteshare=`${showmountcmd} -e ${remotesystem} | ${grepcmd} "${remoteshare}"`
   if [ "${offsiteshare}" != "" ] ; then
  
      # make sure the local mount point (directory) exists (so that [re-]mount, if necessary, is valid)
      if [ -d ${mountpoint} ] ; then
 
         localmount=`${mountcmd} | ${grepcmd} "${mountpoint}"`
      
         # make sure the share test file is _not_ present (to make sure the mountpoint is inactive)
         if [ ! -f ${testfile} ] ; then
        
            # make sure the local mountpoint is inactive (double checking)
            if [ "${localmount}" = "" ] ; then
 
               # all set to go; request mount
               mount=true
            fi
              
         else
 
            # make sure the local mountpoint is active (double checking)
            if [ "${localmount}" != "" ] ; then
 
               # attempt to create a test file..
               ${touchcmd} ${rw_testfile} > /dev/null  2>&1
 
               # ..and test its existence; first handle RW mounted shares:
               if [ -f ${rw_testfile} ] ; then
 
                  # share was RO requested
                  if [ "$2" = "ro" ]; then
                     remount=true
                  fi
 
                  # Delete the testfile
                  ${rmcmd} ${rw_testfile}
 
               # handle RO mounted shares:
               else
 
                  # share was RW requested
                  if [ "$2" = "rw" ]; then
                     remount=true
                  fi
               fi
            fi
         fi
      fi
   fi
fi
 
# perform remount (unmount, request mount)
if $remount ; then
   ${umountcmd} -f ${mountpoint}
   mount=true
fi
 
# perform mount when so requested
if $mount ; then
   ${mountcmd} ${mountmode} -t nfs ${remotesystem}:${remoteshare} ${mountpoint}
fi
 
exit 0

You should run the automount script from a runfile, like so:

Shell
#!/bin/sh
 
NFS_ENFORCE=/usr/local/sbin/nfs_enforcer
 
# Separate the following parameters with spaces:
#
# - nfs enforcer command (set above)
# - remote system name (must be resolvable)
# - read/write (rw) or read-only (ro); NOTE: share may be read-only regardless of how this is set
# - remote share name (from remote's /etc/exports)
# - local mount point (existing local directory)
# - share test file (an immutable file on the share)
 
# e.g.
# $NFS_ENFORCE dbsysvm rw /nfs4shares/conduit /mnt/dbsys_conduit .conduit@dbsysvm
# or (for local remount read-only)
# $NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost
 
$NFS_ENFORCE localhost ro /var/web/projects/repository /mnt/rorepo .repository@localhost
 
exit 0

..and call the above runfile from the system crontab (note the user field in the entry below):

Shell
*/10  *  *  *  *  root  /usr/local/sbin/nfs_enforcer.batch > /dev/null

Filed Under: Technical, UNIX Tagged With: automount, enforcer, freebsd, Linux, monitoring, nfs, self-healing, unix

Things I didn’t know about ESXi

Posted on 06 June 2010 by Ville

I’m setting up a development server on a VMware ESXi virtual server host, running CentOS 5.5 x64 and FreeBSD 8.0 x64 as guests. Currently, the second installation pass is in progress. Being new to ESX/ESXi, I didn’t realize a couple of things:

First (the reason for the reinstall): if there is plenty of hard drive space available, it’s a good idea not to allocate all of it to the system installations. I split a 1.3 TB RAID 5 array between the two operating systems before I realized that 1) you can’t shrink VMFS partitions, and 2) by consuming all the hard drive space you limit the flexibility of the system down the line. Let’s say you want to install a newer version of an operating system and decide to do a fresh install: you need space for it, while you want to keep the old version around at least long enough to migrate settings and data over.

Second, while I was aware that ESXi doesn’t offer console access beyond the “yellow and grey” terminal, I didn’t realize you have no local access to the VM consoles, either. So, with CentOS or FreeBSD installed, the only way to access their consoles is via the vSphere client (someone correct me if I’m wrong — I wish I were, as I’d like to have local console access to the guest OSes).

Finally, VMware Go “doesn’t currently support ESXi servers with multiple datastores”. So if you have, say, a 3ware/LSI/AMCC RAID controller that isn’t currently supported under ESXi as a boot device, but which you likely still want to use as a datastore, you’ll end up with at least two datastores. For this reason, too, vSphere is really the only way to go for VM management (since LSI provides a VMware-specific driver, one may also be able to direct-connect the LSI RAID array to the VM without it being an ESXi datastore, but that’s not the configuration I’m looking for—the boot device is small and houses just ESXi, while the VMs and their associated datastores live on the array).

In the end, everything’s working quite well. I like the flexibility virtualization offers, and consolidation is useful even in a small environment (one dev machine is less than two or three dev machines :)).

Filed Under: Technical, UNIX, Virtualization Tagged With: centos, esxi, freebsd, virtual, vmware

Explorations in the World of Linux

Posted on 05 September 2009 by Ville

I’ve been a FreeBSD admin for the past decade, and during this time have become quite familiar with the *BSD system. It has its quirks, but overall it’s very clean and easy to maintain.

From time to time – usually when I’ve been getting ready to upgrade to the next major revision of FreeBSD – I’ve taken some time to research the current pros and cons of FreeBSD vs. some Linux distro. Every time, in the end, FreeBSD has won. However, a development project I’m starting to work on will utilize Zend Server, which is only supported on a handful of common Linux distros and on Windows (which is not an option, as I strongly maintain that Windows is not suitable as a web server platform). There is, of course, a Linux compatibility layer in FreeBSD, but as Zend doesn’t currently support it as a platform for Zend Server, I wouldn’t feel comfortable using it in a production environment.

So even though I find FreeBSD superior to Linux in many ways, I’ve now spent some time getting acquainted with Linux. I started with Red Hat, then moved to CentOS, which is the distribution I’m currently testing. Now, it’s not bad per se, but I frequently come back to the thought: “Why would someone, anyone, prefer THIS over a BSD system?!” Package management with yum, rpm, and the GUI overlays is easy enough, but it’s chaotic! Having to enable and disable repos, set their priorities, etc. seems unnecessarily complicated. On the FreeBSD side there is the ports collection, which provides most of the software one can imagine ever needing. The odd few items that either aren’t available in ports, or whose configuration isn’t complete enough through ports, can easily be compiled from the source tarball. Everything’s quite easy to keep track of, and to duplicate when building a new system.

I’m sure some of this feeling stems from the fact that I have been using a BSD system for so long, and from the fact that I probably don’t yet know Linux well enough (say, to build the system from scratch..). But as far as I can tell, package management is done with yum and rpm (on CentOS, say), by adjusting repository priorities and enabling/disabling repositories. That is messy!
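For what it’s worth, the repository priority juggling mentioned above is typically done with the yum-priorities plugin and per-repository settings along these lines (a hypothetical .repo file for illustration; the repository name and URL below are made up):

```
# /etc/yum.repos.d/example.repo -- hypothetical third-party repository
[example]
name=Example Repository
baseurl=http://mirror.example.com/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=1
# With the yum-priorities plugin installed, lower values win
# when the same package appears in multiple repositories.
priority=10
```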

Well, I now have a functional development server running Zend Server with Apache, Subversion, and MySQL, and as the vendor (Zend) dictates the rules, I must continue development on Linux. Perhaps in six months’ time I’ll have more favorable comments about it as compared to FreeBSD… but I sort of doubt it. My guess is I’ll just learn to live with it, every now and then wistfully glancing in the direction of the BSD server.

Filed Under: Technical, UNIX Tagged With: bsd, centos, freebsd, Linux


Blog Author, Ville Walveranta

Information Architect, Application Developer, Web Technologist






Copyright © Ville Walveranta 2018 - All Rights Reserved · Powered by WordPress and the Genesis Framework