Importing Installer ISOs to ESXi 5.x Datastore

VMware ESXi 5.x’s service console no longer offers the ftpget command. But all is not lost… 🙂 Follow the steps below to transfer your favorite OS installer ISO to the datastore, from where you can conveniently mount it as if it were a DVD/CD-ROM:

  1. Download 64-bit ncftp binary distribution to your local (Windows) workstation.
  2. Open vSphere, highlight the hostname, select the desired storage unit from the Storage list under Resources (I usually use the datastore on the “system” disk for this purpose), right-click on it to open the context menu, then choose “Browse Datastore”.
  3. With the root highlighted, click on the “Upload files to this datastore” icon (the fourth from the left, with the green “up” arrow). Select the ncftp binary package (that you downloaded in step 1) from your local Windows hard drive.
  4. After making sure SSH login is enabled on the Yellow and Grey screen (Troubleshooting Options > Enable SSH), log in to the service console via SSH.
  5. Move the ncftp binary archive from the datastore root to /opt, decompress it, and cd into the ncftp folder:
    cd /opt
    mv /vmfs/volumes/yourdatastore/ncftp* .
    tar zxvf ncftp*
    cd ncftp*
  6. Open ESXi firewall egress (outbound) rules so that the FTP client can access the outside FTP server:
    esxcli network firewall ruleset set -r ftpClient -e true
  7. With your favorite installer ISO available on an FTP server (perhaps on the distribution server, or on your local FTP server), you can now access it, like so:
    ./ncftpget -v -E -u ftpuser -p password some.remotehost.com /vmfs/volumes/mydatastore '/remotefilename.iso'

    Command breakdown, from left to right:
    ./ncftpget: the ncftpget command (the “./” path prefix is needed since you’re in the ncftp directory)
    -v: show a progress indicator
    -E: use active (non-passive) mode; omitting it makes the connection passive
    -u ftpuser: username for the remote FTP server
    -p password: password for the remote FTP server (note: FTP sends this unencrypted!)
    some.remotehost.com: the FTP server’s host name or IP address
    /vmfs/volumes/mydatastore: the local path where you want to store the file
    /remotefilename.iso: the remote file to transfer (with full remote path if needed); the local file name will be the same

  8. Optionally, if you want to close the firewall egress rule for the FTP client again, you can do so by issuing the following command (a quick way to verify the ruleset state follows this list):
    esxcli network firewall ruleset set -r ftpClient -e false
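
Whether open or closed, you can verify the ruleset’s current state with a quick listing (grep is available in the ESXi busybox shell):

    esxcli network firewall ruleset list | grep ftpClient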

Now, when starting an OS installation in ESXi, you can highlight the newly created blank server instance, click on the CD-ROM icon on the toolbar (“CD/DVD drive 1” > “Connect to ISO image on a datastore…”), and then navigate to and select the ISO image you just transferred to your favorite datastore!

LSI/3Ware 9650SE and ESXi 4.1

I needed to reinstall a dev ESXi 4.1 host. The system has an LSI (AMCC/3Ware) 9650SE RAID controller in it, and after a fresh ESXi install the array was nowhere to be found. Oh yes, the drivers are not part of the ESXi installation package (it had been a while since I did the initial install…)! A quick tour around the web produced the patch command:

perl vihostupdate.pl -server x.x.x.x -username root -password "xxxxxxx" -b c:\AMCC_2.26.08.035\vm40-offline_bundle-179560.zip -i

I recalled I had in the past renamed the lengthy bundle file to ‘offline_bundle.zip’, and did so this time, too, to make the command easier to type. Executing the command (with the driver bundle named c:\offline_bundle.zip), I got an error message: “No matching bulletin or VIB was found in the metadata.” Some more Googling, and I found a mention: »After shorting [sic] the name of the original file to offline-bundle.zip and re-running the command, I did get positive feedback in the form of this message: The update completed successfully.» So the name has something to do with it!

Interestingly, my experience was exactly the opposite of what I found in a blog post from 2009: the driver bundle only worked with its original name (so the above command, which can also be found in the instructions, is the correct one). So not only should one read the instructions, but follow them, too! 😉
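
If you want to double-check the result afterwards, vihostupdate can also query the bulletins installed on the host; a sketch, using the same placeholder address and credentials as above:

perl vihostupdate.pl -server x.x.x.x -username root -password "xxxxxxx" --query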

Marvell 88E8056 and ESXi 4.1

So I have an older development/experimental server that runs a couple of VMs on ESXi 4.1. The server’s motherboard (ASUS P5BV-C/4L) is from an old workstation, and it has integrated quad NICs which it would be nice to be able to use… except that the default build of ESXi 4.1 doesn’t see them (even though ESXi 4.1 technically supports Marvell 88E8056 NICs).

There are several pages that discuss the issue extensively and have a lot of good information on them. Yet another page has a quick lowdown on how to get the driver properly installed.

However, not having worked with the ESXi CLI for some time, I had forgotten, for example, that the BusyBox-based in-memory filesystem ESXi uses wipes changes to the root files on every reboot. After a while I recalled (from an old note) that to save changes to /etc/vmware/simple.map I would need to execute /sbin/backup.sh 0 /bootbank/ after making the edits. But even that turned out to be unnecessary.
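
For reference, the manual route I was trying to recall would have looked something like this (unnecessary in the end, but worth noting):

vi /etc/vmware/simple.map        # add the PCI ID to driver mapping by hand
/sbin/backup.sh 0 /bootbank/     # persist the edit so it survives a reboot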

One sentence on the brief recap page would have saved me a couple of hours tonight. So here it is: »Just upload the attached oem.tgz into /bootbank folder with scp, then reboot, and you’re done!» And when you do that, you are done – the pre-prepared oem.tgz works perfectly!
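
In practice that amounts to something like the following (the host name below is just a placeholder for your ESXi host, and SSH must be enabled as usual):

scp oem.tgz root@esxi-host:/bootbank/    # esxi-host is a placeholder
ssh root@esxi-host reboot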

Yes, had I known, I would’ve known, but I didn’t. 🙂 Hopefully this saves time for someone else!

Expanding a VMware Workstation VM Partition

A few days ago I set up CentOS 5.5 on VMware Workstation 7.1 for PHP debugging. During the installation I shrunk the suggested VM disk size from the default 20GB to 10GB, thinking there would be plenty of space (being more familiar with FreeBSD systems, which generally have a rather small footprint). But once I had completed the installation, the root partition had just a couple of hundred megabytes of free space remaining. Argh!

After looking around for a solution for a few moments, I downloaded the free Parted Magic Live CD, which includes GParted, Clonezilla, and a number of other utilities in an easy-to-use package. In the end, extending the CentOS partition was a snap. After shutting down the CentOS VM, I first extended the VM disk from 10GB to 20GB in Workstation 7.1 (VM settings > Hard Disk > Utilities > Expand).
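
As an aside, the same expansion can be done from the command line with Workstation’s bundled vmware-vdiskmanager; a sketch with a hypothetical .vmdk name (the VM must be powered off, and any snapshots removed first):

vmware-vdiskmanager -x 20GB "CentOS55.vmdk"    # grow the virtual disk to 20GB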

Then I edited the VM’s .vmx file by adding the following statement:

bios.bootDelay = "10000"

This slows down the VM’s boot sequence by adding a 10-second delay so that it’s easier to focus the VM screen (with a click of the mouse) and hit F2 before the VM startup commences. Note that you need to click the area of the VM screen where the POST (boot) info is being displayed to give it focus; the similarly colored (black) area closer to the edges of the VM display is, at this point during the boot, actually outside of the VM “screen”, so clicking there will not focus the VM, and without focus pressing F2 does nothing. The other alternative (to ensure that the VM enters its virtual BIOS settings) is to add the statement:

bios.forceSetupOnce = "TRUE"

… in the .vmx file.
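
For clarity, here are the two statements side by side as they would appear in the .vmx file. Note that bios.bootDelay is given in milliseconds (10000 = 10 seconds), and bios.forceSetupOnce, to my understanding, resets itself to FALSE after it has been used once:

bios.bootDelay = "10000"
bios.forceSetupOnce = "TRUE"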

Once in the VM BIOS settings, I changed the boot order so that the CD/DVD drive is the first boot device. I then popped the Parted Magic CD in the drive and rebooted the VM. With Parted Magic up and running, I started the Partition Editor (GParted) and moved the 1.4GB linux-swap partition to the end of the newly expanded 20GB disk space. Next I expanded the third partition (“/home”) to a total of 6.8GB, and moved it to the right as well, back to back with the swap partition. Finally, I gave the root partition (“/”) the rest of the free space, for a total size of 11.7GB. Once the operations had been applied (the requested changes written to the disk), I exited GParted and shut down Parted Magic, choosing “reboot system” on exit.

Once CentOS finished booting, I checked the partition sizes with ‘df -h’ to confirm that the root and home partitions reflect the extra space assigned to them. You might also want to restore the hard drive as the primary boot device in the VM BIOS settings, so that a CD/DVD you might later forget in the drive won’t try to boot instead of the VM.
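
In shell terms, the post-reboot sanity checks amount to something like this (a sketch; output omitted):

df -h        # / and /home should now show the expanded sizes
swapon -s    # confirm the relocated swap partition is still active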