Importing Installer ISOs to ESXi 5.x Datastore

The VMware ESXi 5.x service console no longer offers the ftpget command. But all is not lost.. 🙂 Follow the steps below to transfer your favorite OS installer ISO to the datastore, from where you can conveniently mount it as if it were a DVD/CD-ROM:

  1. Download the 64-bit ncftp binary distribution to your local (Windows) workstation.
  2. Open vSphere, highlight the hostname, select the desired storage unit from the Storage list under Resources (I usually use the datastore on the “system” disk for this purpose), right-click on it to open the context menu, then choose “Browse Datastore”.
  3. With the datastore root highlighted, click on “Upload files to this datastore” (the fourth icon from the left, with the green “up” arrow). Select the ncftp binary package (that you downloaded in step 1) from your local Windows hard drive.
  4. After making sure SSH login is enabled on the Yellow and Grey screen (Troubleshooting Options > Enable SSH), log in to the service console via SSH.
  5. Move the ncftp binary archive to /opt from the Datastore root, decompress it, and cd to the ncftp folder:
    cd /opt
    mv /vmfs/volumes/yourdatastore/ncftp* .
    tar zxvf ncftp*
    cd ncftp*
  6. Open ESXi firewall egress (outbound) rules so that the FTP client can access the outside FTP server:
    esxcli network firewall ruleset set -r ftpClient -e true
  7. With your favorite installer ISO available on an FTP server (perhaps on the distribution server, or on your local FTP server), you can now access it, like so:
    ./ncftpget -v -E -u ftpuser -p password ftp.example.com /vmfs/volumes/mydatastore '/remotefilename.iso'

    Command from left to right:
    ./ncftpget is the command itself (the “./” path prefix is needed since you’re in the ncftp directory)
    -v shows a progress indicator
    -E uses active (non-passive) mode (omitting it makes the connection passive)
    -u ftpuser is the username for the remote FTP server
    -p password is the password for the remote FTP server (note: this is NOT sent encrypted!)
    ftp.example.com is the FTP host name or IP address
    /vmfs/volumes/mydatastore is the path where you want to store the data on the local system
    /remotefilename.iso is the remote file (with full remote path if needed) to be transferred; the local file name will be the same

  8. Optionally, close the firewall egress rule for the FTP client again by issuing the following command:
    esxcli network firewall ruleset set -r ftpClient -e false
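The shell portion of the steps above (5 through 8) can be sketched as a single session on the ESXi host. The datastore names, credentials, FTP host, and ISO filename are placeholders — substitute your own:

```shell
# On the ESXi host via SSH, after uploading the ncftp archive to the datastore.
cd /opt
mv /vmfs/volumes/yourdatastore/ncftp* .
tar zxvf ncftp*
cd ncftp*

# Open the outbound FTP rule, fetch the ISO to the datastore, then close the rule.
esxcli network firewall ruleset set -r ftpClient -e true
./ncftpget -v -E -u ftpuser -p password ftp.example.com /vmfs/volumes/mydatastore '/remotefilename.iso'
esxcli network firewall ruleset set -r ftpClient -e false
```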

Now, when starting an OS installation in ESXi, you can highlight the newly created blank server instance, click the CD-ROM icon on the toolbar (“CD/DVD drive 1” > “Connect to ISO image on a datastore…”), and then navigate to and select the ISO image you just transferred to your favorite datastore!

Marvell 88E8056 and ESXi 4.1

So I have an older development/experimental server that runs a couple of VMs on ESXi 4.1. The server’s motherboard (ASUS P5BV-C/4L) is from an old workstation, and it has four integrated NICs that would be nice to be able to use.. except that the default build of ESXi 4.1 doesn’t see them (even though ESXi 4.1 technically supports Marvell 88E8056 NICs).

There are several pages that discuss the issue extensively and have a lot of good information on them. Yet another page has a quick lowdown on how to get the driver properly installed.

However, not having worked with the ESXi CLI for some time, I had forgotten, for example, that the busybox environment ESXi uses wipes changes to the root files on every reboot. After a while I recalled (from an old note) that to save changes to /etc/vmware/ I would need to execute /sbin/ 0 /bootbank/ after making the edits. But even that was unnecessary.

One sentence on the brief recap page would have saved me a couple of hours tonight. So here it is: “Just upload the attached oem.tgz into the /bootbank folder with scp, then reboot, and you’re done!” And when you do that, you are done – the pre-prepared oem.tgz works perfectly!
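From a workstation, that upload amounts to something like the following (the host name is a placeholder, and SSH access must be enabled on the ESXi host as described earlier):

```shell
# Copy the pre-built oem.tgz (from the linked page) to the ESXi host's /bootbank,
# then reboot so the driver is picked up. "esxi-host" is a placeholder.
scp oem.tgz root@esxi-host:/bootbank/
ssh root@esxi-host reboot
```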

Yes, had I known, I would’ve known, but I didn’t. 🙂 Hopefully this saves time for someone else!

Things I didn’t know about ESXi

I’m setting up a development server using a VMware ESXi virtual server running CentOS 5.5 x64 and FreeBSD 8.0 x64. Currently, the second installation pass is in progress. Being new to ESX/ESXi, there were a couple of things I didn’t realize:

First (the reason for the reinstall), if there is plenty of hard drive space available, it’s a good idea not to deplete it all for the system installations. I had split a 1.3 TB RAID 5 array between the two operating systems before I realized that 1) you can’t shrink VMFS partitions and 2) by consuming all hard drive space, you limit the flexibility of the system down the line. Let’s say you want to install a newer version of an operating system and decide to do a fresh install. You need space for it while you want to keep the old version around at least long enough to migrate settings and data over.

Second, while I was aware that ESXi doesn’t offer console access beyond the “yellow and grey” terminal, I didn’t realize you have no local access to the VM consoles, either. So, with CentOS or FreeBSD installed, the only way to access their consoles is via the vSphere client (someone correct me if I’m wrong — I wish I were, as I’d like to have local console access to the guest OSes).

Finally, VMware Go “doesn’t currently support ESXi servers with multiple datastores”. So if you have, say, a 3ware/LSI/AMCC RAID controller that isn’t currently supported as an ESXi boot device but that you likely still want to use as a datastore, you’ll end up with at least two datastores, and vSphere is really the only way to go for VM management for this reason as well. (Since LSI provides a VMware-specific driver, one might also be able to attach the LSI RAID array directly to the VM without it being an ESXi datastore, but that’s not the configuration I’m looking for: the boot device is small and houses just ESXi, while the VMs and their associated datastores are located on the array.)

In the end everything’s working quite well. I like the flexibility virtualization offers.. and consolidation is useful even in a small environment (one dev machine is less than two or three dev machines :)).