Expanding VMware Workstation VM partition

A few days ago I set up CentOS 5.5 on VMware Workstation 7.1 for PHP debugging. During the installation I shrank the suggested VM disk size from the default 20 GB to 10 GB, thinking there'd be plenty of space (being more familiar with FreeBSD systems, which generally have a rather small footprint). But once the installation was complete, the root partition had just a couple of hundred megabytes of free space remaining. Argh!

After looking around for a solution for a few moments I downloaded the free Parted Magic Live CD, which includes GParted, Clonezilla, and a number of other utilities in an easy-to-use package. In the end, extending the CentOS partition was a snap. After shutting down the CentOS VM, I first extended the VM disk from 10 GB to 20 GB in the Workstation 7.1 VM settings (VM settings > Hard Disk > Utilities > Expand).
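The same disk expansion can also be done from the command line with the vmware-vdiskmanager utility that ships with Workstation. A sketch, assuming the VM is powered off and has no snapshots; the disk filename below is a placeholder for your VM's actual .vmdk:

```shell
# Grow the virtual disk to 20 GB. The VM must be powered off and
# must not have snapshots. "centos.vmdk" is a placeholder filename.
vmware-vdiskmanager -x 20GB centos.vmdk
```

Either way, this only grows the virtual disk itself; the guest's partitions still have to be resized afterwards (which is where GParted comes in).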

Then I edited the VM’s .vmx file by adding the following statement:

bios.bootDelay = "10000"

This slows down the VM's boot sequence by adding a 10-second delay, making it easier to focus the VM screen (with a mouse click) and hit F2 before the VM startup commences. Note that you need to click the area of the VM screen where the POST (boot) info is displayed to give it focus; the similarly colored (black) area closer to the edges of the VM display is at this point (during the boot) actually outside of the VM "screen", so clicking it will not focus the VM, and without focus pressing F2 does nothing. The other alternative (to ensure that the VM enters its virtual BIOS settings) is to add the statement:

bios.forceSetupOnce = "TRUE"

.. in the .vmx file.

Once in the VM BIOS settings I changed the boot order so that the CD/DVD drive was the first boot device. I then popped the Parted Magic CD in the drive and rebooted the VM. With Parted Magic up and running I started the Partition Editor (GParted) and moved the 1.4 GB linux-swap partition to the end of the newly expanded 20 GB disk space. Next I expanded the third partition ("/home") to a total of 6.8 GB and moved it to the right as well, back to back with the swap partition. Finally I gave the root partition ("/") the rest of the free space, for a total size of 11.7 GB. Once the operations had been applied (the requested changes written to disk), I exited GParted and shut down Parted Magic, choosing "reboot system" on exit.

Once CentOS finished booting, I checked the partition sizes with 'df -h' to confirm that the root and home partitions reflected the extra space assigned to them. You might also want to restore the hard drive as the primary boot device in the VM BIOS settings so that a CD/DVD you later forget in the drive won't try to boot instead of the VM.
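The verification step can be scripted as well. A small sketch (the 1 GB threshold is just an illustrative number, not anything CentOS requires):

```shell
# Show human-readable sizes for all mounted filesystems,
# as in the manual 'df -h' check above.
df -h

# Script form: warn if / has less than 1 GB free.
# POSIX 'df -P' keeps each filesystem on one line; column 4 is the
# available space in 1K blocks (1048576 KB = 1 GB).
free_kb=$(df -P / | awk 'NR==2 {print $4}')
if [ "$free_kb" -lt 1048576 ]; then
    echo "warning: less than 1 GB free on /"
else
    echo "root filesystem OK: ${free_kb} KB free"
fi
```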

Pidgin on CentOS 5.x

If you follow the Pidgin installation instructions for RHEL/CentOS on CentOS, you'll get the following error with XMPP (Google Talk, Openfire): "Server does not use any supported authentication method."

The pidgin.repo for yum available on the Pidgin download page does not work. Instead, install Pidgin from the CentOS [base] updates repository. Of course, it'll be a few versions behind the current release, as one might expect from the CentOS base distribution.
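If you already installed the repo file from the download page, the cleanup is roughly this (a sketch — the .repo filename is whatever the download instructions had you save, and yum commands need root):

```shell
# Remove the non-working repo file fetched per the Pidgin instructions
# (filename assumed; check /etc/yum.repos.d/ for the actual name).
rm /etc/yum.repos.d/pidgin.repo

# If a Pidgin build from that repo is already installed, remove it first:
yum remove pidgin

# Then install the version shipped in the CentOS base/updates repos:
yum install pidgin
```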

Things I didn’t know about ESXi

I’m setting up a development server on a VMware ESXi virtual server, running CentOS 5.5 x64 and FreeBSD 8.0 x64. Currently the second installation pass is in progress. Being new to ESX/ESXi, there were a couple of things I didn’t realize:

First (the reason for the reinstall): if there is plenty of hard drive space available, it’s a good idea not to deplete it all for the system installations. I had split a 1.3 TB RAID 5 array between the two operating systems before I realized that 1) you can’t shrink VMFS partitions, and 2) by consuming all hard drive space you limit the flexibility of the system down the line. Say you want to install a newer version of an operating system and decide to do a fresh install: you need space for it, while you also want to keep the old version around at least long enough to migrate settings and data over.
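The asymmetry is worth spelling out: growing a guest's virtual disk later is easy, but there is no corresponding way to shrink a VMFS datastore. A sketch of the grow operation with vmkfstools from the ESXi console, with the VM powered off (the datastore path, VM name, and size are placeholders):

```shell
# Extend an existing virtual disk to 200 GB; run on the ESXi host
# with the VM powered off. Path and size are placeholders.
vmkfstools -X 200G /vmfs/volumes/datastore1/centos/centos.vmdk
```

As with Workstation, the guest's own filesystem still has to be resized afterwards.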

Second, while I was aware that ESXi doesn’t offer console access beyond the “yellow and grey” terminal, I didn’t realize you have no access to the VM consoles, either. So with CentOS or FreeBSD installed, the only way to access their consoles is via the vSphere client (someone correct me if I’m wrong — I wish I were, as I’d like to have local console access to the guest OSes).

Finally, VMware Go “doesn’t currently support ESXi servers with multiple datastores”. So if you have, say, a 3ware/LSI/AMCC RAID controller that isn’t currently supported under ESXi as a boot device but that you likely still want to use as a datastore, you’ll end up with at least two datastores — which makes vSphere really the only way to go for VM management for this reason as well. (Since LSI provides a VMware-specific driver, one might also be able to direct-connect the LSI RAID array to a VM without it being an ESXi datastore, but that’s not the configuration I’m looking for: the boot device is small and houses just ESXi, while the VMs and their associated datastores live on the array.)

In the end everything’s working quite well. I like the flexibility virtualization offers.. and consolidation is useful even in a small environment (one dev machine is less than two or three dev machines :)).

Explorations in the World of Linux

I’ve been a FreeBSD admin for the past decade, and during this time have become quite familiar with the *BSD system. It has its quirks, but overall it’s very clean and easy to maintain.

From time to time – usually when I’ve been getting ready to upgrade to the next major revision of FreeBSD – I’ve taken some time to research the current pros and cons of FreeBSD vs. some Linux distro. Always, in the end, FreeBSD has won. However, a development project I’m starting to work on will utilize Zend Server, which is only supported on a handful of common Linux distros and on Windows (which is, by default, not an option, as I strongly maintain that Windows is not suitable as a web server platform). There is, of course, a Linux compatibility layer in FreeBSD, but as Zend doesn’t currently support it as a platform for Zend Server, I wouldn’t feel comfortable using it in a production environment.

So even though I find FreeBSD superior to Linux in many ways, I’ve now spent some time getting acquainted with Linux. I first started with Red Hat, then moved to CentOS, which is the distribution I’m currently testing. Now it’s not bad, per se, but I frequently come back to the thought: “Why would someone, anyone prefer THIS over a BSD system?!” Package management with yum, rpm, and the GUI overlays is easy enough, but it’s chaotic! Having to enable and disable repos, set their priorities, and so on seems unnecessarily complicated. On the FreeBSD side there is the ports collection, which provides most of the software one could ever imagine needing. The odd few items that either aren’t available in ports, or whose configuration isn’t complete enough through ports, can easily be compiled from the source tarball. Everything’s quite easy to keep track of, and to duplicate when building a new system.

I’m sure some of this feeling stems from the fact that I have been using a BSD system for so long, and from the fact that I probably don’t yet know Linux well enough (say, to build the system from scratch..). But as far as I can tell, package management on CentOS is done with yum and rpm, by adjusting repository priorities and enabling/disabling repositories. That is messy!
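To illustrate the juggling, a third-party repo file on CentOS looks something like the following. This is a made-up example (the repo name and URLs are invented), and the priority line only has an effect if the yum-priorities plugin is installed:

```ini
# /etc/yum.repos.d/example.repo -- hypothetical third-party repository
[example]
name=Example third-party packages for CentOS $releasever
baseurl=http://repo.example.com/centos/$releasever/$basearch/
# Disabled by default; enable per command with --enablerepo=example
enabled=0
gpgcheck=1
gpgkey=http://repo.example.com/RPM-GPG-KEY-example
# Lower number wins; requires the yum-priorities plugin
priority=20
```

Multiply that by a handful of repos, each potentially shipping overlapping packages, and the contrast with a single ports tree becomes clear.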

Well, I now have a functional development server running Zend Server with Apache, Subversion, and MySQL, and as the vendor (Zend) dictates the rules, I must continue development on Linux. Perhaps in six months’ time I’ll have more favorable comments about it compared to FreeBSD… but I sort of doubt it. My guess is I’ll just learn to live with it, every now and then wistfully glancing in the direction of the BSD server.