I have been meaning to write this up for a while; Scott Drummonds’ ‘Love Your Balloon Driver’ post today at his Virtual Performance blog gave me a nice reminder. I actually caught a sneak peek at the graphs, with an explanation from Scott, at his instructor-led lab at VMworld 2009. Scott calls out that the only workload they discovered that suffers from balloon driver activity is Java. The reason Java has trouble with balloon driver activity is that Java itself runs in a VM, so the guest OS cannot properly determine which pages should be swapped out when the balloon driver calls for it.
My experience causes me to agree with Scott and the whitepaper he cites: in a properly designed and properly sized environment, the balloon driver is not detrimental to most workloads, at least up to a point. However, I recently discovered at a client site that the balloon driver can cause significant issues when the environment is poorly designed and under-sized. Here’s the background:
I was called into an already established environment where the client was running on an older blade with VMware ESX 3.5. The blade maxed out at 16GB of RAM and had dual dual-core CPUs with no hope of an upgrade. On the blade was a single guest VM running Windows 2003 with SQL Server 2005, in all its 32-bit glory. The VM was configured with 4 vCPUs and 16GB of memory. Some of you can probably already guess where this is going….
The x86 Windows guest had PAE enabled, and SQL Server took advantage of AWE to use memory beyond the 4GB limit of a 32-bit system. Additionally, the Windows guest had the /3GB switch enabled in boot.ini. Finally, per SQL Server best practices, the ‘Lock Pages in Memory’ permission was granted to the SQL Server service account. What the guest was left with was 1GB of kernel mode memory and 15GB of user mode/AWE-addressable memory.
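To make that split concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers simply restate the configuration above (the /3GB split and the 16GB granted to the VM); nothing here is pulled from a Windows or VMware API.

```python
# Rough model of the 32-bit Windows + /3GB + PAE/AWE split described above.
# Working in MB; all figures are the ones quoted in this post.

GB = 1024

vm_configured_memory = 16 * GB  # memory granted to the guest VM
kernel_space         = 1 * GB   # /3GB shrinks the kernel's share to 1GB...
user_va_space        = 3 * GB   # ...and grows user mode virtual address space to 3GB

# AWE lets SQL Server map physical pages beyond the 4GB virtual limit, so
# everything not reserved for the kernel is effectively addressable by SQL.
sql_addressable = vm_configured_memory - kernel_space

print(f"Kernel mode memory        : {kernel_space // GB} GB")    # 1 GB
print(f"User mode/AWE addressable : {sql_addressable // GB} GB") # 15 GB
```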
And here’s the problem. The client was running ESX, not ESXi, so the Service Console required memory; in this case, approximately 512MB was allocated to it. Furthermore, VMs require some overhead on ESX to run, and the memory overhead consumed by a Windows guest on ESX 3.5 with 4 vCPUs and 16GB of memory is a bit more than 512MB. On a properly sized ESX server with multiple similar guests/workloads, you could probably gain much of that overhead back through transparent page sharing, but in this case I had a 1:1 P2V ratio. If you do the math, you can see the environment was running about 1GB short of memory. A quick check of the balloon driver stat in vCenter showed that the balloon driver was constantly active, demanding about 1GB back from the guest… constantly.
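Here is that math as a small sketch; treat it as a rough budget, not an exact accounting. The 512MB Service Console and overhead figures are the approximate ones quoted above, and real ESX 3.5 overhead varies with vCPU count and configured memory.

```python
# Rough host memory budget for this blade (values in MB).
# These are the approximate figures from the post, not numbers read from vCenter.

host_physical        = 16 * 1024  # 16GB blade
service_console      = 512        # Service Console allocation (ESX, not ESXi)
vm_overhead          = 512        # ~overhead for a 4 vCPU / 16GB Windows guest on ESX 3.5
vm_configured_memory = 16 * 1024  # what the single guest was granted

available_for_guest = host_physical - service_console - vm_overhead
shortfall           = vm_configured_memory - available_for_guest

print(f"Available for the guest : {available_for_guest} MB")  # ~15360 MB
print(f"Shortfall (ballooned)   : {shortfall} MB")             # ~1GB, matching the balloon stat
```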
Under normal circumstances this might not be an issue, but in this case the Windows guest was being absolutely punished. The guest CPUs were pegged at 100% with an excessive amount of kernel time, often an indicator of IO issues, and indeed I saw terrible disk and network performance on the guest. At the root of the problem is this: the Lock Pages in Memory permission allows SQL Server to get a firm grip on the user mode memory available to it (15GB) and lock it up. That left the guest kernel, already starved by the /3GB switch in boot.ini, with its 1GB as the only thing the balloon driver could really force the guest to swap out.
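To illustrate why the balloon driver had so little room to work, here is a toy model of what the guest could realistically hand back. The split is the one described above; the function is purely illustrative, not how ESX actually accounts for ballooned pages.

```python
# Toy model: memory the guest can give up when the balloon inflates, assuming
# pages pinned by 'Lock Pages in Memory' are off the table. Illustrative only.

def balloon_reclaimable(guest_memory_mb: int, locked_pages_mb: int) -> int:
    """Whatever is not pinned by SQL Server is what the balloon driver can
    realistically pressure the guest to page out."""
    return max(guest_memory_mb - locked_pages_mb, 0)

GB = 1024
reclaimable    = balloon_reclaimable(guest_memory_mb=16 * GB, locked_pages_mb=15 * GB)
balloon_demand = 1 * GB  # roughly what vCenter showed the balloon driver asking for

print(f"Reclaimable without touching locked pages : {reclaimable} MB")     # 1024 MB
print(f"Balloon demand                            : {balloon_demand} MB")  # 1024 MB
# The entire ~1GB demand lands on the kernel's 1GB, hence the punished guest.
```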
The client suggested a 16GB reservation on the VM, knowing that memory reservations prevent balloon driver activity. I calmly asked them to back away from the keyboard while I explained that if a starved guest was bad, a starved Service Console would be far worse. In the end the fix was quite easy: I convinced the customer to reduce the amount of memory allocated to the guest by about 1GB, enough to let the 512MB Service Console and the roughly 512MB of overhead run without contention. I was able to show them the difference between allocated and active memory in vCenter; the 1GB being surrendered was not really being actively used, SQL Server just had it locked up. In fact, surrendering that 1GB back to ESX breathed new life into the guest VM, bringing its performance back in line with expectations.
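The right-sizing itself is just the earlier budget turned around: size the guest to what is left after the Service Console and the VM’s overhead. A minimal sketch, again using the post’s approximate 512MB figures:

```python
# Right-size the guest so the Service Console and the VM's own overhead can
# run without contention. Figures are the post's approximations (MB).

host_physical   = 16 * 1024
service_console = 512
vm_overhead     = 512
original_guest  = 16 * 1024

recommended_guest_memory = host_physical - service_console - vm_overhead

print(f"Recommended guest memory : {recommended_guest_memory} MB")                   # ~15GB
print(f"Reduction from original  : {original_guest - recommended_guest_memory} MB")  # ~1GB
```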
Ideally, I would have brought in a bigger ESX server that could host additional VMs, driving greater efficiency across the environment; it just wasn’t an option for this client. In the end, the problem was fixed and I was reminded just how much fun it can be to explain some of these backwards-sounding virtualization concepts to customers: fewer vCPUs can lead to better guest performance, less guest memory can fix performance issues, and increasing the number of similar guests on a host can improve performance, up to a point, thanks to transparent page sharing.
Stay tuned over the next few weeks as I digest and write up my VMworld experience, from VMUG activities to Paul Maritz’s press conference announcing vCloud Express, and plenty of great sessions in between. Like many of you, I returned from VMworld with quite a backlog of work, but I’ll do my best to squeeze in some posts and tweets.