Capital Network Solutions, Inc., Sacramento

Planning for VMware

Virtualization has been around for some time now and is only becoming more popular. The economics make sense, especially when a number of specialized services are involved that do not play well together. Taken together, those services may use little memory or CPU, but vendor compatibility or other support concerns prevent loading multiple services on one system. Rather than buying a dedicated server for each application, a virtual cluster makes great sense. A VMware ESX hardware cluster pools the resources of several physical hosts to run virtual machines. What's even better is that if one of those hosts fails, the virtual machines can continue running on the remaining systems!

The problem we've seen in the field is that many organizations view virtualization as a cure-all: "Hey, I have 3 servers, and now I can have 6 or more on VM!", as if the hardware could suddenly defy the laws of I/O, memory, and CPU. So how can you be sure you're sizing the solution appropriately? As many white papers suggest, knowing your environment and objectives is a good start, but understanding the usage fundamentals of the environment is key. Metrics like CPU utilization, disk I/O, and memory use are gathered through basic performance monitoring. It isn't a slick or new concept, but properly gathering and analyzing performance data will help ensure you don't hit a bottleneck in your new VM environment. If you're lucky, you have an automated solution in place; if not, it's time to fire up the basic system tools, like perfmon, to gather that information.
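Once you have perfmon (or typeperf) logging counters to CSV, the analysis itself is simple. Here's a minimal sketch of averaging one counter column from such an export; the server name, counter header, and sample data below are illustrative, not from a real system:

```python
import csv
import io

# Sample rows in the shape of a perfmon/typeperf CSV export:
# a timestamp column plus one "% Processor Time" counter column.
# The header and values here are made up for illustration.
SAMPLE = """\
"Time","\\\\SERVER01\\Processor(_Total)\\% Processor Time"
"01/01/2024 09:00:00","65.2"
"01/01/2024 09:05:00","72.8"
"01/01/2024 09:10:00","71.4"
"""

def average_counter(csv_text, column_index=1):
    """Average one numeric counter column from a perfmon-style CSV."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip the header row
    values = [float(row[column_index]) for row in reader]
    return sum(values) / len(values)

avg_cpu = average_counter(SAMPLE)
print(f"Average CPU utilization: {avg_cpu:.1f}%")  # prints 69.8%
```

Collect at least a few weeks of samples across business cycles before trusting the average; a quiet afternoon tells you nothing about month-end load.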

Here's an example: let's say a company has 2 servers but would like to add 2 more. Rather than purchase 2 new servers, they opt for a VM solution on the two existing servers. What they don't realize, however, is that each server already averages 70% CPU utilization and has only 256 MB of RAM free. Adding two more servers on those resources is likely to result in poor performance. Additionally, disk I/O is already well above the physical recommendations for the current drives, and no new drives were specified.
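The arithmetic behind that judgment can be sketched as a simple headroom check. The host figures mirror the example above (70% CPU used, 256 MB RAM free); the demands of the new virtual machines are hypothetical placeholders you would replace with your own perfmon numbers:

```python
# Rough consolidation sanity check using the example's figures.
HOST_CPU_HEADROOM_PCT = 100 - 70   # 30% CPU left on each host
HOST_RAM_HEADROOM_MB = 256         # free RAM on each host, in MB

# Assumed (hypothetical) demands of each new virtual machine.
NEW_VM_CPU_PCT = 25
NEW_VM_RAM_MB = 1024

def fits(cpu_headroom, ram_headroom, vm_cpu, vm_ram):
    """Return True only if one extra VM fits within BOTH budgets."""
    return vm_cpu <= cpu_headroom and vm_ram <= ram_headroom

for host in (1, 2):
    ok = fits(HOST_CPU_HEADROOM_PCT, HOST_RAM_HEADROOM_MB,
              NEW_VM_CPU_PCT, NEW_VM_RAM_MB)
    print(f"Host {host}: new VM fits? {ok}")  # False: RAM is the bottleneck
```

Note that a single exhausted resource sinks the plan; CPU headroom alone means nothing if memory or disk I/O is already tapped out.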

If this were running in a test lab, it might be acceptable, but in production, adding those two virtual servers would certainly result in unacceptable performance. In another post, I'll get into SANs (storage area networks) and how they can be properly sized for an environment, virtual or otherwise.