Few technologies have become a fundamental part of the data center as quickly as server virtualization. That’s because the basic value proposition is so easy to grasp: When you run many logical servers on a single physical server, you get a lot more out of your hardware, so you can invest in fewer physical servers to handle the same set of workloads. It almost sounds like found money.
The details, of course, are more complicated. The hypervisor, a thin layer of software upon which you deploy virtual servers, is generally wrapped into a complete software solution that incurs some combination of licensing, support, and maintenance costs, depending on which virtualization software you choose. And you very likely will need to upgrade to server processors that support virtualization.
On the other hand, reducing the number of servers yields indirect cost savings — less space to rent, less cooling to pay for, and of course lower power consumption.
Even more compelling is virtualization’s inherent agility. As workloads shift, you can spin up and spin down virtual servers with ease, scaling to meet new application demands on the fly.
The path to rolling out a virtualized infrastructure has its share of pitfalls. You need to justify the initial cost and disruption in a way that does not create unrealistic expectations. And you need to know how to proceed with your rollout, to minimize risk and ensure performance stays at acceptable levels.
Making the case for server virtualization

It’s pretty easy to sell server virtualization. Who doesn’t want to get the most possible use out of server hardware? In fact, the basic idea is so compelling, you need to be careful not to oversell. Make sure you account for the likely capital equipment, deployment, training, and maintenance costs. The real savings achieved by virtualization, as with so many other new technologies, tend to accrue over time.
Most virtualization deployments require new hardware, mainly because hypervisors require newer processors that support virtualization. So the best time to roll out virtualization is when you need to add servers to your existing infrastructure or when it’s time to replace aging hardware.
The superior efficiency of newer servers will help make your case. Begin by calculating the power consumption and cooling levels the current infrastructure requires. (Ideally, this should be conducted on a server-by-server basis, which can be time-consuming, but will result in far more accurate numbers.) Then check the same specs for the hardware you plan to buy to get an idea of any power and cooling cost savings.
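To make that comparison concrete, here is a minimal sketch of the power-and-cooling math in Python. The wattages, server counts, electricity rate, and cooling overhead are all hypothetical placeholders; substitute figures from your own spec sheets and utility bills.

```python
# Rough annual power + cooling cost comparison. All figures are
# hypothetical examples, not vendor data.
HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.12        # assumed electricity rate, USD per kWh
COOLING_OVERHEAD = 0.5     # assume ~0.5 W of cooling per W of IT load

def annual_cost(avg_watts: float, server_count: int) -> float:
    """Yearly power + cooling cost for a group of similar servers."""
    it_kwh = avg_watts * server_count * HOURS_PER_YEAR / 1000
    total_kwh = it_kwh * (1 + COOLING_OVERHEAD)
    return total_kwh * COST_PER_KWH

current = annual_cost(avg_watts=400, server_count=20)   # aging 1U servers
proposed = annual_cost(avg_watts=550, server_count=4)   # denser virtualization hosts
print(f"current: ${current:,.0f}/yr  proposed: ${proposed:,.0f}/yr")
```

Even though each virtualization host draws more power individually, running far fewer of them is what drives the savings, and the per-server measurement pass mentioned above is what makes the "current" side of this calculation trustworthy.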
Add the fact that you will be using fewer physical servers for the same workloads, and your proposed virtualized infrastructure will look very, very good compared to the existing one. If the new hardware is sufficiently powerful, you may be able to run many logical servers on each physical unit.
Unfortunately, determining how many virtual servers will fit on a physical host is never an exact science. There are tools that can help, though: some server consolidation planners let you specify the make and model of your current and planned hardware, then monitor your existing infrastructure for a period of time.
Armed with all that data, you can run reports that show exactly how many virtualization hosts you’ll need, what type, and your expected ratio of virtual servers to physical hosts. Some will even calculate the expected power consumption and cooling capacity for the new infrastructure. Investigate the options available from VMware, Microsoft, and others in order to get the most accurate data before you leap into any virtualization project.
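As a rough illustration of the kind of estimate those tools automate, the sketch below sizes hosts by summed CPU and RAM demand while reserving spare headroom. All workload and host figures are invented, and a monitoring-based assessment from a real consolidation tool should always take precedence over a back-of-the-envelope pass like this.

```python
# Back-of-the-envelope consolidation estimate. Workload and host
# figures are hypothetical; real tools base this on measured peaks.
import math

def hosts_needed(vm_specs, host_cpu_ghz, host_ram_gb, headroom=0.25):
    """Hosts required, sized by total CPU and RAM demand with 25% headroom."""
    cpu_demand = sum(v["cpu_ghz"] for v in vm_specs)
    ram_demand = sum(v["ram_gb"] for v in vm_specs)
    usable_cpu = host_cpu_ghz * (1 - headroom)
    usable_ram = host_ram_gb * (1 - headroom)
    # Whichever resource runs out first dictates the host count.
    return max(math.ceil(cpu_demand / usable_cpu),
               math.ceil(ram_demand / usable_ram))

vms = [{"cpu_ghz": 2.0, "ram_gb": 8}] * 30   # 30 similar workloads
n = hosts_needed(vms, host_cpu_ghz=2.4 * 16, host_ram_gb=256)
print(n, "hosts ->", len(vms) / n, "VMs per host")
```

Note that the constrained resource (often RAM, sometimes CPU) is what sets the ratio, which is why the sketch takes the maximum of the two counts rather than averaging them.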
But again, don’t oversell. It’s important for everyone to realize that reducing the number of physical servers does not mean reducing logical servers — and does not necessarily lead to reducing IT staff. In fact, it’s generally beneficial to hire a competent consultant to help plan any virtualization endeavor. Although the basic concepts are simple, the planning, design, and implementation stages can be quite tricky without proper knowledge and experience.
Train before you fire it up

It’s also important to take into account training for existing staff. Virtualizing an existing IT infrastructure means changing the structural foundation of the entire computing platform; in a sense, you’re collecting many eggs into a few baskets. It’s vitally important that IT admins are well versed in managing this infrastructure when it goes live, as virtualization introduces a number of hazards that must be avoided.
If at all possible, make sure your staff is trained before you embark on a full-blown virtualization implementation. Your chosen vendor should provide many options for specific training, or online classes at the very least.
In addition, take advantage of the evaluation periods that many virtualization platforms offer. For example, VMware’s enterprise framework can be downloaded, installed, and run for 60 days without purchase, and that time can prove invaluable to familiarize admins with the tools and function of the proposed environment. There’s no substitute for this type of hands-on experience.
Don’t make the rookie mistake, however, of letting your sandbox training implementation turn into your production platform. When it’s time to fire up a production virtualization foundation for the first time, make sure it’s with a clean install of all components, not a migration from a training tool.
It’s also essential to ensure that training isn’t limited to the software. Hardware considerations are crucial to a virtualization implementation, from the number of Ethernet interfaces, to CPU choices, RAM counts, local and shared storage — the whole works. It’s vitally important that your admins are well versed in the day-to-day operation of supporting tools like SAN array management interfaces and Ethernet or Fibre Channel switches. In a virtualized environment, a mistake that affects a single port on a single server can affect all the virtual servers running on that particular host.
Out with the old

One major benefit of embarking on a virtualization project is that it gives IT the opportunity to jettison old hardware and old frameworks. There’s never a better time to inspect the whole infrastructure and identify components that have fallen through the cracks, aren’t necessary anymore, or can be folded into other tools or projects.
As you step through the planning stages of virtualization, you should pay close attention to anything that can be culled from the back-room herd without too much pain. It will ease the transition and cut down on the number of servers that need to be migrated or rebuilt on the virtualized foundation.
It’s also a good time to inspect the network requirements of the proposed solution. Ethernet trunking to the physical hosts is generally a must in any reasonably sized infrastructure. By trunking, you enable the virtual machines to participate on any trunked network, rather than just the Layer 2 network the host is directly connected to. You can also move virtual servers between networks on the fly. It’s an easy way to bring a substantial amount of flexibility into the mix.
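On a Linux host, 802.1Q trunking of this sort might look like the following sketch, using standard iproute2 commands. The interface name, bridge name, and VLAN ID are all hypothetical, and hypervisor platforms such as VMware expose the same idea through their own virtual-switch and port-group configuration rather than raw OS commands.

```shell
# Illustrative config fragment (requires root; names and IDs are made up).
# eth0 is the trunked uplink carrying tagged traffic from the switch.
ip link add link eth0 name eth0.20 type vlan id 20   # untag VLAN 20 from the trunk
ip link add br-vlan20 type bridge                    # bridge that VMs attach to
ip link set eth0.20 master br-vlan20                 # feed VLAN 20 into the bridge
ip link set eth0.20 up
ip link set br-vlan20 up
```

The matching switch port must be configured as an 802.1Q trunk carrying the relevant VLANs; moving a virtual server to another network then becomes a matter of attaching it to a different VLAN bridge, with no recabling.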
Are you planning on running any virtual servers that need to be linked to a DMZ network? If so, it’s best that they have a dedicated interface for that traffic on each host, although it’s possible to trunk those connections as well. Generally speaking, you should maintain physical separation of trusted and untrusted networks; adding another network interface to your hosts is a minimal cost.