Addressing Virtualization’s Achilles Heel
The benefits of virtualization are quite obvious, but when you start to really increase the density of virtual machines to maximize utilization, suddenly it isn't such a simple proposition. The latest CPUs from AMD and Intel are more than up to the task of running 10-20 or more applications at a time; most servers run out of memory and I/O bandwidth well before they run out of processing power. The leading server vendors have recently announced products that address the memory side by packing more DIMMs onto a single motherboard (including blade server boards), but you can only add so many Ethernet cards and Fibre Channel HBAs. Oh yeah, and then there are the switch ports to go with them (blade systems help a lot here).
If you are part of the elite group of infrastructure and operations managers who are pushing the VM density envelope, then 10GbE may be your better option. Most VMs individually don't consume the full bandwidth of a single GbE NIC, but the standard network configuration for an ESX host is quickly becoming 6 NICs and 2 FC ports. The NICs cover the service console, the VMkernel, and the VM network, and you need two of each for redundancy, for a total of six. Each of these NIC connections requires a separate data center uplink cable. On top of this, the more VMs you add, the more bandwidth is consumed, which requires more ports, and that means a lot of connections. Even if each VM only consumes 10% of a single GbE link, you're running out of I/O very quickly. Plus every VM is sharing a limited set of physical NICs – heaven forbid you might actually want to enforce quality of service or give any of these VMs their own physical NIC, as is often the case.
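To put rough numbers on that, here's a back-of-the-envelope sketch of how the cabling and shared bandwidth add up. The 16-host count and the 10%-of-GbE-per-VM figure are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sketch of host port counts and shared-NIC bandwidth.
# All inputs are hypothetical assumptions for illustration.

GBE_LINK_GBPS = 1.0          # one GbE uplink
NICS_PER_HOST = 6            # 2x service console, 2x VMkernel, 2x VM network
FC_PORTS_PER_HOST = 2        # redundant Fibre Channel HBA ports
VM_NIC_SHARE = 0.10          # assume each VM averages ~10% of a GbE link

def cabling_for_hosts(hosts: int) -> int:
    """Total data center uplink cables for a group of ESX hosts."""
    return hosts * (NICS_PER_HOST + FC_PORTS_PER_HOST)

def vms_before_saturation() -> float:
    """How many 10%-of-GbE VMs fill one active VM-network link
    (assuming an active/standby pair, so one link carries the traffic)."""
    return GBE_LINK_GBPS / (VM_NIC_SHARE * GBE_LINK_GBPS)

if __name__ == "__main__":
    print(f"Cables for 16 hosts: {cabling_for_hosts(16)}")              # 128 uplinks
    print(f"VMs before the VM-network link is full: {vms_before_saturation():.0f}")
```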
10GbE can address the NIC-sharing scenario, and Ethernet storage options such as iSCSI and the forthcoming Fibre Channel over Ethernet (FCoE) – yes, I know Cisco says it's ready today – can save you tremendously on HBA costs. The harder problem is the need for more truly physical connections.
The NIC vendors are addressing this scenario with SR-IOV (single-root I/O virtualization) technology, which splits 10GbE NICs granularly and dynamically so you can set quality-of-service parameters for the virtual NICs that share these pipes. But it's a virtual solution; if you still need physical NICs, you're out of luck.
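To make the idea concrete, here's a minimal sketch of carving a shared 10GbE pipe into virtual NICs with guaranteed bandwidth floors. This is my own toy model, not any vendor's SR-IOV interface, and the names and numbers are illustrative:

```python
# Minimal model of carving a 10GbE NIC into virtual NICs with QoS minimums.
# Illustrative sketch only, not an SR-IOV driver or management API.
from dataclasses import dataclass, field

LINK_GBPS = 10.0

@dataclass
class VirtualNic:
    name: str
    min_gbps: float   # guaranteed floor
    max_gbps: float   # cap (can burst above the floor when the link is idle)

@dataclass
class SharedNic:
    vnics: list = field(default_factory=list)

    def add_vnic(self, vnic: VirtualNic) -> None:
        committed = sum(v.min_gbps for v in self.vnics) + vnic.min_gbps
        if committed > LINK_GBPS:
            raise ValueError("guaranteed minimums exceed the 10GbE link")
        self.vnics.append(vnic)

nic = SharedNic()
nic.add_vnic(VirtualNic("vm-network", min_gbps=4.0, max_gbps=10.0))
nic.add_vnic(VirtualNic("vmkernel",   min_gbps=2.0, max_gbps=5.0))
nic.add_vnic(VirtualNic("console",    min_gbps=0.5, max_gbps=1.0))
print(f"Headroom left: {LINK_GBPS - sum(v.min_gbps for v in nic.vnics):.1f} Gbps")
```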
To address this, HP has released Flex-10 Virtual Connect modules for its c-Class blade systems. These 10GbE switch modules (the technology is also implemented on the 10GbE NICs in the BL495c blade) can split a single 10GbE connection into four physically discrete connections with tunable bandwidth (in 100Mbps increments, up to 10Gbps per connection).
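A quick sketch of that allocation rule as I read HP's description – four connections per 10GbE port, tuned in 100Mbps steps – purely for illustration, not HP's management interface:

```python
# Sketch of a Flex-10-style split: one 10GbE port carved into four
# discrete connections, each tunable in 100 Mbps increments.
# Illustrative model only.

PORT_MBPS = 10_000
CONNECTIONS_PER_PORT = 4
STEP_MBPS = 100

def validate_split(allocations_mbps):
    if len(allocations_mbps) > CONNECTIONS_PER_PORT:
        raise ValueError("a port splits into at most four connections")
    for a in allocations_mbps:
        if a % STEP_MBPS or not (STEP_MBPS <= a <= PORT_MBPS):
            raise ValueError(f"{a} Mbps is not a valid 100 Mbps increment")
    if sum(allocations_mbps) > PORT_MBPS:
        raise ValueError("allocations exceed the 10GbE port")
    return allocations_mbps

# Example: a storage-heavy split that still leaves room for management traffic.
print(validate_split([6_000, 2_000, 1_500, 500]))
```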
With Flex-10 modules and BL495c blades, each physical server gets 8 "physical" NICs (up to 24 with expansion cards), which fan out to 384 "physical" connections coming out of a full bank of switch modules. You can, of course, multiply this number with virtual NICs per VM, since not every VM needs its own physical NIC. And each of these connections can replace an FC port in an Ethernet storage configuration. If you want to pack a ton of VMs into a tiny package without sacrificing I/O performance, this is an intriguing way to go. Even if you don't use Flex-10 for storage, the density benefits here are worth considering.
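The fan-out math, assuming a fully loaded enclosure of 16 half-height blades (the enclosure count is my assumption; the per-server figures come from above):

```python
# Fan-out arithmetic for Flex-10 connection counts.
# Assumes 16 half-height blades per enclosure (my assumption); the 8 and 24
# per-server figures are from the discussion above.

NICS_PER_BLADE_DEFAULT = 8     # built-in Flex-10 connections per BL495c
NICS_PER_BLADE_MAX = 24        # with expansion cards
BLADES_PER_ENCLOSURE = 16      # assumed full enclosure of half-height blades

print(BLADES_PER_ENCLOSURE * NICS_PER_BLADE_DEFAULT)  # 128 connections
print(BLADES_PER_ENCLOSURE * NICS_PER_BLADE_MAX)      # 384 connections
```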
As we stated in our report on 10GbE futures earlier this year, the move to 10GbE is a pricey upgrade today but is more easily justified as part of IT infrastructure consolidation, since so much more consolidation can be achieved. Blade servers and even VMware constantly face similar price-justification challenges but are winning more and more customers through this same cost analysis. You'll have to include the switch upgrades in your analysis, but if you can achieve 2x or greater consolidation in doing so, the investment may be well worth it.
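One way to frame that analysis is a simple break-even comparison. Every cost figure below is a placeholder, not vendor pricing; plug in your own quotes:

```python
# Trivial consolidation break-even sketch. All cost figures are placeholders.

def net_savings(servers_before: int, consolidation_ratio: float,
                cost_per_legacy_server: float, cost_per_10gbe_server: float,
                switch_upgrade_cost: float) -> float:
    """Net savings of consolidating onto fewer, 10GbE-attached hosts."""
    servers_after = servers_before / consolidation_ratio
    legacy_cost = servers_before * cost_per_legacy_server
    new_cost = servers_after * cost_per_10gbe_server + switch_upgrade_cost
    return legacy_cost - new_cost

# Hypothetical: 2x consolidation, pricier 10GbE hosts, plus switch upgrades.
print(f"Net savings: ${net_savings(40, 2.0, 8_000, 12_000, 60_000):,.0f}")
```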
By James Staten
Check out James’ research