Single-Sourcing Storage: A Way To Increase Consistency And Control Costs, Or A Dumb Idea…
I was recently taken to task in the twittersphere (I post as @reichmanIT) by @Knieriemen, a VP at virtualization and storage reseller Chi Corporation and the host of the Infosmack podcast. Mr. Knieriemen took exception to statements about storage single-sourcing that I made to the press in connection with a recent document about storage choices for virtual server environments, including the following:
RT @markwojtasiak: Forrester says "single source when possible" <-what an incredibly dumb thing for @reichmanIT to say
He followed up his comments with the following clarifications:
@reichmanIT In context to managing data center costs, single sourcing is probably the worst suggestion I've heard from an analyst
And
@reichmanIT In a virtual server environment, storage is a commodity and should be purchased as such to control costs
I don’t know Mr. Knieriemen personally, and I must admit that I was taken aback by his blunt approach, but I’m from New York and have a thick skin, so I can look past that. The reason I bring this up on Forrester’s blog is solely to take a look at the actual points of view brought out in the exchange. The crux of the argument is whether or not storage is commoditized, and in my opinion, it’s not (yet anyway). There are three reasons why:
- Storage is all about software and features, which are not commodities. x86 servers have certainly reached a point where they struggle for differentiation. Many of the components that go into storage arrays are true commodities, especially disk drives. But enterprise storage, the combination of hardware and software used to deliver high performance and high availability in enterprise settings, is not a commodity. There's a high cost to gaining proficiency in management and troubleshooting within a given vendor's products, so switching is not trivial. Migrating data from old storage to new is much easier when both boxes come from the same vendor. And features such as snapshots, thin provisioning, deduplication, and protocol support are differentiated, delivering varied results depending on the specific vendor's implementation.
- Storage virtualization doesn't commoditize storage, it just shifts the control to somebody else's software. There's been a great deal of talk about storage virtualization freeing users from lock-in and the high cost of storage hardware, but it hasn't panned out that way. Yes, companies use IBM SVC or Hitachi USP/VSP or NetApp V-Series to virtualize external storage systems, and then the underlying storage does behave like dumb commoditized disk. But few environments multi-source the back-end systems. There are troubleshooting issues among the vendors, and it doesn't make sense to spend money on features embedded in arrays from full-feature vendors when you won't use them. Even if you did multi-source the back-end hardware, you would be locked in to the virtualization vendor's platform, so you wouldn't make storage a commodity; you would just trade one lock-in for another. There are real benefits to be had in storage virtualization, such as easier data migration and consistency across disparate physical devices, but calling it true commoditization, or an exponential reduction in total cost of ownership, is an overly ambitious assessment of its impact.
- Server virtualization uses APIs to call features in differentiated storage systems. In terms of server virtualization, there certainly is some effect of hypervisors controlling and to some extent commoditizing storage. With VAAI (vStorage APIs for Array Integration), VM admins can take on storage management tasks directly in the vCenter console, taking away some of the complexity of managing disparate storage systems. But for the most part, VAAI calls underlying storage features through APIs, meaning that you're still dependent on the strength of the features within the underlying arrays. We may someday see VMware or other server virtualization vendors managing vast tracts of dumb disk as commodities, but for now, there are real differences among vendor capabilities.
I’m by no means an apologist for the shortcomings of the big storage vendors. But in my opinion, the industry is not yet ready to throw out the trusted relationships that govern storage architectures and purchasing. So yes, I do think that it makes sense to pick a single storage vendor for each major workload stack (server virtualization, mainframe, file, data warehouse, non-virtualized OLTP databases, etc.) and stick with it. While you may be able to shave off some percentage points in negotiation through increased multi-sourcing, the complexity you add is likely to increase TCO in the long run and diminish the benefits of private cloud and virtualization initiatives.
In the end, I’m curious whether Mr. Knieriemen is just spouting clichéd pipe dreams about virtualization, commoditization, and multi-sourcing, or whether he really does have examples of better solutions. Is he delivering SAN-less architectures based on commodity servers with external virtualization layers managing data protection and advanced features that deliver equal or better performance and resiliency at lower price and without lock-in? If so, I’d love to hear about it. More importantly, I’d like to hear directly from infrastructure & operations teams. What are your thoughts on this? I’m always willing to have my eyes opened to new ideas, but calling me dumb just isn’t enough to get me there.