On April 18th, IBM announced its intent to acquire virtual tape library (VTL) and deduplication vendor Diligent Technologies. For IBM, Diligent is a good fit: the company offers both mainframe and open systems virtual tape libraries, and it is a pioneer of deduplication. However, there is also a lot of overlap. IBM already offers a market-leading mainframe VTL based on its own intellectual property and an open systems VTL based on FalconStor technology (although the latter has seen very limited adoption). Because Diligent is a software solution, IBM can integrate it with any of its storage systems and bring new VTLs to market relatively quickly. It's very likely that IBM will pursue this route so it can offer an inline deduplicating VTL as soon as possible.
The result of this acquisition? Good news for customers, who will have a choice between two open systems VTLs: the FalconStor-based VTL, which has much broader ecosystem interoperability and better integration with physical tape, and the Diligent VTL, which is focused on tape elimination. In the short term, this will create some confusion, so it's important to know which camp you fall into: tape integration or tape elimination. With Diligent you also have to fully embrace deduplication; the technology is inline, so you can't selectively decide which backups to dedupe, and all of your backups will be deduplicated. Inline deduplication isn't right for everyone or for all backups. Also, before investing in a non-Diligent VTL, customers should ask IBM for a roadmap that shows at least one year of planned product updates.
In the long term, IBM will try to find additional opportunities to take advantage of Diligent's deduplication technology, either as embedded technology or as a deduplicating gateway to any of its storage systems. This is ultimately a good thing for customers: storage capacities are growing at 50% per year, sometimes 100% per year, and most of this data is redundant and doesn't need to be stored on expensive, high-performance storage. It would be nice to store this avalanche of data more cost effectively in the future. Customers can expect about a 10-15X reduction in data using deduplication, sometimes more depending on the data set.
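To see where those 10-15X figures come from, the core mechanics of deduplication can be sketched in a few lines: fingerprint each incoming chunk of backup data and store a chunk only if its fingerprint hasn't been seen before. This is a simplified illustration, not Diligent's implementation; it uses fixed-size chunks, while commercial products typically use more sophisticated (often content-defined, variable-size) chunking to find duplicates even when data shifts.

```python
import hashlib

def dedupe(stream: bytes, chunk_size: int = 4096):
    """Inline dedup sketch: fingerprint each fixed-size chunk as it
    arrives; write only chunks not already in the store."""
    store = {}   # fingerprint -> chunk (the unique-block store)
    recipe = []  # ordered fingerprints needed to reassemble the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = chunk  # new data: store it
        recipe.append(fp)      # duplicates cost only a pointer
    return store, recipe

def restore(store, recipe):
    """Rebuild the original stream from the store and the recipe."""
    return b"".join(store[fp] for fp in recipe)

# A synthetic "backup": one 4 KB block repeated 26 times plus one unique
# block, mimicking the redundancy across successive full backups.
data = b"A" * 4096 * 26 + b"B" * 4096
store, recipe = dedupe(data)
stored = sum(len(c) for c in store.values())
print(f"logical {len(data)} bytes, stored {stored} bytes, "
      f"ratio {len(data) / stored:.1f}X")
assert restore(store, recipe) == data
```

On this deliberately redundant input the sketch stores 2 unique chunks out of 27, a 13.5X reduction; real-world ratios depend entirely on how repetitive the backup stream is, which is why inline deduplication pays off for full backups of slowly changing data and far less for unique or already-compressed data.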