What’s the Fastest, Most Scalable Data Warehouse Platform? Well, If You Must Ask…
Welcome to the life of a data warehousing (DW) industry analyst. I’m often asked by Information and Knowledge Management (I&KM) professionals to address the perennial question of which commercial DW solution is fastest or most scalable. Vendors ask me too, of course, as they attempt to suss out rivals’ limitations and identify their own competitive advantages.
It’s always difficult for me to provide I&KM pros and their vendors with simple answers to such queries. Benchmarking is the blackest of black arts in the DW arena. It’s intensely sensitive to myriad variables, many of which may not be entirely transparent to all parties involved in the evaluation. It’s intensely political, because the answer to that question can influence millions of dollars of investment in DW solutions. And it’s a slippery slope of overqualified assertions that may leave no one confident that they’re making the right decisions. Yes, I’m as geeky as the next analyst, but even I feel queasy when a sentence coming out of my mouth runs on with an unending string of conditional clauses.
If industry analysts offer any value-add in the DW arena’s commentary cloud, it’s that we can at least clarify the complexities. Here is how I frame the benchmarking issues that should drive I&KM pros’ discussions with DW vendors:
- Vendor DW performance-boost claims (10x, 20x, 30x, 40x, 50x, etc.) are extremely sensitive to myriad implementation factors. No two DW vendors provide performance numbers that are based on the same precise configuration. Also, vendors vary so widely in their DW architectural approaches that each vendor can claim no rival could provide a configuration comparable to its own. For the purpose of comparing vendor scalability in the recently completed Forrester Wave on Enterprise Data Warehousing Platforms (to be published imminently), I broke the approaches out into several broad implementation profiles. Each of those profiles (which you’ll have to wait for the published Wave to see) may be implemented in myriad ways by vendors and users. And each specific configuration of hardware, software, and network interconnect within each of those profiles may be optimized to run specific workloads very efficiently, and be very suboptimal for others.
- Vendor DW apples-to-apples benchmarks depend on comparing configurations that are processing comparable workloads. No two DW vendors, it seems, base their benchmarks on the same specific set of query and loading tests. Also, no two vendors’ benchmarks incorporate the exact same set of parameters: the same query characteristics, same input record counts, same database sizes, same table sizes, same return-set sizes, same number of columns selected, same frequency distribution of values per column, same number of table joins, same source-table indexing method, same mixture of relational data and flat/text files in loads from source, same mixture of ad-hoc vs. predictable queries, same use of materialized views and client caches, and so forth (see the workload-spec sketch after this list).
- Vendor DW benchmark comparisons should cover the full range of performance criteria that actually matter in DW and BI deployments. No two DW vendors report benchmarks on the full range of performance metrics relevant to users. Most offer basic metrics on query and load performance. But they often fail to include any measurements of other important DW performance criteria, such as concurrent access, concurrent query, continuous loading, data replication, and backup and restore. In addition, they often fail to provide any benchmarks that address mixed workloads of diverse query, ETL, in-database analytics, and other jobs that execute in the DW.
- Different vendors’ DW benchmarks should use the same metrics for each criterion. Unfortunately, no two vendors in the DW market use the same benchmarking framework or metrics. Some report numbers framed in proprietary benchmarking frameworks that may be maddeningly opaque and impossible to compare directly with competitors. Some report TPC-H, but often only when it puts them in a favorable light, whereas others avoid that benchmark on principle (with merit: it barely addresses the full range of transactions managed by a real-live DW). Others report “TPC-H-like” queries (whatever that means). Still others publish no benchmarks at all, as if they were trade secrets and not something that I&KM pros absolutely need to know when evaluating commercial alternatives. Sadly, most DW vendors tend to make vague assertions about “linear scalability,” “10-200x performance advantage [against the competition],” and “[x number of] queries per [hour/minute/sec] in [lab, customer production, or proof of concept].” Imagine sorting through these assertions for a living, which is what I do constantly.
- DW benchmark tests should be performed and/or vouched for by reliable, unbiased third parties, i.e., those not employed by or receiving compensation from the vendors in question. If there were any such third parties, I’d be aware of them. Know any? Yes, there are many DW and DBMS benchmarking consultants, but they all make their living by selling their services to solution providers. I hesitate to recommend any such benchmark numbers to anybody who seeks a truly neutral third party.
- DW solution price-performance comparisons require that you base your analysis on an equivalently configured, equivalent-capacity solution stack (hardware and software) for each vendor, as well as the full lifetime total cost of ownership of each vendor’s solution (see the price-performance sketch below). That’s a black art in its own right. Later this year, I’ll be doing a study that provides a methodology for estimating return on investment for DW appliance solutions.
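To make the apples-to-apples point above concrete, here is a minimal illustrative sketch, in Python, of the workload parameters two benchmark runs would need to share before their published numbers can be compared. The field names are my own shorthand for the list in the second bullet, not any vendor’s or standards body’s schema.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class BenchmarkWorkloadSpec:
    """Illustrative (hypothetical) record of the workload parameters that
    would have to match before two DW benchmark results are comparable."""
    database_size_tb: float        # raw data volume under test
    input_record_count: int        # rows loaded from source
    table_count: int
    max_table_join_depth: int      # deepest join in the query mix
    columns_selected_avg: int
    return_set_rows_avg: int
    adhoc_query_pct: float         # ad-hoc vs. predictable query mix
    materialized_views_used: bool
    client_caching_used: bool
    source_flat_file_pct: float    # flat/text vs. relational sources in loads
    concurrent_users: int

def comparable(a: BenchmarkWorkloadSpec, b: BenchmarkWorkloadSpec) -> bool:
    """Two published numbers are only apples-to-apples if every workload
    parameter matches; any mismatch makes the comparison suspect."""
    return asdict(a) == asdict(b)
```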
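And for the price-performance bullet just above, here is a minimal sketch, with entirely invented numbers and deliberately simplified cost categories, of how an equivalently sized stack’s lifetime cost might be rolled into a single dollars-per-query-per-hour figure. It illustrates the arithmetic, not a costing methodology.

```python
# Minimal sketch (simplified cost categories, invented numbers) of a
# lifetime price-performance calculation for an equivalently sized DW stack.

def lifetime_tco(hardware, software_licenses, annual_support, annual_ops, years):
    """Total cost of ownership over the evaluation horizon, in dollars."""
    return hardware + software_licenses + years * (annual_support + annual_ops)

def price_performance(tco_dollars, queries_per_hour):
    """Dollars per query-per-hour of sustained throughput; lower is better.
    Only meaningful if both vendors ran the same workload spec."""
    return tco_dollars / queries_per_hour

# Hypothetical example: two vendors quoting the same 100 TB mixed workload.
vendor_a = lifetime_tco(hardware=2_000_000, software_licenses=1_500_000,
                        annual_support=400_000, annual_ops=300_000, years=3)
vendor_b = lifetime_tco(hardware=3_500_000, software_licenses=500_000,
                        annual_support=350_000, annual_ops=250_000, years=3)

print(price_performance(vendor_a, queries_per_hour=12_000))  # ~467 $ per QpH
print(price_performance(vendor_b, queries_per_hour=15_000))  # ~387 $ per QpH
```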
As an entirely separate issue, it does no good, competitively, for a DW vendor to assert performance enhancements that are only relative to a prior configuration of a prior version of its own product or technology. The customer has no easy assurance that the vendor is comparing its current solution against a well-configured, well-engineered instance of the prior one. The vendor’s assertion of an order-of-magnitude improvement over a prior version of its own product may be impressive, but only as a statement of how much it has improved its own technology, not of how it fares against the competition. And such “past-self comparisons” can easily backfire on the vendor, because customers and competitors may use them to insinuate that there were significant flaws or limitations in its legacy products.
Here’s my bottom-line advice to all DW vendors on positioning your performance assertions. Frame them in the context of the architectural advantages of your specific DW technical approach. Publish your full benchmark numbers with test configurations, scenarios, and cases explicitly spelled out (one illustrative way of doing so is sketched below). To the extent that you can aggregate hundreds of terabytes of data, serve thousands of concurrent users and queries, process complex mixtures of queries, joins, and aggregations, ensure sub-second ingest-to-query latencies, and support continuous, high-volume, multiterabyte batch data loading, call all of that out in your benchmarks. To the extent that any or all of that is on your roadmap, call that out, too.
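As one purely illustrative way of “spelling it out” (the record keys below are my own invention, not an industry standard), a vendor could publish each result as a self-describing record alongside a throughput figure normalized to a common unit, so that queries-per-second, per-minute, and per-hour claims at least land on the same scale:

```python
# Purely illustrative sketch of a self-describing benchmark disclosure record;
# the keys and values are hypothetical, not any vendor's published format.
UNIT_TO_PER_HOUR = {"sec": 3600, "min": 60, "hour": 1}

def queries_per_hour(rate: float, unit: str) -> float:
    """Normalize a quoted query rate to queries per hour so that per-second,
    per-minute, and per-hour claims share one scale."""
    return float(rate * UNIT_TO_PER_HOUR[unit])

disclosure = {
    "environment": "lab",            # lab, customer production, or POC
    "hardware": "8-node cluster, 64 cores/node, 512 GB RAM/node",
    "data_volume_tb": 150,
    "concurrent_users": 1200,
    "workload": "mixed ad-hoc queries, joins, aggregations, continuous load",
    "quoted_rate": 250,
    "quoted_unit": "min",
}
disclosure["queries_per_hour"] = queries_per_hour(
    disclosure["quoted_rate"], disclosure["quoted_unit"])
print(disclosure["queries_per_hour"])  # 15000.0
```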
Here’s my bottom-line advice to I&KM pros: Don’t expect easy answers. Think critically about all vendor-reported DW benchmarks. And recognize that no one DW platform can possibly be configured optimally for all requirements, transactions, and applications.
If there were any such DW platforms, I’d be aware of them. Know any?