Three Takeaways From Forrester’s 2024 Evaluation Of AI Infrastructure Solutions
As AI gets bigger and better, it requires much more than mundane commodity compute, network, and storage infrastructure. GPUs are only one part of this — AI workloads demand careful optimization of raw compute, data throughput, and power consumption. Whether consumed as a cloud service, deployed on-premises, or run as a hybrid of both, AI infrastructure is an important piece of the strategic puzzle that enterprise technology leaders face. The decisions tech leaders make about AI infrastructure will determine whether their enterprises become AI leaders or laggards. Forrester defines AI infrastructure solutions as computer systems and cloud services designed to maximize the performance of AI workloads: data preparation, model training, and inferencing.
Twelve AI Infrastructure Vendors To Choose From
In our recently released evaluation, The Forrester Wave™: AI Infrastructure Solutions, Q1 2024, we looked at the offerings of 12 vendors in this space: Alibaba Cloud, Amazon Web Services, Cerebras Systems, Dell Technologies, Google, Graphcore, Hewlett Packard Enterprise, IBM, Lenovo, Microsoft, NVIDIA, and Oracle. Here are three of the key insights that emerged over the course of researching and writing this latest evaluation:
- Think in terms of workloads. Although generative AI (genAI) has introduced new ways to leverage AI, the three core AI workloads have remained the same: data preparation, training, and inferencing. Many organizations will need to leverage different providers for each of the workloads they are tackling. For example, an organization may choose a vendor with a strong on-premises solution for data preparation but select a large cloud services provider for inferencing.
- Match your infrastructure to your computational needs. GenAI has increased reliance on GPUs, which partly explains the astronomical growth of some chip makers since we published our first Wave on AI infrastructure in December 2021. Although GPUs have been critical to genAI’s success, they may not be critical to the success of your AI strategy. Within the three core AI workloads, computational requirements vary significantly. When it comes to training workloads, deep learning models such as computer vision models and large language models require access — either through the cloud or on-premises — to chips optimized for AI (typically GPUs), but predictive models may not benefit from such chip architectures.
- Plan for how you’ll integrate a new solution with existing tools. Before committing to a solution, enterprises must understand how they’ll incorporate a vendor’s AI infrastructure management layer with their own existing technology infrastructure management tools. AI infrastructure comes with management software to help operations professionals monitor the system, control access, allocate usage, and provision/deprovision infrastructure to optimize costs. If an enterprise has already standardized on a vendor’s IT infrastructure, then using that vendor’s AI infrastructure can be attractive from a management point of view.
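The workload-matching logic above can be sketched as a simple decision rule. This is an illustrative Python sketch, not part of Forrester's evaluation; the workload and model-family labels and the `recommend_compute` helper are hypothetical names chosen for the example:

```python
# Hypothetical helper illustrating "match your infrastructure to your
# computational needs": route each core AI workload to a compute tier.
# The tiers and categories below are illustrative assumptions, not a
# recommendation from the evaluation itself.

DEEP_LEARNING = {"llm", "computer_vision"}

def recommend_compute(workload: str, model_family: str) -> str:
    """Return an illustrative compute tier for a given workload."""
    if workload == "data_preparation":
        # Data preparation is typically I/O- and CPU-bound.
        return "cpu"
    if workload == "training" and model_family in DEEP_LEARNING:
        # Deep learning training benefits from AI-optimized chips.
        return "gpu"
    if workload == "inferencing" and model_family in DEEP_LEARNING:
        # Inferencing may run on GPUs or other accelerators.
        return "gpu_or_accelerator"
    # Predictive/tabular models often run fine on commodity CPUs.
    return "cpu"

print(recommend_compute("training", "llm"))          # -> gpu
print(recommend_compute("training", "predictive"))   # -> cpu
print(recommend_compute("inferencing", "computer_vision"))
```

In practice an organization's rule set would also weigh data gravity, cost, and where each vendor's offering runs (cloud, on-premises, or hybrid), per the bullets above.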
As with all technology purchases, no single vendor will be right for all enterprises, and the highest performers in our evaluation may not be the best choice for your organization. Forrester clients can read the full report on our website. You can easily download the spreadsheet and customize the results for your needs and priorities.
Forrester clients can discuss these findings — or AI infrastructure generally — by scheduling a guidance session with me.
Aaron Suiter contributed to the content of this blog post.