The Commodity Cloud Era Is Over — The AI-Native Cloud Is Here
CIOs questioning whether hyperscale cloud providers can deliver on their AI promises should follow the money. The numbers tell the story:
- Microsoft is spending $80 billion on data centers specifically for AI workloads in 2025.
- Amazon Web Services (AWS) has announced Project Rainier, a giant AI cluster based on its proprietary Trainium chips, built for Anthropic, which has received $4 billion in AWS investment.
- Google Cloud’s multiple data center buildouts include a $2 billion facility in Indiana for AI workloads.
- Alibaba Cloud has announced new data center architecture specifically for AI workloads.
Here’s the kicker: Every public cloud customer is already funding this AI transformation. You are already paying either directly for managed AI services or indirectly through standard cloud bills. Those commodity services cost less to deliver now, thanks to billions saved by extending server lifespans — savings that hyperscalers are plowing straight into AI infrastructure.
The New Economics of Cloud
Certainly, cloud providers aren’t pulling the plug on commodity cloud capacity — the globe-spanning data centers with network, compute, and storage based primarily on x86 and ARM technologies. Cloud hyperscalers boast staggering capacity: that’s their defining feature. Yet most of their core services remain commodity offerings that generate razor-thin margins or outright losses. These services exist not as profit centers, but as essential hooks to lure enterprises away from their on-premises data centers.
Generative AI (genAI) changed the game. Cloud providers finally found their premium play: genAI services command higher margins than commodity infrastructure. While hyperscalers have long deployed custom chips — Google’s TPUs, AWS’s Trainium and Inferentia, earlier NVIDIA GPUs — genAI triggered a spending frenzy. Established hyperscalers, cloud divisions of tech giants, and venture-backed upstarts are all racing to deploy unprecedented capital for this AI gold rush.
The result: the AI-native cloud. In the previous generation of cloud services, AI was just one isolated service category. AI-native clouds instead make AI-by-design an architectural principle, building AI capabilities into all major cloud service categories, spanning infrastructure, development, and applications. This requires new, more complex physical plants — even bigger than the already-huge commodity cloud data centers — to minimize network latency. Then there are new climate controls to cool dense racks of GPUs sizzling with genAI workloads. The power required for these efforts has led to a revival of nuclear power in the US. For example, Amazon is not only building a data center next to a nuclear power plant but is also investing to develop more of them. A centerpiece of the $500 billion Stargate AI project with OpenAI, Oracle, and SoftBank is a 5-gigawatt data center in Abilene, Texas. This follows the Oracle Cloud Infrastructure partnership with NVIDIA to create what it claims is the largest AI supercomputer in the world. xAI is powering up to compete, having obtained licenses in a controversial process for 15 natural gas generators for its Colossus data center in Tennessee.
(Figure: The Innovation Evolution Of Representative Technology Domains Powering AI In The Cloud)
New Competitors Change the Game
While hyperscalers must continue to run their less profitable commodity clouds, new competitors — AI cloud platforms, aka neoclouds — are avoiding the commodity cloud business entirely and focusing solely on AI. For example, Netherlands-based Nebius, formerly the holding company for Russia's Yandex, has both investment dollars and GPUs from NVIDIA to support AI workloads. CoreWeave, a GPU-only cloud, continues to rack up investment and data center spending and is valued at $23 billion. Vultr, formerly a specialist commodity cloud provider, received $333 million in capital from AMD and a VC to build out AI data centers based on AMD chips, putting its value at $3.5 billion. NVIDIA is building its own AI cloud, which it says will be bigger than AWS.
Your Path Forward
The AI-native cloud is thus much broader than the hyperscaler offerings. It can be built by customers directly on cloud provider infrastructure with the open-source AI ecosystem; with AI-centric neo-PaaS from providers such as Heroku, Mirantis, or Red Hat; with managed AI services from the major cloud providers; with AI/data cloud platforms like Databricks and Snowflake; or with the neoclouds.
The maturation of the Kubernetes-based cloud-native open-source ecosystem is foundational to the AI-native cloud; consider, for example, OpenAI's ChatGPT deployment on Microsoft's Azure Kubernetes Service. The AI-native cloud upstarts — backed by big investments and partnerships — are accelerating the development of the open-source AI cloud ecosystem.
Our reports on these trends, The Key Challenges Of Open-Source Software In AI and Navigate The Open-Source AI Ecosystem In The Cloud, combined with Embrace The AI-Native Cloud Now and How To Get Started With AI-Native Cloud, highlight how enterprises and government organizations can take advantage of a growing number of options to find their own path to the AI-native cloud.
We look forward to the opportunity to discuss our findings. Forrester clients can access the full reports and schedule a guidance session or inquiry for further engagement.