The 10th anniversary of KubeCon + CloudNativeCon North America in Atlanta was both a celebration of progress in the open-source community and a moment of reflection about the challenges facing that same community amid change, specifically as “the cloud” becomes “the AI-native cloud,” powered principally by NVIDIA’s AI hardware and its related proprietary software stack.

The hubbub surrounding NVIDIA’s recent GTC conference in Washington, D.C., highlighted this change. KubeCon has long been one of the hottest tickets in tech: the place where hyperscalers meet as frenemies to build out open-source IT infrastructure. Of late, it has shifted from a hotbed of innovation to an industry gathering focused on fine-tuning a mature ecosystem of solutions spanning edge, data center, and cloud. But ho-hum was always the goal: As Kelsey Hightower, a key player in the Kubernetes project, said back in 2017, infrastructure should be boring.

This year, however, KubeCon was brimming with optimism and innovation as the community tackles the challenge of adapting Kubernetes from its origins as an orchestration platform for loosely coupled services into a tightly coupled technology foundation suitable for AI workloads at scale.

The CNCF’s AI Conformance Project is an effort to standardize how AI workloads run across diverse platforms and infrastructure and, ideally, to inject competition and interoperability among vendor solutions in the AI market. The reality is that these standards are in danger of being dictated by a very small number of companies: the US-based hyperscalers, who have the scale and technical expertise to create highly polished AI solutions, and NVIDIA, the default supplier of enterprise AI infrastructure and software platforms. It’s a reminder that while open source remains foundational to the modern tech stack, closed-source power is growing as a strategic moat for the firms driving the transformation of enterprise technology.

The public cloud giants are not immune to the challenges of a paradigm shift in computing. They must compete with the likes of Meta, Apple, Tesla, xAI, and a bevy of neoclouds to procure GPUs, AI talent, data center infrastructure, and the electricity to power it all.

Against this frenzied backdrop, several announcements and trends we saw at KubeCon are worth noting:

  • AI-native workloads take the spotlight. While the AI Conformance effort will unfold over years, Kubernetes is already evolving into an AI-serving substrate. Features such as Dynamic Resource Allocation (DRA) for GPU scheduling and emerging support for AI inference target cost efficiency and performance. These capabilities promise flexibility but also underscore the ecosystem’s dependency on proprietary hardware and hyperscale cloud platforms.
  • Platform engineering ups its game. Once focused on developer velocity, platform engineers must now also steward corporate resources, balancing speed with governance, compliance, and cost optimization. Intuit showed how it’s done, highlighting its migration of legacy systems to container-based applications and services.
  • Observability everywhere. Observability solutions popped up across the expo floor. Grafana Labs announced capabilities for AI-assisted tracing for natural-language debugging, among other capabilities. These and related announcements highlight how observability is central to increasingly elaborate AI use cases.
  • eBPF keeps expanding its brief. Isovalent, the Cisco-owned company behind the open-source Cilium eBPF project, rolled out updates that cemented eBPF’s place at the center of Kubernetes and container networking, security, observability, and more. Isovalent juiced up KubeVirt networking for VMs with its release of AF_XDP (address family, express data path) support, which allows Linux kernel bypass for high-performance networking. The Cilium project marked advances in cluster connectivity, runtime security, and scalability.
  • vCluster moves into AI infrastructure tenancy. vCluster Labs, the startup formerly known as Loft Labs, first got attention for its method of virtualizing Kubernetes for greater efficiency. It’s now officially on the AI bandwagon with its Infrastructure Tenancy Platform for AI, aimed at running GPU-intensive workloads on NVIDIA DGX systems. The effort integrates vCluster’s signature approach to virtualization with NVIDIA Base Command Manager for lifecycle management, dynamic GPU autoscaling via Karpenter-based automation, and private node capabilities for security.
  • Security and policy as code step up. Kubernetes security has ramped up in recent years. Key developments at KubeCon included agentic security workflows based on eBPF and deep system telemetry for runtime protection. Policy as code moved forward, too: Kyverno’s latest release introduced Common Expression Language (CEL)-based policies, namespace enforcement, and SDK support, while Open Policy Agent (OPA) Gatekeeper added ValidatingAdmissionPolicy integration and enhanced auditing.
  • WASM continues to mature; startups expand partnerships. Bailey Hayes of Cosmonic and Luke Wagner of Fastly discussed the WebAssembly System Interface (WASI) 0.3 release and future roadmap. Researcher Elizabeth Gilbert introduced a method for observability between components in a composed WASM application. Fermyon, a WASM-powered PaaS, announced a partnership to run its Spin platform on Akamai’s Linode-powered edge infrastructure; Akamai has since taken that partnership further by announcing the acquisition of Fermyon.
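
To make the DRA shift concrete: instead of the older device-plugin model of requesting opaque resource limits, a workload claims a device through a claim template that the scheduler resolves against installed drivers. The sketch below is illustrative only; the API version (`resource.k8s.io/v1beta1` here) and the device class name (`gpu.example.com`) are assumptions that vary by cluster version and GPU driver.

```yaml
# Hypothetical DRA claim template; deviceClassName depends on the
# vendor DRA driver installed in the cluster (assumed name here).
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com   # assumed device class
---
apiVersion: v1
kind: Pod
metadata:
  name: inference-worker
spec:
  containers:
  - name: app
    image: registry.example.com/inference:latest   # placeholder image
    resources:
      claims:
      - name: gpu          # binds this container to the claim below
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
```

The point of the indirection is that the claim, not the pod, carries the device requirements, so schedulers and autoscalers can reason about GPU placement and sharing rather than treating accelerators as fixed integer counts.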
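
The policy-as-code trend can likewise be illustrated with Kubernetes’ built-in ValidatingAdmissionPolicy, the CEL-based mechanism that Gatekeeper now integrates with. A minimal sketch, assuming a hypothetical convention that every Deployment must carry a `team` label for cost attribution:

```yaml
# Illustrative only: the 'team' label requirement is an assumed convention.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-team-label
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: ["apps"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["deployments"]
  validations:
  - expression: "has(object.metadata.labels) && 'team' in object.metadata.labels"
    message: "Deployments must carry a 'team' label."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-team-label-binding
spec:
  policyName: require-team-label
  validationActions: [Deny]
```

Because the CEL expression runs in the API server itself, rules like this enforce governance without an external webhook in the request path, which is part of why both Kyverno and Gatekeeper are converging on CEL.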

Zooming Out To The Big Picture

KubeCon NA 2025 showcased how the cloud-native ecosystem is adapting to AI’s rise and enabling further advances. The challenge now is to preserve open-source dynamism and collaboration while a select few companies with strategically closed-source technologies sit at the center of the AI-native cloud. The AI Conformance Project is a big step toward meeting this challenge over the next 10 years of the open-source cloud-native ecosystem. Enterprise users will shape this future, too, as they seek to diversify infrastructure choices, invest in open-source AI, and demand interoperability.

Forrester clients who would like to further discuss the AI-native cloud can reach out to our analysts to understand how KubeCon’s announcements affect their business.