I passed a barefoot man strolling down a sidewalk on a chilly day in May. His hair was short. He was clean-shaven, wearing a multicolored blanket hoodie. He exuded serenity as he grounded himself in the concrete beneath his feet, granting his chi permission to exchange greetings with the earth. Through that lens, every day, all 8 billion of us could be grounding (also known as “earthing” by believers — all I know is that it feels good in the summertime). There is little competition for grounding with the earth.

But in the corporate world, grounding AI agents in reality rather than hallucinations is neither natural nor free. It will cost money and establish the foundations for your future. Already, the battle for grounding your AI agents in the proprietary knowledge that differentiates your business has begun. Every company that expects its AI agents to work must invest in its knowledge capacity — the knowledge infrastructure — to ground them. Just as you train employees, you will ground AI agents in your proprietary knowledge assets to represent and differentiate your business.

That knowledge infrastructure consists of structured databases (systems of record, managed for rigorous quality and consistency), content stores (unstructured documents, wikis, emails — the world of knowledge management), vector databases storing “embeddings” of unstructured documents (multidimensional fingerprints, essentially), and graph databases hosting interlinked information. All four are essential ingredients to ground agents in the reality of data rather than the musings and hallucinations of a language model.
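
To make the vector-database ingredient concrete, here is a minimal sketch of how unstructured documents become embeddings and how a query pulls back the nearest ones. It assumes the open-source sentence-transformers library and an in-memory array as stand-ins for whatever embedding model and vector database your platform actually provides; the sample documents and query are invented for illustration.

```python
# Minimal sketch: turning unstructured documents into "multidimensional
# fingerprints" (embeddings) and retrieving the closest ones for a query.
# A real knowledge infrastructure would use a managed vector database
# rather than an in-memory numpy array.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Enterprise discount policy: volume discounts start at 500 seats.",
    "Support SLA: priority-one incidents are acknowledged within 15 minutes.",
    "Travel policy: economy class for all flights under six hours.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings sit closest to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector  # cosine similarity; vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

print(retrieve("What discount can I offer a 600-seat deal?"))
```

A production vector database adds the indexing, filtering, and access controls that a flat array cannot, but the retrieval idea is the same: the nearest fingerprints win.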

To build AI agents that work, companies must commit their knowledge assets to a vendor that can translate those assets into AI-ready language models, retrieval-augmented generation, and agent guardrails. Every AI agent platform vendor — hyperscalers such as AWS, Google, and Microsoft; software giants like Salesforce, SAP, ServiceNow, and Workday; automation anchors such as Appian, Pegasystems, and UiPath; model providers like NVIDIA and OpenAI; and many others — wants you to host your knowledge assets (documents, conversations, audio, video) to ground agents that work. All want to host your knowledge infrastructure. And that means a battle is forming among vendors from every direction to host the knowledge that grounds your AI strategy.
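
As a rough illustration of the retrieval-augmented generation these vendors sell, the sketch below assembles retrieved passages into a prompt that confines the model to your knowledge assets rather than its own recollections. The passages would come from a retrieval step like the one sketched above; the final model call is omitted because it belongs to whichever vendor SDK your agent platform exposes.

```python
# Minimal sketch of retrieval-augmented generation: passages retrieved from
# your knowledge infrastructure are injected into the agent's prompt so the
# model answers from your data rather than its own guesses.

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that confines the model to the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example: pair this with the retrieval sketch above, then hand the prompt
# to whichever model API your agent platform exposes.
question = "What discount can I offer a 600-seat deal?"
passages = ["Enterprise discount policy: volume discounts start at 500 seats."]
print(build_grounded_prompt(question, passages))
```

Guardrails, in this framing, start with the instructions wrapped around that prompt: answer only from the retrieved context, refuse when the context is silent, and log what was retrieved.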

Three questions frame the hard choices you face in creating your AI grounding strategy:

  1. Where will you host your AI agents? Framework: Host your agents in platforms closest to your data and the systems that they interact with. For example, we would recommend that you host your B2B sales agents in your AI-ready CRM.
  2. Where will you put your knowledge assets? Framework: If latency or real-time performance matters, put your knowledge assets in the same cloud as your AI agents. If you care less about response times and more about costs, you can host your knowledge assets in a common infrastructure that feeds more than one AI agent platform.
  3. How do you optimize your knowledge infrastructure for performance and cost? Framework: Prepare to selectively replicate knowledge assets across clouds or runtimes so they can ground agents in different processes and systems. Plan to invest in storage governance, software, security, and synchronization to optimize agent performance.

Thank you to Charlie Betz, Rowan Curran, and Leslie Joseph for their help with this post.