From Prompts To Plans: Overcoming The Complexity Gap Between GenAI And AI Agents
In technology circles, few metaphors have endured as well as Geoffrey Moore’s “Crossing the Chasm.” His framework describes the perilous gap between early adopters and the eventual majority. This gap, a concept rooted in Everett Rogers’ earlier work on the diffusion of innovations, opens where promising innovations stall due to complexity, unclear value, or the inability to scale beyond pilot projects. Diffusion stops being a sprint to a new world and enters what feels like a grinding, uncertain marathon — an analogy that resonates strongly as we consider the evolution from generative AI (genAI) to agentic AI. While genAI’s adoption curve has been steep and accelerating, propelled by its immediacy and accessibility, the move to agentic AI — autonomous, goal-oriented systems that can reason, plan, and act — is far less straightforward.
The Complexity Gap From GenAI To AI Agents
GenAI thrives on discrete prompts and outputs: a well-crafted question leads to a coherent answer, image, or draft. Its failure modes — including hallucinations, bias, and data quality issues — are visible and relatively easy to mitigate with humans in the loop or other oversight mechanisms under the banner of responsible AI. But agentic AI and AI agents introduce new layers of complexity. These systems aren’t just generating content; they’re orchestrating multi-step tasks, making autonomous decisions, and interacting with real-world systems.
As my Forrester colleague, Leslie Joseph, highlights in his seminal report, “Why AI Agents Fail (And How To Fix Them),” a simple genAI application doesn’t present the risk of inter-agent collusion, but agentic AI does. Indeed, agentic AI carries this and a whole host of additional failure modes, including:
- Task orchestration risks: Poor sequencing or logic breakdowns can derail an entire process.
- Goal misalignment: Agents may optimize for the wrong objectives, creating unintended consequences.
- Error compounding: Minor flaws in early steps magnify as agents execute downstream tasks.
- Integration fragility: Reliance on APIs, retrieval-augmented generation, or legacy systems increases operational risk.
- Testing and governance challenges: Validating and auditing an agent’s decision pathways is exponentially harder than reviewing a single genAI output.
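The error-compounding risk above is easy to quantify. As a minimal sketch (our illustration, not a model from the report), assume each step in an agentic workflow succeeds independently with some per-step reliability; the chance that the whole chain completes correctly then decays geometrically with chain length:

```python
def chain_success(p_step: float, n_steps: int) -> float:
    """Probability that every one of n_steps succeeds, assuming
    the steps are independent and equally reliable."""
    return p_step ** n_steps

# Even 99% per-step reliability erodes quickly over longer chains.
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps at 99% per-step reliability -> {chain_success(0.99, n):.1%}")
```

Real agents are messier than this independence assumption, of course — errors can also cascade and worsen downstream — but the toy calculation shows why a minor flaw that is tolerable in a single genAI output becomes a serious liability across a multi-step plan.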
Why The AI Adoption Timeline Will Stretch Over The Next Decade
Proponents of agentic AI argue that we’re on the cusp of mainstream deployment; however, we believe this transition to a new mode of interaction between humans and technology will take far longer. Unlike genAI, which can often be trialed in isolation, agentic AI requires robust guardrails, trust frameworks, and deep integration with enterprise workflows. It moves from “generate and review” to “plan, act, and potentially fail autonomously,” a leap that many risk-averse organizations will hesitate to take. We anticipate several years of messy experimentation and careful refinement before agentic AI fully crosses its adoption chasm. Enterprises will need to invest heavily in new architectures, testing, security, and governance practices that are barely understood today. The risk of a “hollow enterprise” or “hollow state” emerging from AI-based outsourcing is just one real and present example.
This void between promise and reality isn’t just about technology maturity; it’s about organizational readiness, cultural change, and the ability to manage new forms of technological and operational risk. Change means new or not-yet-understood risks, risk means fear, and fear means resistance. The result is a kind of adoption friction that didn’t exist when OpenAI released ChatGPT, a firm that has continued to let its core offering scale almost exponentially. The hard truth is that the path from genAI to agentic AI isn’t a straight line. It’s a difficult leap across a canyon of complexity. And while the destination holds immense potential, getting there will require patience, discipline, and a much deeper appreciation of the risks and failure modes than the overconfident techno-optimistic narratives coming out of Silicon Valley currently acknowledge.
But this doesn’t mean we should all sit back and stare powerlessly into the chasm as if the slow march to this new future were a fait accompli. We’re human, not AI, and as technology leaders we have agency and autonomy. We know that by adopting the principles of high-performance IT, organizations can maximize the outcomes available from early-stage agentic AI and prepare for the opportunities on the horizon for truly autonomous AI solutions.
Want to know how? Join me at Technology & Innovation Summit APAC 2025 for my keynote, “Machines, Gods and Kaos: High Performance IT… Because Prayer Isn’t A Strategy,” on August 19th in Sydney at the Sheraton Hyde Park or online on our Digital Events Platform. Register here.