The Flowchart Is Dead, Long Live The Flowchart
About a year ago, a few of my Forrester colleagues and I coined a joke term for the dominant pattern vendors were passing off as “agentic.” The term was “agent-ish”, and it described workflows composed largely of deterministic, flowchart-driven processes with LLM-based components embedded in them. The “agents” in agent-ish systems had little to no actual agency because most of the heavy lifting was still done by the deterministic flowchart. These designs made sense for their time: LLMs had weak reasoning powers, limited tool use capabilities, shallow memory, and unreliable self-correction. Agent-ish workflows traded autonomy for auditability and predictability, which, given the constraints, was a rational trade.
That was then. Today’s models and runtimes are beginning to absorb more of the capabilities that enterprises previously had to assemble externally through orchestration scaffolding and deterministic crutches. In some domains, especially coding, research, and bounded forms of computer use, systems can now plan, use tools, recover from errors, and sustain longer task execution with less explicit deterministic wiring than before. Increasingly, agency is manifesting in software-defined workflows as well as in emerging ‘personal AI’ interaction paradigms such as OpenClaw and its variants, Perplexity Computer, ByteDance’s DeerFlow and so on. This does not mean that fully autonomous systems are now universal, nor that deterministic process logic has become obsolete. It does mean that the rationale for using the flowchart as the primary carrier of intelligence is weakening in a growing class of work.
As LLMs Become More Agentic, The Flowchart Is Falling Behind
A few advances in AI are driving that shift:
- Context windows have expanded materially. The move from smaller to larger context windows (4K-8K tokens to 128K-1M+ tokens) means a model can hold much more code, documentation, or conversational state in working memory than before. This reduces some of the pressure that older systems placed on retrieval pipelines, chunking logic, and external memory workarounds, even if it does not eliminate the need for them.
- Models are increasingly trained on task trajectories. Leading models are now trained on sequences of screenshots, clicks, and keystrokes that show how humans complete real tasks across applications, and optimized against more complex task trajectories: multi-step interactions that span tools, interfaces, and intermediate decisions. All of this improves their ability to navigate unfamiliar environments and recover from failure.
- Reasoning is now a trained capability. Models like OpenAI’s o-series and DeepSeek R1 allocate compute at inference time to generate internal reasoning chains before acting. This lets the model decompose goals into subgoals, evaluate multiple strategies, and backtrack on its own, internalizing the planning logic that agent-ish systems encoded in the flowchart.
- Reinforcement learning now rewards outcomes. Older post-training relied on human preference labels (RLHF) or supervised examples of correct tool use. Current approaches (reinforcement learning with verifiable rewards, or RLVR) reward the model for reaching a correct outcome, such as code that passes its tests or answers that satisfy an external check. The model learns when to reach for a tool and how to recover when the tool returns something unexpected, replacing the deterministic retry logic that agent-ish systems built externally.
- Standardized tool connectivity is reducing some integration friction. The Model Context Protocol (MCP) gives applications a standard interface for exposing tools and data to models at runtime, allowing tool surfaces to be defined and externalized rather than each integration being hard-coded into the workflow definition or the orchestration layer. It is not a universal replacement for integration work, but it does shift some tool connectivity away from bespoke workflow wiring and toward more portable, externalized tool surfaces. In some cases, that allows deterministic workflows themselves to be exposed as tools rather than hard-coded as the primary execution logic.
- Deployment is shifting from stateless turns to persistent agent sessions. Because models can now plan, use tools, and recover from errors natively, they can sustain long-running execution inside managed runtimes like secure virtual browsers, Docker sandboxes, or desktop environments. In these settings, agents can maintain state across steps, spawn subtasks, manage their own subgoals, and carry work forward over longer sessions, reducing how much step-by-step execution logic must be imposed from outside.
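Several of these shifts can be pictured in one minimal sketch. Everything here is hypothetical: the tool names are invented, and a scripted function stands in for the model’s runtime planner. The point is the shape of the loop: the agent chooses its next action at runtime rather than following a pre-wired flowchart, carries state across steps, retries when a tool fails, and stops when a verifiable outcome is reached.

```python
# Minimal agent-session sketch (hypothetical names throughout).
from dataclasses import dataclass, field

@dataclass
class Session:
    goal: str
    memory: list = field(default_factory=list)  # state carried across steps

# Tool registry: in a real system these would be exposed via something
# like MCP rather than hard-coded into the workflow definition.
def search_docs(query: str) -> str:
    return f"docs about {query}"

def run_tests(code: str) -> str:
    if "bug" in code:
        raise RuntimeError("test failure")
    return "all tests passed"

TOOLS = {"search_docs": search_docs, "run_tests": run_tests}

def choose_next_action(session: Session):
    """Stand-in for the model's runtime planning. A real agent would ask
    the LLM for the next step; here we script two steps so the loop runs."""
    if not session.memory:
        return ("search_docs", session.goal)
    return ("run_tests", "fixed code")

def run_session(goal: str, max_steps: int = 5, max_retries: int = 2) -> Session:
    session = Session(goal)
    for _ in range(max_steps):
        tool_name, arg = choose_next_action(session)
        for attempt in range(max_retries + 1):
            try:
                result = TOOLS[tool_name](arg)
                session.memory.append((tool_name, result))
                break
            except RuntimeError as err:
                # Error recovery lives inside the loop, not in a
                # pre-defined deterministic retry branch.
                session.memory.append((tool_name, f"error: {err}"))
        if tool_name == "run_tests" and "passed" in session.memory[-1][1]:
            return session  # verifiable outcome reached; stop early
    return session
```

An agent-ish system would have expressed the same two steps as fixed nodes in a flowchart; here the sequencing decision sits behind `choose_next_action`, which is exactly the part the model now supplies.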
The cumulative effect is that capabilities that enterprises used to assemble externally through orchestration scaffolding (tool selection, multi-step sequencing, memory management, error recovery, planning, screen interaction) now ship inside the model or its immediate runtime environment.
Each shift chips away at a specific piece of the “agent-ish” architecture. Together, they weaken, if not eliminate, the case for the flowchart as the default carrier of intelligence.
Orchestration Itself Is Changing
The market is already responding, even if unevenly. Three shifts illustrate how.
- Autonomy is arriving first where outputs are verifiable. The shift is furthest along in coding, research, and computer use, where task boundaries are clearer and outputs are verifiable; in process-heavy enterprise domains like finance operations, supply chain, and customer service, agent-ish architectures remain the dominant production pattern and the transition will be slower and more contested. In coding, systems such as Claude Code, Codex, and Devin can sustain extended development sessions across large codebases, run tests, and recover from some classes of failure with far less deterministic scaffolding than would have been required a year earlier. Computer-use agents can now operate browsers and desktops through screen-level interaction, carrying out bounded forms of autonomous work that previously would have required dedicated RPA pipelines and custom integration. These examples do not prove that the orchestration layer is disappearing. They do show that, in some domains, useful autonomy is already being delivered without the flowchart prescribing every step.
- Vendors are decoupling coordination from cognition. Consider UiPath, a company widely known for its deterministic automation roots. When UiPath built Maestro, its agentic orchestration product, it did not simply extend its legacy deterministic Orchestrator. Instead, it built Maestro on Temporal, a general-purpose execution engine designed for long-running, stateful, failure-resilient workloads. The implication is simple enough: old orchestration encoded the work logic; newer orchestration increasingly provides durable coordination, while the agent handles cognition.
- Process knowledge is separating from execution logic. In the agent-ish model, process knowledge lived as the flowchart: step 1, then step 2, branch if condition X, call tool Y. Knowledge and execution logic were the same artifact. A different pattern is emerging. Conventions like AGENTS.md (now a founding project of the Linux Foundation’s Agentic AI Foundation) and similar skills files externalize process knowledge as declarative context: coding conventions, build steps, testing requirements, domain constraints. The agent reads those constraints and decides how to sequence its work against them at runtime. The knowledge is separated from the execution logic. This is the shift from instruction-driven to objective-driven operation made concrete in a single file convention, and it is already the default interaction model for the most widely deployed coding agents in production today.
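For concreteness, here is what such a file might look like. The contents below are illustrative, not drawn from any real repository; note that nothing in it prescribes an execution order.

```markdown
# AGENTS.md (illustrative example)

## Build and test
- Install dependencies with `npm install`; run tests with `npm test`.
- Every change must pass lint (`npm run lint`) before commit.

## Conventions
- TypeScript strict mode; no `any` in new code.
- Public APIs require doc comments.

## Constraints
- Never modify files under `vendor/`.
- Database migrations require human review before merge.
```

The file states constraints and success criteria; the agent decides at runtime whether to lint before or after writing tests, how many edit-test cycles to run, and when the work is done.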
We Need A Different Model For Control And Oversight
Here’s the other thing. In the old flowchart model, governance was baked into execution. The process definition controlled what could happen, in what order, under which conditions. That worked because the flowchart itself carried the logic of the work. In a more agentic model, that no longer holds. When agents plan at runtime, choose tools dynamically, and adapt their path as they go, control cannot depend on a pre-defined sequence. Governance has to be separated from execution and applied independently through policy enforcement, runtime oversight, and cross-platform visibility. The less the flowchart determines behavior, the more the enterprise needs an independent control plane for agents.
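One way to picture an independent control plane is a policy layer that evaluates every proposed action regardless of how the agent chose to sequence its work. This is a minimal sketch with invented policy names, not a real governance product: the agent plans freely, and each step is approved or blocked against declarative rules that live outside the execution path.

```python
# Governance separated from execution (all names hypothetical):
# policies are declared independently of any workflow definition.
POLICIES = [
    # each policy: (name, predicate over a proposed agent action)
    ("no_prod_writes", lambda a: not (a.get("tool") == "database"
                                      and a.get("env") == "prod")),
    ("spend_limit",    lambda a: a.get("cost_usd", 0) <= 50),
]

def enforce(action: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violated_policy_names) for a proposed action.
    The control plane neither knows nor cares what step came before."""
    violations = [name for name, ok in POLICIES if not ok(action)]
    return (not violations, violations)
```

In the flowchart model, these rules would have been encoded as branches in the process definition; here they apply uniformly to any action any agent proposes, which is what makes the oversight independent of the execution path.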
The design question for enterprise architects is shifting from “how do I wire this process into an AI workflow?” to “which parts of this outcome require determinism, which require adaptive cognition, and what governance spans both?” That is an operating model question.