A2A And MCP: What They Are

The emerging agentic AI market is experiencing its infrastructure inflection point. Enterprise builders are already exhausted by the prospect of hard-coding every tool and data source an agent needs to use, an approach that yields fragile systems that are difficult to secure and to keep flexible. Today, we are seeing communication and interoperability standards emerge at two foundational layers: intra-agent with the Model Context Protocol (MCP) and inter-agent with agent-to-agent (A2A) protocols.

MCP controls how agents manage and share structured memory, task state, and environmental assumptions across sessions and models. A2A protocols specify the rules for inter-agent communication, including negotiation, delegation, and task synchronization. Though MCP and A2A can enable enterprise agent interoperability, they also create new vulnerabilities and challenges in security, performance, and governance.
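
To make that concrete: Under the hood, MCP is built on JSON-RPC 2.0. Below is a minimal sketch of the message shape for invoking a tool, with field names following the public MCP spec at the time of writing; the tool name and arguments are hypothetical.

```python
import json

# Illustrative only: the shape of an MCP "tools/call" request (JSON-RPC 2.0).
# The tool name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_customer_records",  # hypothetical tool on an MCP server
        "arguments": {"query": "acme corp", "limit": 5},
    },
}

print(json.dumps(request, indent=2))
```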

What They Aren’t

Knowing what A2A and MCP are is important, but it’s just as important to clarify what these protocols aren’t. Some security pros have misinterpreted each of these protocols to be:

    1. A control plane
    2. A policy engine

These protocols don’t orchestrate agents; they enable interoperation. Think of A2A like RPC or Kafka in a microservices architecture as a transport and serialization layer, not a scheduler or a source of truth.
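
To ground the analogy, consider what A2A itself actually specifies: discovery metadata and message exchange, not scheduling. Below is a minimal sketch of an A2A “Agent Card,” the JSON document an agent serves (conventionally at /.well-known/agent.json) so peers can find and call it. Field names follow the public A2A spec at the time of writing and may evolve; all values are hypothetical.

```python
# Illustrative only: a minimal A2A Agent Card expressed as a Python dict.
# Peers fetch this to discover what the agent can do and where to reach it.
agent_card = {
    "name": "invoice-reconciler",
    "description": "Matches invoices against purchase orders",
    "url": "https://agents.example.com/invoice-reconciler",  # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {
            "id": "reconcile",
            "name": "Reconcile invoices",
            "description": "Match an invoice to its purchase order",
        }
    ],
}
```

Note what’s absent: nothing here schedules work, resolves conflicts, or enforces policy. That responsibility sits elsewhere.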

Similarly, MCP isn’t a governance layer. It’s more like a distributed cache or a shared memory abstraction, akin to how systems such as Apache Ignite or Memcached provide fast, ephemeral access to state but don’t enforce business logic or access policy.

If you treat MCP like a control plane, you’ll end up with brittle coupling and security blind spots. A common joke is that the “S” in MCP stands for security (there is no “S”). Hat tip to our colleague Carlos Casanova for the title, based on his comment that MCP should stand for “many critical problems.”

The real control plane for agents (when one exists) will likely emerge as a higher-order construct. It will be layered on top of these protocols, with its own lifecycle, observability, and trust models.

As Always, Security Forces Trade-Offs

Security is never free. It taxes performance, flexibility, and (sometimes) reliability. The same is true for agentic architectures. Modifying a large language model’s prompts to meet a new security standard might result in significantly higher token use. In A2A systems, introducing authentication and authorization mirrors adding TLS to microservices: You gain confidentiality and trust at the expense of latency and certificate-management overhead.
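
As a deliberately simplified illustration of that trade-off, here is a sketch of an inter-agent call with an optional authorization step. The peer endpoint is hypothetical, and the token would come from whatever identity provider the deployment uses.

```python
import requests  # third-party HTTP client: pip install requests

# A minimal sketch of the trade-off, not a definitive implementation.
PEER_AGENT_URL = "https://peer-agent.example.com/tasks"  # hypothetical A2A peer

def call_peer(task: dict, token: str | None = None) -> requests.Response:
    headers = {"Content-Type": "application/json"}
    if token is not None:
        # The security tax: acquiring, rotating, and validating this token
        # adds latency and secret-management overhead, exactly like adding
        # TLS and auth to microservice calls.
        headers["Authorization"] = f"Bearer {token}"
    return requests.post(PEER_AGENT_URL, json=task, headers=headers, timeout=10)
```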

MCP faces similar constraints. Imagine it as a distributed cache or shared state layer used by agents to store and retrieve context. If that context must be signed, versioned, and verified for integrity, then suddenly this resembles a blockchain-light architecture. You gain tamper resistance, but you pay in throughput and latency. Stale or poisoned context can propagate errors across the agent mesh unless strong validation and rollback mechanisms exist.
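
A minimal sketch of what signed, versioned, verified context could look like, assuming agents share an HMAC key (an asymmetric scheme would avoid key sharing, at further performance cost). Field names are hypothetical.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key-rotate-me"  # in practice, pulled from a secrets manager

def sign_context(entry: dict, version: int) -> dict:
    """Wrap a context entry with a version, timestamp, and HMAC tag."""
    record = {"version": version, "issued_at": time.time(), "entry": entry}
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_context(record: dict, max_age_s: float = 300.0) -> bool:
    """Reject tampered or stale context before an agent acts on it."""
    unsigned = {k: v for k, v in record.items() if k != "tag"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(record.get("tag", ""), expected):
        return False  # integrity failure: possible poisoning
    return (time.time() - record["issued_at"]) <= max_age_s  # staleness check

record = sign_context({"customer": "acme", "risk_tier": "low"}, version=3)
assert verify_context(record)
record["entry"]["risk_tier"] = "high"  # simulate context poisoning
assert not verify_context(record)
```

Note the cost: Every write now pays for signing, and every read pays for serialization and verification. That is the throughput and latency tax described above.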

In scenarios where two agents operate within separate execution environments and collaborate on a task without a shared trust anchor or federated identity, they typically need to: 1) negotiate credentials; 2) validate scopes; and 3) establish secure channels. This process is similar to service mesh architectures such as Istio, in which mutual TLS (mTLS) secures communication between pods but introduces additional complexity for routing, observability, and debugging.
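
For the secure-channel step specifically, a minimal sketch using Python’s standard-library ssl module shows what mutual authentication involves. The certificate paths are hypothetical, and in a mesh such as Istio, a sidecar would handle this transparently.

```python
import ssl

# A minimal sketch of mutual TLS between two agents, not a hardened config.
# File paths are hypothetical placeholders for certificates issued by a
# shared certificate authority.
def mtls_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca.pem")
    # Present our own certificate so the peer can authenticate us, too.
    ctx.load_cert_chain(certfile="agent-a.pem", keyfile="agent-a.key")
    return ctx

def mtls_server_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile="ca.pem")
    ctx.load_cert_chain(certfile="agent-b.pem", keyfile="agent-b.key")
    ctx.verify_mode = ssl.CERT_REQUIRED  # refuse unauthenticated peers
    return ctx
```

Every certificate here must be issued, distributed, rotated, and revoked, which is exactly the operational overhead the mesh analogy predicts.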

MCP Security Flaws Identified

The Model Context Protocol is rapidly becoming a standard and critical layer in agentic systems, but it’s also emerging as a surface for exploitation. Several recently discovered CVEs showcase this. In addition, Trend Micro discovered 492 MCP servers exposed to the internet, and Knostic AI found more than 1,800, reminding security leaders of the unsecured AWS S3 buckets of the not-so-distant past.

Because MCP governs how agents share and retrieve context, it becomes a prime target for context poisoning, impersonation, and unauthorized inference. If an agent can inject misleading or malicious context into shared memory, it can manipulate downstream behavior, much as poisoned DNS entries or corrupted configuration maps can destabilize distributed systems.

Worse, many current MCP implementations lack strong guarantees around context provenance. Without cryptographic signatures or verifiable lineage, agents have no way to determine whether a piece of context is authentic, recent, or relevant. This is the equivalent of a distributed system relying on unsigned messages in a gossip protocol: fast but far too trusting.

And because MCP often operates beneath the application layer, these flaws are hard to detect and even harder to remediate. There’s no: 1) centralized audit trail; 2) rollback mechanism; or 3) standard for revocation. In effect, we’re building shared memory for autonomous systems without the isolation or integrity guarantees we take for granted in container orchestration or distributed databases.
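
To illustrate the first missing piece, here is a minimal sketch of a tamper-evident, hash-chained audit trail for context operations. This is illustrative only; nothing like it is standardized in MCP today.

```python
import hashlib
import json
import time

# Each entry commits to the hash of the previous one, so deleting or
# reordering entries is detectable on verification.
class AuditLog:
    def __init__(self):
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, agent_id: str, action: str, detail: dict) -> None:
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "detail": detail,
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False  # chain broken: entries altered or removed
            prev = e["hash"]
        return True
```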

Static Security Models Don’t Fit The Needs Of Ephemeral Autonomous Agents

Securing agentic systems will require a redesign of: 1) trust; 2) identity; and 3) control. It requires dynamic trust that enables temporary, scoped identities; context-aware permissions; and cryptographically verifiable provenance. Some potential approaches include:

    • Agentic AI should use just-in-time credentials with clear constraints on use, duration, and scope that are easy to issue and revoke and are fully auditable (see the sketch after this list).
    • Agentic AI should use root-cause analysis across agentic supply chains, including distributed tracing on actions, decisions, and reasoning.
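
As a sketch of the first bullet, consider a hypothetical credential broker that issues scoped, short-lived tokens and records every issuance, check, and revocation. All names and structures here are illustrative simplifications.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass
class Credential:
    token: str
    agent_id: str
    scopes: frozenset[str]
    expires_at: float

class CredentialBroker:
    """Hypothetical just-in-time credential broker: scoped, short-lived,
    revocable, and fully auditable."""

    def __init__(self):
        self._active: dict[str, Credential] = {}
        self.audit: list[tuple[float, str, str]] = []  # (timestamp, event, token)

    def issue(self, agent_id: str, scopes: set[str], ttl_s: float = 60.0) -> Credential:
        cred = Credential(secrets.token_urlsafe(32), agent_id,
                          frozenset(scopes), time.time() + ttl_s)
        self._active[cred.token] = cred
        self.audit.append((time.time(), "issue", cred.token))
        return cred

    def revoke(self, token: str) -> None:
        self._active.pop(token, None)
        self.audit.append((time.time(), "revoke", token))

    def authorize(self, token: str, scope: str) -> bool:
        cred = self._active.get(token)
        ok = (cred is not None
              and scope in cred.scopes
              and time.time() < cred.expires_at)  # expiry enforced on every check
        self.audit.append((time.time(), f"check:{scope}:{'ok' if ok else 'deny'}", token))
        return ok
```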

In Agentic Systems, Failure Isn’t A Crash … It’s A Cascade

One agent misinterprets context, another acts on flawed assumptions, and a third amplifies the error. By the time a human notices, the trail is cold. That’s why we need a new kind of root-cause analysis that’s designed for autonomous, distributed decisions.

The system must include full traceability for every agent interaction, not just the what but the why and how. Each decision could carry a cryptographic breadcrumb — a signed reference to the context it used, the agent that provided it, and the logic path it followed.
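
A minimal sketch of such a breadcrumb, assuming each agent holds an Ed25519 keypair (using the third-party cryptography package); field names are hypothetical.

```python
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each decision commits to the context it used, the agent that made it, and
# the logic path it followed, then signs the whole record.
signing_key = Ed25519PrivateKey.generate()

def breadcrumb(agent_id: str, context: dict, logic_path: list[str]) -> dict:
    crumb = {
        "agent": agent_id,
        "context_hash": hashlib.sha256(
            json.dumps(context, sort_keys=True).encode()
        ).hexdigest(),
        "logic_path": logic_path,
    }
    payload = json.dumps(crumb, sort_keys=True).encode()
    crumb["sig"] = signing_key.sign(payload).hex()
    return crumb

def verify_breadcrumb(crumb: dict, public_key) -> bool:
    body = {k: v for k, v in crumb.items() if k != "sig"}
    try:
        public_key.verify(bytes.fromhex(crumb["sig"]),
                          json.dumps(body, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False  # breadcrumb forged or altered after the fact

crumb = breadcrumb("agent-7", {"risk_tier": "low"}, ["fetch", "score", "approve"])
assert verify_breadcrumb(crumb, signing_key.public_key())
```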

The Gold Rush In Securing Agents And Agentic AI: Picks And Shovels

Every emerging technology has its infrastructure moment. For the cloud era, it was containers, observability, and continuous integration/continuous delivery (CI/CD) pipelines. For the current AI era, it’s GPUs, vector databases, and fine-tuning frameworks. For agentic systems, the next frontier isn’t just smarter agents; it’s the tooling that makes them secure, testable, and trustworthy.

This is the “picks and shovels” phase of the agent economy. The real opportunity lies in building the scaffolding: agent debuggers, context validators, permission brokers, simulation environments, and trust observability layers.

Performance And Capability: Why Testing Comes First

To understand agentic systems, we have to test and trust them (and their supply chains). And that testing must mirror the characteristics of agentic systems themselves: It must be 1) relentless; 2) systematic; and 3) scalable.

Examples of testing include (a sketch of two of these follows the list):

    1. Benchmarking context fidelity
    2. Measuring decision latency
    3. Stress-testing permission boundaries
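
As a sketch of the second and third of these, assuming a hypothetical agent object with decide() and act() methods; a real harness would run continuously against staging agents and their supply chains.

```python
import time

def test_decision_latency(agent, context: dict, budget_s: float = 2.0):
    """Measure decision latency against an explicit budget."""
    start = time.perf_counter()
    agent.decide(context)
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_s, f"decision took {elapsed:.2f}s, budget {budget_s}s"

def test_permission_boundary(agent):
    """Stress the boundary: every out-of-scope action must be refused,
    not silently downgraded or partially executed."""
    for action in ("delete_records", "escalate_privileges", "exfiltrate"):
        try:
            agent.act(action)
            refused = False
        except PermissionError:
            refused = True
        assert refused, f"agent performed out-of-scope action: {action}"
```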

Making Good Choices Now Sets Us Up For The Future

We’re standing at the edge of a new hybrid human-machine computing paradigm. Agents will do more than execute code. They will make decisions, collaborate, and evolve. The protocols, tests, and security measures we design today will shape how these agents interact, how they’re trusted, and how they’re held accountable.

With that in mind, we need to make trust a first-class primitive for AI agents and agentic AI.

Let’s Connect

Forrester clients who have questions on implementing or securing AI agents and agentic AI can request an inquiry or guidance session with either of us.

See Jeff and Rowan at Technology & Innovation Summit North America on November 2–5 and at Security & Risk Summit on November 5–7, both being held in Austin, Texas.