Agentic Development Security: Why AppSec Needs A New Operating Model
Application security testing (AST) has reached an inflection point. The market is crowded, capabilities overlap, and detection alone is no longer a source of durable differentiation. DevOps platforms embed security features; cloud-native application protection platform vendors continue to push left; application security posture management specialists offer open-source scanning technologies; and AI frontier labs such as Anthropic and OpenAI experiment with new approaches to code security. The result is a noisy ecosystem where most tools can find issues but far fewer can reliably tell teams which ones matter and how to fix them.
- Detection is becoming commoditized; context is not.
Static application security testing, dynamic application security testing, software composition analysis, secrets scanning, infrastructure-as-code scanning, and container image scanning are table stakes. What separates leaders from laggards is the ability to correlate findings with real-world context: exploitability, reachability, runtime exposure, and business impact. Buyers increasingly expect security tools to identify which vulnerabilities are actually exploitable in production and to produce fixes that developers can trust. This shift explains why prioritization, validation, and remediation are now the battlegrounds of application security.
- LLMs are reshaping how security tools reason about risk.
Large language models excel at correlating disparate data sources (code repositories, dependency heuristics, security scanners, runtime signals, and workflows) into coherent insights. Applied well, this enables fewer false positives, more actionable findings, and remediation that reflects how software is actually built and deployed. New entrants can leverage these strengths to address long-standing criticisms of legacy AST approaches, though they typically do not replicate incumbents' depth or breadth of coverage. The value is no longer in how much you detect but in how well you understand and act on what you detect.
- Software development itself is becoming agentic, generating insecure code at scale.
AI coding assistants, autonomous coding agents, and AI-driven workflows are moving from experimentation to daily use. These systems generate code, select dependencies, modify infrastructure, and execute instructions at machine speed. But AI coding agents commonly ship unauthenticated or improperly authorized endpoints, trust client-supplied data for security-critical decisions (e.g., prices, roles, state), and omit basic controls such as input validation, rate limiting, and server-side checks, resulting in code that works functionally but is exploitable by default. They also frequently reuse insecure patterns (string-built queries, unsafe file handling, eval/exec) because they optimize for correctness and brevity, not risk.
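To make these patterns concrete, here is a hypothetical Python sketch contrasting the kind of code agents commonly produce with a hardened equivalent. The "agent-written" version trusts a client-supplied price and builds SQL by string concatenation; the fixed version validates input, looks the price up server side, and uses a parameterized query. The names (`insecure_checkout`, `PRICES`) are illustrative, not drawn from any real codebase.

```python
import sqlite3

# Authoritative server-side price list (cents). Client input must never
# override these values.
PRICES = {"sku-1": 999}

def insecure_checkout(conn, sku, client_price):
    # Typical agent-generated flaws: trusts the client-supplied price and
    # concatenates untrusted input into SQL (injectable). Shown for
    # contrast only; never call this with real input.
    conn.execute(
        f"INSERT INTO orders (sku, price) VALUES ('{sku}', {client_price})"
    )

def secure_checkout(conn, sku):
    price = PRICES.get(sku)
    if price is None:  # server-side input validation
        raise ValueError(f"unknown sku: {sku}")
    # Parameterized query: user input never reaches the SQL parser as code.
    conn.execute("INSERT INTO orders (sku, price) VALUES (?, ?)", (sku, price))
    return price

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (sku TEXT, price INTEGER)")
print(secure_checkout(conn, "sku-1"))  # price comes from the server, not the client
```

Both functions "work" for a well-behaved client, which is exactly why functional tests alone will not catch the difference.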
Traditional application security (AppSec) models designed for human-paced development and discrete scanning stages are poorly suited to this reality. Securing agentic development requires controls that operate continuously, reason autonomously, and intervene in real time.
Introducing Agentic Development Security (ADS)
ADS is not a single product category or a rebranding of existing tools. It is a new security paradigm focused on protecting AI-powered software development end to end. ADS spans prevention, detection, prioritization, and remediation while providing continuous intelligence across code, dependencies, workflows, and running applications. Crucially, it treats security decisions as autonomous, policy-driven actions, not just alerts handed to overburdened teams.
ADS platforms must identify and mitigate application layer risks unique to AI-driven applications. This includes detecting classes of flaws outlined in the OWASP Top 10 for Large Language Model Applications such as prompt injection, unsafe output handling, excessive agency, and missing controls across both development and runtime contexts. As agentic applications mature, this capability will need to extend beyond single-model interactions to analyze multiagent workflows, tool invocation chains, autonomous decision paths, and policy enforcement gaps. The goal is not just model safety but assurance that AI-powered applications behave predictably, securely, and within intended operational boundaries.
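As a minimal sketch of one such control, the snippet below shows a policy check that sits between an LLM agent's proposed tool call and its execution, addressing the "excessive agency" class of flaws: tools not granted to the agent are refused, and granted tools are rate limited. This is an illustrative assumption, not any vendor's API; the names (`ALLOWED_TOOLS`, `check_tool_call`) are invented for the example.

```python
# Per-agent tool policy: which tools may be invoked, and how often.
ALLOWED_TOOLS = {
    "read_file": {"max_calls": 20},
    "run_tests": {"max_calls": 5},
    # Deliberately absent: "delete_file", "send_email", "shell".
}

call_counts = {}

def check_tool_call(tool, args):
    """Return True only if the agent's proposed call is within policy."""
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        return False  # tool was never granted to this agent
    n = call_counts.get(tool, 0)
    if n >= policy["max_calls"]:
        return False  # rate limit on autonomous actions
    call_counts[tool] = n + 1
    return True

print(check_tool_call("read_file", {"path": "app.py"}))  # True: allowed
print(check_tool_call("shell", {"cmd": "rm -rf /"}))     # False: not granted
```

In a production ADS platform this kind of gate would be policy driven and context aware (who is the agent, what triggered the call, what data flows through it) rather than a static allowlist, but the enforcement point is the same: decide before the action executes, not after.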
Core ADS Capabilities Cluster Around A Few Themes
Rather than isolated tools, ADS platforms combine multiple intelligence and control layers that will continue to evolve:
- AI-driven code and dependency analysis that goes beyond pattern matching to assess exploitability, logic flaws, and real risk in context
- Guardrails for AI-assisted coding that guide agents and developers toward secure outcomes and prevent unsafe instructions from executing
- Intelligent triage and prioritization that continuously ranks findings based on exposure and business impact
- Automated remediation for both code and dependencies, producing validated fixes that preserve functionality
- Dynamic testing of live applications and APIs that adapts to application behavior and modern architectures to detect OWASP Top 10 for LLM Applications flaws
- Policy-driven software development lifecycle quality gates enforced by autonomous agents rather than manual review
- Supply chain and toolchain protection, including AI coding agents, extensions, Model Context Protocol servers, agent skills, pipelines, and artifacts
- Governance, reporting, and risk analytics that provide durable insight over time, not just point-in-time results
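To illustrate the quality-gate theme above, here is a small sketch of a policy-driven gate a pipeline agent could enforce: the merge is blocked when findings exceed policy, with no manual review in the loop. The finding fields (`severity`, `reachable`) and the `POLICY` shape are assumptions for the example, not a specific scanner's schema.

```python
# Gate policy: block on critical/high findings, but only when the
# vulnerable code is actually reachable (prioritization in action).
POLICY = {"block_severities": {"critical", "high"}, "require_reachable": True}

def gate(findings):
    """Return (passed, blocking_findings) for a list of scanner findings."""
    blocking = [
        f for f in findings
        if f["severity"] in POLICY["block_severities"]
        and (f["reachable"] or not POLICY["require_reachable"])
    ]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "CVE-2024-0001", "severity": "critical", "reachable": False},
    {"id": "SQLI-12", "severity": "high", "reachable": True},
]
passed, blocking = gate(findings)
print(passed, [f["id"] for f in blocking])  # False ['SQLI-12']
```

Note the effect of context: the unreachable critical CVE passes the gate while the reachable high-severity finding blocks it, which is the prioritization behavior the bullets above describe.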
Today, no single vendor delivers the full ADS vision.
Some vendors excel at code analysis, others at supply chain analysis, still others at runtime intelligence or governance. What’s missing is a unified operating model that treats security as an autonomous, continuous function aligned to agentic development. This fragmentation is not surprising, since the paradigm is still forming, but it creates both risk and opportunity for buyers and vendors alike.
Forrester will evaluate this emerging space.
Our upcoming agentic development security landscape report and Forrester Wave™ evaluation will identify the vendors pushing the market forward, clarify how capabilities align to this new model, and help security and development leaders understand where today’s tools fall short — and where they lead.
As development becomes agentic, security must do the same. Incremental improvements to legacy AppSec will not be enough. If you’re evaluating how AI coding agents change your application security strategy, building AI-powered applications, or trying to understand which vendors are shaping agentic development security, watch for Forrester’s upcoming ADS landscape and Wave evaluations and reassess whether your current AppSec model is built for an agentic future, or schedule a meeting with me.