Claude Code Security Causes A SaaS-pocalypse In Cybersecurity
We have seen this pattern before, even if the specifics look different. Think back to the day AWS introduced GuardDuty, when Microsoft folded Defender for Endpoint into its enterprise licensing commitments and launched Microsoft Sentinel, or when Google acquired Mandiant and eventually Wiz. Sure, the launch of fully autonomous AI agents that can ingest entire enterprise codebases and surface hundreds of previously unknown flaws in a single sweep feels novel, yet the strategy is familiar. AI companies are vying to prove they can collapse that disruption window from years to months by using their own innovations.
Forrester warned years ago that hyperscalers were not building security features to complement the market but were instead building to replace it. The model was simple: Bundle the capability into spend that the enterprise was already committed to, neutralize renewal cycles, and force every incumbent to defend pricing that no longer matched value. Our security platform research — which Jess Burn and Jeff Pollard will feature in a talk at RSAC 2026 — shows that ease of integration and productivity gains from automation are key drivers of security platform consolidation efforts. In addition, a top trend in our forthcoming application security trends report is AI coding agent providers offering application security tools to identify vulnerabilities that their own assistants and agents could have written. Anthropic’s February 20 announcement is the next iteration of the same playbook.
A Market Shock That Is Both Accurate And Overreaching At The Same Time
The market’s reaction to the Claude Code Security launch was fast and blunt. JFrog dropped nearly a quarter of its value in a single session. Okta, CrowdStrike, and Zscaler lost meaningful ground. The Global X Cybersecurity ETF closed at its lowest point in over two years. The 24% hit to JFrog was the clearest signal because its value proposition depends on specialized software supply chain controls that AI agents now directly threaten.
| Company | Ticker | Open ($) | Close ($) | Drop ($) | Drop (%) |
|---|---|---|---|---|---|
| JFrog | FROG | $50.07 | $37.75 | -$12.32 | -24.61% |
| GitLab | GTLB | $28.91 | $26.39 | -$2.52 | -8.72% |
| Okta | OKTA | $81.05 | $74.29 | -$6.76 | -8.34% |
| CrowdStrike | CRWD | $419.28 | $388.60 | -$30.68 | -7.32% |
| Cloudflare | NET | $190.61 | $177.14 | -$13.47 | -7.07% |
| Zscaler | ZS | $167.00 | $159.75 | -$7.25 | -4.34% |
| Palo Alto Networks | PANW | $150.36 | $148.70 | -$1.66 | -1.10% |
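The declines in the table follow directly from the open and close prices. As a quick arithmetic sanity check (prices copied from the table above; nothing else is assumed):

```python
# Sanity-check the drop figures in the table above.
# (open, close) prices copied from the table; drop % = (close - open) / open.
prices = {
    "FROG": (50.07, 37.75),
    "GTLB": (28.91, 26.39),
    "OKTA": (81.05, 74.29),
    "CRWD": (419.28, 388.60),
    "NET": (190.61, 177.14),
    "ZS": (167.00, 159.75),
    "PANW": (150.36, 148.70),
}

for ticker, (open_px, close_px) in prices.items():
    drop_pct = (close_px - open_px) / open_px * 100
    print(f"{ticker}: {close_px - open_px:+.2f} ({drop_pct:+.2f}%)")
```

Running this reproduces each row, including JFrog’s -24.61% single-session drop.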
The read across to identity, runtime detection, and network security was less accurate. CrowdStrike does not analyze code. Okta does not repair injection flaws. SailPoint does not audit dataflows in distributed codebases. Their declines were sentiment contagion, and the market will correct, though those corrections can take quarters, not days.
The category that took real damage is the one most perceived to be dependent on pattern matching: SAST, SCA, and ASPM. These vendors sell structured detection. Frontier models now generate meaning as a native property. Enterprises that once paid six-figure contracts for rules-based scanning will not ignore platforms that include similar or better reasoning as part of a broader subscription they already purchased. We’ve seen this pattern before, as well: CFOs will run that math before CISOs do — and CEOs and board members will start pushing.
How AI Code Security Tools From Google, OpenAI, And Anthropic Rewrite AppSec
Google moved first with CodeMender, a system that blends Gemini reasoning with traditional program analysis techniques.
OpenAI followed with Aardvark, which embeds semantic analysis directly into the CI pipeline.
Anthropic, finally, delivered the most consequential shift by bundling Claude Code Security into an existing licensing path, albeit as a research feature for now, complementing Claude Code Security Reviewer, which runs on pull requests.
When this happens, incumbents do not compete feature to feature. They compete against economics, efficiency, and productivity. The important part of these releases was not benchmark performance or the quantity or quality of fixes submitted to open-source repositories, although those were noteworthy. Instead, it was how these launches collapsed the separation between engineering and security. When systems that write code can also reason about flaws in that code, iterate, and correct inside the same workflows, traditional development and AppSec boundaries will erode.
The Counterpunch Arrived The Same Day: AWS Kiro And Autonomous Agent Risk
Market euphoria for autonomous agents met a hard reality check when the Financial Times reported that Amazon’s internal AI coding tool, Kiro, caused a 13-hour outage by deleting and recreating a production environment. Amazon blamed user error due to excessive permissions, but that explanation falls flat when humans in the loop are the primary preventative control that autonomous agents rely on. The reality is that an autonomous agent made an irreversible choice. The permissions mattered only because the agent made a decision that no reasonable human developer would have made. As discussed in keynotes and tracks at Forrester’s Security & Risk Summit 2025 and in our AEGIS research: Users are predictable. Their willpower is finite. Agents are relentless. Their willpower is infinite.
This is the tension every enterprise will face. Autonomy creates value and risk. Vendors point to guardrails, human-in-the-loop defaults, and authorization workflows. All of that is necessary; none of it is sufficient. If mistakes in permission modeling turn agentic autonomy into production impact, the real risk is not the tool … it is scale. Every seasoned practitioner knows: Permission drift is the baseline condition in every mature environment.
Impact On Application Security And Legacy Security Processes
Security teams cannot wait for the market to stabilize. They must take inventory of their AppSec stack and confront uncomfortable questions. If a tool provides little incremental value beyond what a platform agent can already reason about, its renewal becomes a discretionary decision for the CFO and one that CISOs will find difficult to defend.
When vulnerability discovery, analysis, and remediation are handled by fast, agentic systems, traditional resource constraints begin to collapse — but only for new code that is intentionally AI‑generated, AI‑maintained, and continuously corrected. In those environments, AI agents can identify, fix, test, and deploy changes in near real time, reducing the need for human prioritization and preventing technical debt from accumulating in the first place.
How these tools impact legacy code, COTS, open source, new codebases, and infrastructure will also vary. They were not designed for autonomous refactoring, and most organizations lack the context, confidence, or risk tolerance to allow AI systems to make large‑scale changes without oversight. In these environments, resource constraints persist, technical debt already exists, and risk‑based prioritization remains essential. AI can surface issues and assist with analysis, but it cannot yet autonomously remediate legacy software at scale.
CISOs must collaborate closely with application security professionals to assess how new tools add value and which capabilities they might replace or complement. While Claude Code Security employs reasoning to function more like a human security expert helping to identify vulnerabilities that traditional methods like fuzz testing or SAST scanners may overlook, it is not intended to replace these tools or established DevSecOps best practices — for now.
Furthermore, CISOs should engage with their existing application security vendors to explore how they are integrating large language models into their solutions. That integration augments deterministic scanning by uncovering vulnerabilities that were previously difficult to detect, enhancing overall security capabilities.
SAST and SCA platforms have already shifted away from “find everything” toward prioritizing what actually must be fixed, curating remediation guidance, and generating automated pull requests directly in developer workflows. They increasingly account for the reality that every code change — whether made by a human or an AI agent — can introduce regressions, and they embed controls, validation, and context to manage that risk. Fixing every flaw remains aspirational, but managing which fixes are safe, necessary, and valuable is where AppSec tooling continues to matter.
The future state is not AI replacing SAST but AI amplifying the pressure to stop buying separate application security tools in favor of investing in agentic software development platform bundles that include security and remediation discipline as a feature. As agentic systems accelerate discovery and code change, the value of tools that constrain, validate, and contextualize remediation only increases.
Do we think Anthropic is focused on conquering the AppSec market? No. We think Anthropic views trust in the code it generates as a prerequisite for broader Claude Code adoption and that these releases are designed to address that concern. Security review is asynchronous for now, but as the GitHub Copilot coding agent shows, it can be performed synchronously during code generation.
What CISOs Should Do Now About AI Agent Security Risks
Enterprises must evaluate AI security tools using their full vendor risk frameworks. This includes data residency, code persistence policies, prompt caching behavior, and the reliability of the agent itself. The AEGIS framework already flags unresolved issues around agent trust boundaries and prompt injection exposure, both of which are part of the attack surface today.
Identity, runtime detection, and network security remain essential. The market overreacted in these categories; security leaders should not (barring concerns about their personal investment portfolios). The launch does, however, reinforce the expectation that agents will proliferate, making detection of abnormal machine behavior more important, not less.
Governance work cannot wait. Shadow AI already creates unsanctioned data exposure and untracked code modifications. Enterprises must define who is authorized to run autonomous agents, what audit trails must exist, and which code classes are prohibited from external processing.
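A governance program of this kind can start as a simple, auditable policy artifact. The sketch below is purely illustrative — every field name, team name, and path prefix is an assumption, not a standard schema — but it shows the three decisions the paragraph above calls for: who may run agents, what must be logged, and which code classes never leave the enterprise.

```python
# Illustrative agent-governance policy (all names are hypothetical assumptions,
# not a standard schema). Enforcement would live in CI, an IdP, or a proxy.
POLICY = {
    "authorized_agent_operators": ["platform-eng", "appsec"],
    "audit": {"log_prompts": True, "log_code_diffs": True, "retention_days": 365},
    "prohibited_from_external_processing": ["crypto/", "payments/", "pii_handlers/"],
}

def may_operate_agents(team: str) -> bool:
    """Only explicitly authorized teams may run autonomous agents."""
    return team in POLICY["authorized_agent_operators"]

def may_process_externally(path: str) -> bool:
    """Code classes barred from external AI processing never leave the boundary."""
    return not any(
        path.startswith(prefix)
        for prefix in POLICY["prohibited_from_external_processing"]
    )
```

Keeping the policy machine-readable means the same document that satisfies auditors can gate agent invocations in the pipeline.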
SOC disruption is coming next. The same companies that just entered code security will eventually automate triage and detection.
CISOs must brief their boards, model scenarios, and understand that talent requirements will shift again as agentic systems move into operations. Forrester research already shows that compensation premiums for security talent with AI skills fall between 10% and 30%.
The Bottom Line
February 20, 2026 will be remembered as the day markets finally recognized that AI platforms intend to own the security value chain the same way hyperscalers did before them. They do not need to outperform incumbents. They need only be good enough while bundled into a product the enterprise already pays for. The economics and productivity incentives will handle the rest.
The cybersecurity market will not contract, but value will redistribute. Much as Microsoft and Alphabet became mega security platform plays, the AI titans hope to achieve the same result. Niche vendors will become acquisition targets that expand platforms rather than remain standalone companies for long, with so many vendors vying for spend and relevance.
Forrester clients who want to continue this discussion or dive into Forrester’s wide range of AI research can set up a guidance session or inquiry with us.