AI Vendor Threat Research And Cybersecurity’s Cynicism Problem
For years, the security community decried the lack of transparency in public breach disclosure and communication. But when AI vendors break with old norms and publish how attackers exploit their platforms, that same community’s reaction is split. Some are treating this intelligence as a learning opportunity. Others are dismissing it as marketing noise. Unfortunately, some security pros have existed too long in the universe of The Blob.
You can’t necessarily blame security practitioners for their reaction. Cybersecurity vendors are anything but transparent, only revealing their own breaches when they are forced to and rarely discussing the kinds of attacks adversaries launch against them. Plenty of calls for information sharing happen, but getting details seems to require NDAs from customers and prospects.
Cynicism Became A Core Cybersecurity Skill Along The Way
Let’s be clear: The cynicism is not harmless. It creates blind spots. Security teams that dismiss vendor disclosures as hype can miss valuable insights. Cynical attitudes lead to complacency, leaving organizations unprepared. Every practitioner expects adversaries to use generative AI, AI agents, and agentic architectures to launch autonomous attacks at some point. Anthropic’s recent report reveals how close that day is. And there’s value in knowing that. We are closer to a fully autonomous attack today than yesterday. It’s not speculation, because we have evidence that early attempts exist — evidence we wouldn’t have otherwise, because only the LLM providers have that visibility. These releases also taught us that attackers:
- Bolt AI onto old, proven playbooks. Vendor reports show that adversaries use AI to accelerate traditional tactics such as phishing, malware development, and influence operations rather than inventing new attack classes. As always, cybersecurity pays too much attention to “novel attacks” and “zero days” and not enough attention to the fact that those are rarely necessary for successful breaches. The use of common social engineering tactics like authority, novelty, and urgency are often good enough.
- Use scale and speed to change the game. AI amplifies attack velocity, enabling adversaries to produce malware, scripts, and multilingual phishing campaigns much faster than before. AI makes adversaries more productive — just like it makes employees more productive. And yes, we can also all take comfort in the fact that somewhere a sophisticated adversary is sludging through mountains of AI workslop generated by a low-effort colleague just like the rest of us.
- Are keenly aware of product security problems. One only needs to review recent updates to cybersecurity vendor support portals to see that we have a bit of a “cobbler’s children” problem with cybersecurity vendors and product security flaws. The AI vendors also have product security problems, and not only are these vendors aware of them; they are actively attempting to address them. Self-disclosure of product security issues should stand out as a breath of fresh air for cybersecurity practitioners in an industry where it seems to take a government action for a vendor to admit that it has yet another security flaw that puts customers at risk.
Effective But Not Entirely Altruistic
AI vendors do not release details as to how adversaries subvert their platforms and tools solely because they have an unwavering commitment to transparency. It is marketing, and we can’t forget that. Trust is one major inhibitor of enterprise AI adoption. These releases are designed to show that the vendors: 1) detected; 2) intervened; 3) stopped the activity; and 4) implemented guardrails to prevent it in the future. To gain trust, the AI vendors have turned to transparency, and they deserve some credit for that, even if (some of) their motives are self-serving.
But these AI vendors also act as a forcing function to bring more transparency to cybersecurity. AI providers such as OpenAI and Anthropic are not cybersecurity vendors. Yet when they release a report like this, some act as though it should be written to the same specifications of the top security vendors in the world, especially when compared with the likes of Microsoft, Alphabet, and AWS. These vendors are contributing to cybersecurity information sharing and the community in impactful ways.
AI vendors shifting from secrecy to structured disclosure by publishing detailed reports on adversarial misuse put pressure on other providers to do the same. Anthropic’s Claude case and OpenAI’s “Disrupting malicious uses of AI” series exemplify this trend, signaling that transparency is now a baseline expectation for responsible AI providers. Additional benefits for providers include:
- Demystifying AI risks for the public. In an era of “black box” AI concerns, companies that pull back the curtain on incidents can differentiate themselves as transparent, responsible partners. This builds brand reputation and can be a market advantage as trust and assurance become part of the product value.
- Showing the ability to proactively self-regulate. By voluntarily reporting abuse and enforcing strict usage policies, companies demonstrate self-regulation in line with policymakers’ goals. It highlights that transparency being fundamental to trust is not just a security talking point; it is an actual requirement. This extends beyond adversary use (or misuse) of AI into other policy domains such as economics. Anthropic’s “Preparing for AI’s economic impact: exploring policy responses” and OpenAI’s Economic Blueprint offer extensive policy positions on how to handle the economic impact of AI.
- Encouraging collective defense. When OpenAI publishes information about how scammers used ChatGPT for phishing and Anthropic details an attack analysis of AI agents with minimal “human in the loop” involvement, it creates a “whole of industry” approach that echoes classic threat intel sharing (such as ISAC alerts) now applied to AI.
Public Disclosures From AI Vendors Are More Than Cautionary Tales
Vendors sharing details of adversarial misuse hand security leaders actionable intelligence to improve governance, detection, and response. Yet too many organizations treat these reports as background noise rather than strategic assets. Use them to:
- Educate boards and executives. Boards and the C-suite will love hearing about these types of attacks from you. AI isn’t just something that we all can’t get enough of talking about (while simultaneously being tired of talking about it). Use these disclosures as ammo for your strategic planning to get more budget, defend headcount, and showcase securing AI deployments: “Here’s what Anthropic, Cursor, and Microsoft have to deal with. We need security controls, too. And by the way, these regulatory bodies require them.”
- Adopt AEGIS framework principles for AI security. Apply guardrails such as least agency, continuous monitoring, and integrity checks to AI deployments. Vendor case studies validate why these controls matter and how they prevent escalation of misuse.
- Run AI-specific red team exercises. Test defenses against prompt injection, agentic misuse, and API abuse scenarios highlighted in vendor reports. AI red teaming uncovers gaps before attackers do and prepares teams for real-world AI threats.
The cybersecurity community came by its cynicism honestly. But it might be time to trade in that C-word for another — like curiosity — and capitalize on the candor of AI vendors to further enterprise and product security programs.
Forrester clients who want to continue this discussion or dive into Forrester’s wide range of AI research can set up a guidance session or inquiry with us.