A cascading supply chain attack did not start with a zero-day exploit, an unpatched vulnerability, or a brute-force attack. It started with a bored employee wanting to get ahead in an online game. A Context.ai employee downloaded a Roblox game cheat, an unofficial script that came bundled with Lumma Stealer malware, exposing corporate credentials and OAuth tokens. Attackers then harvested AWS tokens and other system credentials, allowing them to get into Context.ai customer environments. Because a Vercel employee had signed up for the Context AI Office Suite, attackers were able to pivot into Vercel’s internal systems, retrieve environment variables belonging to Vercel’s customers, and exfiltrate the credentials that were not encrypted.

Why CISOs and CIOs should care: 

  • SaaS proliferation leads to overextension of trust. The Vercel incident reinforces that SaaS adoption has outpaced SaaS security maturity. Organizations no longer operate a bounded infrastructure; they operate an ecosystem of delegated trust. Managing that ecosystem requires shifting from perimeter defense to identity-centric, integration-aware security, where every SaaS connection is treated as a potential supply chain risk.
  • Shadow IT meets shadow AI. When employees connect to AI tools and click “allow all access,” they are making a security decision, but to them, it seems more like making an obvious choice to increase their own productivity. AI can be deployed across multiple parts of your technology stack: endpoint, network, SaaS, identity, applications, and data. Visibility is necessary across each one, and no single “silver bullet” source provides it.
  • Vibe coding and deployment platforms break the shared responsibility model. Vercel’s design required users, developers and nontechnical alike, to manually mark environment variables as “sensitive” for protection, placing the burden on customers to ensure that secure defaults are used.
  • Machine identities are an underestimated business risk. Machine identities, including OAuth tokens used in SaaS integrations, have become a primary target for bad actors. OAuth tokens are especially powerful because they bypass multifactor authentication, are often overscoped, and their misuse is more difficult to detect.
  • Human-element breaches still exist in the AI world. This incident started with the simple act of a user downloading malware-laden software. It’s not clear if the download was to a user-owned device or a corporate-issued device, but the results were the same.

First:

Determine if you are compromised by reviewing the Context.ai and Vercel incident reports and following their recommendations. Vercel customers must audit all projects, prioritizing those with sensitive data, to identify environment variables lacking the “sensitive” flag. Treat credential-containing variables as exposed and rotate them immediately. Vercel now defaults variables to “sensitive,” but users may still uncheck it. Monitor logs for potential data leaks.
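As a rough illustration, the audit step above amounts to a scan over exported project configuration. The JSON shape below is an assumption made for illustration, not Vercel’s actual API schema; the real audit should use the platform’s own tooling.

```python
# Hypothetical sketch: flag environment variables that lack a
# "sensitive" marker in an exported project configuration.
# The record format here is an illustrative assumption.
def find_unprotected(projects):
    """Return (project, variable) pairs not marked sensitive."""
    exposed = []
    for project in projects:
        for var in project.get("env", []):
            # A missing flag is treated the same as an unchecked one.
            if not var.get("sensitive", False):
                exposed.append((project["name"], var["key"]))
    return exposed

projects = [
    {"name": "storefront", "env": [
        {"key": "DATABASE_URL", "sensitive": False},
        {"key": "STRIPE_SECRET_KEY", "sensitive": True},
    ]},
    {"name": "marketing-site", "env": [
        {"key": "ANALYTICS_TOKEN"},  # flag absent: treat as exposed
    ]},
]

print(find_unprotected(projects))
# [('storefront', 'DATABASE_URL'), ('marketing-site', 'ANALYTICS_TOKEN')]
```

Every pair returned by a scan like this should be treated as a credential to rotate, not merely a setting to fix.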

Next, take this opportunity to:

  • Revisit the scope and meaning of “least privilege.” In this context, there are two distinct issues: 1) the ability of the Vercel employee to modify permissions at all and 2) the ability to assign “allow all” permissions to shadow IT/AI. The Zero Trust principle of continuous monitoring includes discovery of an overprivileged consumer service, while default-deny would have prevented a user from delegating the privileges in the first place. Constraining delegation and auditing relationship-based access control (ReBAC) settings will become increasingly important for AI agents.
  • Secure endpoints with a deny-by-default approach. On any endpoint used for work, take a positive security approach — if it’s not explicitly allowed to execute, it’s denied. While EDR/XDR solutions can stop a wide range of threats, those systems don’t have to get involved if users can’t run random files they downloaded. Start with least-privilege access as a default position and adjust only as use cases demand it.
  • Prioritize safekeeping of your secrets. Utilize commercially available secrets managers to store API keys, secrets, and other long-lived credentials, injecting them at runtime. Implement a regular rotation schedule for all API keys and other secrets used in third-party integrations to reduce the window of exposure.
  • Keep credentials short-lived and constrained. Where possible, move to short-lived credentials and sender-constrained access tokens. To detect rogue AI-tool grants and reduce the risk of lateral movement by attackers, implement ITDR capabilities that provide real-time visibility into OAuth grants, privilege escalations, and anomalous usage patterns. Use capabilities within SaaS security posture management (SSPM), identity management and governance (IMG), and privileged identity management (PIM) solutions to help discover and maintain visibility to machine identities.
  • Avoid the “freemium fallacy” of third-party risk. Free of cost does not mean free of risk. Stop treating “vendors we pay” as your universe of risk. At regular intervals, inventory every external app with OAuth, SSO, or API access to employee data, customer data, intellectual property, or internal systems, and tier them by access — not invoice size — ensuring that they are in scope for your third-party risk management (TPRM) program. Aggressively block or tightly govern self‑service and “just trying it” tools.
  • Review software supply chain security practices. This double software supply chain attack highlights risks beyond direct or contractual suppliers. Maintain a complete software inventory, including open-source components, third-party tools, purchased apps, and deployment tools. Ensure a detailed software bill of materials (SBOM) for each application to monitor security vulnerabilities, license changes, and operational risks.
  • Implement product security for vibe coding. These platforms grow quickly, adding new features and capabilities that software developers, citizen developers, and everyone in between will want to take advantage of, with or without the security team’s knowledge. Purchase and manage these platforms to ensure secure software development and secure-by-design practices. When connecting code repositories to these platforms, ensure that security tests and scanners still run on the code, and automate remediation or deploy fixes for critical issues. Ask what safeguards are in place to protect applications from prompt injection, sensitive information disclosure, and improper output handling.
  • Factor in human risk when securing … well, everything. The embrace of AI may accelerate the ability of individual users to cause harm through accidental, negligent, or malicious actions. Review user-created agents, as well as those the company publishes and maintains, to understand their intent and check that appropriate guardrails are in place before allowing their use.
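The deny-by-default endpoint posture recommended above reduces to allowlist logic: unknown binaries are blocked unless explicitly approved. Real endpoint controls enforce this at the operating-system layer; the sketch below, with a hypothetical allowlist and helper, only illustrates the decision.

```python
import hashlib

# Minimal sketch of a deny-by-default execution policy: a file may run
# only if the SHA-256 digest of its contents is on an explicit allowlist.
# The allowlist entries here are illustrative, not a real policy.
ALLOWLIST = {
    hashlib.sha256(b"approved-binary-v1").hexdigest(): "corp-tool",
}

def may_execute(file_bytes: bytes) -> bool:
    """Default deny: any hash not on the allowlist is blocked."""
    return hashlib.sha256(file_bytes).hexdigest() in ALLOWLIST

print(may_execute(b"approved-binary-v1"))  # True: explicitly approved
print(may_execute(b"game-cheat.exe"))      # False: denied by default
```

The point of the pattern is that the malware-laden game cheat never gets a chance to run, so EDR/XDR never has to win a detection race.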

Connect With Us

Forrester clients with questions related to this can connect with us through an inquiry or guidance session.