Vision Report

Use AI Red Teaming To Evaluate The Security Posture Of AI-Enabled Applications

An AI Red Team Engagement Playbook For CISOs

and three contributors
Sep 30, 2025

Summary

AI red teaming is a new security engagement that blends tried-and-true offensive cybersecurity practices with new types of testing that evaluate genAI-enabled applications for bias, harm, toxicity, and reputational damage. This testing requires new methodologies, changes how results are delivered, and can vary considerably depending on the provider. CISOs can use this report to select the testing provider best suited to evaluating the security posture of their organization's AI-enabled applications.
