AI red teaming is a new type of security engagement that combines tried-and-true offensive cybersecurity practices with new forms of testing that evaluate genAI-enabled applications for bias, harm, toxicity, and reputational damage. This work demands new testing methodologies, changes how results are delivered, and varies considerably depending on the provider selected. CISOs can use this report to choose the testing provider best suited to assessing the security posture of their organization's AI-enabled applications.