CISOs Have Plenty Of Work To Do In An AI-Driven Future
As AI becomes more deeply embedded in the processes that support employees and customers, it won’t fail quietly. It will fail at speed, at scale, and with consequences the business must own. And as autonomous systems begin making decisions that affect revenue, organizations can no longer settle for “secure enough.” They must prove that AI outcomes are correct, explainable, and protected from corruption. That proof must be engineered in from the start.
This reality is redefining the role of the CISO from protector of systems to provider of trust and assurance. Our new report, The AI CISO, describes the drivers of this shift, including:
- AI creates outcomes that must be trusted, not just systems that must be protected. As AI agents act independently across the enterprise, traditional security controls and policy enforcement cannot scale to oversee hundreds of autonomous agents, and they aren't designed to evaluate whether decisions reflect the right intent and purpose.
- Agentic sprawl is overwhelming human oversight. In Forrester’s Q4 2025 AI Pulse Survey, 56% of generative AI decision-makers called agentic sprawl a current challenge for their organization. It will become an even bigger challenge in the future as employees and third parties deploy even more AI agents across functions, systems, and supply chains.
- Regulation and accountability are converging. When autonomous agents cause incidents, breaches, or financial harm, organizations must be able to prove documented guardrails, continuous assurance, and auditable behavior, including across third‑party AI supply chains. In many enterprises, that accountability will land squarely with the CISO, making trust and assurance not just a capability gap but a personal risk.
What CISOs Need To Do Right Now
CISOs cannot outrun this shift by changing jobs or waiting to get started. The strongest leaders are moving forward now, even while the operating model is still evolving. Some of the actions that CISOs must take today include:
- Mapping how their business actually delivers value today. CISOs should understand which customer and employee services truly matter and how they are delivered end to end through technology. Business continuity and operational resilience programs are a great starting point. Without clarity into how services flow across systems, CISOs cannot design the guardrails that future AI agents will require or assess the scope of autonomous decisions.
- Defining the future security org and training for it immediately. CISOs must clearly articulate what a trust and assurance function looks like, which roles will evolve or disappear, and what new skills are required. Start reskilling now through AI experimentation so that fear and inertia don't slow the organization down. Our report highlights the new roles to staff for, the skills your security team can de-emphasize, and the capabilities it needs to ramp up on.
- Leading the transition by example. CISOs who personally use AI to automate reporting, analysis, and decision support build the instincts needed to govern AI at scale. Firsthand experience helps leaders understand where automation adds value, where it fails, and what must be controlled. Using AI to reduce personal drudgery also creates capacity to focus on the higher‑order work of trust, assurance, and organizational redesign that must be done sooner rather than later.
Learn more about what's coming for the CISO, including new security team roles and org structures, what AI expects of the role today, and how those expectations will evolve. Read the full report, The AI CISO, and schedule a guidance session with us.