Proactive security has always rested on three principles: visibility, prioritization, and remediation. In the age of AI, each principle faces new challenges. In our latest research, The Future of Proactive Security, we found that the future of proactive security hinges on how well teams answer six foundational questions across those principles: what, when, where, why, how, and who. Because AI accelerates our ability to answer questions, this is our biggest opportunity to modernize proactive security programs, but first we need to align them with the subjective perspectives of different stakeholders across the business.

The Six Questions Every Proactive Security Program Must Answer 

To trust AI — and to scale proactive security — teams must map the six foundational questions to the three principles: 

  • Visibility: What do we have? When does it matter? 
  • Prioritization: Where could an attacker move? Why should this exposure be fixed? 
  • Remediation: How do we fix it? Who fixes it? 

None of this works without accurately documented context; its absence will be the primary cause of failed proactive security programs. Modern prioritization methods, such as attack path assessments and continuous security testing, improve estimates of the likelihood of an event, but the impact of an exposure is still subjective: it changes depending on which person, or AI, you ask. AI needs machine-readable context solicited from the human beings in the business to analyze the impact of an exploit. That context is required to improve both prioritization (the impact if the exploit is executed by an adversary) and remediation (the impact of a change or automation gone wrong). Many organizations still don't have it consistently documented, or even known. Yet it is required to answer:

  • What and When for Visibility. Visibility is shifting from static asset lists to the signals that make up environments: endpoints, cloud platforms, identities, configurations, detections, and open-source intelligence. Threat intelligence feeds such as KEVs shape urgency, while continuous, AI-led vulnerability discovery risks accelerating noise. Faster discovery only heightens the need for better prioritization.
  • Where and Why for Prioritization. Proactive security platforms now blend attack surface management, threat intelligence, risk scoring, and attack path analysis to show where an exposure is and why it matters. Continuous security testing validates whether exposures are truly reachable and exploitable. But business context, which still lives in tags, spreadsheets, and tribal knowledge, must become machine-readable for AI to model realistic consequences.
  • How and Who for Remediation. Proactive security platforms must evolve from providing lists of what's wrong to providing lists of what to do. Granular steps are a prerequisite for answering one of the most difficult questions in proactive security: who needs to fix it. Remediation actions are owned by different team members across engineering, development, DevOps, and cloud teams, which is why relying on tagged owners in a CMDB has proven ineffective. Even as organizations automate remediation, the who is still required, because someone needs to approve and monitor the automation.
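The points above hinge on business context becoming machine-readable. As a minimal sketch only, not any vendor's schema, here is one way human-supplied impact, tested reachability, and ownership might combine into a record an AI agent could rank and route; all field names and weights are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class ExposureContext:
    # Illustrative, hypothetical fields for machine-readable business context.
    asset: str
    owner_team: str            # who fixes it (routing for remediation)
    business_impact: int       # 1 (low) to 5 (critical), supplied by stakeholders
    reachable: bool            # validated by continuous security testing
    exploit_likelihood: float  # 0.0-1.0, e.g. from attack path analysis

    def priority(self) -> float:
        # Exposures proven unreachable drop to the bottom of the queue.
        if not self.reachable:
            return 0.0
        return self.business_impact * self.exploit_likelihood

exposures = [
    ExposureContext("billing-db", "platform-team", 5, True, 0.6),
    ExposureContext("dev-sandbox", "dev-team", 1, True, 0.9),
    ExposureContext("legacy-vm", "cloud-team", 4, False, 0.8),
]

# Rank by priority and surface the owning team alongside each exposure,
# pairing the "why fix it" with the "who fixes it."
ranked = sorted(exposures, key=lambda e: e.priority(), reverse=True)
for e in ranked:
    print(f"{e.asset}: priority={e.priority():.1f}, route to {e.owner_team}")
```

The point of the sketch is the structure, not the scoring: once impact and ownership live in a record like this rather than in spreadsheets or tribal knowledge, both a human and an AI agent can answer the same where, why, and who questions consistently.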

AI won’t fix proactive security on its own. It will amplify the good and bad foundations you already have. To modernize securely, teams must strengthen how they answer the six questions across visibility, prioritization, and remediation, and ensure context is documented and readable for AI agents.   

Forrester clients can view our full Future of Proactive Security report here and schedule a guidance session with me to discuss these trends and your program further.