The Expanding Universe Of GRC For AI: Key Questions From Technology Leaders
In 1929, astronomer Edwin Hubble discovered something unsettling. The universe isn’t static; it’s expanding everywhere, all at once. His simple equation (Hubble’s law) shows that galaxies are receding from us, and the farther away they are, the faster they recede. Eventually, galaxies become so distant that they cross our observable horizon entirely, forever beyond our ability to see, measure, or explore.
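For the curious, Hubble’s law really is a one-line relation:

$$v = H_0 D$$

where $v$ is a galaxy’s recession velocity, $D$ is its distance from us, and $H_0$ is the Hubble constant. Double the distance, and the recession speed doubles.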
AI governance is following the same law. The further you look into how your organization actually uses AI (e.g., the models, the agents, the autonomous decisions running behind the scenes), the faster the governance, risk, and compliance (GRC) problem accelerates beyond your current frameworks. Static approaches such as policies, committees, and status reviews were never built for a universe that expands this fast. And right now, for many organizations, critical parts of their AI risk landscape are drifting past the horizon.
Two Truths About GRC For AI
- GRC for AI is a deeper and more technical domain than you think. Many organizations treat AI governance as little more than a compliance exercise. They write a policy, document use cases, assign an AI leader, etc. While warranted, these activities are usually detached from operational reality. As organizations move toward autonomous agentic behavior, you can’t rely on “people and process” alone. You need integrated technologies to monitor model drift (a minimal sketch follows this list), enforce agent guardrails, and mitigate AI-related risks. If you can’t show governance in action, it doesn’t exist.
- GRC for AI is at the core of modern risk programs. With AI scaling at all levels of business, AI governance is now a core GRC use case. If you treat “AI risk” as just another category in a risk register, you’ll fail to see how AI reshapes your organization’s enterprise, ecosystem, and external risks. But success depends on a level of radical integration between business units and IT, privacy, security, and data teams that enterprises still struggle to achieve. If your GRC platform isn’t tightly coupled with infrastructure and security, you’re guessing, not governing.
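To make “governance in action” concrete, here’s a minimal sketch of one such integrated control: a drift monitor that compares a model’s recent score distribution against a validation-time baseline using the population stability index (PSI). The threshold, distributions, and alerting behavior are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions; a higher PSI means more drift."""
    # Bin edges come from the baseline distribution.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid division by zero and log(0).
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative data: scores at validation time vs. scores in production.
rng = np.random.default_rng(42)
baseline_scores = rng.beta(2, 5, size=10_000)
current_scores = rng.beta(2, 3, size=10_000)

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:  # a common rule of thumb; tune per model and risk appetite
    print(f"ALERT: model drift detected (PSI={psi:.3f}); open a GRC finding")
else:
    print(f"OK: PSI={psi:.3f}")
```

The point isn’t this particular statistic. It’s that the check runs continuously and opens a GRC finding automatically, which is what governance in action looks like.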
Questions Security And Risk Leaders Are Asking Today
I speak with security and risk leaders every week about GRC for AI. While the situations and solutions differ for each organization, their questions reflect common pain points. Here’s what’s top of mind today and what each question should prompt you to consider:
- “Who owns AI, and who owns AI risk?” AI has landed everywhere in the enterprise, with nobody formally claiming the liability that came with it. The result is a GRC vacuum filled by assumption: Everyone thinks someone else is accountable. But ownership is an operational question, not a philosophical one. Without named roles, explicit decision authorities, and escalation paths, accountability diffuses until an incident forces it into the light. Ungoverned ownership leads to ungoverned risk.
- “How do we enforce policies and guardrails for AI agents?” Writing a policy is straightforward. Enforcing it technically, however, is as varied as your tech stack and entirely dependent upon it. AI agent guardrails, such as those in Forrester’s AEGIS framework, require continuous, automated enforcement mechanisms, not periodic human review (see the enforcement sketch after this list). We’ve mapped all AEGIS guardrails to major regulations and control frameworks to streamline your GRC approach. But don’t forget to close the gap by translating GRC into infrastructure and system-level requirements.
- “How do we govern AI we didn’t build ourselves?” Most AI exposure isn’t coming from internal models; it’s arriving embedded in the software that organizations already rely on. Third-party AI is the dark matter of enterprise risk: invisible on most asset inventories yet actively influencing decisions and handling sensitive data. Don’t assume that vendors’ existing risk management processes protect you. Accounting for third-party AI must be core to your vendor risk program for GRC to succeed.
- “How do we ensure AI agent actions are auditable?” As AI moves to act autonomously, the audit trail becomes more complex. Most logging and monitoring infrastructure focuses on human actions and application events, capturing what happened. Agent auditing, on the other hand, must record why it happened, including reasoning, tool usage, and additional context (a sample audit record follows this list). While this satisfies compliance requirements today, it’s invaluable for continuous improvement and incident response efforts in tomorrow’s agentic enterprise.
- “How do we prevent shadow AI adoption?” Employees aren’t waiting for IT approval to use AI. They’re already using it. Governance sets the tone from the top to outline acceptable use cases broadly, informed by responsible AI use, security, and regulatory considerations. Monitoring and prevention tools (e.g., DLP and IAM) provide visibility and protect data (a simple egress-monitoring sketch follows this list). Successful organizations focus on safely enabling AI use, based on business needs and trade-offs, rather than banning it.
- “How do we connect AI governance to our broader risk program?” GRC for AI is frequently stood up as a standalone initiative (e.g., implementing ISO 42001, chartering a committee, buying a GRC tool). It stays functionally disconnected from related programs like enterprise risk management, compliance, and security operations. But an AI failure can be a security incident, a compliance issue, an operational disruption, and a customer-facing event all at once. Mapping the relationships between AI systems and critical processes is key to understanding impact (a starter mapping sketch follows this list).
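On the enforcement question above, here’s a minimal sketch of what “policy as code” for an agent can look like: a guardrail wrapper that checks every tool call against an allowlist and argument constraints before it executes. The policy rules and tool names are hypothetical; real enforcement would hook into your agent framework and log every decision to your GRC platform.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

class GuardrailViolation(Exception):
    """Raised when an agent action fails a policy check."""

@dataclass
class ToolPolicy:
    # Hypothetical policy: which tools an agent may call, with per-tool checks.
    allowed_tools: set[str]
    argument_checks: dict[str, Callable[[dict[str, Any]], bool]] = field(default_factory=dict)

    def enforce(self, tool_name: str, args: dict[str, Any]) -> None:
        if tool_name not in self.allowed_tools:
            raise GuardrailViolation(f"tool '{tool_name}' is not on the allowlist")
        check = self.argument_checks.get(tool_name)
        if check is not None and not check(args):
            raise GuardrailViolation(f"arguments to '{tool_name}' failed policy check")

def guarded_call(policy: ToolPolicy, tool_name: str, tool_fn, args):
    """Enforce policy first, then execute the tool."""
    policy.enforce(tool_name, args)  # raises before any side effect occurs
    return tool_fn(**args)

# Illustrative rule: refunds are allowed only below a hard cap.
policy = ToolPolicy(
    allowed_tools={"lookup_order", "issue_refund"},
    argument_checks={"issue_refund": lambda a: a.get("amount", 0) <= 100},
)

def issue_refund(order_id: str, amount: float) -> str:
    return f"refunded ${amount} on {order_id}"

print(guarded_call(policy, "issue_refund", issue_refund, {"order_id": "A-123", "amount": 50}))
# A $5,000 refund request would raise GuardrailViolation before executing.
```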
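On auditability, the shift is from logging what happened to logging why. A minimal sketch of an agent audit record might look like the following; the field names are illustrative assumptions, and the key is capturing decision context alongside the action so an auditor can reconstruct the chain later.

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(agent_id, action, rationale, tool_calls, inputs_digest):
    """Build one audit entry capturing the 'why', not just the 'what'.

    Field names are illustrative; align them with your logging schema.
    """
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,                 # what the agent did
        "rationale": rationale,           # the agent's stated reasoning
        "tool_calls": tool_calls,         # tools invoked, with arguments
        "inputs_digest": inputs_digest,   # hash/reference to inputs, not raw PII
    }

entry = audit_record(
    agent_id="support-agent-7",
    action="issue_refund",
    rationale="Order arrived damaged; refund within policy limit.",
    tool_calls=[{"tool": "issue_refund", "args": {"order_id": "A-123", "amount": 50}}],
    inputs_digest="sha256:<digest-of-inputs>",
)
print(json.dumps(entry, indent=2))  # in practice, ship to append-only storage
```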
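On shadow AI, visibility usually starts with egress data you already collect. Here’s a minimal sketch that flags proxy-log entries pointing at generative AI endpoints; the domain watchlist and the log format are assumptions, and a real deployment would live in your DLP tooling or secure web gateway.

```python
# Hypothetical watchlist; maintain yours from vendor reviews and threat intel.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for traffic to watched AI endpoints.

    Assumes a simple 'timestamp user domain' log format for illustration.
    """
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, user, domain = parts[0], parts[1], parts[2]
        if domain in AI_SERVICE_DOMAINS:
            yield user, domain

sample_log = [
    "2025-06-01T09:14:02Z jsmith api.openai.com",
    "2025-06-01T09:14:05Z akhan intranet.example.com",
]
for user, domain in flag_shadow_ai(sample_log):
    print(f"review: {user} -> {domain}")  # feed into enablement, not just blocking
```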
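Finally, on connecting AI governance to the broader risk program, the mapping exercise can start as something this simple: a dependency map from AI systems to the business processes they touch, queryable in both directions. The systems and processes named here are hypothetical.

```python
from collections import defaultdict

# Hypothetical map: AI system -> critical business processes it influences.
AI_PROCESS_MAP = {
    "credit-scoring-model": {"loan-origination", "collections"},
    "support-chat-agent": {"customer-service", "refund-processing"},
    "demand-forecaster": {"inventory-planning"},
}

def impacted_processes(system: str) -> set[str]:
    """What breaks downstream if this AI system fails or misbehaves?"""
    return AI_PROCESS_MAP.get(system, set())

def dependent_systems(process: str) -> set[str]:
    """Which AI systems does this business process depend on?"""
    reverse = defaultdict(set)
    for system, processes in AI_PROCESS_MAP.items():
        for p in processes:
            reverse[p].add(system)
    return reverse[process]

# An incident in one model is simultaneously a process-level event:
print(impacted_processes("credit-scoring-model"))  # loan-origination, collections
print(dependent_systems("refund-processing"))      # support-chat-agent
```

Even a table this crude answers the question that matters in an incident: which business processes, customers, and obligations are in the blast radius of a single misbehaving model.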
Like the physical universe, the universe of GRC for AI will keep expanding whether you’re ready or not. The question isn’t whether your organization needs deeper, more technically rigorous GRC (it does). It’s whether you build that infrastructure intentionally, now, or scramble to construct it after the first significant AI-related loss event. The organizations that govern AI seriously today are the ones that will still be in control of their AI environments tomorrow.