The Future Is Now: Agentic AI Redefines Responsible AI
A month ago, I launched new research on Responsible AI (RAI) solutions. Forrester defines RAI solutions as software ensuring that organizations’ AI models and systems are explainable, accountable, and trustworthy. In that research, I redefined RAI across three critical components: explainability, accountability, and trustworthiness. Some technology vendors declined to participate, stating that Responsible AI is not a new topic and questioning the need to create a new market with an old name.
Old Terms In A New World
They are absolutely right, at least on the point of nomenclature.
The term “Responsible AI” has been in circulation for more than a decade. Around 2019, large technology organizations began publishing explicit “Responsible AI” principles, driving wide adoption of the term. The underlying ideas appeared even earlier, in policy and research conversations about “ethical AI,” “trustworthy AI,” and algorithmic accountability. These conversations centered on making systems justify their workings and on mitigating the risks of unjust, illegal, or unethical outcomes. And the foundational concepts of RAI are much older.
- Explainability. This pillar, which includes transparency, traceability, observability, and interpretability, has roots in interpretable statistical modeling and expert systems research from the 1970s and 1980s.
- Accountability. The word is ancient, ultimately deriving from the Latin “computare,” meaning to calculate or reckon. The noun “accountability” appears in English in the 18th century, referring to the obligation to give an account of one’s actions.
- Trustworthiness. Fairness in decision-making predates computing entirely. Trust, as a social and economic concept, has existed as long as human cooperation.
Why The Change?
These words are old, but their meaning must adapt significantly to the age of genAI and LLMs, and it must evolve further to address the inevitability of agentic AI. GenAI’s arrival forced organizations to deal with model hallucination and bias, as well as legal issues such as IP protection and liability. Responsible AI is currently a discipline that lives primarily within data science teams and their datasets. In its current organizational construct, RAI takes the shape of metrics, benchmarks, and controls for bias. Most organizations use data traceability and lineage as a proxy for data explainability, with model cards becoming more popular. Others include privacy best practices as a core element of their initiatives.
With AI’s failures now real and better understood, organizations realize that responsible AI practices are essential, yet they continue to rely on RAI approaches anchored in periodic risk assessments, ex-ante impact evaluations, output reviews, and retrospective log analysis. These approaches meet the needs of models deployed into relatively stable, yet nondeterministic, systems such as machine learning (ML) systems, predictive AI, and even some genAI use cases, where the volume and speed of decisions can be audited after the fact and governance occurs at defined checkpoints.
However, as agentic AI makes its way into enterprises, a point-in-time, reactive, and narrowly data-focused approach to RAI will simply not cut it.
Why Now?
Agentic AI changes the ground entirely.
Agentic systems act. They plan, orchestrate, retrieve, write, execute, and modify. They interact across multiple systems, datasets, and user contexts. They trigger downstream effects in infrastructures that may have nothing to do with the model’s core architecture. They allocate access, modify data, initiate workflows, and in some cases reconfigure the environment in which future decisions will be made.
In this new world, RAI must be embedded deeply enough into agentic systems to observe each agent’s steps across complex, multi-system, autonomous decision chains, and to support the enforcement of remedies that prevent unintentionally biased decisions and ensure ethical outcomes.
What, Then, Is Responsible AI In The Age Of Agentic Systems?
Responsible AI in the age of agentic systems becomes a discipline that governs autonomous decision-making as it happens, not periodically or at random moments. It embeds the critical components of explainability, accountability, and trustworthiness into the runtime fabric of AI infrastructure itself, in alignment with the organization’s risk appetite and values.
In this new world, responsible AI must align with the core principles of dynamic frameworks for governing and securing agentic AI, such as Forrester’s AEGIS framework, and it must become a risk discipline, not only a data discipline. The old foundational concepts remain, but agentic AI redefines RAI across its three critical components as follows:
- Explainability. This means real-time observability and logging of what changes an agent is making, what data it is using or modifying, which systems are being affected, and who (human or system) is accountable for those changes. This must span the AI systems and the third-party systems AI touches, not just AI models.
- Accountability. Evaluating whether AI agents’ actions and decisions align with defined policies, values, and user intent. When they don’t, Responsible AI solutions must be part of the remediation process, either acting directly or orchestrating other controls, dynamically.
- Trustworthiness. Leveraging RAI to ensure fairness in AI decision-making and to ensure that bias is not propagated through agents’ context from decision to decision. It must also help determine when a human must step in, so that oversight is effective and organizations avoid the inevitability of oversight fatigue. (A sketch of this runtime pattern follows this list.)
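Together, the three components describe a runtime pattern: capture every agent action as it happens, evaluate it against policy in line, and escalate selectively to a human. Below is a minimal, hypothetical Python sketch of that pattern. Every name in it (AgentAction, PolicyVerdict, the access-grant rule) is an illustrative assumption, not Forrester’s AEGIS framework or any vendor’s API.

```python
# Minimal sketch of runtime RAI for an agentic system. All names are
# hypothetical illustrations, not a real product's or framework's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AgentAction:
    """One step in an agent's decision chain, captured for explainability."""
    agent_id: str
    action: str              # e.g., "modify_record", "grant_access"
    target_system: str       # system the agent touches, AI or third-party
    data_used: list[str]     # datasets read or modified
    accountable_party: str   # human or system accountable for the change
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class PolicyVerdict:
    allowed: bool
    reason: str
    needs_human: bool = False  # trustworthiness: targeted human oversight

def evaluate(action: AgentAction,
             policies: list[Callable[[AgentAction], PolicyVerdict]]) -> PolicyVerdict:
    """Accountability: check each action against defined policies at runtime,
    in the action path itself, rather than at a periodic checkpoint."""
    for policy in policies:
        verdict = policy(action)
        if not verdict.allowed or verdict.needs_human:
            return verdict
    return PolicyVerdict(allowed=True, reason="all policies passed")

# Hypothetical example policy: access grants always require human sign-off.
def require_review_for_access_grants(action: AgentAction) -> PolicyVerdict:
    if action.action == "grant_access":
        return PolicyVerdict(allowed=True,
                             reason="access grants need human sign-off",
                             needs_human=True)
    return PolicyVerdict(allowed=True, reason="not an access grant")

if __name__ == "__main__":
    audit_log: list[AgentAction] = []  # explainability: real-time trace
    step = AgentAction(agent_id="agent-42", action="grant_access",
                       target_system="hr-db", data_used=["employee_roles"],
                       accountable_party="agent-42/orchestrator")
    audit_log.append(step)
    verdict = evaluate(step, [require_review_for_access_grants])
    if verdict.needs_human:
        # Escalate selectively, so humans review only what matters and
        # organizations avoid oversight fatigue.
        print(f"Escalating to human: {verdict.reason}")
```

The design choice that matters is where the check sits: the policy evaluation runs inside the action path itself, not in a scheduled review, which is exactly what separates runtime governance from the checkpoint-based approaches described earlier.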
Ready or not, here it comes.
Let’s Connect
To discuss our recommendations further, reach out to schedule a guidance session.