Last week, the Business Roundtable (an influential group of CEOs of major US companies) published a Roadmap for Responsible AI. While many companies are already thinking about responsible AI due to market forces such as the impending Artificial Intelligence Act in Europe and the demands of values-based consumers, this announcement will elevate the conversation to the C-suite.
Some of the principles are refreshingly prescriptive, such as “innovate with and for diversity.” Others, such as “mitigate the potential for unfair bias,” are too vague or incomplete to be useful. For tech and business leaders interested in adopting any or all of these principles, the devil is in the details. Here’s our brief take on each principle with references to our published research (accessible to Forrester clients), where we get into much more depth on how to turn these principles into consistent practices:
- Innovate with and for diversity. When the folks conceiving of and developing an AI system all resemble each other, there are bound to be significant blind spots. Hiring diverse teams to develop, deploy, monitor, and use AI helps to eradicate these blind spots and is something we at Forrester have been recommending since our first report on the ethics of AI in 2018.
- Mitigate the potential for unfair bias. There are over 20 different mathematical representations of fairness, and selecting the right one depends on your strategy, use case, and corporate values. In other words, fairness is in the AI of the beholder. We recently published a report with best practices for assessing fairness in AI and mitigating bias throughout the AI lifecycle.
- Design for and implement transparency, explainability, and interpretability. There are many different flavors of explainable AI (XAI) — transparency relies on fully transparent “glass box” algorithms, while interpretability relies on techniques that explain how an opaque system such as a deep neural network functions. To better understand the intricacies of XAI and identify the right approach for your use cases, see our report on Explaining Explainable AI.
- Invest in a future-ready AI workforce. AI is more likely to transform most people’s jobs than eliminate them, yet most employees aren’t ready. They lack the skills, inclinations, and trust to embrace AI. Investing in the robotics quotient — a measure of readiness — can prepare employees for working side by side with AI.
- Evaluate and monitor model fitness and impact. The pandemic was a real-world lesson for companies in the danger of data drift. Companies need to embrace machine learning operations (MLOps) to monitor AI for continued performance and consider crowdsourcing bias identification with bias bounties.
- Manage data collection and data use responsibly. While the Business Roundtable framework emphasizes data quality and accuracy, it overlooks privacy. Understanding the relationship between AI and personal data is crucial for the responsible management of AI. We explore this relationship in our report, Establish An Effective Privacy And Data Protection Program.
- Design and deploy secure AI systems. There is no secure AI without robust cybersecurity and privacy practices. Take Forrester’s Cybersecurity And Privacy Maturity Assessment to identify opportunities for improvement.
- Encourage a companywide culture of responsible AI. Some firms are beginning to take a top-down approach to fostering a culture of responsible AI by appointing a chief trust officer or chief ethics officer. We expect to see more of these appointments in the coming year.
- Adapt existing governance structures to account for AI. Ambient data governance, a strategy to infuse data governance into everyday data interaction and intelligently adapt data to personal intent, is ideally suited for AI. Map your data governance efforts in the context of AI governance.
- Operationalize AI governance throughout the whole organization. In many organizations, governance has become a dirty word. That’s not only unfortunate, but also quite dangerous. Learn how to overcome governance fatigue.
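To make the “mitigate unfair bias” principle above more concrete, here is a minimal sketch of one of the many mathematical representations of fairness: demographic parity difference, the gap in positive-prediction rates between two groups. The function and toy data are purely illustrative, not drawn from any Forrester framework:

```python
# Illustrative sketch: demographic parity difference, one of the many
# mathematical representations of fairness. It measures the gap in
# positive-prediction rates between two groups. Toy data only.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Positive-prediction rate of group_a minus that of group_b."""
    def rate(g):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(members) / len(members)
    return rate(group_a) - rate(group_b)

preds = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = model predicts the favorable outcome
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group a receives the favorable outcome 75% of the time, group b only 25%.
print(demographic_parity_difference(preds, groups, "a", "b"))  # 0.5
```

Which representation of fairness to optimize — demographic parity, equalized odds, or one of the many others — is exactly the strategy-and-values decision described above.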
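The distinction between glass-box transparency and the interpretability of opaque systems can also be illustrated with a toy sketch. Here, permutation importance stands in for the broader family of post-hoc XAI techniques; the model and data are invented for illustration:

```python
# Illustrative sketch: a "glass box" model is explained by reading its
# weights directly; an opaque model needs a post-hoc technique. Here,
# permutation importance (one post-hoc approach) stands in for the
# broader XAI toolbox. All data and models are toy inventions.
import random

random.seed(0)
X = [(random.random(), random.random()) for _ in range(200)]
y = [3 * x0 + 0.2 * x1 for x0, x1 in X]  # outcome depends mostly on x0

# Glass box: a linear model whose coefficients ARE the explanation.
def glass_box(x0, x1):
    return 3 * x0 + 0.2 * x1

# Opaque model: pretend we can only query predictions, not internals.
opaque = glass_box  # stand-in for a deep neural network, etc.

def permutation_importance(model, X, y, feature):
    """How much the model's error grows when one feature is shuffled."""
    def mse(rows):
        return sum((model(*r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    shuffled = [row[feature] for row in X]
    random.shuffle(shuffled)
    perturbed = [
        (s, x1) if feature == 0 else (x0, s)
        for (x0, x1), s in zip(X, shuffled)
    ]
    return mse(perturbed) - mse(X)

print(permutation_importance(opaque, X, y, 0))  # large: x0 drives predictions
print(permutation_importance(opaque, X, y, 1))  # near zero: x1 barely matters
```

The point of the sketch: with the glass box, you read the weights; with the opaque model, you must probe it from the outside to learn the same thing.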
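The data-drift monitoring described under “evaluate and monitor model fitness and impact” is often implemented with a distribution-comparison statistic inside an MLOps pipeline. Below is a minimal sketch using the Population Stability Index (PSI); the bucket count and the 0.2 alert threshold are conventional rules of thumb, not standards:

```python
# Illustrative sketch: the Population Stability Index (PSI), a common
# drift statistic that compares a feature's live distribution against
# its distribution at training time. The bucket count and the 0.2
# alert threshold are conventional rules of thumb, not standards.
import math

def psi(expected, actual, buckets=10):
    """PSI between a baseline sample and a live sample of one feature."""
    lo, hi = min(expected), max(expected)

    def proportions(sample):
        counts = [0] * buckets
        for x in sample:
            # Clamp out-of-range live values into the edge buckets.
            i = max(0, min(int((x - lo) / (hi - lo) * buckets), buckets - 1))
            counts[i] += 1
        # Floor avoids log(0) when a bucket is empty.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

training = [0.1 * i for i in range(100)]    # feature seen at training time
live = [0.1 * i + 3.0 for i in range(100)]  # same feature, shifted in production

if psi(training, live) > 0.2:               # common rule-of-thumb threshold
    print("Drift detected: investigate and consider retraining")
```

A check like this, run on a schedule against production data, is the kind of continuous monitoring that MLOps tooling automates.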
As robust and well-meaning as the Business Roundtable’s roadmap is, it’s missing two critical elements that companies must embrace to adopt AI responsibly:
- Mitigate third-party risk through rigorous due diligence. Most companies are adopting AI in partnership with third parties — by buying third-party AI solutions or by developing their own solutions using AI building blocks from third parties. In either case, third-party risk is real and needs to be mitigated. Our report, AI Aspirants: Caveat Emptor, explains best practices for reducing third-party risk in the complex AI supply chain.
- Test AI to diminish risk and to increase business value. AI-infused software introduces uncertainty that necessitates extra testing of the interactions between the various models and the surrounding software. Forrester has developed a test strategy framework that is based on business risk and suggests the level and type of testing needed.
The emphasis on responsible AI is not going away anytime soon. Companies that invest in people, processes, and technologies to ensure ethical and responsible adoption of AI will future-proof their businesses from regulatory or reputational disruption.