Summary
AI-based discrimination, even when unintentional, can have dire regulatory, reputational, and revenue impacts. While most organizations embrace fairness in AI as a principle, putting processes in place to practice it consistently is challenging. There are multiple criteria for evaluating the fairness of AI systems, and determining the right approach depends on the use case and its societal context. Technology architecture and delivery leaders in charge of AI development efforts should therefore adopt best practices to ensure fairness throughout the AI lifecycle, from problem definition and model development through deployment and ongoing performance monitoring.
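To make the point about multiple, sometimes conflicting fairness criteria concrete, here is a minimal sketch in Python. The report summary does not name specific metrics; demographic parity and equal opportunity are assumed here purely as common illustrative examples, and the data and group labels are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between group 0 and group 1."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between group 0 and group 1."""
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_a - tpr_b)

# Hypothetical binary predictions from, say, a lending model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("Demographic parity difference:",
      demographic_parity_difference(y_pred, group))      # 0.2
print("Equal opportunity difference:",
      equal_opportunity_difference(y_true, y_pred, group))  # 0.0
```

In this toy example the model satisfies equal opportunity (equal true-positive rates across groups) while failing demographic parity (unequal approval rates), which is why the choice of criterion has to be grounded in the use case and its societal context rather than applied mechanically.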