Summary
While the potential existential threat of artificial intelligence is well publicized, the threat of biased machine learning models is far more immediate. Most harmful discrimination is unintentional, but that won't stop regulators from imposing fines or values-based consumers from taking their business elsewhere. Customer insights (CI) pros must defend against algorithmic or human bias seeping into their models while cultivating the helpful bias those models use to differentiate between customers. This report helps CI pros learn how to identify and prevent harmful discrimination in their models; otherwise, their businesses will suffer reputational, regulatory, and revenue consequences.
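As a minimal illustration of the kind of check this topic points toward, the sketch below compares a model's positive-outcome rates across groups defined by a protected attribute. The column names, synthetic data, and the informal four-fifths threshold are illustrative assumptions, not the report's methodology.

```python
# A minimal sketch (assumed, not from the report) of one common fairness check:
# comparing a model's positive-outcome rates across groups defined by a
# protected attribute.
import pandas as pd

# Hypothetical scored customers: model decisions plus a protected attribute.
scored = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Positive-outcome (approval) rate per group.
rates = scored.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest group rate divided by highest group rate.
# Values well below ~0.8 (the informal "four-fifths rule") are a common
# signal to investigate the model further.
di_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {di_ratio:.2f}")
```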