Best Practice Report

The Ethics Of AI: How To Avoid Harmful Bias And Discrimination

Build Machine Learning Models That Are Fundamentally Sound, Assessable, Inclusive, And Reversible

February 27th, 2018
With contributors:
Srividya Sridharan, Fatemeh Khatibloo, TJ Keitt, Jennifer Wise, Christopher McClean, Christian Austin

Summary

While the potential existential threat of artificial intelligence is well publicized, the threat of biased machine learning models is much more immediate. Most harmful discrimination is unintentional, but that won't stop regulators from imposing fines or values-based consumers from taking their business elsewhere. Customer insights (CI) pros must defend against algorithmic or human bias seeping into their models while cultivating the helpful bias these models identify to differentiate among customers. This report helps CI pros learn how to identify and prevent harmful discrimination in their models; those who fail to do so expose their businesses to reputational, regulatory, and revenue consequences.
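As a concrete illustration of what "identifying harmful discrimination in a model" can look like in practice, the sketch below applies one widely used screen, the four-fifths (disparate impact) rule, to a model's positive-outcome rates across a protected attribute. The data, group labels, and loan-approval framing are hypothetical, and this is only one of several fairness checks a CI pro might run, not the report's prescribed method.

```python
# Minimal sketch (hypothetical data): flag potential disparate impact by
# comparing a model's positive-outcome rates across a protected attribute.

def selection_rates(outcomes, groups):
    """Positive-outcome rate per group: {group: positives / total}."""
    totals, positives = {}, {}
    for outcome, group in zip(outcomes, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, groups):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are a conventional red flag (the four-fifths rule)."""
    rates = selection_rates(outcomes, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions (1 = approved) by group
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(outcomes, groups))       # A: 0.75, B: 0.25
print(disparate_impact_ratio(outcomes, groups))  # 0.333... — well below 0.8
```

A ratio this far below 0.8 would prompt investigation into whether the disparity reflects legitimate, job-related differences or bias that seeped in through training data or proxy features.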

