Featuring:
Brandon Purcell, VP, Principal Analyst
Show Notes:
If Amazon can make the mistake, it’s a good bet you can, too. In this episode, VP, Principal Analyst Brandon Purcell discusses some of the mistakes that lead to bias in AI and the impact they can have.
To level-set, Purcell defines exactly what is meant by “fairness” in the context of AI and why it’s such a challenge. “There are 21 different mathematical representations of fairness,” he says, emphasizing that “fair” is not a simple concept to encode in AI. “So it’s easy to say ‘We want to be fair in the AI systems that we’re creating,’ but what that actually means, and how to practice it consistently in our processes, can be challenging.”
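The episode keeps things conceptual, but a toy example helps show why those definitions collide. The sketch below uses hypothetical data and group names (not from the episode) to compute two common fairness metrics, demographic parity and equal opportunity, for a made-up model; the same predictions pass one test and fail the other.

```python
# Hypothetical data, not from the episode: (group, truly qualified?, model approves?)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

def selection_rate(group):
    """Demographic parity compares the share of each group the model approves."""
    preds = [p for g, _, p in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity compares approval rates among the truly qualified."""
    preds = [p for g, y, p in records if g == group and y == 1]
    return sum(preds) / len(preds)

for group in ("A", "B"):
    print(group, selection_rate(group), true_positive_rate(group))
# A 0.5 1.0
# B 0.5 0.5
# Both groups are approved at the same rate (demographic parity holds),
# yet qualified B candidates are approved half as often (equal
# opportunity fails) -- whether this model is "fair" depends entirely
# on which definition you pick.
```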
Why is bias in AI such a risk to firms today? Purcell outlines some of the most prominent reasons, ranging from regulatory compliance issues to the risk of alienating large swaths of values-based consumers if bias is uncovered. But in addition to the “stick” as a motivator, Purcell says there are some “carrots” to encourage firms to look for and address bias in their AI systems (think diversifying your workforce and expanding your addressable market).
Purcell also provides real examples of bias found in AI, including the causes and outcomes. The first covers a hiring algorithm that Amazon developed but never used: Because women were underrepresented among historical applicants, the algorithm inferred that female candidates had a lower likelihood of being hired and pushed them to the bottom of the pile. Another example, from healthcare, illustrates the risk of using a “proxy” variable when the data you actually want to measure for a desired outcome isn’t available.
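The episode doesn’t walk through the mechanics, but a small sketch shows how a proxy can smuggle in bias. The scenario below is illustrative, not the episode’s exact case: if historical cost stands in for medical need, and one group has had less access to care, the proxy under-ranks that group at the same level of need.

```python
# Hypothetical illustration of proxy bias, not the episode's exact case.
# We want to rank patients by medical NEED, but only historical COST is
# recorded, so cost is used as a proxy. A group with less access to care
# spends less at the same level of need.
patients = [
    # (group, true_need on a 0-10 scale, past_cost in dollars)
    ("high_access", 8, 9000),
    ("high_access", 5, 6000),
    ("low_access",  8, 5000),  # same need as the first patient, lower spend
    ("low_access",  5, 3000),
]

# Rank by the observable proxy (cost) instead of the unobservable target (need).
for group, need, cost in sorted(patients, key=lambda p: p[2], reverse=True):
    print(f"{group:12s} need={need} cost=${cost}")
# The low-access patient with need=8 lands below a high-access patient
# with need=5: the proxy has encoded unequal access as lower priority.
```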
Purcell then provides some suggestions on how firms can identify bias and optimize algorithms to avoid some of the issues described, emphasizing that this work is more human than technological. “It’s so important to start the AI lifecycle by bringing in a diverse group of viewpoints and discussing the potential AI use case,” Purcell says. And that means asking the very basic question: “Does it make sense to outsource an important decision to a computer?”
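For listeners who want a concrete starting point, one simple screen (again, an illustration rather than a method from the episode) is the “four-fifths rule” used in US employment contexts: compare selection rates across groups and flag the model for review when the ratio falls below 0.8. All counts below are hypothetical.

```python
# A minimal bias screen: the "four-fifths rule" (adverse impact ratio).
# Counts are hypothetical; a real audit would also examine error rates,
# proxy features, and the human process around the model.
selected = {"group_a": 120, "group_b": 45}   # candidates the model advanced
scored   = {"group_a": 200, "group_b": 150}  # candidates the model scored

rates = {g: selected[g] / scored[g] for g in scored}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                    # {'group_a': 0.6, 'group_b': 0.3}
print(round(impact_ratio, 2))   # 0.5
if impact_ratio < 0.8:          # the conventional four-fifths threshold
    print("Potential adverse impact: review features and training data.")
```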
The episode wraps up with a discussion of which roles within the organization are responsible for ensuring that AI is fair. (Hint: It’s NOT the data scientists or developers.)