Diego Lo Giudice, VP, Principal Analyst
AI is finding its way into software of all kinds, from the voice assistants in your home to the algorithms in life-saving healthcare applications. But as AI use broadens, the risk of "AI gone wrong" rises: unintended decisions made by an algorithm.
In this episode of What It Means, Vice President and Principal Analyst Diego Lo Giudice discusses the expansion of AI and the increased need for checks and balances. But testing AI is not as simple as testing traditional software. As Lo Giudice puts it, how do you test something when you don’t know the desired or anticipated outcome?
Organizations deploying a wide array of AI-infused applications should prioritize testing the ones that present the highest risk. Is the algorithm recommending related products to buy, or determining the length of a prison sentence?
When it comes to the actual work of testing, there is good news and bad news. The good news: some frameworks for testing AI are emerging, and large tech firms are building AI delivery platforms that include testing. The bad news? For AI, testing doesn't end when the software is deployed; in fact, it never ends. It's vital to keep monitoring and testing the model in production to determine whether it's "drifting" from its original intent.
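To make the drift-monitoring idea concrete, here is a minimal sketch of one common approach: comparing the distribution of a model's prediction scores in production against a reference window using the Population Stability Index. All names here (`psi`, `reference`, `drifted`) and the example data are illustrative assumptions, not anything described in the episode.

```python
import math

def psi(reference, production, bins=10):
    """Population Stability Index between two samples of scores in [0, 1].
    Rough rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift."""
    edges = [i / bins for i in range(bins + 1)]
    eps = 1e-6  # floor for empty bins, avoids log(0) and division by zero

    def frac(sample, lo, hi, last):
        # Fraction of the sample falling in [lo, hi); the last bin is closed.
        hits = sum(1 for x in sample if lo <= x < hi or (last and x == hi))
        return max(hits / len(sample), eps)

    total = 0.0
    for i in range(bins):
        lo, hi, last = edges[i], edges[i + 1], i == bins - 1
        r = frac(reference, lo, hi, last)
        p = frac(production, lo, hi, last)
        total += (p - r) * math.log(p / r)
    return total

# Hypothetical example: production scores shifted upward vs. the reference window.
reference = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6] * 50
drifted = [min(s + 0.3, 1.0) for s in reference]

print(round(psi(reference, reference), 3))  # prints 0.0 (no drift)
print(psi(reference, drifted) > 0.25)       # prints True (drift flagged)
```

In practice this check would run on a schedule against live prediction logs, alerting the team when the index crosses a threshold so the model can be retested or retrained.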
“This is the right moment to talk about this,” Lo Giudice points out, because there is still time to develop methods and protocols for AI testing before too many stories of “AI gone wrong” erode trust in the technology.