Lessons From The Inaugural Conference Of The International Association For Safe And Ethical AI
Last week’s inaugural conference of the International Association for Safe & Ethical AI in Paris started with a dire warning from renowned computer scientist Stuart Russell: “There are two potential futures for humanity — a world with safe and ethical AI or a world with no AI at all. We are currently pursuing a third option.” He said we’re in a moment where the entire human race is about to board an airplane that needs to stay aloft forever, and we have no safety standards in place.
This sense of existential urgency was echoed throughout the event by AI luminaries as diverse as recent Nobel Prize winner Geoffrey Hinton, Margaret Mitchell from Hugging Face, Anca Dragan from DeepMind, and Turing Award recipient Yoshua Bengio from the University of Montreal. The overwhelming consensus among these experts was that we should not be pursuing artificial general intelligence without knowing how to control it.
While most enterprises aren’t immediately concerned with AI’s existential questions, the conference also touched on several themes that are relevant to businesses today:
- AI alignment. At this point, most folks in the AI world are familiar with the paperclip maximizer thought experiment that demonstrates the catastrophic potential of AI misalignment. At the same time, they tend to discount it as science fiction. During her keynote, Anca Dragan argued that "there is a clear technical path to misalignment." Forrester's research shows that misalignment is inevitable and poses an existential threat to your business today. Avoid catastrophe by adopting an align-by-design approach.
- Fairness. The intractable problem of bias in AI was a hot topic at the event, and opinions ranged from fatalistic (“there is no way to remove bias; we need to live with it”) to slightly more sanguine. One of the more compelling potential solutions to the problem came from Derek Leben, professor at Carnegie Mellon, who proposed a Rawlsian approach to algorithmic justice that combines and prioritizes several fairness metrics. While participants disagreed on the correct way to measure bias, there was widespread agreement that the best way to mitigate it is through proactive stakeholder engagement.
- Explainability. Fortunately, the fatalism around fairness did not extend to explainability. Large language models are massive, complex, and entirely opaque … today. But promising research in mechanistic interpretability may eventually yield explanations of how large language models work. In the meantime, companies should strive for traceability and observability in their generative AI deployments.
While the event brought together academics, governments, and thought leaders from top AI vendors, enterprises were conspicuously absent. This was an unfortunate miss. The companies investing in AI have the most leverage today in demanding that it be safe and ethical. Right now, these companies have the most to win and the most to lose. By demanding safety and ethical standards from AI vendors today, you may not only safeguard the future of your business … but potentially the future of humanity.