Featuring:

Enza Iannopollo, Principal Analyst

Show Notes:

In March, the European Parliament formally adopted the EU AI Act, marking the start of a new chapter in AI development and use. The act, which aims to mitigate AI risks and improve safety and transparency, is expected to become the de facto international standard for regulating AI, much as the General Data Protection Regulation (GDPR) has for privacy. What should companies know, and how can they prepare? Principal Analyst Enza Iannopollo explains this week on What It Means.

At the outset, Iannopollo notes the monumental challenge implicit in the AI Act. “It creates compliance structures for a technology that changes on a daily basis,” she says. “The way that business is going to use this technology also is very much unknown today. So it is a little bit of ‘Mission Impossible.’”

She explains that the act categorizes AI use cases by risk level, offering a few examples from each category. “Unacceptable risk” use cases, which will be prohibited, include social-scoring scenarios in which a person is categorized based on sensitive characteristics, such as political views or sexual orientation. “High risk” use cases can encompass medical devices or technology used to determine whether someone can access public or private services or qualify for a loan. “Limited risk” use cases cover synthetic content and chatbots; for these, companies will be required to inform users that they are interacting with an AI system and to label AI-generated content as such.

Companies don’t have much time to prepare for these changes. As Iannopollo notes, the EU AI Act is expected to officially come into force next month, and enforcement for “unacceptable risk” use cases will begin six months after that. The rest of the act will be enforced in stages over the next three years. Penalties for violating the act will be steep, with fines of up to 7% of a company’s global revenue for the most severe offenses.

The act’s reach will be broad, Iannopollo warns. “It’s for everybody,” she says. “Every company, every provider that wants to sell their [AI] systems in the EU, anybody using the system within the EU.” It also will apply to companies that aren’t “physically using the technology in Europe, but will use the outcome, the insights of the technology on the European market.”

Later in the episode, Iannopollo discusses the act’s potential impact on innovation. AI technology providers will need to foresee risks that may arise in their systems’ learning processes, she says, which will necessitate putting limits on those systems’ learning capabilities. Additionally, companies using copyrighted materials in their AI training data will need to secure permission from the rights holders. “Clearly, it’s complicating the process of the training of a model that we’ve seen so far,” she says. “I think some of it is necessary, some of it is going to be very clunky.”

Near the end of the episode, Iannopollo shares steps that companies should take now to prepare for the act becoming law. She closes with a prediction on the act’s impact, so listen for that.