This week, I attended the Global Privacy Summit, hosted by the International Association of Privacy Professionals. This is my fourth time attending, and the shortest way to recap the event is: AI is so hot right now. It was the topic of the opening keynote, the closing keynote, and many sessions in between, including debates on how it should be regulated, whether it will take over the world, and how to evaluate its impact.
For marketers, generative AI will kill our jobs or save creativity, depending on who you ask. If you’re testing a new AI use case or thinking about generative AI, here are some helpful lessons from the Global Privacy Summit to guide you on your journey:
- Understand the legal risks. While AI may feel like an unregulated free-for-all, it does operate within legal boundaries. For example, the Italian data protection authority ordered a pause on ChatGPT after determining that it was trained on consumer data without those consumers’ consent and doesn’t sufficiently protect minors. Canada’s privacy regulator is also investigating ChatGPT, and other regulators may follow suit. As Federal Trade Commissioner Alvaro Bedoya noted in his keynote, AI must abide by the same fairness laws as everyone else: avoiding unfair or deceptive practices; avoiding harm; presenting equal opportunities for housing, lending, employment, etc.; and protecting civil rights. Work with your privacy team before pointing AI at consumer data.
- Factor in non-legal risks such as ethics and harm. AI can’t create things out of thin air; it builds on what it learns from data sets. As my colleague Brandon Purcell has written, those data sets come with their own biases and/or may not be representative. Ultimately, AI is fed by data and should face the same questions we ask of any data set: Where did this data come from? Do we have permission to use this data for this purpose? Is using the data for this purpose the right thing to do for our customers? If you’re using consumers’ data in ways they don’t expect, you run the risk of being creepy.
- Be specific about the scope of how AI will support your use case. Before launching a new AI pilot, define the use case, how AI will support your goals, and what impact you expect it to have on your customers. Defining these parameters up front makes it easier to evaluate whether the AI is performing as expected and whether its risks have begun to outweigh its benefits.
AI is already embedded in many of the tools that marketers use today, and it has unlocked new possibilities for things like personalization at scale. As we wade through potential generative AI use cases, marketers must acknowledge that this technology is still developing and can get many things wrong — from basic facts, like Bing saying 10 billion people live on Mars, to more worrying failures, such as wrongly accusing a law professor of sexual harassment. To paraphrase my colleague Christina McAllister, we’re still in the experimental stage. For the safety of your customers, avoid consumer-facing applications of generative AI.
There’s more to come on this topic. Stay tuned for research on the implications of generative AI for advertising, led by Kelsey Chickering, Jay Pattisall, Nikhil Lai, and Mo Allibhai.