ChatGPT is in trouble with privacy. As part of an open investigation, the Italian regulator has stopped the use of ChatGPT in Italy. According to the regulator, ChatGPT fails to inform users about the personal data it collects and processes, doesn’t provide a clear legal basis for the processing activities it engages in, and doesn’t support the collection of consent for minors.
For now, it’s only Italy, but if other European regulators share this view, ChatGPT and the companies that use the technology will soon be in trouble.
If you plan to adopt this technology, have already implemented it, or if any of your third parties use it, governance matters! Forrester has highlighted the security risks associated with ChatGPT, as well as other risks that generative AI models bring, such as coherent nonsense and the creation of bias. Despite the criticism around the decision, the truth is that privacy risk in the context of generative AI is real, and now is the time to take a look at contracts and privacy policies. In fact, this is a good opportunity to better understand how generative AI systems collect, process, and use data. The implications of, and the privacy risks associated with, generative AI systems can be particularly impactful, and you must address them!
First, assess whether personally identifiable information (PII) is in scope. Existing data flow maps will provide a first indication, and your record of processing activities should also tell you whether your generative AI is using PII. If you are using automated data discovery and classification platforms, ask your provider to run a scan of your systems to acquire more accurate information about the PII going into your generative AI systems. If PII is in scope, keep reading!
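To make the discovery step concrete, here is a minimal sketch of a regex-based pre-scan you could run over prompts or logs before they reach a generative AI system. The pattern names and expressions are illustrative assumptions only; a dedicated discovery and classification platform will use far richer detectors than these.

```python
import re

# Illustrative patterns only; real discovery platforms detect many more PII types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return the matches found in `text`, grouped by PII category."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

prompt = "Contact Maria at maria.rossi@example.com or +39 06 1234 5678."
print(scan_for_pii(prompt))
```

A scan like this is only a tripwire: a nonempty result should route the data into your review process, not silently block or allow it.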
Second, complete a data protection impact assessment. This assessment will help you identify risks pertaining to the activities in which your systems engage. While your specific risk profile will depend on your circumstances, keep in mind that data storage and/or processing that happens outside of the EU requires specific compliance measures. Additionally, the initial assessment of the Italian regulator refers to “substantial volumes” of data collected and processed, which typically highlights the need for special risk mitigation measures. IT, security, and privacy teams must come together to identify the controls and processes they must put in place.
Third, verify that you rely on the appropriate legal basis for the collection and processing of the personal data that goes into generative AI systems. The GDPR gives you multiple options to choose from. If you are relying on legitimate interest, ensure that you complete and document a legitimate interest test and provide users with clear, specific, and user-friendly information about how you plan to use their data. Remember that users can still object to the usage of their personal information, even if you claim that you have a legitimate interest! If you are going for consent, ensure that you provide accurate information to users when you collect and process data and tell them how they can exercise their privacy rights if they wish to do so.
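One practical implication of the point above is that your systems must record the legal basis per processing purpose and honor objections even under legitimate interest. The sketch below is an assumed, simplified record structure, not a compliance implementation; field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class LegalBasis(Enum):
    CONSENT = "consent"
    LEGITIMATE_INTEREST = "legitimate_interest"

@dataclass
class ProcessingRecord:
    """Illustrative record of the legal basis claimed for one purpose."""
    user_id: str
    purpose: str
    basis: LegalBasis
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    objected: bool = False

    def may_process(self) -> bool:
        # Users can object even when legitimate interest is claimed (GDPR Art. 21).
        return not self.objected

record = ProcessingRecord("u-123", "model improvement", LegalBasis.LEGITIMATE_INTEREST)
record.objected = True  # the user exercises the right to object
print(record.may_process())  # → False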
Fourth, review the processing agreements that you have in place with your generative AI providers or with the providers of technology that might embed or use generative AI in their products. Review the contracts as well: They should detail how the providers use your customers’ and/or employees’ personal information and should contain clauses for data breach notification, contract termination, data subjects’ requests, etc. This is also where clauses governing international data transfers live, and you should carefully review those, too.
Fifth, if children’s data is in scope, ensure that you are prepared to support consent collection from their parents or guardians. The definition of a child’s age range differs across Europe: In some countries, it’s up to 13 years of age, and in others, it’s 16. Depending on where you operate, check how local regulators define the age range of children for privacy purposes. If children’s data should not be in your systems, build a funnel at data collection to exclude it. A clear statement that the service is not intended for people below a certain age is a good starting point.
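The funnel described above can be sketched as a simple per-country age gate at data collection. The ages of digital consent below reflect GDPR Article 8 as transposed by a few member states (Italy 14, France 15, Germany and Ireland 16); the default of 16 and the country list are assumptions you would replace with the full set of markets you serve.

```python
from datetime import date
from typing import Optional

# GDPR Art. 8 lets member states set the age of digital consent between 13 and 16.
# Illustrative subset only; verify the value for every market you serve.
AGE_OF_CONSENT = {"IT": 14, "FR": 15, "DE": 16, "IE": 16}
DEFAULT_AGE = 16  # assumed conservative fallback

def needs_parental_consent(birth_date: date, country: str,
                           today: Optional[date] = None) -> bool:
    """Return True if parental/guardian consent must be collected."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age < AGE_OF_CONSENT.get(country, DEFAULT_AGE)
```

A gate like this decides only whether to trigger the parental consent flow (or to refuse collection outright, if children's data must stay out of your systems entirely).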
If you have questions and would like to discuss this topic in more detail, I would be delighted to talk to you. Please reach out and schedule an inquiry or a guidance session with me.