Oops! It Did It Again: Italian Data Protection Authority Asks DeepSeek About Its Data Privacy Practices
While the world (and financial markets!) was taken by surprise by the rise of DeepSeek's open-source model, DeepThink (R1), the Italian privacy regulator, the Garante, didn't waste time: It sent a formal request to the Chinese company to disclose information about its personal data practices. Soon after, it blocked the service.
In the first-mover style that is becoming its staple, the Garante formally asked DeepSeek to explain the specific measures it applies when collecting and processing personal data to develop and deploy its technology. The questions mirror those the regulator put to OpenAI many months ago: What data has the company collected? For what purposes is it using that data? What legal basis (e.g., consent) does DeepSeek rely on for collection and processing? And where is the data stored? Further questions address the potential use of web scraping to collect users' data.
DeepSeek rejected the questions, claiming that it doesn't operate in Italy and that the GDPR doesn't apply to its data processing practices. The Garante then blocked DeepSeek in Italy and launched a formal investigation. Meanwhile, the Dutch and Irish privacy regulators have also opened inquiries. We expect further developments very soon.
Two things are important to keep in mind:
- We have seen the Garante take similar actions against OpenAI in the past. In fact, the Garante issued a fine as part of that investigation. This time, however, with multiple authorities already involved in the case and a company, DeepSeek, that is practically refusing to engage, the measures could be more drastic and far-reaching.
- DeepSeek’s privacy policy is concerning. It states that the company can collect users’ text or audio input, prompts, uploaded files, feedback, chat history, or other content and use it for training purposes. DeepSeek also maintains that it can share this information with law enforcement agencies, public authorities, etc., at its discretion. It’s clear from previous cases that European regulators will question and likely stop this type of practice.
DeepSeek's privacy practices are concerning but not too dissimilar from those of some of its competitors. Its substantial refusal to cooperate, however, sets it apart from its competitors' behaviour. When privacy risks are coupled with broader geopolitical and security concerns, companies must exercise caution when deciding whether to adopt DeepSeek products. In fact, the European AI Office, a newly created institution tasked with monitoring and implementing the EU AI Act, among other duties, is also watching DeepSeek closely over concerns such as government surveillance and misuse by malicious actors.
From a privacy perspective, it's essential that organizations develop a strong privacy posture when using AI and generative AI technology. Wherever they operate, they must keep in mind that, even where regulators are not as active as the Garante and privacy regulations might lag, their customers, employees, and partners still expect their data to be kept safe and their privacy protected. Who they choose as business partners, and with whom they share their customers' and employees' data, matters.
If you want to discuss this topic in further detail, please schedule a guidance session with me.