Mistral AI Faces CNIL Complaint Over Use of Personal Data

Mistral AI, an emerging French startup in the artificial intelligence sector, is embroiled in a controversy reported by L’informé. A lawyer has filed a complaint with the CNIL, France’s data protection authority, accusing the company of ignoring users’ rights over their personal data. Specifically, if you use the free version of its chatbot, Le Chat, your data is automatically used to train AI models with no option to opt out, a choice that is made available only to paying users.

Mistral AI’s Right to Respond:

“User queries are essential for enhancing the accuracy of our assistant’s responses. Mistral AI has always provided its users the ability to opt-out of using data from their interactions with Le Chat.”
Spokesperson for Mistral AI

No Opt-Out for Free Users

Lawyer Jérémy Roche approached the CNIL about a specific issue: the free version of Le Chat logs everything users type, as well as the responses it gives, and uses this data to refine the AI. Free users have no way to opt out, unlike “Pro” subscribers ($15/month), who can deactivate this feature in their settings. “Team” and “Enterprise” subscribers’ data is not used in this manner at all.

The core issue? According to Roche, Mistral AI makes the fundamental right to object to the use of one’s data conditional on paying for a subscription. This approach might violate the GDPR, the European data protection regulation: Article 12 states that data subject rights must be exercisable free of charge, except in special circumstances such as manifestly excessive or repetitive requests, neither of which applies here.

A Common Yet Criticized Practice

Mistral AI is not the only company relying on user data to train its AI. OpenAI, with ChatGPT, engaged in the same practice until the Italian data protection authority intervened. Since then, OpenAI has introduced an option to reject this data collection. Even Elon Musk, through his AI Grok integrated into X (formerly Twitter), allows users to adjust their settings to prevent the use of their conversations for training purposes. Perplexity, an AI-based search engine, also offers a toggle to disable this option.

As you can see, other players have already been forced to amend their methods on this issue. Mistral AI might have to do the same under pressure from the CNIL.

So far, neither Mistral AI nor the CNIL has officially responded to this complaint. However, if the French authority decides to step in, the company may need to revise its practices… and potentially offer all users a real choice regarding the use of their data.
