A European privacy group has filed a legal complaint against OpenAI, the company behind the popular artificial intelligence (AI) chatbot ChatGPT. The complaint arises from an incident where ChatGPT falsely accused a Norwegian man of murdering his children. The privacy group claims that OpenAI violated the General Data Protection Regulation (GDPR), which is a law designed to protect individuals’ personal data in Europe.
AI Generates False Criminal Accusation
Arve Hjalmar Holmen, a man from Norway, had asked ChatGPT for information about himself. The chatbot responded with a completely inaccurate statement. According to ChatGPT, Holmen had been convicted of murdering his two sons and attempting to kill a third child. It even specified that he had received a 21-year prison sentence for the crime.
While some details in the response were correct, such as the number and gender of his children and the name of his hometown, the claim about the murder was entirely false. This kind of error is known as an AI “hallucination”: the system generates incorrect or fabricated information and presents it as fact. Hallucinations can occur when a model is trained on biased or inaccurate data, or when it fills gaps in its knowledge with plausible-sounding but false statements.
The Austria-based privacy rights group Noyb, which focuses on enforcing data protection rights, has now filed a formal complaint against OpenAI. In its complaint, Noyb included a screenshot of the conversation between Holmen and ChatGPT, though it redacted the exact date of the incident to protect privacy.
Since the incident, OpenAI has made updates to its AI model. Now, if a user asks ChatGPT about Holmen, the AI no longer falsely claims that he committed a crime. However, Noyb raises concerns that the incorrect data could still exist in OpenAI’s system. They warn that ChatGPT continuously learns from user interactions, making it difficult to know if the false information has been fully removed from the system or if similar errors might occur again.
Noyb’s Demands for Action
Noyb has officially submitted a complaint to Norway’s Data Protection Authority (Datatilsynet), arguing that OpenAI failed to meet its obligations under the GDPR. Specifically, they claim that OpenAI did not ensure the accuracy of the personal data processed by the AI, which violates Article 5(1)(d) of the GDPR. This rule mandates that companies must keep personal data accurate and up to date.
Holmen, the man falsely accused by ChatGPT, is particularly concerned about the long-term impact of this false information. He worries that the public might believe the fabricated story, thinking, “There is no smoke without fire.” Holmen’s fear is that the false claim could damage his reputation permanently, and some people may continue to believe the AI’s incorrect statement about him.
As part of their complaint, Noyb has asked for three specific actions to be taken against OpenAI:
- Delete the false information from OpenAI’s system. Noyb argues that the company should permanently remove any data related to the fabricated criminal accusation.
- Adjust the AI model to prevent similar mistakes. They want OpenAI to improve its system to ensure that such errors do not happen again in the future.
- Impose a financial penalty on OpenAI to discourage future violations. The privacy group suggests that a financial penalty could serve as a deterrent to other companies that may not comply with data protection laws.
Kleanthi Sardeli, a lawyer representing Noyb, has criticized OpenAI’s handling of the false data. She stated, “AI companies cannot just ‘hide’ incorrect information while still processing it internally.” Sardeli emphasized that GDPR clearly applies to AI companies, and they must comply with data protection laws to protect individuals’ privacy.
The Legal Implications for AI Companies
This case could set a significant legal precedent for AI privacy regulations in Europe. If Noyb’s complaint leads to a ruling in favor of the privacy group, it could signal that AI companies must adhere to strict data protection standards under the GDPR. AI systems like ChatGPT that process vast amounts of user data may face new challenges in ensuring that they comply with European privacy laws.
This case also raises important questions about the responsibility of AI companies for the content generated by their models. While OpenAI has made efforts to update ChatGPT and correct the false information, the incident shows the risks of relying on AI systems that can produce misleading or incorrect outputs. Companies using AI for various purposes, including customer service, content generation, and legal advice, will likely have to re-evaluate their models and policies to prevent similar errors.
While OpenAI has yet to formally respond to Noyb’s complaint, the case has already attracted attention from privacy advocates and AI experts around the world. As AI technology becomes more embedded in daily life, issues like this will only grow more prominent. The case could prompt closer scrutiny of how AI companies handle personal data and whether they are doing enough to protect individuals’ rights under data protection laws like the GDPR.
Regulatory Challenges and the Future of AI Privacy
Noyb’s complaint against OpenAI highlights the ongoing challenge of regulating AI technology and ensuring that AI companies comply with data protection laws. Although OpenAI has updated its model, the privacy group is calling for further action: deletion of the false information, improvements to the model, and a financial penalty. The outcome of this case could have far-reaching implications for the future of AI privacy regulation in Europe, as well as for the broader tech industry.