Imagine discovering through ChatGPT that you’ve been sentenced to 21 years in prison for murdering your family, an entirely fabricated claim. This nightmare became reality for Arve Hjalmar Holmen, a Norwegian citizen who encountered the shocking misinformation when he searched for his own name on OpenAI’s chatbot. Distressed by the false accusation, Holmen has filed a complaint with the Norwegian Data Protection Authority, seeking penalties against OpenAI for the damaging and inaccurate output, as reported by the BBC.
The chatbot’s response to Holmen’s inquiry painted a horrific picture, alleging he had “gained attention due to a tragic event.” It falsely claimed he was the father of two boys found dead in a pond near their Trondheim home in December 2020, and that he was subsequently convicted of their murder and the attempted murder of a third son. ChatGPT further embellished the fabrication, claiming the case had shocked the local community and received widespread media coverage. None of this ever occurred.
Understandably shaken, Holmen told the BBC: “Some think that there is no smoke without fire — the fact that someone could read this output and believe it is true is what scares me the most.”
Digital rights group Noyb, acting on Holmen’s behalf, filed the complaint, arguing that ChatGPT’s response is defamatory and violates the EU’s General Data Protection Regulation (GDPR), which requires personal data to be accurate. Noyb emphasized that Holmen has no criminal record and is a law-abiding citizen.
ChatGPT does display a disclaimer acknowledging that it can make mistakes and advising users to verify important information, but Noyb lawyer Joakim Söderberg argues that this is insufficient: “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
AI chatbots are known to generate erroneous information, often referred to as “hallucinations,” but the severity of this particular error is alarming. The incident follows other high-profile hallucinations, including Google’s Gemini-powered AI Overviews feature suggesting glue to stick cheese to pizza and recommending daily rock consumption.
Since Holmen’s search last August, ChatGPT has been updated to draw on current news articles when answering such queries, according to the BBC. However, that doesn’t guarantee error-free results.
This incident underscores the importance of verifying information generated by AI chatbots rather than trusting their responses blindly. It also raises significant concerns about the safety and regulatory oversight of text-based generative AI tools, a field that has expanded rapidly since ChatGPT’s launch in late 2022.