"Legal gibberish"

OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit

ChatGPT continues causing trouble by making up lawsuits.

Ashley Belanger

Armed America Radio touts one of its hosts, Mark Walters, as the "loudest voice in America fighting for gun rights." Now it appears that Walters' prominent commentary on gun rights and the Second Amendment Foundation (SAF)—a gun rights nonprofit that gave him a distinguished service award in 2017—has led generative AI chatbot ChatGPT to wrongly connect dots and make false and allegedly malicious statements about the radio host. That includes generating potentially libelous statements that Walters was once SAF's chief financial officer and treasurer and that he was accused of embezzling funds and defrauding SAF.

Now, Walters is suing ChatGPT owner OpenAI in a Georgia state court for unspecified monetary damages in what's likely the first defamation lawsuit resulting from ChatGPT's so-called "hallucinations," where the chatbot completely fabricates information.

The misinformation was first uncovered by journalist Fred Riehl, who asked ChatGPT to summarize a complaint that SAF filed in federal court.

That SAF complaint actually accused Washington attorney general Robert Ferguson of "misuse of legal process to pursue private vendettas and stamp out dissent." Walters was never a party in that case or even mentioned in the suit, but ChatGPT disregarded that and all the actual facts of the case when prompted to summarize it, Walters' complaint said. Instead, it generated a wholly inaccurate response to Riehl's prompt, falsely claiming that the case was filed against Walters for embezzlement that never happened while serving at an SAF post that he never held.

Even when Riehl asked ChatGPT to point to specific paragraphs that mentioned Walters in the SAF complaint or provide the full text of the SAF complaint, ChatGPT generated a "complete fabrication" that "bears no resemblance to the actual complaint, including an erroneous case number," Walters' complaint said.

"Every statement of fact" in ChatGPT's SAF case summary "pertaining to Walters is false," Walters' complaint said.

OpenAI did not immediately respond to Ars' request for comment.

Is OpenAI responsible when ChatGPT lies?

It's not the first time that ChatGPT has completely fabricated a lawsuit. A lawyer is currently facing harsh consequences in court after citing six cases that ChatGPT made up without first verifying the case details, which a judge called obvious "legal gibberish," Fortune reported.

Although many people use the sophisticated chatbot to search for accurate information, from students researching essays to lawyers researching case law, OpenAI's terms of use make clear that ChatGPT cannot be trusted to generate accurate information. The terms state:

Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.

Walters' lawyer John Monroe told Ars that "while research and development in AI are worthwhile endeavors, it is irresponsible to unleash a platform on the public that knowingly makes false statements about people."

OpenAI was previously threatened with a defamation lawsuit by an Australian mayor, Brian Hood, after ChatGPT generated false claims that Hood had been imprisoned for bribery. In that case, Hood asked OpenAI to remove the false information as a meaningful remedy, saying that otherwise the reputational damage could negatively impact his political career.

Monroe told Ars that Walters is only seeking monetary damages as a remedy at this time, noting that the potential damage to Walters' reputation could impact future job opportunities or cost him listeners of his radio commentary.

A law professor familiar with the legal liability of AI systems, Eugene Volokh, told The Verge that Walters' case could be weakened by any failure to ask OpenAI to remove false information or to prove that actual damages have already resulted from ChatGPT's inaccurate responses.

Volokh's legal analysis of this particular case can be found here. Next month, he'll publish a longer article (a draft of which can be found here) analyzing legal liability for AI output generally. In the latter, he argues that "libel claims are in principle legally viable" if the plaintiff "can show that the defendant knew the statement was false or knew the statement was likely false but recklessly disregarded that knowledge" or "can show proven actual damages (e.g., lost jobs, lost business opportunities, lost social connections, and the like) and the plaintiff is a private figure and the defendant was negligent in making the false statement."

"Here, it doesn't appear from the complaint that Walters put OpenAI on actual notice that ChatGPT was making false statements about him and demanded that OpenAI stop that," Volokh told Ars. "And there seem to be no allegations of actual damages—presumably Riehl figured out what was going on, and thus Walters lost nothing as a result."

Monroe confirmed that Walters has not asked OpenAI to remove the false information but told Ars that he doesn't agree with Volokh's legal analysis.

"I don't know of any reasons why libel principles would not apply to companies that publish defamatory statements via AI," Monroe told Ars.

Experts have previously told Ars that it's still unclear if companies can be liable for AI output, partly because Section 230 could provide a legal shield. It seems more likely that users spreading false information generated by AI systems could be liable for damages.

Volokh told Ars that Section 230 may not apply, however, because Section 230 "doesn't immunize defendants who 'materially contribut[e] to [the] alleged unlawfulness' of online content."

"An AI company, by making and distributing an AI program that creates false and reputation-damaging accusations out of text that entirely lacks such accusations, is surely 'materially contribut[ing] to [the] alleged unlawfulness' of that created material," Volokh said.

Because Walters' complaint points out that OpenAI "is aware that ChatGPT sometimes makes up facts and refers to this phenomenon as a 'hallucination,'" this case could be the one to put companies' legal liability for AI systems to the test.

Volokh told Ars that arguing "that ChatGPT often does publish false statements generally" could bolster Walters' case. But that argument may not be sufficient—"just like you can't show that a newspaper had knowledge or recklessness as to falsehood just because the newspaper knows that some of its writers sometimes make mistakes," Volokh said.

Similarly, Volokh suggested that OpenAI might not be shielded by ChatGPT disclaimers like the one in its terms of use.

"Ars Technica can’t immunize itself from defamation liability by merely saying, on every post, 'this post may contain inaccurate information'—likewise with OpenAI," Volokh told Ars.

One problem with asking AI chatbots like ChatGPT for case summaries is that case law is not widely published online, the Free Law Project tweeted. AI systems like ChatGPT thus likely rely in part on widely published reporting to analyze case details, which seemingly increases the likelihood that any media figure commenting on a case could be mistaken for a party in the lawsuit, as Walters was.

Walters' complaint argued that OpenAI either should have known that it was publishing allegedly libelous information about Walters or recklessly disregarded the falsity of ChatGPT's responses.

"ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walters' reputation and exposing him to public hatred, contempt, or ridicule," Walters' complaint said.
