OpenAI sued for defamation after ChatGPT fabricated yet another lawsuit


Cloudgazer

Ars Legatus Legionis
18,163
One would think that the "loudest voice in America fighting for gun rights" would be enough of a public figure that the "actual malice" standard would apply.
The complainant would agree with you from the lawsuit ...

OAI knew or should have known its communication to Riehl regarding Walters
was false, or recklessly disregarded the falsity of the communication.
 
Upvote
20 (21 / -1)

rjd185

Ars Scholae Palatinae
757
Subscriptor
Given the accuracy clause in ChatGPT’s terms of use at https://openai.com/policies/terms-of-use, presumably the decision to ‘publish’ inaccurate information is not something to be laid at OpenAI’s door in any strict legal sense. Suing OpenAI looks more like a publicity move than a serious legal case. IANAL
 
Upvote
113 (124 / -11)

rjd185

Ars Scholae Palatinae
757
Subscriptor
The complainant would agree with you from the lawsuit ...

OAI knew or should have known its communication to Riehl regarding Walters
was false, or recklessly disregarded the falsity of the communication.
Per the OpenAI accuracy clause, then, does it make a difference that OpenAI not only knew the output could be inaccurate but explicitly stated in advance that it could be, and made awareness of that part of the terms of use?
 
Upvote
38 (43 / -5)

Hmnhntr

Ars Scholae Palatinae
2,268
If you think it's not libel because people should know it's not facts, consider whether someone could get around the libel laws by having their "AI" say stuff for them. You could say anything about anyone - just program the AI with that specific bias.
Except that I think the person using the AI should be held responsible. Surely it qualifies as 'reckless disregard for the truth'?

The AI doesn't have agency here. It cannot be responsible, and thus cannot be used to dodge responsibility. If you use it, you should be liable for what that use results in.
 
Upvote
115 (121 / -6)

MHester

Smack-Fu Master, in training
17
"One problem with asking AI chatbots like ChatGPT for case summaries is that case law is not widely published online, the Free Law Project tweeted."

No, the problem with chatbots is not that other people are not making more work available for the companies that make chatbots to steal. The problem is that what comes out the end of the "AI" chatbot word-chipper is no more the synthesis of intelligent processing than the vomit spewed by a drunk college student on a Saturday morning. There may be some novelty value in it but the only place it really belongs is in the toilet as soon as possible.
 
Upvote
19 (50 / -31)

telenoar

Wise, Aged Ars Veteran
179
Subscriptor
This article is a bit of a mess… like a ChatGPT response.

1. The newsworthy part here is suing an AI vendor as liable for their product’s content. IANAL, but just from the article, it seems clear it has to do with “spreading” and with damages. If the same piece of libel is only generated once per user, can it be considered “spreading” disinformation at all, and is it damaging enough to win a lawsuit?
So I suspect this suit (aside from being filed by a far-right nut) would be tossed, setting a precedent which may actually not be so good for society at large.

2. The other question is, is this hallucination reproducible? The article says nothing of it, and neither does the lawsuit. It simply describes a single instance. As we know, the AI may or may not reply with completely different information in response to the same (or ever so slightly differently phrased) query. It just spews well-phrased text.

3. “It's not the first time that ChatGPT has completely fabricated a lawsuit.”
Obviously, for anybody who’s used ChatGPT or been following LLMs (i.e. most of us Ars readers). ChatGPT likely completely fabricates lawsuits many times a day, alongside any other fabrications.
 
Upvote
59 (60 / -1)
https://www.techdirt.com/2023/06/08...lucination-but-who-should-actually-be-liable/
Of course, all of this raises a bunch of questions: Is this actually defamatory? Is there actual malice? If so, who is legally liable?

And I’m not sure there are really good answers. First off, only one person actually saw this information, and there’s no indication that he actually believed any of it (indeed, it sounds like he was aware that it was hallucinating), which would push towards it not being defamation and even if it was, there was no harm at all.

Second, even if you could argue that the content was defamatory and created harm, is there actual malice by OpenAI? First off, Walters is easily a public figure, so he’d need to show actual malice by OpenAI, and I don’t see how he could. OpenAI didn’t know that the material was false, nor did it recklessly disregard evidence that it was false. The fact that OpenAI warns users that OpenAI may make up untrue things does not change that calculation, even as Walters’ complaint suggests otherwise.



Being aware generally that the AI sometimes makes up facts is not the same thing as being aware, specifically, that it had made up facts in this case. And for there to be actual malice, I’m pretty sure they’d need to show the latter.

And then, even still, if this got past all those hurdles, is OpenAI actually liable?

I still have difficulty seeing OpenAI as the liable party here. Again, it just has created this sophisticated “auto-complete” system that is basing what it says on its prediction engine of what the next word should be. It knows nothing of Mark Walters. It’s just trying to craft a plausible sounding narrative based on the prompts provided by Riehl.



And, really, if this makes OpenAI liable, it seems lots of people could just ask OpenAI to fabricate any story they wanted, and then sue OpenAI over it. And… that can’t be right. Especially in a case like this where there is literally no harm done at all. Only one person saw the output and that person knew it was false, and quickly checked to confirm that it was false.
 
Upvote
-5 (6 / -11)

studenteternal

Smack-Fu Master, in training
66
Generative AI should be programmed to say I don't know rather than filling in around the details. Hopefully fear of this sort of lawsuit will result in better versions of these language models.
Thing is, it can't. There is no difference between true and false information to the model; it's just a statistical likelihood that the output is coherent based on the training data. The model doesn't "know" it's lying.
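To make that concrete, here's a deliberately tiny toy sketch (a made-up probability table, nothing like OpenAI's actual stack): all a language model has is "given this context, which token is likely to come next." Nowhere in that data is there a flag marking a continuation as true or false.

import random

# Toy next-token table standing in for a trained LLM: all the "model" has are
# probabilities over continuations, learned from whatever text it saw.
NEXT_TOKEN_PROBS = {
    ("complaint", "alleges"): {"fraud": 0.5, "embezzlement": 0.3, "negligence": 0.2},
    ("alleges", "fraud"): {"and": 0.6, "[end]": 0.4},
    ("fraud", "and"): {"embezzlement": 1.0},
}

def sample_next(context):
    """Pick the next token purely by probability; truth never enters into it."""
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-2:]), {"[end]": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, max_tokens=8):
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = sample_next(tokens)
        if nxt == "[end]":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the complaint alleges"))  # e.g. "the complaint alleges fraud and embezzlement"

Run it a few times and you get different, equally confident-sounding claims, because picking a likely-sounding next token is the entire job.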
 
Upvote
105 (107 / -2)

fenncruz

Ars Tribunus Militum
1,610
Subscriptor++
Do we know if OpenAI's model was trained on the cases in question, and then made stuff up? Or just trained on other cases, and then made stuff up? The first would seem more likely to fall into "knowingly made false statements," while the second might get away with unknowingly making false statements.
 
Upvote
-18 (1 / -19)
Thing is, it can't. There is no difference between true and false information to the model; it's just a statistical likelihood that the output is coherent based on the training data. The model doesn't "know" it's lying.
Exactly! Also the model doesn't know it's telling the truth! It knows nothing!

That's one reason why I hate the term "hallucinate" for these models. When it gets an answer wrong, everyone says it "hallucinated" the wrong answer. But it didn't. Its process for getting the answer wrong is exactly the same as its process for getting the answer right - it just followed a different random path to generate the text string it output, and put out something we judge as incorrect instead of correct. If you want to say it "hallucinates" wrong answers, you might as well say it "hallucinates" right answers, because it's doing the exact same thing in both cases.
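If you want to see how little the "right" and "wrong" cases differ, here's a minimal sketch using the small open GPT-2 model through the Hugging Face transformers library (my stand-in, since we can't inspect ChatGPT itself): the weights and the sampling code are identical between runs; only the random seed changes.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same model, same sampling code, two different random seeds.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
prompt = tok("The complaint filed in federal court alleges", return_tensors="pt")

for seed in (0, 1):
    torch.manual_seed(seed)  # the only difference between the two runs
    out = model.generate(**prompt, do_sample=True, max_new_tokens=20,
                         pad_token_id=tok.eos_token_id)
    print(seed, tok.decode(out[0], skip_special_tokens=True))

# Neither run carries any signal about whether the generated text is factually
# correct; one output isn't a "hallucination" any more than the other is.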
 
Upvote
146 (148 / -2)

uwsparky

Wise, Aged Ars Veteran
106
Except that I think the person using the AI should be held responsible. Surely it qualifies as 'reckless disregard for the truth'?

The AI doesn't have agency here. It cannot be responsible, and thus cannot be used to dodge responsibility. If you use it, you should be liable for what that use results in.
We now have a plaintiff who knows ChatGPT will publish mistruths about him when prompted. True, those mistruths are not published to the world at large, but it is clear that ChatGPT will publish them to anyone who asks. This is most definitely a ChatGPT problem regardless of the TOS, which do NOT apply to the person the mistruths are generated about -- just like I cannot sign you into a contract you know nothing about.

This will be an interesting case as, if I remember correctly, libel requires others to see it. I don't know what happens for a collection of individually tailored publications that are generated on request. And it may be a state-by-state issue which is going to be awful.

A whole bunch of law students and professors will have competing journal articles soon. :)
 
Upvote
-3 (18 / -21)

fancysunrise

Ars Scholae Palatinae
798
If you think it's not libel because people should know it's not facts, consider whether someone could get around the libel laws by having their "AI" say stuff for them. You could say anything about anyone - just program the AI with that specific bias.
There is a fundamental difference between a tool inadvertently saying some incorrect thing - just like a search engine or any statistical tool - and intentionally inducing the tool to produce a predetermined result. That is, the former is other people not understanding what the tool is and getting wrapped up in media and marketing hysteria while the latter has nothing really to do with the tool, which is used as a veil for traditional disinformation. The latter is an expression of intent, which is a necessary component for defamation under the law (or negligence, but that's not part of the hypothetical); the former has no intent and is incidental.

There will be murky areas in the middle and establishing requirements for these cases is almost always difficult, but it's a false dichotomy. I just hope that judges don't get as confused and duped by the hype as these lawyers have been.
 
Upvote
21 (21 / 0)

fancysunrise

Ars Scholae Palatinae
798
Exactly! Also the model doesn't know it's telling the truth! It knows nothing!

That's one reason why I hate the term "hallucinate" for these models. When it gets an answer wrong, everyone says it "hallucinated" the wrong answer. But it didn't. Its process for getting the answer wrong is exactly the same as its process for getting the answer right - it just followed a different random path to generate the text string it output, and put out something we judge as incorrect instead of correct. If you want to say it "hallucinates" wrong answers, you might as well say it "hallucinates" right answers, because it's doing the exact same thing in both cases.
It's not like any of this is new, except for the media attention that started last year. Our inept media spheres are fueling the misinformation about these tools and that's the biggest threat for the time being, Altman's unhinged "human extinction" quips and calls for legal moats or Musk's cries for attention notwithstanding.
 
Upvote
24 (26 / -2)
It's not like any of this is new, except for the media attention that started last year. Our inept media spheres are fueling the misinformation about these tools and that's the biggest threat for the time being, Altman's unhinged "human extinction" quips and calls for legal moats or Musk's cries for attention notwithstanding.
It's frustrating because when you know how these LLMs work you're looking at all of this like "what the heck is going on? Am I taking crazy pills or something?" Because it's just beads on a string. Really good context to get a really high probability that the next word on that string is going to be a good one, but that's all it is.

I'll give Altman this - he's damn good at being a hype machine. I know that's his skillset and how he's done so well in Silicon Valley up to this point, but to manage to get so many people to think that his LLM is somehow more than just a really damn fine autocomplete tool - PT Barnum would applaud. If anyone can manage to make his eyeball scanning cryptocoin dreams work, it'll be him.
 
Upvote
54 (55 / -1)

Pluvia Arenae

Ars Tribunus Militum
2,677
Subscriptor++
The misinformation was first uncovered by journalist Fred Riehl, who asked ChatGPT to summarize a complaint that SAF filed in federal court.
The word "uncovered" seems misleading here. The text, and the claims made in the text, did not exist before Riehl asked the LLM a question. This might seem like nitpicking, but I think it's important enough since this is a story about legal issues.
 
Upvote
62 (62 / 0)

Wandering Monk

Wise, Aged Ars Veteran
163
Subscriptor
While we’re suggesting laws about LLMs, can we have a law that requires them to be called “believable bullshit generators”? Because that’s literally what they are.

Some AI (neural net) systems are actually trained to be accurate/truthful (like the ones that are trained to find medical problems in CAT scans or X-rays). IBM’s Watson was trained to give correct answers to Jeopardy prompts. LLMs are not trained that way; they are trained to sound like something someone somewhere might actually say.
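For anyone curious what that difference in training objectives looks like, here’s a rough sketch with made-up tensors (hypothetical shapes, not any vendor’s real code): the LLM loss only rewards predicting the next token of whatever text it was trained on, while a diagnostic classifier’s loss is computed against labels somebody actually verified.

import torch
import torch.nn.functional as F

# Language-model objective: predict the next token of the training text.
lm_logits = torch.randn(10, 50_000)            # 10 positions, 50k-token vocabulary
next_tokens = torch.randint(0, 50_000, (10,))  # whatever the training text said next
lm_loss = F.cross_entropy(lm_logits, next_tokens)  # rewards "sounding like the text"

# Classifier objective: match a verified ground-truth label.
clf_logits = torch.randn(4, 2)                 # 4 scans, 2 classes (finding / no finding)
labels = torch.tensor([0, 1, 1, 0])            # labels checked by radiologists
clf_loss = F.cross_entropy(clf_logits, labels)     # rewards being actually right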
 
Upvote
7 (15 / -8)

fenris_uy

Ars Tribunus Angusticlavius
8,132
"I don't know of any reasons why libel principles would not apply to companies that publish defamatory statements via AI," Monroe told Ars.

I know that his job is to prove his case, but I can find a reason why libel principles wouldn't apply to this case: OpenAI didn't publish anything about Walters.
 
Upvote
7 (12 / -5)

MMarsh

Ars Praefectus
4,318
Subscriptor
Per the OpenAI accuracy clause, then, does it make a difference that OpenAI not only knew the output could be inaccurate but explicitly stated in advance that it could be, and made awareness of that part of the terms of use?
I'm not sure if falling back on the terms of service will work here. The terms of service (ToS) is a contract between the company providing the tool and that company's user/customer. It can't override actual law, which includes defamation / libel law. And it can't force a 3rd party, who is not a user/customer of that company and never agreed to its ToS, to surrender any rights.

At most, the ToS might assign responsibility for the consequences of any ChatGPT-generated content to the user who prompted it. If the company can throw that user under the bus via the ToS, then I can see the case tipping in favour of the "it's just a tool, just like how you can't sue Microsoft because someone wrote a ransom letter in Word" angle.

I can also see how a court might conclude that, since OpenAI's product is producing libellous / defamatory material about a 3rd party who has no relationship with OpenAI, then OpenAI has an obligation to remedy that in some yet-to-be-determined fashion.

OpenAI might not be the publisher of the material in question, which could be a point in their favour. Or they may indeed be the publisher, if their tool is spitting the false statements out to anyone in the general public who prompts it with a particular name. I can write all the defamatory material I want in private; it only becomes a legal problem if I publish it for someone else to read. The person putting it out in public bears the brunt of the legal responsibility.

The "I just want monetary damages" as opposed to some kind of injunction or call for specific performance might be another point in OpenAI's favour. It makes it look to a judge & jury like Mr. Walters is just after cash, rather than actually protecting a reputation.

But I don't see a legally sound way to a "nobody's responsible, everything is fine, shrug, let's all go home" conclusion.

It's going to be an interesting case.
 
Last edited:
Upvote
11 (14 / -3)
"Is OpenAI responsible when ChatGPT lies?" YES!!!! We must make sure somebody is responsible...no more section 230s! The Tech industry avoids responsibility like the plague -- it's part of the geek personality (and I am one) -- but we should not bow to their efforts, even when they come disguised with a request to "regulate us please"!
...hallucinated nobody mentally competent, ever.
 
Upvote
6 (9 / -3)

Pluvia Arenae

Ars Tribunus Militum
2,677
Subscriptor++
While we’re suggesting laws about LLMs, can we have a law that requires them to be called “believable bullshit generators”? Because that’s literally what they are.

Some AI (neural net) systems are actually trained to be accurate/truthful (like the ones that are trained to find medical problems in CAT scans or X-rays). IBM’s Watson was trained to give correct answers to Jeopardy prompts. LLMs are not trained that way; they are trained to sound like something someone somewhere might actually say.
Watson is not a neural network system.
 
Upvote
11 (11 / 0)