> One would think that the "loudest voice in America fighting for gun rights" would be enough of a public figure that the "actual malice" standard would apply.

The complainant would agree with you from the lawsuit ...
> Hm. How often has Wikipedia been sued for "wrong facts?"

Wikipedia is protected by '230.
> Wikipedia is protected by '230.

And the content isn't generated by machine learning, so it was a really bad comparison.
> The complainant would agree with you from the lawsuit ...
> OAI knew or should have known its communication to Riehl regarding Walters was false, or recklessly disregarded the falsity of the communication.

Per the OpenAI accuracy clause, then, does it make a difference that OpenAI not only knew the output could be inaccurate but explicitly stated in advance that it could be, and made that warning part of the terms of use?
> If you think it's not libel because people should know it's not facts, consider whether someone could get around the libel laws by having their "AI" say stuff for them. You could say anything about anyone - just program the AI with that specific bias.

Except that I think the person using the AI should be held responsible. Surely it qualifies as 'reckless disregard for the truth'?
Of course, all of this raises a bunch of questions: Is this actually defamatory? Is there actual malice? If so, who is legally liable?
And I’m not sure there are really good answers. First off, only one person actually saw this information, and there’s no indication that he actually believed any of it (indeed, it sounds like he was aware that it was hallucinating), which would push towards it not being defamation; and even if it was, there was no harm at all.
Second, even if you could argue that the content was defamatory and created harm, is there actual malice by OpenAI? First off, Walters is easily a public figure, so he’d need to show actual malice by OpenAI, and I don’t see how he could. OpenAI didn’t know that the material was false, nor did it recklessly disregard evidence that it was false. The fact that OpenAI warns users that ChatGPT may make up untrue things does not change that calculation, even as Walters’ complaint suggests otherwise.
…
Being aware generally that the AI sometimes makes up facts is not the same thing as being aware, specifically, that it had made up facts in this case. And for there to be actual malice, I’m pretty sure they’d need to show the latter.
And then, even still, if this got past all those hurdles, is OpenAI actually liable?
I still have difficulty seeing OpenAI as the liable party here. Again, it just has created this sophisticated “auto-complete” system that is basing what it says on its prediction engine of what the next word should be. It knows nothing of Mark Walters. It’s just trying to craft a plausible sounding narrative based on the prompts provided by Riehl.
…
And, really, if this makes OpenAI liable, it seems lots of people could just ask OpenAI to fabricate any story they wanted, and then sue OpenAI over it. And… that can’t be right. Especially in a case like this where there is literally no harm done at all. Only one person saw the output and that person knew it was false, and quickly checked to confirm that it was false.
> Generative AI should be programmed to say "I don't know" rather than filling in around the details. Hopefully fear of this sort of lawsuit will result in better versions of these language models.

Thing is, it can't: there is no difference between true and false information to the model, it's just a statistical likelihood that the output is coherent based on training data. The model doesn't "know" it's lying.
> Thing is, it can't: there is no difference between true and false information to the model, it's just a statistical likelihood that the output is coherent based on training data. The model doesn't "know" it's lying.

Exactly! Also the model doesn't know it's telling the truth! It knows nothing!
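To make that concrete, here is a minimal toy sketch (entirely made-up words and probabilities, not OpenAI's or any real model's code) of what "just a statistical likelihood" means: the sampler only ever sees numbers, and nothing in the loop distinguishes a true continuation from a false one.

```python
import random

# Hypothetical toy "model": probabilities for the next word given the last word.
# The weights are invented for illustration; a real LLM computes them with a
# neural network, but the point stands: they encode plausibility, not truth.
TOY_MODEL = {
    "sky": [("is", 1.0)],
    "is": [("blue", 0.6), ("falling", 0.3), ("green", 0.1)],
}

def next_word(previous_word):
    """Sample the next word purely by likelihood; 'true' vs 'false' never appears."""
    words, weights = zip(*TOY_MODEL[previous_word])
    return random.choices(words, weights=weights, k=1)[0]

# "The sky is blue" (true) and "The sky is falling" (false) come out of the
# exact same code path; only the random draw differs.
print("The sky is", next_word("is"))
```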
> Except that I think the person using the AI should be held responsible. Surely it qualifies as 'reckless disregard for the truth'?

We now have a plaintiff who knows ChatGPT will publish mistruths about him when prompted. True, those mistruths are not published to the world at large, but it is clear that ChatGPT will publish them to anyone who asks. This is most definitely a ChatGPT problem regardless of the TOS, which do NOT apply to the person the mistruths are generated about -- just as I cannot sign you into a contract you know nothing about.
The AI doesn't have agency here. It cannot be responsible, and thus cannot be used to dodge responsibility. If you use it, you should be liable for what that use results in.
> If you think it's not libel because people should know it's not facts, consider whether someone could get around the libel laws by having their "AI" say stuff for them. You could say anything about anyone - just program the AI with that specific bias.

There is a fundamental difference between a tool inadvertently saying some incorrect thing - just like a search engine or any statistical tool - and intentionally inducing the tool to produce a predetermined result. That is, the former is other people not understanding what the tool is and getting wrapped up in media and marketing hysteria, while the latter has nothing really to do with the tool, which is used as a veil for traditional disinformation. The latter is an expression of intent, which is a necessary component for defamation under the law (or negligence, but that's not part of the hypothetical); the former has no intent and is incidental.
> Exactly! Also the model doesn't know it's telling the truth! It knows nothing!

It's not like any of this is new, except for the media attention that started last year. Our inept media spheres are fueling the misinformation about these tools, and that's the biggest threat for the time being, Altman's unhinged "human extinction" quips and calls for legal moats or Musk's cries for attention notwithstanding.
That's one reason why I hate the term "hallucinate" for these models. When it gets an answer wrong, everyone says it "hallucinated" the wrong answer. But it didn't. Its process for getting the answer wrong is exactly the same as its process for getting the answer right - it just followed a different random path to generate the text string it output, and put out something we judge as incorrect instead of correct. If you want to say it "hallucinates" wrong answers, you might as well say it "hallucinates" right answers, because it's doing the exact same thing in both cases.
> It's not like any of this is new, except for the media attention that started last year. Our inept media spheres are fueling the misinformation about these tools, and that's the biggest threat for the time being, Altman's unhinged "human extinction" quips and calls for legal moats or Musk's cries for attention notwithstanding.

It's frustrating because when you know how these LLMs work, you're looking at all of this like "what the heck is going on? Am I taking crazy pills or something?" Because it's just beads on a string. Really good context to get a really high probability that the next word on that string is going to be a good one, but that's all it is.
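A small illustration of the "same process either way" point (assumed toy words and weights, not any actual model): the generation call is identical whether the output happens to be factually right or wrong; only the random draw differs.

```python
import random

# Imagined scores a model might assign to complete "The capital of France is ___".
VOCAB = ["Paris", "Lyon", "Rome"]
WEIGHTS = [0.7, 0.2, 0.1]

def complete(seed):
    """One 'bead on the string': pick the next word by weighted random choice."""
    rng = random.Random(seed)
    return rng.choices(VOCAB, weights=WEIGHTS, k=1)[0]

# Different seeds can yield either the answer we would call correct or one we
# would call a "hallucination"; the function does nothing different in either case.
outputs = {complete(seed) for seed in range(20)}
print(outputs)  # typically contains "Paris" alongside at least one wrong answer
```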
The word "uncovered" seems misleading here. The text, and the claims made in the text, did not exist before Riehl asked the LLM a question. This might seem like nitpicking, but I think it's important enough since this is a story about legal issues.The misinformation was first uncovered by journalist Fred Riehl, who asked ChatGPT to summarize a complaint that SAF filed in federal court.
> Except that I think the person using the AI should be held responsible. Surely it qualifies as 'reckless disregard for the truth'?

Exactly!
"I don't know of any reasons why libel principles would not apply to companies that publish defamatory statements via AI," Monroe told Ars.
> Per the OpenAI accuracy clause, then, does it make a difference that OpenAI not only knew the output could be inaccurate but explicitly stated in advance that it could be, and made that warning part of the terms of use?

I'm not sure if falling back on the terms of service will work here. The terms of service (ToS) is a contract between the company providing the tool and that company's user/customer. It can't override actual law, which includes defamation/libel law. And it can't force a third party, who is not a user/customer of that company and never agreed to its ToS, to surrender any rights.
> Wikipedia is protected by '230.

As are all, and only, those who are innocent of wrongdoing.
> "Is OpenAI responsible when ChatGPT lies?" YES!!!! We must make sure somebody is responsible...no more section 230s! The Tech industry avoids responsibility like the plague -- it's part of the geek personality (and I am one) -- but we should not bow to their efforts, even when they come disguised with a request to "regulate us please"!

...hallucinated nobody mentally competent, ever.
> While we're suggesting laws about LLMs, can we have a law that requires them to be called "believable bullshit generators"? Because that's literally what they are.

Watson is not a neural network system.
Some AI (neural net) systems are actually trained to be accurate/truthful (like the ones trained to find medical problems in CAT scans or X-rays). IBM's Watson was trained to give correct answers to Jeopardy prompts. LLMs are not trained that way; they are trained to sound like something someone, somewhere, might actually say.
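A rough sketch of that difference in training objectives (simplified, invented numbers; not the actual training code of Watson, ChatGPT, or any other system): a model trained for accuracy is scored against ground truth, while a language model is scored only on how well it predicts the next token of its training text.

```python
import math

# (a) Accuracy-style objective: the prediction is checked against a ground-truth label.
def accuracy_loss(predicted_label, true_label):
    return 0.0 if predicted_label == true_label else 1.0  # being wrong is what gets penalized

# (b) Language-model objective: only "how predictable was the actual next token?"
#     Cross-entropy on text; no fact-checking appears anywhere in the formula.
def next_token_loss(prob_of_actual_next_token):
    return -math.log(prob_of_actual_next_token)

print(accuracy_loss("tumor", "no tumor"))  # 1.0 -- penalized for an incorrect answer
print(next_token_loss(0.9))                # ~0.105 -- low loss for merely sounding likely
```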