Amicus Warns Against Immunity, Lays Out Path Forward In ChatGPT Defamation Case


(January 7, 2025, 8:16 AM EST) -- ATLANTA — An amicus urged a Georgia judge not to adopt the “all-or-nothing” approaches offered by the parties in an artificial intelligence defamation case and instead focus on whether OpenAI LLC could have implemented an alternative design that would have prevented the allegedly defamatory outputs.

(Mark Walters v. OpenAI LLC, No. 23-A-04860-2, Ga. Super., Gwinnett Co.)

(TLPC amicus brief available.  Document #46-250108-038B.)

The Technology Law and Policy Clinic at New York University School of Law (TLPC) filed its amicus curiae brief on Dec. 12.

Journalist Fred Riehl, a subscriber to OpenAI’s ChatGPT, interacted with the artificial intelligence about a court case on which he was reporting:  Second Amendment Foundation, et al. v. Robert Ferguson, et al., W.D. Wash., No. 23-647.  Washington Attorney General Robert Ferguson and others are the defendants in Second Amendment, while Second Amendment Foundation (SAF) and Alan Gottlieb are the plaintiffs.

Summary

Riehl asked ChatGPT to summarize the allegations in Second Amendment, to which the program responded that the case involved Gottlieb’s claims that while Mark Walters was serving as SAF’s treasurer or chief financial officer, he defrauded and embezzled funds from the foundation.  Walters is the host of the nationally syndicated Armed American Radio.  He claims that he is not a party to Second Amendment, that he never held either position at SAF and that there are no allegations that he misappropriated funds from SAF.  In fact, Second Amendment does not involve allegations of financial misconduct at all, Walters says.

When asked to provide a portion of the complaint, ChatGPT produced a paragraph describing the fabricated claims, Walters says.  When questioned further, ChatGPT provided what it said was the complete text of the complaint, though the text was entirely fabricated and included the wrong case number, Walters says.

Walters claims that by sending Riehl the materials through ChatGPT, OpenAI published false and malicious information about him that harmed his reputation and caused him public ridicule.

In a June 5, 2023, complaint filed against OpenAI in the Gwinnett County, Ga., Superior Court, Walters asserts a claim for defamation and seeks compensatory and punitive damages. 

OpenAI moved to dismiss the action; the court denied the motion on Jan. 11, 2024.

‘Clearly Inaccurate’

In a Nov. 14 memorandum of law in support of its motion for summary judgment, OpenAI says that with discovery now complete, Walters still cannot show the existence of a false statement made to a third party, that OpenAI acted with the requisite degree of fault or that he is entitled to damages.  Yet all three are required for a defamation claim, OpenAI says.

Riehl had the complaint in his possession when he queried ChatGPT, OpenAI says.  He had seen a press release on the case prior to asking ChatGPT about it.  There is also evidence that ChatGPT repeatedly issued disclaimers and warnings about its outputs, OpenAI says.

Something is defamatory only if it can reasonably be understood as describing real-world events, OpenAI says.  That is not possible here because no one would believe ChatGPT’s “clearly inaccurate output,” OpenAI argues.  It says Riehl knew the facts about the lawsuit in question when he queried ChatGPT about it, knew ChatGPT was prone to producing fictional responses and knew the outputs at issue were false.  He did not believe them, and his research immediately confirmed what he already knew, OpenAI says.

Immunity

In a Dec. 12 amicus brief in support of neither party, TLPC says the case presents “important and novel questions” about how defamation law will apply to AI-generated content.  The “all-or-nothing” approaches offered by the parties fail to strike the proper balance, it argues.  In its forthcoming ruling, the court should consider defamation law’s core purpose and the facts specific to this case, TLPC says.

Despite OpenAI’s protestations to the contrary, it is possible that AI users would believe the program’s outputs, TLPC tells the court.  And the law should protect against reputational harms caused by the use of AI, TLPC says.

TLPC notes that many of the AI tools deployed by companies are offered through services subject to “unilateral contracts of adhesion that members of the general public have no meaningful opportunity to negotiate.”  As these services inevitably grow in popularity, the likelihood that users will rely on their outputs grows as well.  Nor are inaccurate outputs rare:  when challenged about how it reached an incorrect conclusion, ChatGPT will often simply produce more “unsubstantiated hallucinations,” TLPC says.

“The Court should reject OpenAI’s attempt to obtain what would amount to categorical immunity from defamation claims for generative AI companies,” TLPC argues.

Speech Protections

TLPC notes, however, that the First Amendment, U.S. Const. amend. I, does offer some protection and that the court must “carefully and creatively” reconcile the protections enshrined in the First Amendment with those promised by defamation law.  “In doing so, the Court should look to established principles of products liability and agency law to conduct a fact-specific inquiry into whether the company that developed and deployed the generative AI tool has met the requisite level of fault,” TLPC says.

“In the immediate case, the Court should question whether liability could be established based only on evidence that OpenAI was aware of the general risk of hallucinations, as such a low bar for finding defamation fault would raise serious First Amendment concerns by heavily curtailing generative AI companies’ ability to offer their tools to the public.  In particular, the Court should query whether Mr. Walters has identified a design decision which, if implemented, would have prevented ChatGPT from making the outputs about him at issue.  Requiring a plaintiff to identify a reasonable alternative design . . . would be one way for a court to find liability without seriously jeopardizing the First Amendment’s fault requirement,” TLPC says.

“No court has yet directly grappled with the difficult challenges posed for defamation law when a corporation develops and deploys a generative AI tool capable of making false, apparently factual claims about specific individuals.  The Court’s analysis will help shape the development of this area of the law — both in Georgia and throughout the country,” TLPC argues.

Counsel

Walters is represented by John R. Monroe of John Monroe Law PC in Dawsonville, Ga.

OpenAI is represented by Stephen T. LaBriola and Maxwell R. Jones of Fellows LaBriola LLP in Atlanta.

TLPC is represented by Clare R. Norins of the First Amendment Clinic at the University of Georgia School of Law in Athens, Ga.

(Additional documents available:  OpenAI’s memorandum in support of summary judgment.  Document #46-241211-020B.  Jan. 11, 2024, order. Document #46-240207-016R.  OpenAI’s motion to dismiss.  Document #46-240105-018B.  Walters’ opposition.  Document #46-240105-017B.  Amended complaint.  Document #46-231004-014C.)