Mark Walters, host of "Armed American Radio," said in a brief filed in Georgia state court that the company "spits out lies" through its ChatGPT product. Though he isn't required to, Walters said he can show "actual malice" by OpenAI, meaning the company can't secure summary judgment.
"OAI is operating the high-tech equivalent of the neighborhood gossip, who says, 'I don't know if this is true or not, but …' If all that it takes to avoid defamation liability is a liberal sprinkling of disclaimers, the law of libel would be very different indeed," Walters said.
Whether a party can secure summary judgment turns on whether a reasonable jury could believe the defamatory statements were factual, Walters said. If the statement could reasonably be understood as describing actual facts or events, then a defendant is not entitled to summary judgment, he said.
"In the present case, OAI provides its ChatGPT product to the public as a research tool," the brief said. "Clearly, research tools are not works of fiction, and the user would reasonably believe the output of the research tool is intended to be factual and not fiction or satire. At the very least this is a question for the jury that cannot be resolved on summary judgment."
Walters is a nationally syndicated radio host who describes himself as the "loudest voice in America" on Second Amendment issues, according to court documents. His suit against OpenAI alleges ChatGPT produced a fake complaint naming him as a defendant when Frederick Riehl, a journalist writing about a legal case, used the chatbot to research the case.
Riehl is a longtime friend of Walters and a board member of the Second Amendment Foundation. In May 2023, Riehl asked ChatGPT to summarize a legal complaint the SAF had filed against Washington Attorney General Robert Ferguson.
Walters alleges ChatGPT defamed him when it responded with a summary that said the Ferguson complaint was filed against him. The technology also alleged Walters "defraud[ed] and embezzle[d] funds from the SAF," according to court documents.
"OAI does not claim that it did not make false and defamatory statements concerning Walters, nor can it," Walters said. "The statements pertaining to Walters from OAI to Riehl were complete fiction and they were obviously defamatory."
The Ferguson complaint actually accused the attorney general's office of violating the SAF's civil and constitutional rights when conducting law enforcement investigations into alleged wrongdoing by the SAF and other plaintiffs.
Despite the ChatGPT incident, Riehl did not publish a story claiming Walters embezzled funds from the SAF or that Walters was accused of embezzlement, OpenAI said. Additionally, Riehl never repeated the story as true to anyone else and took no adverse action against Walters, the company said.
In a November memorandum, OpenAI said Walters qualified as a public figure. Therefore, he must show "actual malice" in a defamation case, meaning he must prove by clear and convincing evidence that OpenAI made the statements knowing they were false or with reckless disregard as to whether they were false.
"Although this case involves a new technology, the outcome would be the same if it involved a newspaper, book, or blog. Because there is no defamatory statement, no evidence of actual malice, and no recoverable damages, summary judgment is warranted on each of these independent grounds," OpenAI said in its motion for summary judgment.
Walters disputes that the "actual malice" standard should apply, saying he does not qualify as a public figure. Even if it did apply, Walters said his case would meet that standard.
"OAI showed a reckless disregard for the falsity of its statements because it knew that ChatGPT had a proclivity to invent lies but left the system online and operational. It admits it has no way to monitor the lies. And even after it became aware that it demonstrably published lies about Walters, it refuses to say whether it took any measures specifically to prevent repeating the lies about Walters," he said.
Walters said actual malice requires clear and convincing evidence that a defendant acted with knowledge that the statement was false or with reckless disregard as to its truth.
"It is hard to imagine a case where the defendant showed more awareness that it was circulating false information than the present case," Walters said. "OAI goes to great lengths to emphasize that it tells its users repeatedly that its statements are not reliable."
An attorney for Walters declined to comment. Counsel for OpenAI didn't immediately respond to a request for comment Monday.
Walters is represented by John R. Monroe of John Monroe Law PC.
OpenAI LLC is represented by Ashley A. Carr, Brendan Krasinski, Danny Tobey, Ilana H. Eisenstein and Peter Karanjia of DLA Piper and Stephen T. LaBriola and Maxwell R. Jones of Fellows Labriola LLP.
The case is Mark Walters v. OpenAI LLC, case number 23-A-04860-2, in the Superior Court of Gwinnett County, State of Georgia.
--Additional reporting by Jake Maher. Editing by Drashti Mehta.
For a reprint of this article, please contact reprints@law360.com.