Stanford AI-forgery expert's court statement contained apparent AI fabrication, lawyers claim

A Stanford professor serving as an expert witness in a federal lawsuit over artificial intelligence-generated forgeries filed a declaration containing false information that likely came from an AI chatbot, a court filing says.

The statement, filed by Jeff Hancock, professor of communication and founding director of the Stanford Social Media Lab, “cites a study that does not exist,” according to the plaintiffs' Nov. 16 filing in the case. “It is likely that the study was a 'hallucination' generated by a large AI language model such as ChatGPT.”

Hancock and Stanford didn’t immediately reply to requests for comment.

The lawsuit was filed in Minnesota federal district court by a state lawmaker and a satirical YouTuber seeking a court order striking down as unconstitutional a state law that criminalizes election-related, AI-generated “deepfake” photos, videos and audio.

According to court filings Saturday, Hancock was retained as an expert witness by the Minnesota attorney general, a defendant in the case.

In the filing, the lawmaker and the YouTuber questioned Hancock's reliability as an expert witness and argued that his report should be thrown out because it could contain additional, undetected AI fabrications.

In his 12-page submission to the court, Hancock said he studies “the impact of social media and artificial intelligence on misinformation and trust.”

Submitted with Hancock's report was his list of “cited references,” court records show. One of those references — to a study by authors Huang, Zhang and Wang — caught the eye of lawyers for state Rep. Mary Franson and YouTuber Christopher Kohls, who is also suing California Attorney General Rob Bonta over a law that allows lawsuits for damages from election deepfakes.

Hancock cited the study, which reportedly appeared in the Journal of Information Technology & Politics, to support a point he made in his court filing about the sophistication of deepfake technology. The journal is real. But the study is “imaginary,” says the filing from Franson and Kohls’ lawyers.

The journal volume and article pages cited by Hancock don’t address deepfakes, but cover online discussions by presidential candidates about climate change and the impact of social media posts on election results, the filing says.

Such a citation, with a plausible title and a purported publication in a real journal, “is characteristic of an artificial intelligence 'hallucination' that academic researchers have warned their colleagues about,” the filing says.

Hancock declared, under penalty of perjury, that he “identified the academic, scientific and other materials referred to” in his statement, the filing said.

The filing raised the possibility that the alleged AI falsehood was inserted by the defendant's legal team rather than by Hancock himself, but added that even in that case, “Hancock would still have made a statement falsely claiming to have reviewed the material cited.”

Last year, attorneys Steven A. Schwartz and Peter LoDuca were fined $5,000 in federal court in New York for submitting a personal injury lawsuit filing that cited fictitious prior court cases, invented by ChatGPT, to support their arguments.

“I didn’t understand that ChatGPT could fabricate cases,” Schwartz told the judge.
