A Stanford University misinformation expert accused in a Minnesota federal court case of filing a declaration containing fabricated information has blamed an artificial intelligence chatbot.
In fact, the bot made more mistakes than the one highlighted by the plaintiffs in the case, Professor Jeff Hancock wrote in an apologetic court filing, saying he had no intention of misleading the court or any lawyers.
“I express my sincere regret at the confusion this has caused,” Hancock wrote.
Lawyers for a YouTuber and a Minnesota state lawmaker suing to overturn a Minnesota law said in a court filing last month that Hancock's expert witness declaration included a reference to a study by authors Huang, Zhang and Wang that didn't exist. They surmised that Hancock had used a chatbot to create the 12-page document and called for the filing to be thrown out because it could contain additional, undetected AI fabrications.
It did: After the plaintiffs' lawyers raised the issue, Hancock found two more AI "hallucinations" in his declaration, according to his filing in Minnesota District Court.
The professor, founding director of the Stanford Social Media Lab, was brought into the case by the Minnesota attorney general as an expert defense witness in the lawsuit brought by the state lawmaker and the satirist YouTuber. The lawmaker and the social media influencer are seeking a court order declaring unconstitutional a state law criminalizing election-related, AI-generated "deepfake" photos, videos and audio.
Hancock's legal tangle highlights one of the most common problems with generative AI, a technology that has taken the world by storm since San Francisco-based OpenAI released its ChatGPT bot in November 2022. AI chatbots and image generators often produce errors known as hallucinations: text can contain misinformation, and images can contain absurdities such as six-fingered hands.
In his apologetic filing with the court, Hancock, who studies AI's impact on misinformation and trust, explained how his use of OpenAI's ChatGPT to draft his expert declaration led to the errors.
Hancock admitted that in his declaration, along with the fake study by Huang, Zhang and Wang, he also included "a non-existent article by De keersmaecker & Roets from 2023" and four "fake" authors for another study.
To bolster his credibility with "details" of his expertise, Hancock claimed in the filing that he was a co-author of the "foundational article" on AI-mediated communication. "In particular, I have published extensively on misinformation, including the psychological dynamics of misinformation, its spread, and possible solutions and interventions," Hancock wrote.
He used GPT-4o to find and summarize articles for his declaration, but the errors likely didn't appear until later, when he was drafting the document, Hancock wrote in the filing. He had added the word "cite" to the text he gave the chatbot to remind himself to add academic citations to the points he was making, he wrote.
"So GPT-4o's response was to generate a citation, which I believe is where the hallucinated citations came from," Hancock wrote, adding that he believed the chatbot also made up the four fake authors.
The plaintiffs' filing also called into question Hancock's reliability as an expert witness.
Hancock apologized in his court filing and claimed that the three errors "have no bearing on the scientific evidence or opinions" he presented as an expert.
The judge in the case has scheduled a hearing for Dec. 17 to decide whether Hancock's expert testimony should be thrown out and whether the Minnesota attorney general's office can file a corrected version of the declaration.
Stanford, where students can be suspended and required to perform community service for using a chatbot to "substantially complete an assignment or exam" without permission from their instructor, didn't immediately respond to questions about whether Hancock would face disciplinary action. Hancock didn't immediately respond to similar questions.
Hancock is not the first person to submit a court filing containing AI-generated fabrications. Last year, attorneys Steven A. Schwartz and Peter LoDuca were fined $5,000 each in federal court in New York for submitting a personal injury lawsuit filing that cited prior court cases invented by ChatGPT to support their arguments.
“I didn’t understand that ChatGPT could fabricate cases,” Schwartz told the judge.