Stanford University misinformation expert admits using AI to fake court citations

Stanford professor claims the fabricated citations were an unintentional result of using ChatGPT-4o.


News Desk December 03, 2024

A Stanford University misinformation expert has admitted using artificial intelligence to draft a court document that contained multiple fabricated citations about AI.

Jeff Hancock, a professor at Stanford, submitted the document in a case involving a Minnesota law that makes it illegal to use AI to mislead voters ahead of elections.

The fake citations, generated by the AI tool, were discovered by opposing lawyers, who later petitioned the court to dismiss Hancock’s declaration.

Hancock, who charged the state of Minnesota $600 an hour for his services, says the fabricated citations were an unintentional result of using ChatGPT-4o.

The Attorney General’s Office filed a new motion in court, stating that Hancock believes the citations were “AI-hallucinated” and that he didn’t intend to mislead the court or counsel.

The office said it was not aware of the fake citations until the opposing lawyers raised the issue, and it is now asking the judge for permission to allow Hancock to submit a revised declaration.

In a separate filing, Hancock defended his use of AI, stating that generative AI tools like ChatGPT are increasingly used by academics and professionals for research and drafting.

He argued that the practice was widespread, citing AI’s integration into programmes like Microsoft Word and Gmail.

However, his case follows a precedent set earlier this year, when a New York court ruled that lawyers must disclose the use of AI in expert opinions and rejected an expert's declaration after discovering it contained AI-generated content.

Hancock, known for his expertise in misinformation and technology, has published several papers on AI and its effects on communication. He used GPT-4o to help with the literature survey on deepfakes and to draft his declaration.

According to Hancock, the AI tool misinterpreted his personal notes as commands to insert fake citations.

The case has raised important questions about the ethics of AI use in legal proceedings, with concerns over the potential for misinformation to be introduced unintentionally.

Hancock’s role as an expert witness in numerous other court cases has yet to be fully examined, and he has not commented on whether AI was used in those instances.
