Anthropic blames Claude AI for ‘embarrassing and unintentional mistake’ in legal filing

The chatbot added wording errors to a citation that made it look like a genuine article didn’t exist.

Not the best publicity for Anthropic’s chatbot.
Cathryn Hutton / The Verge
Jess Weatherbed
Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers, saying its Claude chatbot made an “honest citation mistake.”

An erroneous citation was included in a filing submitted by Anthropic data scientist Olivia Chen on April 30th, as part of the AI company’s defense against claims that copyrighted lyrics were used to train Claude. An attorney representing Universal Music Group, ABKCO, and Concord said in a hearing that sources referenced in Chen’s filing were a “complete fabrication,” and implied they were hallucinated by Anthropic’s AI tool.

In a response filed on Thursday, Anthropic defense attorney Ivana Dukanovic said that the scrutinized source was genuine and that Claude had indeed been used to format legal citations in the document. While incorrect volume and page numbers generated by the chatbot were caught and corrected by a “manual citation check,” Anthropic admits that wording errors had gone undetected.

Dukanovic said, “unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors,” and that the error wasn’t a “fabrication of authority.” The company apologized for the inaccuracy and confusion caused by the citation error, calling it “an embarrassing and unintentional mistake.”

This is the latest in a growing list of examples of AI tools causing problems with legal citations in courtrooms. Last week, a California judge chastised two law firms for failing to disclose that AI was used to create a supplemental brief rife with “bogus” materials that “didn’t exist.” A misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he’d submitted.
