Anthropic Fires Back Against Music Publishers’ Injunction Motion: ‘Using Copyrighted Content As Training Data for an LLM Is a Fair Use’

Photo Credit: Igor Omilaev

Two months ago, music publishers including Universal Music, Concord, and ABKCO demanded a preliminary injunction against Anthropic as part of an ongoing copyright infringement lawsuit. Now, the generative AI company has officially fired back.

Amazon- and Google-backed Anthropic recently submitted a 40-page response to the plaintiff publishers’ preliminary injunction request. For background, these publishers maintain that Anthropic committed infringement by ingesting their compositions without permission as part of the AI training process.

Plus, Anthropic’s Claude AI assistant is said to have copied the lyrics of multiple works (often verbatim) in responses without attribution.

Consequently, Universal Music, Concord, and others are calling on the court to compel Anthropic to “implement effective guardrails” curbing alleged compositional infringement in existing products. Also sought by the plaintiffs is an order preventing the defendant from utilizing copies of “lyrics to train future AI models.”

(On the guardrails side, Anthropic approached the plaintiffs to build “additional guardrails and offered to cooperate with them to create even broader ones,” but they, following “not constructive” talks, went ahead and filed the injunction motion, according to the AI business. Anthropic is said to have since developed “additional safeguards to prevent” the display of the lyrics in question in any event.)

As mentioned, the AI developer has pushed back against the motion and the arguments therein – including by once again expressing the belief that training large language models (LLMs) on protected media constitutes fair use.

“While Anthropic is confident that using copyrighted content as training data for an LLM is a fair use under the law—meaning that it is not infringement at all—there is no basis to conclude that money damages would not make Plaintiffs whole if they ultimately prevail on the merits,” the defendant indicated.

Those comments pertain specifically to the argument that the plaintiffs suffered irreparable harm because of their works’ use as “tokens” in the training process.

Approximately two dozen similar suits “around the country” are unfolding sans preliminary injunctions, per the text, and “it speaks volumes” that the litigating publishers “allowed at least several months to pass without any notice to Anthropic of the allegedly incurable wrong.”

Beyond these top-level arguments, Anthropic described at length how Claude functions and its most common uses (“song lyrics are not among the outputs that typical Anthropic users request”).

Interestingly, notwithstanding the well-documented reach and capital of leading AI players, the defendant is also of the opinion that requiring licenses to train systems on copyrighted content would effectively mark the end of “today’s general-purpose AI tools.”

“One could not enter licensing transactions with enough rights owners to cover the billions of texts necessary to yield the trillions of tokens that general-purpose LLMs require for proper training,” the filing spells out. “If licenses were required to train LLMs on copyrighted content, today’s general-purpose AI tools simply could not exist.”

Lastly, Anthropic is urging the court to toss the injunction motion because of the suit’s venue (“Anthropic has no relevant connection to Tennessee”), while also attempting to pin the alleged infringement on the plaintiffs themselves for good measure.

“With respect to the copies that Plaintiffs’ [Claude AI] prompts yielded,” the document reads, “the only evidence in the record is that Plaintiffs and their agents, rather than Anthropic, made the allegedly infringing copies. … Under these circumstances, the output copies do not constitute copying by Anthropic.”