Anthropic CEO Doubles Down on Fair Use Defense—“The Law Will Back Us Up”

Photo Credit: Dwarkesh Podcast (Dario Amodei)

Anthropic CEO Dario Amodei says he believes that his company’s use of copyrighted works is sufficiently transformative and that “the law will back us up.”

In a recent New York Times interview with Ezra Klein about the future of artificial intelligence and its breakneck pace of development, Amodei addressed the issue of large AI models being trained on copyrighted works. Klein asks him, “What is the responsibility to the actual wages and economic prospects of the people who made all this possible?”

“I think everyone agrees the models shouldn’t be verbatim outputting copyrighted content,” Amodei says, prefacing his stance. “For things that are available on the web, for publicly available [content], the training process, again, we don’t think it’s just hoovering up content and spitting it out—or it shouldn’t be spitting it out.”

“It’s really much more like the process of how a human learns from experiences,” Amodei says. “And so, our position is that it is sufficiently transformative, and I think that the law will back this up, that this is fair use.”

In 2023, Concord, Universal, and ABKCO filed a copyright infringement lawsuit against Anthropic over the material used to train its chatbot, Claude. The complaint, filed in Tennessee, alleges that Anthropic profits from song lyrics ‘unlawfully’ scraped from the internet to train its AI models—which then reproduce copyrighted lyrics for users in chatbot responses.

Anthropic’s response to the lawsuit echoed Amodei’s argument: using the lyrics to train a chatbot is a ‘transformative use’ that adds a ‘further purpose or different character’ to the original works.

Anthropic further argued that the song lyrics in its training database make up a “miniscule fraction” of the training data, and that licensing at the required scale is incompatible with how it operates. “Training demands trillions of snippets across genres and may be an unachievable licensing scale for any party,” Anthropic argues.

Anthropic also argues that the labels themselves prompted Claude to produce the allegedly infringing content they cite, and are therefore responsible for the infringement they report, rather than Anthropic itself.