Audio Deepfakes Are Getting So Good, They Might Not Be Legal

Photo Credit: James Owen

Convincing audio deepfakes arrived in 2020, but they fall into a legal grey area.

OpenAI, for one, is feeding algorithms Frank Sinatra tracks to create new tunes featuring the singer’s velvety voice. The company has also created audio deepfakes of several popular artists, including Katy Perry, Elvis, Simon & Garfunkel, 2Pac, and others. Its model was trained on 1.2 million songs scraped from the web, along with the corresponding lyrics and metadata.

Inputting modifiers like ‘Dolly Parton’ or ‘Red Hot Chili Peppers’ will produce a convincing audio deepfake. But these deepfakes skirt the edge of legality, to the point that Jay-Z has been fighting back: earlier this year, his agency Roc Nation fought to get a deepfake of him taken down.

Sometimes the algorithm homes in on an artist so well, it’s uncanny. Other times, you end up with Frank Sinatra singing a Christmas song about being in a hot tub. For example, here are the lyrics OpenAI’s model wrote for a Frank Sinatra-style Christmas ballad.

“It’s Christmas time, and you know what that means,
Ohh, it’s hot tub time!
As I light the tree, this year we’ll be in a tub,
Ohh, it’s hot tub time!”

Using algorithms to generate new tunes that sound familiar is nothing new, but audio deepfakes this convincing are a novel development.

That novelty comes with plenty of new legal challenges. Amanda Levendowski, a professor at Georgetown Law, says the closest legal precedent for these cases involves human impersonators.

Specifically, she cites a case from 1988, when the Ford Motor Company ran an advertisement mimicking Bette Midler’s voice. The ad employed one of Midler’s former backup singers to re-create the iconic diva’s warble.

Midler sued over the ad, and the 9th Circuit Court of Appeals found that her voice was a protected property right. In that decision, Judge John T. Noonan wrote that the First Amendment generally protects the recreation of a person’s voice; when that recreation is used to exploit another person’s identity, however, it is not protected.

One could argue that creating new audio deepfakes ‘pretending’ to be Jay-Z or Travis Scott is impersonation.

“Copyright has been administered federally for a very long time; some states just disagree foundationally on why we have right of publicity,” Levendowski says. Right of publicity laws aren’t federal; they’re established state by state. Tennessee has a very favorable approach to right of publicity laws for artists due to Nashville’s thriving music industry.

AI-generated music that impersonates a human singer may fall under the same purview as the 1988 case of Midler v. Ford Motor Co. But are these deepfakes legal? Maybe, maybe not. It all depends on the context in which the audio is used.
