YouTube Unveils New AI Likeness Protections — Covering Soundalike Audio and More — for ‘Uniquely Identifiable’ First Parties

YouTube has established new AI likeness and deepfake protections under its privacy guidelines. Photo Credit: Muhammad Asyfaul

In a move that could prove significant on the music rights side, YouTube is officially enabling first parties to demand the removal of unauthorized lookalike and soundalike content.

The Google-owned platform addressed the policy in a broader privacy guidelines update, emphasizing at the outset that content must contain “uniquely identifiable” information to constitute a violation.

Additionally, YouTube itself “reserves the right to make the final determination of whether a violation of its privacy guidelines has occurred,” according to the text.

Notwithstanding that discretion, logic and evidence suggest that complaints from sizable music rightsholders will be heard loud and clear by the Content ID developer, which is also reportedly leaning into AI initiatives with the major labels. (It's worth clarifying that YouTube "will not accept privacy complaints filed on behalf of" employees or companies, though complaints from legal representatives will be accepted.)

Turning specifically to the takedown policy for "AI-generated or other synthetic content that looks or sounds like" a particular person but was created without permission, the media in question must "depict a realistic altered" version of that person's likeness to qualify for removal.

Voice is expressly mentioned in the text, and the “realistic altered” description therefore applies to unapproved AI soundalike tracks, which remain plentiful (and continue to garner a substantial number of views) on the video-sharing platform and elsewhere.

Among the “variety of factors” YouTube will weigh when considering soundalike/lookalike removal notices are “whether the person can be uniquely identified,” whether the media in question has “public interest value,” and whether it “features a public figure or well-known individual engaging in a sensitive behavior such as criminal activity.”

Looking to the bigger picture, music rightsholders, chief among them the majors, have now curbed the prevalence of AI tracks on Spotify and will presumably have an easier time cracking down on unauthorized works on YouTube thanks to the updated privacy policy.

While important – Spotify and YouTube are, of course, decidedly popular music-access options – these near-term measures don't mark a comprehensive victory for rightsholders.

But industry companies and organizations are continuing to strive for fundamental progress, including with a stateside push for federal name, image, and likeness protections. Predictably, Congress, subject to no shortage of big tech lobbying, is proving slow to act in the complex and unprecedented area.

Across the pond, the BPI back in March took aim at soundalike-voice startup Jammable (formerly Voicify AI). Three months and change later, despite the firmly worded threat of legal action, a variety of AI soundalike voice models, from Michael Jackson to Katy Perry and many in between, still appeared to be live on Jammable at the time of this writing.