TikTok Announces Expanded AI Tagging — Including Automatic Labels — After Mending Fences With UMG

A visual representation of how certain AI-created content will be automatically labeled on TikTok moving forward. Photo Credit: TikTok

Eight months after debuting new tags for AI-created videos – and about one week after putting its Universal Music licensing dispute to rest – TikTok is expanding labels for auto-generated media.

The platform recently unveiled its enhanced AI-labeling approach in an announcement of more than 700 words. That post, longer and more detailed than many of TikTok’s other releases, doesn’t mention Universal Music Group by name.

As many know, however, the major label, amid its much-publicized showdown with TikTok, criticized the app’s artificial intelligence policies on multiple levels.

“TikTok is allowing the platform to be flooded with AI-generated recordings—as well as developing tools to enable, promote and encourage AI music creation on the platform itself – and then demanding a contractual right which would allow this content to massively dilute the royalty pool for human artists, in a move that is nothing short of sponsoring artist replacement by AI,” UMG vented a little over three months ago.

Worth reiterating in light of this pushback: in September of 2023, with the UMG renewal talks presumably in full swing, TikTok began “testing an ‘AI-generated’ label.” At the time, the ByteDance subsidiary acknowledged eventual plans to apply the tag “automatically to content that we detect was edited or created with AI.”

Those plans have evidently come to fruition, after Universal Music kicked off May by touting the new licensing pact – and in particular TikTok’s pledge “to remove unauthorized AI-generated music from the platform” and roll out “tools to improve artist and songwriter attribution.”

Returning to today’s announcement from TikTok, the app has partnered with the Joint Development Foundation’s Coalition for Content Provenance and Authenticity (C2PA).

The “Content Credentials” technology, which is said to “attach metadata to content,” will now be supported and recognized by TikTok, enabling the app to issue automatic labels to AI media “when it’s uploaded from certain other platforms.” At present, the supported formats include videos and images; audio-only uploads (like music) will “soon” be covered as well.

“TikTok is the first video sharing platform to put Content Credentials into practice,” the app spelled out on the adoption front. “This means that the increase in auto-labeled AIGC [AI-generated content] on TikTok may be gradual at first, since it needs to have the Content Credentials metadata for us to identify and label it.”

Furthermore, the app, which already labels clips featuring on-platform AI effects and compels creators to identify any “realistic” AI media, intends to “start attaching Content Credentials to” its own content (including when that content is downloaded) sometime during “the coming months.”

TikTok has also joined Adobe’s Content Authenticity Initiative (CAI), which bills itself as “a group of creators, technologists, journalists, and activists leading the global effort to address digital misinformation and content authenticity.”

(According to its website, the previously noted C2PA “unifies the efforts of the Adobe-led Content Authenticity Initiative…and Project Origin, a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem.”)

Next, TikTok, whose Beijing-headquartered owner reportedly suppresses a substantial number of “sensitive words,” has partnered with MediaWise to release a dozen videos in 2024 “that highlight universal media literacy skills.”

Also on the horizon is a series of AI-focused videos made “with expert guidance from” Brooklyn-based Witness. By its own description, the latter organization aims to make “it possible for anyone, anywhere to use video and technology to protect and defend human rights.”

Finally, TikTok pointed to its previous commitments to targeting harmful AI content and underscored plans to, among other things, continue improving “proactive detection models.”