The music industry, understandably, is deeply concerned about AI’s impact on creativity — and the jobs and companies that could be eliminated in the process. But take a closer look, and the AI picture is far more complicated, with different use cases, applications, and approaches that could prove beneficial to the business. As those discussions unfold, two very distinct types of AI music creation are coming into focus: Assistive AI and Generative AI.
One of the more interesting nuances coming out of the AI debate in music is the emergence of ‘good’ and ‘evil’ applications and outcomes. It all depends on the individual perspective, but the ‘evil’ might include Generative AI — AI that’s replacing songwriters by writing lyrics from scratch and creating scores of royalty-free music that’s supposedly ‘new.’ On the flip side, Assistive AI is increasingly being cast as a ‘good’ and helpful creative tool.
Regardless of the type, AI-generated content doesn’t exactly appear from thin air. Instead, AI is great at cribbing content from thousands of successful artists and songwriters. Earlier this month, Universal Music Group put its foot down by demanding that DSPs remove Generative AI content drawing from its catalog. Moments later, UMG was putting the squeeze on YouTube, Spotify, TikTok, and others to rip down a viral ‘collaboration’ between Drake and The Weeknd, just one of several Drake-inspired AI songs to surface.
But what is ‘Assistive AI,’ and why is it arguably ‘good’?
In a nutshell, Assistive AI applications enhance musicians’ creativity and collaboration, including the ability to quickly master songs to perfection. There are also emerging applications like AI-powered sync mapping that could potentially boost the value of music IP assets by remonetizing old gems via sync opportunities.
There are early indications that artists are utilizing Assistive AI to augment their creative production processes, with generally positive takeaways. But Assistive AI is also controversial: for example, finishing a lyric with AI is a creative accelerant. But what was the source material for the eloquent additions that rhymed?
Digital Music News recently delved into the fast-emerging AI discussion by interviewing multiple leading experts in the field at CRS in Nashville.
DMN’s Noah Itman hosted the packed panel, which led to some thought-provoking perspectives about the bigger price we pay for using AI — and its broader, longer-term implications for the music industry.
While AI offers the magic of ‘creating something from nothing,’ it also holds the power to put artists out of work. But does AI’s lack of emotion put it at a disadvantage?
Companies like CAA have announced broad-scale agreements with virtual talent, and Itman posed the question of whether a move like that is ethically acceptable. At a certain point, a scary question could emerge: what’s the incentive to continue working with human artists?
Chris McMurtry, VP of Product and Head of RME at PEX, noted that AI could ultimately fail to create music like humans, simply because all of its inputs are the products of human creativity. “A machine will react to what you tell it,” he said.
Zach Bair, CEO of VNUE, agreed that even though AI is developing faster than anyone expected, “There’s no substitute for humanity and heart that goes into songwriting.”
Of course, there’s only so much that can be outsourced to a machine when the product is an emotional one. But AI’s true capacity to reshape music production processes, and to influence the royalties those processes generate for millions of artists, is only just beginning to emerge.
Recently, Ditto Music surveyed 1,299 independent artists who are actively releasing music in 2023. Surprisingly, the survey showed that 60% of musicians are already using AI to make music. Dig deeper, and the data gets more interesting.
A mere 28.5% said they would never use AI, though the reasons had absolutely nothing to do with not trusting artificial intelligence. In fact, the top two reasons for not using AI were a lack of access to AI tools and a lack of time.
These numbers paint the picture of broader AI adoption, and potentially greater creative output and positive benefits. But the broader AI future remains highly unpredictable.
Rahul Sabnis, CCO at iHeartMedia, suggests taking cautious steps with AI tools. Even in the pre-AI era, the collective music industry failed to regulate music rights effectively, and the job won’t get any easier in an AI world. “The [AI] genie’s out of the bottle. I’m very curious how fast this sweeps through, because it’s a wildfire coming that is about to hyper-accelerate the ability for us to do things.”
But if AI is drawing from this vast pool of music and there’s a way to fingerprint those resources, can this pave the way for artists to receive compensation for the music that’s ‘inspiring’ AI?
According to McMurtry, fingerprinting could be the solution, as it can point to the original rights holder that inspired AI. “Essentially, you can train AI to recognize itself.”
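PEX’s actual matching technology isn’t public, but the general idea McMurtry describes can be illustrated with a toy landmark-style fingerprint: reduce a signal to a set of hashes built from local peaks, then score a query clip by how many of its hashes appear in a catalog entry. Everything here (window size, peak pairing, the `fingerprint` and `match_score` names) is a simplified assumption for illustration, not a real fingerprinting product.

```python
import hashlib

def fingerprint(samples, window=32):
    """Toy landmark-style fingerprint: find the peak-magnitude index in
    each fixed-size window, then hash pairs of neighbouring peaks so
    each hash encodes local structure rather than absolute position."""
    peaks = []
    for start in range(0, len(samples) - window + 1, window):
        win = samples[start:start + window]
        peaks.append(max(range(window), key=lambda i: abs(win[i])))
    hashes = set()
    for a, b in zip(peaks, peaks[1:]):
        hashes.add(hashlib.sha1(f"{a}:{b}".encode()).hexdigest()[:10])
    return hashes

def match_score(query_hashes, reference_hashes):
    """Fraction of the query's hashes found in a catalog entry."""
    if not query_hashes:
        return 0.0
    return len(query_hashes & reference_hashes) / len(query_hashes)
```

Because matching works on overlapping hash sets rather than exact audio, a clip lifted from the middle of a catalogued track still scores highly against the full recording — which is the property that would let a fingerprint point back to the original rights holder.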
For Assistive AI that augments artists’ skill and expertise, Bair is optimistic that copyright laws will protect artists’ royalties from getting stolen by AI. “Imperative changes need to take place in our music royalty system.”
While the Assistive AI train is garnering support, Generative AI is rapidly becoming problematic. When AI generates royalty-free music, with no expiry of ownership and an inability to be copyrighted, Bair believes compliance gets too complicated.
With Generative AI pumping out royalty-free music in bulk, it becomes challenging to trace it back to true rights holders. “Producers and developers of that original content will have to be compensated in some manner,” Bair argued, “because there are 100-year-old copyright laws at play. At the end of the day, it’s going to have to be a legislative solution, combined with a tech solution.”
It’s also worth noting that Generative AI has its eventual limits. At some point, AI ‘creativity’ starts feeding upon itself, creating an ouroboros of bland output. Sean Peace, founder and CEO of SongVest, says that AI is only as smart as what it has heard. “AI is taking bits and pieces of existing things and mashing them up. But from a legal standpoint, the copyright does get complicated.”
But how does a system compensate the potentially thousands of creators that were sourced by the AI itself to create something ‘new?’
Peace thinks that lawyers will have a field day trying to figure that out. “It’s an interesting conundrum, because AI created it, but humans gave it the input to create it.”
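The payout side of that conundrum is at least easy to frame computationally: if attribution scores for each source work existed (the genuinely unsolved part), a royalty pool could be split pro rata among the humans who gave the AI its input. A hypothetical sketch — the `split_royalties` function and its attribution scores are assumptions, not any existing royalty system:

```python
def split_royalties(pool_cents, attribution):
    """Split a royalty pool (in whole cents) across source works in
    proportion to hypothetical attribution scores.

    attribution: dict mapping rights-holder id -> non-negative score.
    Integer division leaves a small remainder, which is paid out one
    cent at a time to the highest-scoring holders so the pool is
    distributed exactly."""
    total = sum(attribution.values())
    if total <= 0:
        return {holder: 0 for holder in attribution}
    payouts = {h: (pool_cents * s) // total for h, s in attribution.items()}
    remainder = pool_cents - sum(payouts.values())
    for holder in sorted(attribution, key=attribution.get, reverse=True)[:remainder]:
        payouts[holder] += 1
    return payouts
```

Even this trivial version hints at the scale problem Peace raises: with thousands of fractional contributors, most shares round down to fractions of a cent, so any real system would need both a scoring mechanism and rules for minimum payouts.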
And in case you wanted something even more complicated: Peace also believes that given the current lack of regulation and legislation, “The software developer could very well be able to put a claim on the copyright.”