Digital Music News is exploring the myriad ways in which artificial intelligence is impacting the music industry. Our upcoming ‘Rules for AI’ conference features panelists from across the industry sharing their opinions on what the technology will add to our industry in the coming years.
Immensity’s Nick Minicucci spoke with Digital Music News about the impact of generative AI on music creation and how AI is already benefiting music production. Minicucci says it’s hard to quantify the impact AI will have in the future, but he believes it will open new avenues for creative expression.
Digital Music News is hosting a mini-conference dedicated to exploring the impact AI will have on the music industry in the coming years, with experts like Immensity CEO Nick Minicucci weighing in. Want to listen in or attend in person? Here’s what you need to know.
- What: Digital Music News’ Conference ‘The Rules for AI’
- When: October 25 | 11 am – 2 pm
- Where: Hollywood, Los Angeles
- Cost: $35
- Tickets: RESERVE YOUR SPOT
“AI tools will make music creation more accessible, so I expect we’ll see higher quality music coming from a larger number of people—not just artists,” he explains. “AI will also power more advanced tools for music production, so the most creative and innovative artists will have more ability to explore new sounds and ideas—potentially spinning up new genres and music unlike anything we’ve heard before.”
“Generative AI can already create music on its own and while that’s not going to replace human creativity, it will replace the need for humans to write much simpler or repetitive music—like functional background music.”
The impact AI will have on background music is a concern raised by nearly every panelist so far. It will undoubtedly change that landscape, but AI is also being deployed on the production side to streamline workflows. We asked Nick who he believes is the biggest winner in AI so far.
“I think that some of the biggest winners are the companies out there doing AI-powered stem separation. A lot of musicians in my network have been adopting this tech very quickly and have a ton of positive feedback. There’s a lot of opportunity for AI in that space.”
Stem separation is the ability to isolate individual elements of a mix—which is sometimes as easy as clicking a button with AI. Separating stems can mean pulling vocals out of an instrumental, or isolating the various instruments present in a mix. Traditional methods of stem separation relied on equalization and phase cancellation—but the result was never cleanly separated elements. AI separation is leaps and bounds better than those traditional methods.
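To see why the traditional approach falls short, here is a minimal NumPy sketch of the phase-cancellation trick mentioned above. The signals and function name are illustrative, not taken from any particular tool: a vocal mixed equally into both channels (center-panned) cancels out when one channel is subtracted from the other, but so does every other center-panned element, and any reverb or stereo spread on the vocal survives—hence the murky results.

```python
import numpy as np

def cancel_center(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Remove center-panned content (often the lead vocal) by subtraction."""
    return left - right

# Toy signals: a "vocal" identical in both channels, plus a left-only "guitar".
t = np.linspace(0, 1, 1000)
vocal = np.sin(2 * np.pi * 220 * t)          # center-panned element
guitar = 0.5 * np.sin(2 * np.pi * 330 * t)   # side (left-only) element
left = vocal + guitar
right = vocal.copy()

instrumental = cancel_center(left, right)
# The shared center-panned "vocal" cancels exactly; only the guitar remains.
print(np.allclose(instrumental, guitar))
```

In a real mix the cancellation is never this clean—which is exactly the gap that AI-based separation closes by learning to identify each source rather than relying on channel arithmetic.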
AI algorithms can identify minute differences in sound waves, resulting in more accurate separation than manual methods that rely on the human ear. Stem separation by AI is also much faster, typically delivering results in minutes instead of hours. Artifacting does happen in the process, but that often depends on the quality of the original recording. As it stands, stem separation tools for digital audio workstations (DAWs) have become massively popular for music production.