The Music Industry Is Moving at a Million Miles Per Hour — Is GPU Audio Building the Engine to Power It All?


By offloading workloads from the CPU onto the Graphics Processing Unit, GPU Audio is placing a powerful and underexploited parallel processor at the center of music production. The shift could have massive implications for the emerging music metaverse, AI, resource-intensive production environments, collaboration platforms, and bandwidth-demanding concert live streams. The company might provide the critical backbone required to support a myriad of fast-emerging music industry sub-categories.

Keeping up with the breakneck music industry of 2022 is a difficult task. Once upon a time, billions of streams were dazzling enough – now the industry is bursting with avatar bands, metaverse livestreams, increasingly sophisticated DAW and plugin options, emerging collaborative platforms, NFT surges and flops, and resource-intensive, immersive audio experiences.

It’s a dizzying explosion — especially when it comes to the topic of powering it all. Concepts like complex spatial audio environments and real-time jamming sound great on paper, but is there enough processing power for all of it?

Graphics processing units, or GPUs, are underexploited powerhouses capable of far more than rendering graphics, and they can shoulder massive workloads. They are the backbone of the modern AI industry and the infrastructure of metaverses. Could they also be the future of audio?

As advancements in tech outpace imagination, computing demands for audio dutifully follow. Spatial audio, neural networks, machine-learning-based plugins, and heavy virtual-analog plugins now highlight a massive need among music developers: a new standard of processing is required to go where the tech is going.

The pro-audio industry has tried numerous prospective solutions, from SHARC DSPs to FPGAs. But no company had successfully harnessed the underutilized power of graphics cards. The idea had been thoroughly theorized yet never effectively executed, due to the fundamental mismatch between the GPU's massively parallel architecture and the sequential nature of audio processing.
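That mismatch is easy to see in code. A feedback (IIR) filter forces each output sample to wait on the previous one, while feedback-free (FIR) processing can be chopped into independent blocks, exactly the data-parallel shape a GPU thrives on. The sketch below is purely illustrative, written in NumPy on the CPU rather than on an actual GPU, and is not GPU Audio's implementation:

```python
import numpy as np

def iir_lowpass(x, a=0.9):
    """One-pole IIR filter: y[n] = (1 - a)*x[n] + a*y[n-1].
    Every output sample depends on the one before it, so this
    loop cannot be naively split across thousands of GPU threads."""
    y = np.empty_like(x)
    acc = 0.0
    for n, sample in enumerate(x):
        acc = (1 - a) * sample + a * acc
        y[n] = acc
    return y

def fir_blocks(x, taps, block=256):
    """FIR convolution has no feedback: each output block depends
    only on the input, so the blocks below are fully independent
    and could be computed in parallel (overlap-save style)."""
    pad = np.concatenate([np.zeros(len(taps) - 1), x])
    out = [
        # each iteration is independent of the others
        np.convolve(pad[i:i + block + len(taps) - 1], taps, mode="valid")
        for i in range(0, len(x), block)
    ]
    return np.concatenate(out)
```

Parallelizing the first kind of filter is the hard research problem; the second kind maps onto GPU threads almost for free.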

Now, Los Angeles-based GPU Audio is tapping into the remarkable power of built-in GPUs for audio production and attempting to change the face of a multitude of other industries.

GPU Audio claims to be the only company currently able to fully exploit the processing capability of built-in GPUs, and its founders refer to themselves as ‘unlockers’ and ‘enablers’ of ‘accelerated audio computing.’

Judging from initial user feedback, the company appears to be filling a real gap in the industry. Alexander “Sasha” Talashov, co-founder and co-CEO at GPU Audio, described the company as ‘a conductive layer that connects companies, complex tasks, and systems.’ Just recently, the company joined forces with DMN to broaden adoption and accelerate the growth of this powerful solution.

GPU Audio launched its Early Access plugin at NAMM, and it’s already being touted as a solution that brings parallel processing back to the forefront of music production. Following the NAMM launch, GPU Audio’s pre-beta user base grew 33-fold over the summer, from 600 in the first days of June to more than 20,000 today. Just weeks ago, the company also released the “Beta Suite,” a growing toolset of audio plugins beginning with classic, mainstay effects: Flanger, Phaser, and Chorus.

Talashov spoke about the company’s broader mission of establishing new standards for pro-audio, and enabling connections to other computing technology advancements. According to Talashov, a good example is the current VST3 standard. “The VST3 doesn’t provide any new features; it doesn’t connect us with anything. No new hardware, no new software, no new bridges to the rest of the world. So we began there — by connecting it to the GPU.”

Speaking about building this bridge with GPUs, Talashov added, “This is not only connecting platforms — it’s connecting industries.”

It’s no secret that processing-power bottlenecks have long been the bane of the pro-audio industry.

Fiendish latency has destroyed the sanity of audio producers everywhere. Then there’s the trouble of rendering out stems, summing a group of mixes, or having to freeze, export, and re-import tracks to save DSP power. Even an investment in expensive hardware fails to defeat bad renders, off-beat stem processing, and choppy audio workflows in Dolby Atmos. Despite the considerable advances in Apple silicon, walk into an Atmos-certified studio and you will still find engineers using two or three computers to execute projects.

The economics of tapping into existing GPUs versus purchasing expensive hardware is grabbing user attention.

A bottleneck-free hardware setup for mixing tracks live can easily cost $4,000 or more. By comparison, a $900 machine with a capable GPU can run programs that usually require external acceleration hardware or expensive desktop-grade solutions. By enabling existing GPUs, users can fire up neural amp modeling, effects processing, and other essentials with huge performance gains.


Speaking about what GPU Audio is doing, fellow co-founder Jonathan Rowden said, “We greatly power up spatial audio tools. We speed up machine learning and AI tools – even the basics. Tools that use GPU Audio will turn your GPU into a DSP accelerator, saving your CPU headroom for other tasks.”

And what about Apple’s second-generation M2 chips, and the buzz that they are the ultimate, much-needed native hardware upgrade for track processing?

Speaking about these latest MacBook processors, Rowden appeared unconcerned. “People talk about M1 and M2 MacBook processors as though these eliminate the need for acceleration, but there are still limitations when ‘going native’ for consumer-grade systems. Apple silicon also has GPUs, and we’ll be enabling and unlocking those as well. We’re already working with Apple on solving this and the results are extremely promising. The feature will be available in Early Access in just a few weeks, or less.”

Already, the company has raised $6 million. With its current valuation, GPU Audio could be approaching its Series A by the Spring of 2023.

Talking about what’s coming within the next few months, Talashov said, “If you have a Windows laptop, you can download it today. Our first Beta Suite Bundle was made public on October 8th. It’s compatible on PCs with NVIDIA GPUs. AMD support is now in ‘Early Access’ and available on the PRO Driver based GPUs. Internally, we already have Mac OS support for M1 and M2 GPUs, along with AAX support for ProTools – and before long, users can test this in early access.”

As the system is developed, there’s also the promise of exciting implications for real-time collaboration. Cloud-based DSPs can provide real-time, non-destructive workflow opportunities that could change the game in myriad industries. Rowden elaborated on this, saying, “We operate on the grounds of core-level innovation. GPU Audio can make audio renders available instantly. We can get rid of the export button. Imagine how a simple feature can change the game of how companies design the future of workflows and the impact it will have on creative output – from such a fundamental level.”

GPU Audio is focused on the tech’s scalability and upgradability, and is acutely interested in collaborating with third parties to develop products. Speaking about this, Rowden clarified, “This is what we mean by bridging accelerated computing and pro-audio. This tech is a new standard that we believe can be used anywhere.”

With grand plans of tech applications in various industries, GPU Audio claims it will ‘power the future of audio from music to metaverse’ – and company execs are pretty convincing when they explain how.

From the accelerating computational needs of gamers and PC users to manufacturing industries running mammoth equipment for complicated computations, unlocking the potential of GPUs is what gets the ball rolling.

According to Rowden, “Anywhere a GPU is present, GPU Audio powered applications will be able to harvest its power.”

In the final weeks of October, the GPU hardware powerhouse AMD (Advanced Micro Devices) also invited GPU Audio as guests to the Adobe MAX conference in Los Angeles. Rowden called AMD one of their ‘strongest supporters,’ and added, “AMD wants to bring GPU-accelerated audio processing to their creative users – video and graphics creators. Senior leaders at AMD expressed that everything their creative user base needs is accelerated by GPUs, and it makes sense to expand this to audio.”

For AI and machine learning, the GPU’s highly parallel architecture is a natural fit for processing and accelerating workloads.

Designing robotics, industrial equipment, and autonomous vehicles requires data input from various sources and sensors, such as video equipment and audio sensors. New neural network models, techniques, and use cases appear rapidly. For speech recognition, image recognition, and other language-processing demands, GPUs accelerate data ingestion and expedite the entire AI workflow. With the acute interest and support of leaders at multiple GPU hardware companies, GPU Audio is working on back-end and front-end solutions for making AI-based audio applications faster and more powerful.
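The data-parallel shape of these AI audio workloads is easy to illustrate. In the hypothetical sketch below (it is not GPU Audio's code), a whole batch of audio clips is turned into log-magnitude spectra, a common neural-network input feature, in one vectorized call. Because every clip is independent, the same NumPy code can run unchanged on an NVIDIA GPU by swapping `numpy` for the API-compatible CuPy library:

```python
import numpy as np  # swap for `cupy` to run the same code on an NVIDIA GPU

def batch_log_spectra(clips, n_fft=512):
    """Compute log-magnitude spectra for a whole batch of audio clips
    at once. Every row is independent, so the batched FFT is exactly
    the kind of data-parallel work a GPU accelerates."""
    frames = np.asarray(clips)[:, :n_fft]    # (batch, n_fft) window
    spectra = np.fft.rfft(frames, axis=1)    # one FFT per clip, batched
    return np.log1p(np.abs(spectra))         # (batch, n_fft // 2 + 1)
```

Feature extraction like this is often the ingestion step that feeds a speech or audio model, which is where batched GPU execution pays off.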

And what about web3 and the metaverse?

GPUs may also supply the enormous computing power needed by web3 developers, letting the industry tackle the massive number of channels and pathways that go into metaverse creations.

Rowden is clearly thinking big. “Because GPU Audio utilizes a source of compute that makes up a vast processing infrastructure – especially cloud-based – it stands as the most powerful potential resource for the future of cloud-based DSP. This has massive implications for metaverse and web3-focused projects,” he explained, adding, “GPU powered plugins will be easily deployed on the cloud, and it won’t stop there.”

GPU Audio aims to facilitate realistic immersive experiences in the metaverse by ‘enabling’ the tech that allows a network of cross-communicating and connected virtual worlds. It will also unlock advanced AI executions.

On this front, US-based NVIDIA, a world-leading designer and manufacturer of GPUs, chipsets, and other multimedia software, has also taken an interest in what GPU Audio can bring to the table. NVIDIA has asked the company to implement more focused efforts on the development of AI advancement tools and metaverse applications.

NVIDIA is currently developing Omniverse, a collaboration platform for developing metaverses and related products and ecosystems. As that initiative takes root, NVIDIA is now exploring how GPU Audio processing can transform the ‘cost’ of audio processing bottlenecks into an advantage on their cloud.

This is just one example of the pioneering tech that GPU Audio says will change the world – as they continue to enable GPU solutions.


GPU Audio greatly encourages anyone interested to join its more than 20,000 users participating in Beta and Early Access testing. The Beta covers the production suite of plugins, while Early Access features a convolution reverb that trials new capabilities, like Mac M1/M2 support, before they reach beta.

Please contact Jonathan Rowden at jonathan@braingines.com to learn more.