Google Just Acquired ProducerAI — And It's the Biggest Signal Yet That AI Music Has Gone Mainstream

@giacomo.mov

Two days ago, Google dropped what might be the most important announcement in AI music this year — and somehow half the internet slept on it.

On February 24th, Google officially acquired ProducerAI, the Chainsmokers-backed music creation platform formerly known as Riffusion, and folded it directly into Google Labs. Not as a side experiment. Not as a quiet acqui-hire. As a full-stack, globally available AI music production suite powered by some of the most advanced generative models on the planet.

If you care about AI music, AI music videos, or the future of how songs get made, buckle up. This one changes everything.

From Viral Hobby Project to Google’s Music Brain

Here’s the origin story you need to know. ProducerAI started life as Riffusion — an open-source hobby project by co-founders Seth Forsgren and Hayk Martiros that went viral in December 2022. The original concept was wild for its time: a diffusion model fine-tuned to generate spectrogram images, which were then converted back into audio. It was janky, it was experimental, and it captured the imagination of millions.

Fast forward to October 2023, and the startup raised a $4 million seed round led by Greycroft, with The Chainsmokers coming aboard as advisors. By July 2025, the team rebranded as ProducerAI and launched a public-beta platform with its own proprietary model. Then, earlier this month, something telling happened: ProducerAI users received an email advising them to download all their songs and sounds because as of February 20th, the service would be “powered by a new set of models” and all past content would become inaccessible.

Now we know why. Google had swallowed the whole thing.

What’s Under the Hood: Lyria 3, Gemini, Veo, and… Nano Banana?

The tech stack Google has assembled here is genuinely impressive — and it reveals something about where the company is heading.

ProducerAI now runs on a combination of Google’s most powerful AI models working in concert. Lyria 3, Google DeepMind’s latest high-fidelity music generation model, handles core audio creation. Gemini powers the conversational chat interface. Google’s Veo model handles AI-generated music videos. And yes, there’s something called Nano Banana that generates album artwork. (I didn’t name it. Don’t look at me.)

What makes this different from, say, typing a prompt into Suno and hitting “generate”? According to Elias Roman, Senior Director of Product Management at Google Labs: “It’s not a tool that you put in your prompt, roll the slot machine, and something will come out. The reality is that’s not how good music is made.”

The key differentiator is the conversational, iterative workflow. You don’t just fire off a prompt and pray. You start with something like “make a lofi beat,” and then you refine. Tweak the tempo. Swap out the vocalist. Push the bass line deeper. Add reverb. It’s designed to mimic the actual back-and-forth process of working with a human producer — except this producer has, as Wyclef Jean put it, “the infinite information.”
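To make the iterative workflow concrete, here is a purely illustrative sketch in Python. ProducerAI has no public API, so the song state, the instruction format, and the `refine` function below are all hypothetical; the point is only the shape of the loop: start broad, then apply one small edit at a time.

```python
from dataclasses import dataclass, field

# Hypothetical song state -- ProducerAI's internals aren't public,
# so this models only the shape of the conversational workflow.
@dataclass
class SongState:
    style: str = "lofi beat"
    tempo_bpm: int = 80
    vocalist: str = "none"
    effects: list = field(default_factory=list)

def refine(song: SongState, instruction: str) -> SongState:
    """Apply one conversational edit to the current song state."""
    if instruction.startswith("tempo "):
        song.tempo_bpm = int(instruction.split()[1])
    elif instruction.startswith("vocalist "):
        song.vocalist = instruction.split(" ", 1)[1]
    elif instruction.startswith("add "):
        song.effects.append(instruction.split(" ", 1)[1])
    return song

# The back-and-forth: begin with "make a lofi beat", then iterate.
song = SongState()
for step in ["tempo 72", "vocalist warm female alto", "add reverb"]:
    song = refine(song, step)
```

In a real system each instruction would be interpreted by a language model and would regenerate audio, but the stateful edit-and-listen loop is the part that distinguishes this workflow from one-shot prompting.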

The Spaces Feature Is Quietly Revolutionary

Buried in the announcement is a feature called Spaces that deserves way more attention than it’s getting. Spaces lets users create entirely new instruments and audio effects using nothing but natural language. Want a sound that’s halfway between a flute and a synthesizer? Just ask. Want to build a node-based modular audio patching environment? You can do that too — without knowing anything about signal routing or synthesis.

These mini-apps are shareable and remixable across users, creating what’s essentially a community-driven marketplace of AI-generated instruments. If that doesn’t sound groundbreaking, consider this: traditional instrument design takes years of physical prototyping. Custom synthesis patches require deep technical knowledge of audio engineering. Spaces collapses all of that into a text prompt.
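The "halfway between a flute and a synthesizer" idea is easier to picture with a toy example. This sketch uses NumPy to crossfade a breathy, near-sine tone with a bright sawtooth; it is not how Spaces works internally (which is undisclosed), just a minimal illustration of blending two timbres.

```python
import numpy as np

SR = 44_100  # sample rate in Hz

def flute_like(freq, dur):
    """Breathy near-sine tone: fundamental plus a faint 2nd harmonic."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t) + 0.2 * np.sin(2 * np.pi * 2 * freq * t)

def synth_like(freq, dur):
    """Bright sawtooth, the classic subtractive-synth starting point."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return 2 * (t * freq - np.floor(0.5 + t * freq))

def blend(freq, dur, mix=0.5):
    """Crossfade the two timbres: mix=0 is all flute, mix=1 all synth."""
    return (1 - mix) * flute_like(freq, dur) + mix * synth_like(freq, dur)

tone = blend(440.0, 1.0, mix=0.5)  # one second of the hybrid at A4
```

A text prompt in Spaces presumably maps to something far richer than a single `mix` parameter, but the underlying promise is the same: timbres that used to require synthesis expertise become a dial anyone can turn.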

For independent musicians who can’t afford racks of gear or years of audio engineering education, this is a genuine paradigm shift.

The Celebrity Co-Signs Are Piling Up

Google clearly understands that AI music tools need artist credibility to survive. And they’ve been working the phones.

Grammy-winning artist Wyclef Jean used Google’s Lyria model and Music AI Sandbox during the development of his track “Back From Abu Dhabi.” Grammy-winning rapper Lecrae has been actively building on the platform. And The Chainsmokers — who have been advisors since the Riffusion days — continue to champion the project. Alex Pall of The Chainsmokers called it “truly crafted around the musician’s experience.”

But perhaps the most compelling quote came from Wyclef Jean himself, speaking about the AI-human collaboration: “There’s one thing that you have over the AI: a soul. And there’s one thing that AI has over you: the infinite information.”

That’s not a man who’s scared of AI replacing him. That’s a man who sees it as the ultimate instrument.

The Training Data Question

Let’s not pretend everything is sunshine and synthesizers. The question hanging over every AI music announcement is the same: what was the model trained on?

Google hasn’t fully disclosed Lyria 3’s training data. In a blog post, the company stated it has sought to “develop this technology responsibly in collaboration with the music community” and has been “very mindful of copyright and partner agreements.” Music Business Worldwide reported that the training data is understood to include music that YouTube and Google “have the right to use” under their terms of service and partner agreements.

That’s… carefully worded. And in a landscape where hundreds of musicians — including Billie Eilish, Katy Perry, and Jon Bon Jovi — signed an open letter in 2024 calling on tech companies not to undermine human creativity, the scrutiny will be intense.

The broader AI music industry is still navigating these waters. Suno, perhaps ProducerAI’s most prominent competitor, recently settled a copyright lawsuit with Warner Music Group while still facing suits from Universal Music Group and Sony Music. Suno has raised $250 million at a $2.45 billion valuation and reports $200 million in annual revenue — proof that demand for AI music tools is enormous regardless of the legal clouds.

Google’s approach of embedding SynthID watermarks into every piece of content generated through ProducerAI — audio, images, video, and text — is at least an attempt to address transparency. Users can even upload tracks to Gemini to check if they’re AI-generated. It’s not a complete solution, but it’s more than most competitors are offering.

The Xania Monet Effect: AI Artists Are Already Here

If you want to understand why Google is going all-in on AI music right now, look no further than Xania Monet.

Last year, a 31-year-old Mississippi poet named Telisha Jones used Suno to transform her poetry into full songs performed by an AI avatar she named Xania Monet. The result? Her song “How Was I Supposed to Know” hit No. 1 on Billboard’s R&B Digital Song Sales chart. She generated 5.4 million streams in a single week. And she signed a $3 million record deal with Hallwood Media.

Jones writes every lyric herself — 90% based on her own true stories — but the vocals, production, and instrumentation are entirely AI-generated. Artists like Kehlani and SZA have vocally pushed back, and the controversy is ongoing. But the market has spoken: fans connected with the music regardless of its synthetic origins.

This is exactly the kind of creator that platforms like ProducerAI are designed for. Not replacing musicians — empowering storytellers, poets, and creators who have the vision but lack the traditional production skills to bring it to life.

The AI Video Piece: Why Veo Changes the Game for Musicians

Here’s where things get really interesting for anyone who makes music videos — and where this story connects directly to what we’re building at OneMoreShot.ai.

ProducerAI doesn’t just generate music. It generates music videos. Google’s Veo model is integrated directly into the platform, allowing users to control characters and aesthetics for AI-generated video that matches their songs. Combined with Lyria 3 for audio and Nano Banana for artwork, this is a complete end-to-end creation pipeline: write a song, produce it, generate cover art, and create a music video — all from text prompts.

This is happening across the industry. The AI video generator market, valued at $716.8 million in 2025, is projected to reach $3.35 billion by 2034. Four of the six major AI video models — Kling 3.0, Sora 2, Veo 3.1, and Seedance 1.5 Pro — now generate synchronized audio natively. We’ve gone from silent AI clips to full audiovisual productions in less than six months.

Meanwhile, researchers at Queen Mary University of London recently released AutoMV — the first open-source AI system capable of generating complete music videos directly from full-length songs. AutoMV works like a virtual film production team: it analyzes a song’s structure, beats, and time-aligned lyrics, then deploys specialized AI agents acting as screenwriter, director, and editor to produce coherent narrative videos. Human expert evaluations show it significantly outperforms existing commercial AI video tools.
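The screenwriter/director/editor division of labor can be sketched as a simple pipeline. This is not the actual AutoMV code; each "agent" below is a plain function standing in for what would really be a multimodal model call, and the section format is invented for illustration.

```python
# Illustrative AutoMV-style pipeline: song analysis feeds three
# stub "agents" (screenwriter -> director -> editor) in sequence.

def analyze_song(sections):
    """Stand-in for audio analysis: sections as (name, start_s, end_s)."""
    return [{"name": n, "start": s, "end": e, "dur": e - s}
            for n, s, e in sections]

def screenwriter(sections):
    """Draft one scene description per song section."""
    return [f"Scene for {sec['name']}: visual lasting {sec['dur']:.0f}s"
            for sec in sections]

def director(scenes):
    """Turn scene text into shot lists (here: one shot per scene)."""
    return [{"scene": s, "shots": [s + " -- wide shot"]} for s in scenes]

def editor(shot_lists):
    """Assemble the final cut in song order."""
    return [shot for sl in shot_lists for shot in sl["shots"]]

structure = analyze_song([("intro", 0, 8), ("verse", 8, 24), ("chorus", 24, 40)])
cut = editor(director(screenwriter(structure)))
```

The real system grounds each stage in the song's beats and time-aligned lyrics, but the architectural idea, specialized agents passing structured artifacts down a chain, is exactly what the stub shows.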

The cost? Roughly the price of an API call. Compare that to the tens of thousands traditional music video production demands.

What This Means for Independent Musicians

Let’s zoom out and talk about what matters: how does all of this affect you, the person actually making music?

The good news: The barriers to professional-quality music production and visual content are collapsing at breathtaking speed. ProducerAI offers a free tier. Paid plans start at $8/month for roughly 600 songs. That’s less than a single hour of studio time in most cities.
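The per-song math is worth spelling out. The $8 and 600-song figures come from the announcement above; the studio hourly rate is an assumed illustrative number, since rates vary widely by city.

```python
# Back-of-the-envelope cost per song on the entry paid tier.
monthly_price = 8.00      # USD per month (from the announcement)
songs_per_month = 600     # approximate quota on that tier
cost_per_song = monthly_price / songs_per_month  # about 1.3 cents

studio_hour = 75.00       # assumed mid-range hourly studio rate (illustrative)
print(f"${cost_per_song:.3f} per song vs ${studio_hour:.2f} per studio hour")
```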

The complicated news: We’re entering an era where the value proposition of a musician is shifting. Technical production skill — knowing how to mic a drum kit, how to EQ a vocal, how to route a compressor — is becoming less of a moat. What’s becoming more valuable is everything AI can’t do: authentic storytelling, genuine emotional connection, cultural context, lived experience, and the kind of creative vision that can’t be reduced to a prompt.

The practical takeaway: The artists who will thrive in 2026 and beyond are the ones who learn to use AI as a creative amplifier — not a replacement for their artistry, but a force multiplier for their vision. Write your own lyrics. Develop your own aesthetic. Build a genuine connection with your audience. Then use AI to execute at a scale and speed that would have been impossible five years ago.

The Competitive Landscape Is Getting Wild

Google’s move puts them in direct competition with a crowded and well-funded field. Suno has $250 million in fresh capital and a $2.45 billion valuation. Udio is carving its own lane. ElevenLabs has signed AI licensing deals with Merlin and Kobalt and recently released an album of AI-generated songs alongside artists like Liza Minnelli and Art Garfunkel.

But Google’s unique advantage is the stack. No other company can offer music generation (Lyria 3), video generation (Veo), image generation (Nano Banana), conversational AI (Gemini), and distribution (YouTube) all under one roof. When Lyria 3 also powers Dream Track — YouTube’s AI music feature for creators, now expanding globally — the distribution flywheel becomes enormous.

ProducerAI is available in more than 250 countries and territories, with free and paid tiers. All outputs carry SynthID watermarks. It’s experimental, it’s in Labs, and it will evolve. But the signal is unmistakable: Google sees AI music creation as a first-class product category, not a side project.

The Bottom Line: Make Your Music Visible

We’re witnessing something historic. The tools that used to require a record label budget, a professional studio, and a production crew are being democratized at an unprecedented pace. A poet from Mississippi can top the Billboard charts with AI-generated vocals. A bedroom producer can create a complete song, artwork, and music video from a single text conversation. A Grammy winner can discover new sounds that are, in his own words, “technically impossible to manifest by any other means.”

The question isn’t whether AI will transform music creation. It already has. The question is whether you’ll use these tools to amplify your unique creative voice — or watch from the sidelines as others do.

If you’ve got music and you need visuals to match, that’s exactly what we built OneMoreShot.ai for. Upload your track, and our AI generates stunning, beat-synced music videos in minutes — no production crew, no five-figure budget, no weeks of post-production. Just your music, brought to life visually.

Because in 2026, making great music is only half the battle. Making it visible is what gets you heard.