Google is taking another big swing at AI music, and this time it isn’t just about short, 30‑second clips for fun prompts in Gemini. With Lyria 3 Pro, a new version of its music model, Google is going after full song structure: tracks up to three minutes long, with much more control over how those tracks are built. For creators, that shift from “cool demo” to something that actually resembles a song is the real story.
What makes Lyria 3 Pro interesting is that it doesn’t just stretch the same loop for three minutes; Google says the model has a much better grasp of musical composition, so you can ask for intros, verses, choruses and bridges right in your prompt. That means instead of getting a single mood piece you drop under a reel, you can start to sketch out a full song arc — build‑up, hook, breakdown, the works — and let the model fill in the details. In practice, that’s closer to how real producers think: you start with a structure in your head, then iterate on sound design, transitions and dynamics.
Google is also clearly positioning Lyria 3 Pro as something more serious than a toy inside one app. It’s rolling out in public preview on Vertex AI so studios, game developers and other businesses can generate on‑demand audio at scale — think bespoke soundtracks for levels in a game or variations of a brand theme that adapt to different campaigns. For indie devs and tool makers, the same engine is available through Google AI Studio and the Gemini API, where Lyria 3 Pro sits alongside Lyria RealTime for more interactive use cases like live, responsive music in apps and services.
On the consumer side, the model is quietly threading itself into multiple Google surfaces you might already use. In Google Vids, the company’s AI‑powered video creation app, Lyria 3 and Lyria 3 Pro can score everything from internal explainers to small‑business promos with custom tracks that match the tone of the video. Google Workspace customers and Gemini AI Pro and Ultra subscribers are starting to see those options show up this week, turning Vids into something closer to a full “generate video plus soundtrack” studio rather than just a slideshow generator. And inside the Gemini app itself, paid users can now generate longer music with Lyria 3 Pro instead of being stuck at the 30‑second limit of the standard Lyria 3 experience.
The other big piece of this puzzle is ProducerAI, a collaborative music tool built by musicians that Google recently pulled into its Labs ecosystem and then upgraded with Lyria 3 Pro. Instead of just spitting out one‑off clips, ProducerAI is pitched as an “agentic” experience that helps artists, producers and songwriters iterate on full songs: you can generate sections, tweak arrangements, adjust lyrics and keep moving back and forth with the AI as if it were a very patient co‑producer. For working musicians, that kind of workflow is far more compelling than another prompt box, because it mirrors how real sessions unfold — lots of versioning, re‑cuts, and small tweaks around a core idea.
Google is careful, at least in its messaging, to insist that this is a tool for creative expression rather than a way to clone artists or flood platforms with sound‑alikes. The company says Lyria 3 models are trained on material that YouTube and Google have the rights to use, through terms of service, partner deals and applicable law, and if you name a specific artist in your prompt, the model is supposed to treat that as a broad stylistic hint rather than a direct imitation. There are also filters that check generated tracks against existing content to reduce obvious overlaps, and everything Lyria 3 and Lyria 3 Pro produce is stamped with SynthID, Google’s imperceptible watermark for AI‑generated media. In theory, that gives both platforms and rights holders a way to identify when a track came from Google’s AI stack, which will matter more as synthetic music gets harder to distinguish from human work.
Crucially, Google isn’t building Lyria in a vacuum. Through its Music AI Sandbox program, the company has been handing experimental tools to producers and songwriters and folding their feedback back into the model’s development. That’s already turned into concrete collaborations: Grammy‑winning producer Yung Spielburg used Lyria in scoring a Google DeepMind short film, and DJ‑producer François K has been working with the system in an iterative way to shape a soon‑to‑be‑released track. Their public comments lean on the same theme — Lyria 3 isn’t a one‑click song machine, but a new instrument that fits into an existing toolkit, especially for refining ideas quickly with surprisingly high fidelity.
If you zoom out a bit, Lyria 3 Pro is also arriving in a crowded and increasingly competitive AI music landscape, where dedicated generators like Suno and Udio have already set expectations for full‑length tracks. The base Lyria 3 model inside Gemini only produced 30‑second clips aimed more at casual users looking to soundtrack a memory, a reel or a mood, with automatic lyrics and AI‑generated cover art. By pushing Pro into Vertex AI, AI Studio, Vids, ProducerAI and the Gemini app simultaneously, Google is effectively building a vertical stack that ranges from hobbyist creators on phones to enterprise pipelines and pro DAW workflows. It’s a way of saying: whatever level you’re at — from YouTube Shorts to commercial scoring — there’s now a Google‑branded way to get AI‑generated music up to three minutes long with far more structural nuance than before.
The big questions from here are less about whether Lyria 3 Pro can technically make convincing music, and more about how people will actually use it. Will artists see it as a collaborator that speeds up sketching and demoing, or as an unwelcome competitor? Will small creators lean on it to avoid licensing hassles and copyright flags, now that Google is building in watermarking and rights‑sensitive training? And as these tools get better at full‑length, structured tracks, how will platforms and labels adapt their rules around what counts as “original” work?