Google DeepMind has updated its Music AI Sandbox with new features and opened it to more musicians across the United States. The tools help artists create music with AI assistance.
Google first launched these tools in 2023 through YouTube’s Music AI Incubator, and has since made significant improvements based on feedback from the musicians who have been testing them.
What’s New in the Music AI Tools
The biggest update is Lyria 2, Google’s newest AI music model, which generates higher-quality, more professional-sounding music than earlier versions. Google has also added Lyria RealTime, which lets musicians create music with the AI interactively, as they play, without pausing to wait for the system to generate output.
The Music AI Sandbox offers three main tools, all of which have been improved:
Create – Musicians simply describe the kind of music they want, for example “upbeat jazz with saxophone” or “slow electronic beats with synthesizers.” Artists can also supply their own lyrics and set parameters such as tempo.
“The Create tool helps generate many different music samples to spark the imagination or for use in a track,” Google explains in their announcement.
Extend – This helps when musicians have a good start but aren’t sure where to take their music next. The AI suggests ways to continue the song based on what’s already created.
As musician The Range puts it, it’s like having an “infinite sample library” that helps when you’re stuck.
Edit – Artists can change their music’s style, mood, or genre using simple text instructions, such as “make this more energetic” or “change this to sound like 80s rock.”
Digital Watermarking on All AI Music
Every piece of music made with these Google tools carries an invisible watermark embedded with a technology called SynthID. The watermark can’t be heard, but it identifies the audio as AI-generated, addressing growing concerns about distinguishing human-made music from machine-made music.
Real Musicians’ Experiences
Several musicians have tested these tools and shared their thoughts:
Isabella Kensington found the “Extend” feature helpful for songwriting and trying new ideas.
The Range described it as helping overcome writer’s block.
Adrie expressed caution about AI generally but sees these tools opening up new experimental avenues.
Sidecar Tommy noted that the tools help by “speeding up production and sparking complex orchestral ideas from simple beginnings.”
Industry Worries Not Fully Resolved
Despite these improvements, many music industry experts remain concerned about unresolved issues, including:
- Who owns rights to AI-created music
- How to compensate artists whose work might have trained the AI
- How to properly credit human creativity when AI is involved
While artists like Wyclef Jean and Marc Rebillet have tried the tools, the music industry is still figuring out rules for using AI in music creation.
Built With Musicians’ Input
Google emphasizes that it is building these tools in direct collaboration with musicians, and is expanding access to more U.S. musicians, producers, and songwriters to gather further feedback.
“Their input guided our development and experiments, resulting in a set of responsibly created tools that are practical, useful and can open doors to new forms of music creation,” Google states in their announcement.
Practical Questions for Musicians
For musicians thinking about using these tools, several practical questions remain:
- How well do these tools work with standard music production software?
- What music was used to train Lyria 2?
- Who owns songs created with these tools?
- How will this technology impact the music industry in the future?
These AI music tools show how quickly technology is changing music creation. While they offer exciting new possibilities for musicians, the industry is still working through how to balance innovation with protecting artists’ rights and livelihoods.