Google’s MusicVAE Is a Machine Learning Mozart

Synced · Published in SyncedReview · 3 min read · Mar 19, 2018

Google has announced the release of MusicVAE, a machine learning model that makes composing musical scores as easy as mixing paint on a palette. A breakthrough from Google Brain’s Magenta Project, MusicVAE generates and morphs melodies to produce multi-instrumental passages optimized for expression, realism, and smoothness that sound convincingly like human-composed music.

While breakthroughs in AI technology have so far tended to emerge from research aimed at industry applications, Magenta is exploring AI’s potential in the creative spaces that differentiate humans from machines. Launched in 2016, Magenta uses deep learning and reinforcement learning algorithms to explore art and music, and has introduced a number of research tools, including NSynth, a neural music synthesizer, and SketchRNN, an interactive online doodling experiment built on a neural network.

Teaching a machine a principled way to blend different musical elements is not easy. Google researchers turned to the variational autoencoder (VAE), a widely used generative model that, since its introduction in 2013, has yielded state-of-the-art machine learning results in image generation and reinforcement learning.

VAEs use an encoder-decoder structure: the encoder compresses the variation in a high-dimensional dataset into a lower-dimensional latent code, and the decoder reconstructs data from that code. The model is trained and tuned by comparing each input with its reconstruction.
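The encode-sample-decode loop can be sketched in a few lines of numpy. This is a toy illustration only: the linear maps, dimensions, and weights below are placeholders for the deep recurrent networks a real model like MusicVAE uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 32-feature input compressed to a 4-d latent code.
X_DIM, Z_DIM = 32, 4

# Illustrative linear encoder/decoder weights (real VAEs use deep networks).
W_mu = rng.normal(scale=0.1, size=(X_DIM, Z_DIM))
W_logvar = rng.normal(scale=0.1, size=(X_DIM, Z_DIM))
W_dec = rng.normal(scale=0.1, size=(Z_DIM, X_DIM))

def encode(x):
    """Map an input to the parameters of a Gaussian over the latent code."""
    return x @ W_mu, x @ W_logvar

def sample(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent code back to data space."""
    return z @ W_dec

x = rng.normal(size=X_DIM)
mu, logvar = encode(x)
z = sample(mu, logvar)
x_hat = decode(z)

# Training minimizes reconstruction error plus a KL term that pulls the
# latent distribution toward a standard normal prior.
recon_loss = np.mean((x - x_hat) ** 2)
kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
```

The "comparing the input and output" step in the article corresponds to the reconstruction loss here; the KL term is what keeps the latent space smooth enough to interpolate through.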

Google researchers had already applied the technique to SketchRNN, and have now brought the same infrastructure to MusicVAE. Because musical sequences are typically more complex than sketches, Google researchers developed a novel hierarchical decoder for MusicVAE that is capable of generating long-term structure from individual latent codes.

Google last Thursday released a TensorFlow implementation of MusicVAE and a JavaScript library with pre-trained MusicVAE models to help coders, composers, and researchers build their own tools.

Several Google engineers have already built applications on MusicVAE. Melody Mixer, an interface created by Google’s Creative Lab, lets users generate interpolations between short melody loops. Latent Loops, from Google’s Pie Shop, lets users generate a palette of melodic loops by sketching on a matrix.
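The interpolation these tools expose amounts to walking between two points in the model's latent space and decoding each intermediate code. A minimal numpy sketch of that idea, using simple linear interpolation (the actual model may use a different interpolation scheme) over hypothetical 4-d codes:

```python
import numpy as np

def lerp(z1, z2, t):
    """Linearly interpolate between two latent codes at position t in [0, 1]."""
    return (1.0 - t) * z1 + t * z2

rng = np.random.default_rng(1)
# Stand-ins for the latent codes of two encoded melodies.
z_a = rng.normal(size=4)
z_b = rng.normal(size=4)

# Five evenly spaced codes from melody A's code to melody B's code;
# decoding each one would yield a melody that gradually morphs from A to B.
path = [lerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 5)]
```

Because the VAE's training objective keeps the latent space smooth, each intermediate code decodes to a plausible melody rather than noise, which is what makes the "mixing paint" interaction work.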

Demos and music samples generated by MusicVAE are already popping up on social media. “This MusicVAE thing is absurdly cool. The interpolated (and randomly generated) melodies/songs sound *real*, like they were composed, not generated,” tweeted Alexander Huth, an Assistant Professor in Computer Science and Neuroscience at UT Austin.

The Magenta team stresses that MusicVAE and their other smart tools are meant as collaborative tools to “allow artists and musicians to extend (not replace!) their processes.”

Journalist: Tony Peng | Editor: Michael Sarazen

Dear Synced reader, the upcoming launch of Synced’s AI Weekly Newsletter will help you stay up-to-date on the latest AI trends. We provide a weekly roundup of top AI news and stories, and share upcoming AI events around the globe.

Subscribe here to get insightful tech news, reviews and analysis!


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global