AI’s Growing Role in Musical Composition
As artificial intelligence matures, so does its potential in the creative industries, including music production. Although AI is not about to top the hit charts any time soon, algorithms are already creating, performing and even monetizing their own musical compositions. Synced took a look at current AI music techniques and projects from tech giants and startups alike.
IBM Watson Beat
IBM’s Watson Beat is a cloud-based cognitive music program that uses AI and machine learning to assist artists in creating original compositions. Built on IBM’s famed artificial intelligence platform Watson, Beat composes original music from users’ audio samples.
Deep learning driven music generation algorithms are at the heart of Watson Beat. IBM trained the system on a comprehensive set of audio samples and digital instruments, deconstructing every piece of music into core elements like pitch, rhythm and timbre. With such a huge number of data points fed into the neural network, Watson Beat can understand not only melodies and rhythm but also emotional styles and musical genres. Beat creates unique compositions based on what it has learned, which artists can then edit to communicate, for example, a particular mood.
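The idea of learning musical statistics from example pieces can be illustrated with a toy sketch. This is not IBM’s actual system (Watson Beat uses deep neural networks trained on audio); it is a minimal stand-in that learns which note tends to follow which in a tiny invented corpus, then samples a new melody from those statistics. All note names and training melodies here are made up for the demo.

```python
import random
from collections import defaultdict

def train_transitions(melodies):
    """Count which note tends to follow which across the corpus."""
    transitions = defaultdict(list)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def compose(transitions, start, length, seed=None):
    """Sample a melody by walking the learned transition table."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:          # dead end: restart from the opening note
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

# Invented example melodies standing in for a real training corpus.
corpus = [
    ["C4", "D4", "E4", "G4", "E4", "D4", "C4"],
    ["C4", "E4", "G4", "E4", "C4"],
]
table = train_transitions(corpus)
print(compose(table, "C4", 8, seed=42))
```

A real system replaces the transition table with a deep network and the note lists with rich audio features, but the principle is the same: extract statistics from existing music, then generate new material that follows them.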
Watson’s ability to turn millions of unstructured data points into emotional insights can also help artists find inspiration. Grammy award-winning music producer and composer Alex da Kid used Watson Beat to make “heartbreak” the emotion colouring his hit song, “Not Easy.” Says da Kid, “Musicians will use AI to understand what their audience wants, to tailor songs to specific audiences, and even to help move the creative process into different spaces.”
See a background video on the creation of “Not Easy” here:
(Behind the Scenes of “Not Easy”)
Aiva Technologies
Aiva Technologies is a small Luxembourg startup that has emerged as a leader in AI music composition with its app “Aiva” (Artificial Intelligence Virtual Artist). The Aiva team comes from classical music backgrounds and has taught Aiva to compose melodies in that genre.
Aiva was trained on massive amounts of classical music by composers such as Bach, Beethoven and Mozart, and learned its own models of music theory. Aiva is purely a composer, not a performer. It produces sheet music which is performed by professional orchestras in recording studios, and the recordings are used as soundtracks in film, video games, commercials and other entertainment content. Aiva’s technology is based on deep learning combined with reinforcement learning techniques. Rather than learning from labeled input-output pairs, such techniques improve performance by optimizing a reward signal, without explicit instructions for each output. This makes it easier to generate scores with the variations and diversity that characterize creative arts such as music.
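Improving output from a reward signal alone, with no labeled examples, can be sketched in miniature. The following is not Aiva’s actual algorithm; it is a simple hill-climbing illustration in which an invented reward function (prefer in-scale notes, avoid large melodic leaps) guides random mutations of a melody. The scale, reward weights, and starting melody are all assumptions for the demo.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, C4..C5

def reward(melody):
    """Score a melody: prefer in-scale notes and small melodic leaps."""
    score = sum(1 for n in melody if n in C_MAJOR)
    score -= sum(abs(a - b) > 4 for a, b in zip(melody, melody[1:]))
    return score

def improve(melody, steps=200, seed=0):
    """Hill-climb: mutate one note, keep the change if reward rises."""
    rng = random.Random(seed)
    best = list(melody)
    for _ in range(steps):
        candidate = list(best)
        i = rng.randrange(len(candidate))
        candidate[i] = rng.choice(C_MAJOR) + rng.choice([-1, 0, 1])
        if reward(candidate) >= reward(best):
            best = candidate
    return best

start = [60, 66, 73, 61, 68, 70]   # a deliberately awkward melody
print(reward(start), reward(improve(start)))
```

The point of the sketch is that nothing labels any melody as “correct”: the system only ever sees a score for its own attempts, which is the property the article attributes to reinforcement-style training.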
Give Aiva’s music a listen and see what you think:
(Aiva — “Genesis” Symphonic Fantasy in A minor, Op. 21)
Amper Music
Founded by three Hollywood film composers, Amper Music is aimed at anyone (YouTubers, computer-game developers, hard-up film directors, etc.) who needs cheap background music.
Amper is not just an AI composer; it is also a performer and producer. It can help even those who know nothing about composition to create original, license-free music on demand in just a few clicks. Users can create entire tracks instantly simply by selecting a desired mood, style, tempo, and length; and can further customize tracks using the editing function, which also requires no musical composition or production experience. Unlike most AI music systems, Amper doesn’t use neural networks and wasn’t trained on musical scores. Instead, it was taught music theory and how to recognize which music triggers which emotions. Amper aims to position itself as a pay-as-you-go model that can simplify workflow and bypass the budgeting and licensing involved with stock music.
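A rule-based system like the one described, where fixed music-theory rules map a user’s selections to a track, can be sketched without any machine learning at all. This is not Amper’s real engine; the moods, tempos, and instrument choices below are invented examples of how such rules might look.

```python
# Invented mood-to-parameter rules; a real product would have far more.
MOOD_RULES = {
    "uplifting": {"mode": "major", "tempo_bpm": 128, "instruments": ["piano", "strings"]},
    "tense":     {"mode": "minor", "tempo_bpm": 140, "instruments": ["synth", "percussion"]},
    "calm":      {"mode": "major", "tempo_bpm": 72,  "instruments": ["piano", "pad"]},
}

def build_track_spec(mood, length_sec):
    """Turn a mood selection into a track specification via fixed rules."""
    if mood not in MOOD_RULES:
        raise ValueError(f"unknown mood: {mood}")
    spec = dict(MOOD_RULES[mood])
    spec["length_sec"] = length_sec
    # Bars elapsed at the chosen tempo, assuming 4 beats per bar.
    spec["bars"] = round(length_sec * spec["tempo_bpm"] / 60 / 4)
    return spec

print(build_track_spec("calm", 30))
```

Because the mapping is deterministic rules rather than a trained network, the user’s few clicks (mood, length) fully determine the musical parameters, which matches the article’s description of Amper’s approach.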
Amper Music CEO Drew Silverstein talks about creating music with AI:
(First Album composed, produced entirely by AI)
Other Notable AI Musical Composition Efforts
- Google’s Magenta is a research project that aims to push the limits of what AI can do in the process of creating art and music by providing smart tools and interfaces that allow musicians to extend their processes.
- Sony’s Flow Machines is an initiative funded by the European Research Council and coordinated by Sony CSL (Computer Science Laboratories) to “research and develop Artificial Intelligence systems able to generate music autonomously or in collaboration with human artists” across a broad range of musical styles.
- Jukedeck is a British AI music startup that creates soundtracks for video and films and builds tools to help people make music.
- Humtap is an iOS app that helps people create professional-quality, royalty-free original music soundtracks using only their voices. Users can simply hum a melody along with a video and Humtap will transform it into a soundtrack.
The Future of AI in Musical Composition
Although impressive, AI’s forays into musical composition will not make human composers obsolete, as we are still a long way from putting the ‘art’ into artificial intelligence. AI compositions still require human input (at least in the initial stages) with regard to music theory, musical production, and orchestration. AI is currently used mostly to reduce the time spent on repetitive tasks in music production.
A big challenge for AI is understanding creative and artistic decisions, as even human experts don’t agree on these. Moreover, AI still lacks human adaptability and the elusive creative touch that is vitally important in the arts. AI experts generally agree that music composition software won’t replace humans, but it is already changing the process of music creation and will have an even greater impact in the near future, as human composers and musicians collaborate with AI to explore new ways to create.
Author: Miaozhen Zhang | Editor: Michael Sarazen
Subscribe to Synced Global AI Weekly to get insightful tech news, reviews and analysis!