Launching DeepMusic.ai — Amplifying Human Creativity With AI

Deep Music AI
14 min read · Dec 15, 2020


We’re proud to announce the launch of DeepMusic.ai — an organization with a groundbreaking mission to weave together the artificial intelligence (AI) and arts communities. These two seemingly opposite fields rarely intertwine; bringing them together can shape the future of creativity. DeepMusic.ai was born as a way to amplify human creativity.

We are shaping the future of music by figuring out HOW and WHERE AI can assist creatives, drawing on human strengths to build something novel.

AI is disrupting every industry and is one of the most powerful tools we will encounter. However, many people think machines can’t and shouldn’t be creative. Creativity is the ability to generate ideas that are both novel and valuable; machine creative outputs have historically been neither. But over just the past decade, the AI landscape has changed drastically, and the technology is now widely used by the public instead of only in research labs and universities. Recent hardware and software breakthroughs, such as GPUs and deep generative models like the massive language model GPT-3, have renewed interest in how AI can support human-machine collaborations. In fact, in recent years, these algorithms have begun doing things we once thought were uniquely human, such as driving cars, providing medical diagnoses, writing news articles, and trading stocks.

Our mission is to build a community of professional artists and scientists to help shape the AI tools that are being designed. Our future depends on how we build and train these algorithms and how we work with those who are developing these tools. We need to ensure that AI tools are built to work with professional artists as collaborators. We want to aggregate and provide resources to artists and the public, and to have creative voices at the table at the start of this revolution. If we ignore red flags for the sake of profitability, we all but guarantee a future loss of artistic merit. There’s been tremendous progress with AI tools for creativity, but we’ve got a long way to go before these tools are widely adopted.

Carol Reiley’s Perspective:

I’ve always viewed technology as a superpower, enabling and amplifying humans.

I fell in love with AI and robotics 20 years ago, working at the intersection of humans and machines: that “soft, messy area” where hard tech meets the real world and is used by people. Working with AI has given me an awareness of what humans are good at and where our limitations lie, and of how humans and machines together can perhaps do a job neither could do as well alone. For example, machines extend past human limitations by scaling up motions to pick up boulders, scaling down for microsurgery, or exploring the unknown depths of the oceans and outer space. I’ve also hacked and contributed to many open-source and creative projects [1]. Those creative projects were very special to me because they resonated and connected with people in ways that differed from my other work. They taught me the power of the open-source community and how to find ways of connecting people to contribute collectively.

Creativity is often thought to be uniquely human: something we excel at as a species and where AI is said to fall short. As an AI scientist who challenges current truths and perspectives, I feel compelled to better understand why the meteoric rise of AI tools has been so controversial, especially in the creative realm. The production of innovative work touches on something innately human and precious, and it sits at the heart of what AI seems to be lacking. I know this is one of the hardest challenges to tackle in many ways, but understanding those gaps and bridging the human side to the technology is what appeals to me.

AI-generated art can be beautiful, emotional, and heartfelt, but the way we evaluate art may need to change.

(left) DeepFaceDrawing: creating faces from sketches. Credit: Shu-Yu Chen et al.
(right) AI-generated art

I am less interested in hitting a button and autocompleting a piece (which lacks artistic merit and intent), or mimicking the perceived style of a particular artist (technically interesting, but less so artistically), and more interested in the human-AI co-creation process and in finding a way to provide feedback to the creators of these tools. This is where I see a large opportunity for AI. I want to unpack the sensitivities around how the tools will be used by experts, what it is about the creative process that is elusive, and perhaps better understand how to build assistive creative machines.

Will AI-generated music be found on TikTok or in concert halls? What are the current AI tools and how do they compare? I started to systematically research what tools exist and shared my document to make the barrier to entry for these tools as low as possible. I noticed that most software programs were not designed with professional artists as the primary end user. Most tools were built to democratize music composition for end users without much formal training or background in music. This is a great mission, but where are the tools that can augment the work of creative professionals? If we can begin to understand how various professional artists think, and they can understand the current limits of AI, then perhaps we can advance the state of AI and creative productivity, focusing on tools that unlock and enable artists to create even more novel work.

The World Economic Forum has predicted that 75 million jobs will be displaced by 2022, but jobs requiring “soft skills” and higher-level thinking are expected to be the safest from replacement. However, with the rise of artistic AI tools, are professional artists’ voices shaping the future of their creative jobs? Instead of fearing job replacement, we should focus on proactively co-writing the future of AI so there are more jobs and opportunities than today. What could the imagined future of AI in the creative realm look like? Beyond performance and composition, AI will impact legal rights (authorship, identity), humanity (ethics), and businesses (the effects of business models like streaming, monetization, personalization, and influencers). These deep philosophical issues are nearly as important as the created art itself.

Today’s tools have sparked the popular imagination but are not yet in widespread use. The next step is to bridge the gap to the needs of professional musicians so that the tools become routine rather than the exception. Many of the current features are fine for amateurs or artists just starting out, but they aren’t what professional artists want or need (as we discovered at the outset of this project). Many professional artists are not in the loop on developing AI efforts, nor as involved as I expected.

Why now? AI is moving at an unprecedented speed and disrupting industry after industry. As an entrepreneur in the field, I saw the evolution of self-driving cars happen in the span of five years: from no one believing it was feasible, to cars with sensors driving around my neighborhood and automated trucks making deliveries, to billions of dollars of investment and successful IPOs. With the impressive results from generative algorithms in the creative fields in recent years (painting, novel writing, music generation) and AI surpassing human-level performance on many tasks, AI is coming to the creative world.

Why start with music? My research interests are in haptics, the sense of touch, and computer vision, getting machines to see. I had long been curious about how humans interact through auditory experiences.

Music was one of the first applications of AI, and the two have been intertwined since the start [2]. They both stem from math [3], are universally understood and felt, and are a beautiful means of expression. The structure and theory of music intuitively seemed most conducive to pattern recognition. A machine can process centuries of data, search for and identify patterns quickly, and isn’t limited by the speed of a physical body or the number of limbs.
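As a toy illustration of that kind of pattern search (a minimal sketch with a made-up melody, not one of the tools we surveyed), a few lines of code can scan a tune, encoded as MIDI pitch numbers, for motifs that repeat:

```python
# Minimal sketch: find short motifs that repeat in a melody.
# The melody below is a made-up example encoded as MIDI pitch numbers (60 = middle C).
from collections import Counter

melody = [60, 62, 64, 65, 64, 62, 60, 62, 64, 65, 64, 62, 67, 65, 64, 62, 60]

def repeated_motifs(pitches, length=4):
    """Count every `length`-note window and keep the ones that occur more than once."""
    windows = (tuple(pitches[i:i + length]) for i in range(len(pitches) - length + 1))
    counts = Counter(windows)
    return {motif: n for motif, n in counts.items() if n > 1}

print(repeated_motifs(melody))
# e.g. {(60, 62, 64, 65): 2, (62, 64, 65, 64): 2, ...}
```

Real systems use far richer statistical models, but the underlying idea of treating music as searchable sequential data is the same.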

Why professionals? I wanted to start collaborations with artists at the top of their craft, who have a strong sense of their creative selves and can articulate their processes. If we can put this AI superpower in their hands, what will they do with it? I believe artists are among the most creative people: they make art for their livelihood, and art is the essence of humanity. What should we protect in the raw creative process, and how will they adapt this new technology to their work? Can we actually create more jobs and help artists do their work more efficiently, quickly, and expressively? Much of today’s music is rightly designed around human limitations and form. I am interested in what will happen if we can go beyond those limits and create art not bound by them.

I wanted to validate that this crazy idea has legs and to learn what professional artists would want from, and do with, such a tool. I joined the San Francisco Symphony as a Collaborative Partner and Harman Kardon’s technical advisory board in 2019 to better understand how organizations are thinking about this AI paradigm. From grad school in Baltimore, I knew Hilary Hahn through mutual friends, and I approached her to discuss the project. From our very first discussion, it was clear that what I was seeing was being deeply felt by artists like Hilary.

Hilary Hahn’s Perspective:

Artists now have a rare window to join the conversation with developers, so that we can help shape future artistic experiences.

If we artists share our knowledge now, we give AI the chance to interact with us fluently in the future. If we discuss our wishes for the craft of art-making with people who are developing new creative tools, we can look forward to interfacing with ideas that can give back to us. Artists are constantly pushing the limits of human creativity. We can take pride in understanding the state of AI creativity as well. AI and art are compatible. As we bridge the gaps in this crucial time, something new can develop: a co-existence, a mutually supportive creative experience in which AI tools and human expression expand simultaneously.

I’m a violin soloist, which means that I’m an instrumentalist who both leads and collaborates. I work on repertoire that spans 400 years, including contemporary music that I commission composers to write for me. I work in multiple artistic cultures and am inspired by the creative processes of artists of all genres. I am constantly juggling the micro and the macro to enable a more communicative human emotional experience. I love that art is about and for people. I do what I do because I believe in holding space for the human condition.

When Carol called me to discuss the topic of AI + creativity, I had just begun a planned year-long sabbatical. We talked while I was on a walk in the rain. I wound up perched on the edge of a park bench under my umbrella for two hours, deep in imaginative discussion with Carol about the collaborative potential of these fields. I started out interested but a bit skeptical. Artists have been creating for millennia without AI. The ideals and techniques of creativity are passed personally from one generation to the next; that vital process is already meaningful. And art is subjective. Innovation and improvement are not necessarily one and the same. Ultimately, what spoke to me deeply about this project is the chance for artists’ experiences to have an influential place within the future of technology. The conversations between artists and AI scientists can be as engrossing for both fields as that first DeepMusic.ai conversation I had with Carol. I want to hear those exchanges of ideas, whether they are congenial or conflicting.

The question “What is art?” is simple or unanswerable, depending on who you’re asking. It is essential to debate that question in the context of AI. Artistry influences how everything is designed and experienced. Coding and engineering are art forms. Musicians and artists admire scientists, and we’ve benefited from technology in its many stages. A violin, for example, is a piece of engineering designed by geniuses, refined over centuries. Antonio Stradivari combined the technology and art of his era to create a varnish that remains mysterious to this day. As soon as we can get artists and AI scientists in dialogue — truly speaking with each other instead of at each other — they will be able to marry their knowledge and creativity, which exist in droves in both fields.

If you asked me what art is, I would tell you that it’s at the core of humanity. Art is a community-strengthening instinct: sharing and interpreting life experiences. I play music by people who lived in past centuries, who couldn’t have imagined the Internet, who lived lives drastically unlike mine, yet they speak today through the music they wrote. Ten years ago, I began a global music-commissioning project to broaden the short-form violin-piano repertoire, called “In 27 Pieces: the Hilary Hahn Encores.” I wound up commissioning 26 composers to write pieces, and I held an open contest for the 27th. That contest generated nearly 600 new works of music, ten of which I world-premiered myself, in addition to the 26 premieres of the commissioned works. I learned an immense amount about composing and human expression during that project, and my “In 27 Pieces” recording won a Grammy. This year, I’m giving world-premiere performances online of pieces I commissioned after March 2020, music written amidst a global trauma.

Art translates lived experience into emotional expression. Therein lies the challenge for artistic AI, which lives and learns differently from humans and has different reasons for making art. For example, AI can’t — in human terms — go to a concert, instantly feel emotional synthesis with the performer, have a feeling of what it wants to hear that performer play, and be spontaneously inspired to write a piece for that performer. Or can it? What is sometimes overlooked is that people create AI, and those scientists have lives and experiences that define and artistically influence their work.

Some musicians and composers I invited into this project had questions. Could their contributions make human artistry redundant? Why do we want AI in music? I struggled to articulate my answers, until I realized that, while these questions are crucial, they shouldn’t determine whether the artistic community gets on board or stands to the side. AI and technology are here to stay. They help me in my work every day, whether I’m captioning videos for social media, walking to a coffee shop in a foreign city, or using my phone to connect with my worldwide community. As recently as fifteen years ago, composers were criticized for using the MIDI playback function to double-check their work when composing within software programs such as Sibelius. It was almost seen as cheating. Yet that argument has faded as people have seen that music written with that tool as a helper has soul — because it was written by a person with a soul. Similarly, AI is only as good as its human creators’ programming. “Good” in this context can mean accurate or beautifully considered or ethical or innovative or emotionally moving. Every version of human interaction carries the potential to be a work of art, even if it’s one human and one computer. Artists shouldn’t sit back and say, “No thank you” to AI as an artistic entity when we have the opportunity to step forward as partners. If we don’t engage in the conversation, we are choosing to mute ourselves.

In short, there are ways for AI to create and develop in parallel with the painstaking stewardship of artistic tradition, but in order for this to be possible, we have to seize the exploratory moment now.

Our Mission:

Our goal is to use music as a lens on how humans and AI can co-create something special together. We started by commissioning professional composers to build unique pieces of music using various world-class AI tools. DeepMusic.ai will be the narrator of this new period of artistic composition, identifying gaps and filling them with the necessary resources and tools. We want to get the ball rolling, seed questions for exploration, and offer guidance on various efforts.

People often have a complicated relationship with technology. Just as electricity and computers changed the way we work, AI is the next breakthrough. AI challenges us in ways we wouldn’t necessarily have thought of on our own. This new generation of tools for musicians could transform the way people create and push thinking in new and unexpected directions. We may need to reevaluate how art is produced and judged. Tools like synthesizers, loopers, rhythm engines, and chaos generators have led to new forms and genres of music, allowing some professionals to create more efficiently. AI has already been used to model music composition: general infilling, adventurous chord progressions, new instruments, and semantic controls for music and sound generation, to list a few examples. Much as electric instruments and electronic music brought new genres like heavy metal and EDM, AI will not only create new kinds of art, but new kinds of artists.

This is a call to action for any person who wants to work with AI to start a creative project.

How We Are Bridging The Gap:

  • Educate: An initial video tutorial [link] walks you through how to create a piece, and an open research database [link] gives a snapshot of current state-of-the-art AI creativity software so people can compare the tools and decide which ones to try. Since the field is changing so quickly, we made the database open so anyone can contribute.
  • Collaborate between AI and composers: We have led the production and release of the first three commissioned world-premiere pieces as examples of world-class composers creating music in tandem with state-of-the-art AI, offering feedback and discussion for future versions of the tools. We have planned our first panel discussion with the composers about their artistic journey and their process of working with AI. Watch here!
  • Be transparent: The process is as important as the end product, so the composers are sharing their experiences through journaling, interviews, and videos. Each piece has an artist statement. Each world-premiere video (and the corresponding sheet music) clearly delineates what is AI-composed, what is human-composed, and how and why the composer altered the AI-generated notes. We also share some of the original AI-generated MIDI files used in the pieces, along with “failure cases” that were not selected (see the sketch after this list for one simple way to peek inside such a file).
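For the curious, here is a minimal sketch of how one might inspect a released MIDI file in Python. The filename is a placeholder, and the mido library is just one common choice for reading MIDI; this is not part of our official tooling.

```python
# Minimal sketch: open a MIDI file and list its tracks and first few note pitches.
# "ai_generated_example.mid" is a placeholder filename, not an actual released file.
import mido

midi = mido.MidiFile("ai_generated_example.mid")
print(f"{len(midi.tracks)} track(s), about {midi.length:.1f} seconds long")

for track in midi.tracks:
    pitches = [msg.note for msg in track if msg.type == "note_on" and msg.velocity > 0]
    print(f"{track.name or '(unnamed track)'}: first notes {pitches[:8]}")
```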

We aim to share resources, create a constructive feedback loop between artists and the companies and research groups who create AI tools, and highlight the work behind this project.

We invite everyone to explore the AI + creativity resources and join us on this journey.

[1]: Most engineers are highly creative, building things that don’t exist, finding solutions to problems, and imagining the future. For various side projects, I built art robots and published an open-source hack to Guitar Hero in MAKE magazine that let players play air guitar by reading EKG signals in the forearm. My labmates and I won a Botsker at a robot film festival for a video of a million-dollar surgical robot playing the popular board game Operation. I also built an educational robotics company and wrote a children’s book. Earlier this year, I co-composed my first AI piece using MuseNet, a one-minute piece with Nico Muhly for the San Francisco Symphony Virtual Gala “Throughline.”

[2]: In 1951, Alan Turing, the godfather of computer science, built a machine that generated three simple melodies.
-In 1960, Russian researcher R. Kh. Zaripov published the world’s first paper on algorithmic music composition, using the “Ural-1” computer.
-In 1965, inventor Ray Kurzweil premiered a piano piece created by a computer capable of recognizing patterns in various compositions. The computer could then analyze and use these patterns to create novel melodies. The computer debuted on Steve Allen’s I’ve Got a Secret program and stumped the hosts until popular panelist Henry Morgan guessed Ray’s secret.
-In 1979, an influential sampling synthesizer, the Fairlight CMI, introduced the ability to record and play back samples at different pitches and was adopted by professional musicians.
-In 1997, an artificial intelligence program called Experiments in Musical Intelligence (EMI) appeared to outperform a human composer at the task of composing a piece of music imitating the style of Bach.

-Progress in the field of AI music has rapidly accelerated in the past few years, thanks in part to devoted research teams at universities, investments from major tech companies, and machine learning conferences like NeurIPS. In 2018, Francois Pachet, a longtime AI music pioneer, spearheaded the first pop album composed with artificial intelligence, Hello, World. In 2019, the experimental singer-songwriter Holly Herndon received acclaim for Proto, an album in which she harmonized with an AI version of herself, and YACHT paired up with Google’s Magenta team for the Grammy-nominated, AI-assisted album Chain Tripping.

[3]: As Pythagoras discovered about 2,500 years ago, music is deeply mathematical, and it’s possible to represent melody using numbers and ratios.
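As a small worked example of those numbers and ratios (a toy sketch, not anything from the project itself), the classic Pythagorean intervals can be written as frequency ratios above a reference pitch:

```python
# Toy illustration: Pythagorean intervals as frequency ratios above A = 440 Hz.
BASE_HZ = 440.0  # the A above middle C, a common tuning reference

intervals = {
    "unison": 1 / 1,
    "perfect fourth": 4 / 3,
    "perfect fifth": 3 / 2,
    "octave": 2 / 1,
}

for name, ratio in intervals.items():
    print(f"{name:>14}: {BASE_HZ * ratio:.1f} Hz")
# perfect fifth above A440 = 660 Hz, octave = 880 Hz
```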
