The creativity machine
Art and artificial intelligence collide at the SF Innovation Hangar
I felt privileged to be invited to speak to a group of early teens at the SF Innovation Hangar on how artificial intelligence helps us better steward this planet. I thought I was going to teach, but as can be the case at events like this, what actually transpired was a much deeper learning experience for me.
I regaled the group with tales like that of Guiding Eyes for the Blind, which analyzed the genetic makeup of dogs and thousands of Word documents from trainers to recommend handler/dog training partnerships based on their personal traits. Incidentally, the story is now featured in this ad, which, yes, makes gratuitous use of the cuteness of puppies to explain what science can do:
But as the session closed, a straightforward question from an audience member got me thinking.
“What technology will have the greatest impact right now?”
My initial feeling was one of mild frustration. Hadn’t I just spent the last 20 minutes waxing lyrical on the potential of artificial intelligence? So I reiterated: AI is exploding right now, and as with other major tech advances (I am old enough to have been working when the internet burst onto the scene), we should expect the space to specialize and somewhat fragment as it matures.
Driving back home, it dawned on me that this is only half the answer. There’s a class of problems to which we’re applying AI that is largely novel: the artistic and creative process.
Cognitive art in the SF Innovation Hangar
The expansive SF Innovation Hangar skirts San Francisco’s verdant Palace of Fine Arts, priding itself on celebrating STEAM: what we’re capable of when we unite the forces of Science, Technology, Engineering, Art and Math. So, as was the case on this brisk Saturday morning, the experiential dial was turned way up. You could see robots playing miniature basketball, tiny drones whizzing round like matchbox cars with wings, and Lego Mindstorms creations modeled on Minecraft characters.
Given its size, the Hangar also holds artifacts from past events. Max Ehrman’s graffiti mural unfurls along one long stretch of wall. It’s unique in that AI played a role in its inspiration. Created during the IBM Watson Developer Conference in November 2016, it used a color scheme Watson recommended to Max based on insights from the hangar and social media posts from the surrounding SF area.
In this case, the recommendations weren’t driven by color-scheme inputs alone: the system also took a crash course in the theory of color, ingesting journals and articles on color theory, psychology, marketing and design.
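To make the idea concrete, here is a toy sketch of that kind of pipeline. This is purely illustrative and not Watson’s actual method: the mood vocabularies and hex palettes below are invented, standing in for associations a real system would learn from color-theory texts and social posts.

```python
# Hypothetical sketch: score local social posts for their dominant mood,
# then suggest a palette tied to that mood. All word lists and palettes
# here are invented for illustration.

MOOD_WORDS = {
    "energetic": {"excited", "party", "wow", "amazing"},
    "calm": {"peaceful", "relaxed", "quiet", "serene"},
}

PALETTES = {
    "energetic": ["#FF4500", "#FFD700", "#FF1493"],  # warm, high-contrast
    "calm": ["#4682B4", "#98FB98", "#E6E6FA"],       # cool, muted
}

def recommend_palette(posts: list[str]) -> list[str]:
    """Return the palette whose mood words appear most often in the posts."""
    counts = {mood: 0 for mood in MOOD_WORDS}
    for post in posts:
        for word in post.lower().split():
            for mood, vocab in MOOD_WORDS.items():
                if word.strip("!,.") in vocab:
                    counts[mood] += 1
    best = max(counts, key=counts.get)
    return PALETTES[best]

print(recommend_palette(["What a peaceful, serene morning by the Palace"]))
```

A production system would replace the hand-built lookups with models trained on the ingested color-theory corpus, but the shape of the flow is the same: text in, mood out, palette recommended.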
This area is one of unbounded potential. Where can computers take us if we give them a seat at the table of creativity?
Just this week, Maureen Baeck covered the Gaudi-inspired ‘living’ sculpture unveiled at the Mobile World Congress. We’ve seen AI systems help film directors by decomposing a film into its component scenes and their emotional content. We’ve heard songs created by producers with an AI partner on the sound desk, recommending lyrics and tones based on social media content.
One team of students presenting in the hangar reimagined the zoo... using animatronics and holograms to give us an appreciation of how endangered animals live, without the need to take those animals from the wild. You can easily see the potential for AI to make these animals even more lifelike.
What is fueling this innovation right now?
To the extent that art represents us, the job of the artist is to understand us at a deeper level, and express that in new and unexpected ways. In that realm we’re seeing advances in computing.
Computers are getting better at plotting the emotional spectrum using patterns and trends. Big Five personality theory can be applied to bodies of text, giving a reading of the personality or tone of the writing. Systems can distinguish frustration (short, curt sentences) from openness (positive words placed throughout sentences). Similarly, tones in images can be used to determine nuances like somber versus serene.
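The cues mentioned above can be sketched as a toy tone reader. This is an illustrative sketch only: real services use trained models, and the word list and thresholds here are invented, built directly from the two proxies the text describes (curt sentences for frustration, scattered positive words for openness).

```python
# Toy tone profiler. POSITIVE_WORDS and the sentence-length cutoff are
# invented for illustration; a real system learns these from data.

POSITIVE_WORDS = {"great", "happy", "love", "wonderful", "open", "curious"}

def tone_profile(text: str) -> dict:
    """Score a passage on two toy dimensions: frustration and openness."""
    sentences = [s.strip() for s in text.replace("!", ".").split(".") if s.strip()]
    words = text.lower().split()

    # Frustration proxy: short, curt sentences.
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    frustration = 1.0 if avg_len < 5 else 0.0

    # Openness proxy: share of positive words spread through the text.
    positives = sum(1 for w in words if w.strip(",.") in POSITIVE_WORDS)
    openness = positives / max(len(words), 1)

    return {"frustration": frustration, "openness": round(openness, 2)}

print(tone_profile("No. Stop. This is broken."))
```

Crude as it is, the sketch shows the basic move: turn stylistic signals in text into numbers an artist, or an AI collaborator, can act on.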
Apply this understanding to disparate data sources, like social media, song lyrics or an uncut version of a film, and a new world of opportunity glimmers.
You can couple all of this with new forms of interaction. Just look at the organic movement of the Barcelona structure. On the other hand, chat bots and virtual assistants take us deeper into the realm of conversation. And you know how it goes with talking: there are all kinds of layers of meaning. There are movements exploring compassionate chat, and companies like Slack and Google are broadening their recruiting, going beyond traditional computer science and engineering in search of comedians and scriptwriters: creative folks who understand artful text.
So at this juncture, where machines can tell us about our creative selves (our traits, our emotions and which messages resonate), what role do we humans play in the creative process? There is one commonality in all these examples of creative systems: the computer is helping the artist with their art. The art is not purely autonomous; there’s a strong guiding hand, both in determining the role of AI and how it is applied. We’re still conceiving the ideas that lead to the art and defining the roles systems play in executing those ideas.
There’s also a case to be made for the frisson that occurs when we humans engage with each other. The energetic Eth-Noh-Tec couple took to the stage before me. They acted, sang, danced and chanted us through a triad of stories from Far East folklore… including the Buddhist tale of Great Joy, the ox. The stories have passed through centuries, civilizations and philosophies have come and gone, yet these stories still resonate today. Could they be told using the latest technologies? Most probably. But the inescapably human qualities of the Eth-Noh-Tec storytelling duo cannot easily be replaced.