AI ❤ Design

Ruth Kikin-Gil
15 min read · May 31, 2017

--

This article explores how AI can disrupt and augment the design process. It is a write-up of talks I gave at the Women of Silicon Valley 2017 and Interaction 18 conferences.

Part I: How I stopped fearing and started loving AI

AI is coming to get us and everyone knows that. The Singularity — the predicted moment when computer intelligence surpasses human intelligence — is right around the corner. Many say it’ll happen before the end of this century: that we’ll be jobless and living in an apocalyptic era whose nature we can only guess. Welcome to the future.

Photo: John E. Ellen

“A society of economic miracles and technological awesomeness, with nobody there to benefit … A Disneyland without children.”
Nick Bostrom, Superintelligence

I’m sure you’ve all heard the forecasts predicting that automation will eliminate many professions — how any job that can be automated will be. Many say we are heading towards a disaster if we don’t wake up and do something now.

I read these articles and studies too; as a result I became interested in this huge thing called AI in a very personal way. I realized it’ll eat my job too.

Now, I have a confession to make: I have a love-hate relationship with technology. I love the promise of where it can take us and how it could change the world for the better. I hate poor implementation which leads to stress and unnecessary technological burden.

That’s exactly why I became a designer: to make sure technology (or at least the bits I touch) has both the right vision and the right implementation.

I’m also an optimist, so for the rest of this article, instead of Doomsday predictions, I choose to live in denial and look at artificial intelligence as a beautiful thing which can only positively impact the design process (and me as a designer).

What’s given me hope is hearing repeatedly that the few surviving professions will be highly creative ones, requiring the spark of creativity that’s found in humans but is difficult to teach machines.

In McKinsey’s “four fundamentals of workplace automation” report, one fundamental addresses the future of creativity and meaning:

“Capabilities such as creativity and sensing emotions are core to the human experience and also difficult to automate.”

Makes sense, right? Machines can’t love and they can’t feel empathy, but they can produce variations. However, these variations are only as good as their rules and training sets — they can’t break the mold.

I’m sure an AI could learn how to produce Swiss typography, but could it come up with what Stefan Sagmeister or David Carson did without previous reference? Their work is singular because they ignored the design conventions of their time and redefined them while exploring themes like legibility and the designer’s role in society. I doubt an AI could invent a new design approach with intent, and not just because its thinking process differs from ours.

https://www.bing.com/images/search?q=David+Carson+Ray+Gun&FORM=IDMHDL

Automation’s negative implications worry big companies too. As a result they formed the Partnership on AI to benefit people and society. Among the members you’ll find Microsoft, Google, Facebook, the ACLU, and XPRIZE. Their goal is ensuring that we build ethical, unbiased, and beneficial technology. Satya Nadella, Microsoft’s CEO, wrote about how humans and AI can collaborate to solve social issues. He discussed the future of employment, including the qualities needed to survive and thrive in an AI-induced world. Two of the four qualities were creativity and empathy.

Bingo! We have a win! These are core qualities every designer has and keeps developing throughout their career!

With a winning mindset, I started thinking about the promise of AI for designers. How could it augment the design process? How could it better both designers and products? Keep reading, and you’ll find out.

Part II: ASI, AGI, ANI. Oh, my!

Wait, you say: what do you mean by AI? And what is this design process you keep mentioning?

Let’s talk about AI first — Artificial intelligence is a broad term, and includes an array of sub-fields, techniques, and methods. You can see some examples in the slide below:

When Prof. John McCarthy, the father of AI, coined the term Artificial Intelligence in 1956, he defined the subject as the “science and engineering of making intelligent machines, especially intelligent computer programs”. AI’s vision is creating a “computer mind” that thinks like a human: a machine that learns and improves.

When we say AI, many people think of Samantha from the movie “Her”, or her evil twin, HAL 9000 from 2001: A Space Odyssey. This kind of AI is referred to as ASI, or Artificial Superintelligence. Nick Bostrom, a philosopher and leading AI thinker, describes it as:

“An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

When dealing with ASI we’re getting into thought experiments about humanity’s future. Will we be able to live harmoniously with intelligent machines? Will we lead ourselves to extinction? Or perhaps we’ll find immortality? However, I won’t be covering any of that today.

Graphcore.ai: a visualization of what a deep neural network looks like in action

I find it more interesting to consider how we’ll get to AGI: general-purpose AI, or Human-Level Machine Intelligence (HLMI). How is AGI different from ASI or weak AI? AGI can perform human tasks and simulate human reasoning at the level of a very intelligent human being. Although we are not there yet, hundreds of surveyed AI experts, even the more pessimistic among them, believe that machines will likely reach human-level thinking by 2075 and be able to perform most human professions. Anything beyond that comes back to ASI: the AI of our dreams, nightmares, and science fiction movies.

And this is where we are today with AI: we are using Artificial Narrow Intelligence, or ANI (also known as weak AI). Dealing with ANI is like working with someone with savant syndrome: a genius in one area, and not very good at much else. Like the AI that defeats the world’s best Go player but can’t describe a picture or translate a sentence.

We are surrounded by instances of ANI: autonomous cars, Facebook’s friend recommendations, conversations with Cortana, Siri, or Alexa. Even some of the news we read is generated by AI. ANI may be narrow, but it’s powerful, useful, and everywhere.

This realization got my designer brain wondering: how can ANI augment the design process? Before revealing the answer, I’ll invite you into my world and show you how the design process works.

Part III: The design process

The design process runs end to end: from finding a solvable problem, to ideating solutions, to iterating on designs and prototypes, to launching a final product or service.

The Design Council developed the Double Diamond model to explain how the design process works. They divided it into four stages:

The Discover phase is where it all begins. This stage’s purpose is generating knowledge for the rest of the process. It creates empathy with users for better understanding: Who are these users? What are their biggest pain points? What motivates them? What do they need? What will make their lives better?

We go broad and apply divergent thinking to identify and understand a problem area. This is the time to explore social, business and technology trends, ask questions, and form a hypothesis.

Define stage: After going broad, it’s time to converge and create the vision and design strategy through explorations. Multiple concepts are generated based on Discover phase findings. Critical thinking is applied to zoom in on the problem and define the vision and value proposition.

Design stage: Once the vision and value proposition are in place, it’s time to go broad again, exploring and iterating on solutions that deliver on the strategy. At this point you begin tactical design: the concept and experience goals are defined, but will it all work? Details are fleshed out to create a successful user experience.

Deliver stage: Then everything moves to production — actual assets are created, and in tech this means code is written. Once it’s executed and released, success is tracked through telemetry and user studies.

I make this sound linear, but many of these activities are iterative and nonlinear throughout product creation. If you’re working with an Agile methodology, the design rhythm will also be different. The main point is that design thinking is a process and a toolkit which designers pick and choose from, and the list above only partially details the tasks typically performed throughout.

I looked at these activities and deliverables to see where AI can come into play. What chores would everyone be happy to get rid of? Where can AI intervention enhance designers’ abilities and grant us superpowers? And where should humans just do what they’re good at: being creative, empathetic, and using their intuition?

Part IV: The [Ai]d and the Hum[Ai]n

How do we do that? When we engage with AI, there are two main models: the [Ai]d, and the Hum[Ai]n.

When the AI is an [Ai]d, it’s a very smart passive tool in a designer’s hands. The designer has full control over what the AI is doing; the AI is part of the infrastructure enabling the designer to accomplish what they want. No more, no less.

When the AI is Hum[Ai]n, it’s still a tool, but it augments the designer. Together they create something that was previously unattainable. The designer is both a creative director giving a brief and a curator making final decisions based on AI-suggested options.

Let’s look at some examples, starting with the [Ai]d:

The [Ai]d

An example of an [Ai]d is the “Face-Aware Liquify” filter in Adobe Photoshop. It uses computer vision and pattern recognition to identify and modify faces and facial expressions. The AI was trained on a huge amount of facial image data in Adobe’s ecosystem and can recognize faces in an image. But it goes beyond that: it recognizes the different facial features, and it understands how changing a detail in one facial feature impacts the rest of the face and facial expressions. The interface allows the user to change parameters and modify faces quickly and easily. Have a look at this before and after:

An example of an [Ai]d AI
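Adobe’s pipeline is proprietary, but the first step, locating facial landmarks, can be sketched with off-the-shelf tools. Here is a minimal sketch using MediaPipe’s FaceMesh (my choice of library, not Adobe’s); a liquify-style tool would then warp pixels around the detected points:

```python
# Minimal sketch of the first step behind a face-aware tool: locating
# facial landmarks with an off-the-shelf model. This uses MediaPipe's
# FaceMesh, not Adobe's proprietary pipeline.
import cv2
import mediapipe as mp

image = cv2.imread("portrait.jpg")            # any image containing a face
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
    results = mesh.process(rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    h, w, _ = image.shape
    # Each landmark is a normalized (x, y) position on the face; a
    # liquify-style tool would warp pixels around points like these.
    for point in landmarks[:5]:
        print(round(point.x * w), round(point.y * h))
```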

The Hum[Ai]n

When Deep Blue defeated chess grandmaster Garry Kasparov in 1997, people thought it was the end of chess, and that the game wouldn’t be interesting anymore.

But the next year, Kasparov, joined by a computer, played against another grandmaster, Veselin Topalov, and his computer. A new type of chess evolved from that defeat. Kasparov called it advanced chess; others call it centaur chess, a name I prefer because the new game is a hybrid of human intuition, creativity, and empathy with machine capabilities like remembering and calculating huge numbers of chess moves.

Image: Felix

The best centaur chess players are not necessarily the best human chess players. Rather, they’re the ones who are best at collaborating with a computer. Tournaments have proven that centaur teams outperform both humans and machines playing separately. The centaur team is a gestalt that augments human performance.

Here’s a Hum[Ai]n duo: Arthur Harsuvanakit and Brittany Presten from Autodesk collaborated with Dreamcatcher, Autodesk’s generative design CAD system. The AI was seeded with a creative brief: a digital 3D model of a chair Presten designed, inspired by Hans Wegner’s Elbow chair and Berkeley Mills’s Lambda chair.

Then they gave the AI technical constraints and physical requirements — such as how much weight that chair should carry — and let Dreamcatcher iterate.

The process of creating the chair

Dreamcatcher explored many different options, but Presten picked the winner. This beautiful chair is the final outcome. Its soul comes from the designer; its function comes from the machine.

Hum[Ai]n Duo: Harsuvanakit and Presten + Dreamcatcher = Elbo chair
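Dreamcatcher’s internals aren’t public, but the generate, filter, curate loop it embodies is easy to sketch. Here’s a toy version; the parameters and the strength check are invented stand-ins for a real CAD model and structural simulation:

```python
# Toy sketch of a generative-design loop: the machine proposes variants
# under hard constraints; the human curates. Parameters and the strength
# check are invented stand-ins, not Dreamcatcher's actual model.
import random

def generate_variant():
    """Randomly sample chair parameters (stand-ins for a CAD model)."""
    return {
        "leg_thickness_mm": random.uniform(15, 60),
        "seat_depth_mm": random.uniform(350, 500),
        "brace_count": random.randint(0, 4),
    }

def meets_constraints(variant, min_load_kg=120):
    """Stand-in for a real structural simulation (e.g., finite elements)."""
    estimated_load = variant["leg_thickness_mm"] * 2 + variant["brace_count"] * 15
    return estimated_load >= min_load_kg

# The machine iterates; only feasible designs survive the filter.
candidates = [v for v in (generate_variant() for _ in range(10_000))
              if meets_constraints(v)]

# The human curates: show a handful for a designer to judge.
for variant in random.sample(candidates, min(5, len(candidates))):
    print(variant)
```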

Hum[Ai]n Duo: Michael Hansmeyer + AI = Computational architecture

Here’s another example of a Hum[Ai]n duo: architect Michael Hansmeyer is using AI to develop novel architectural shapes which couldn’t otherwise be created. He built algorithms to design unusual columns with ornaments too intricate for a human to carve manually. His inspiration came from paper folding and cutting, and from Greek architecture. Once he fed that into his AI, the algorithms suggested multiple shape variations; Hansmeyer selected the ones to keep developing, and the machine iterated on those. The result was a curated set of 3D-printed columns at architectural scale. None of this could have happened without a human-AI collaboration.

Michael Hansmeyer’s TED talk

Part V: The Augmented Design Process

Now let’s find out how these models can be used to augment the design process.

The highlighted activities have potential for AI intervention

Insights Ex Machina

Take this scenario: as part of the research phase, we interview many users, often ending up with dozens of hours of video to review. The footage needs to be tagged so the researcher can return to interesting tidbits and turn them into insights. This is painstakingly slow and takes hours, if not days. Yet the technology to understand video content is already here: Microsoft’s Video Indexer can extract topics from a video clip, recognize people, detect sentiment, and transcribe audio.

Microsoft’s Video Indexer
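To make that concrete, here’s a hedged sketch of fetching a processed video’s insights from the Video Indexer REST API. The endpoint shape and JSON fields reflect the v2-era API as I understand it and may have changed, so verify against the current documentation; the location, account, token, and video IDs are placeholders:

```python
# Hedged sketch: fetch machine-generated insights for an indexed research
# video. Endpoint shape and JSON fields follow the v2-era Video Indexer
# REST API as I understand it; check the current docs before relying on it.
import requests

LOCATION = "trial"                  # placeholder region
ACCOUNT_ID = "your-account-id"      # placeholder
ACCESS_TOKEN = "your-access-token"  # placeholder
VIDEO_ID = "your-video-id"          # placeholder

url = (f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}"
       f"/Videos/{VIDEO_ID}/Index")
index = requests.get(url, params={"accessToken": ACCESS_TOKEN}).json()

# The index bundles transcript lines, sentiment, topics, and faces; a
# researcher can skim these instead of scrubbing hours of footage.
for video in index.get("videos", []):
    for line in video.get("insights", {}).get("transcript", [])[:10]:
        print(line.get("text"))
```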

It’s not hard to imagine a system which does this and uses natural language to communicate the insights, similar to Narrative Science’s story highlights, which are based on a user’s data. Imagine: an AI tracks interesting topics for the researcher and packages everything so she can pay attention to the highlights. Now she can spend her time understanding users and developing perspective instead of watching and summarizing endless hours of video.

Quill’s Story Highlights for Power BI

“With the addition of NLG, smart data discovery platforms automatically present a written or spoken context-based narrative of findings in the data that, alongside the visualization, inform the user about what is most important for them to act on in the data.”

Gartner, Smart Data Discovery Will Enable a New Class of Citizen Data Scientist

Next: The Live Persona

A persona is an archetypal user for your product. They are a fictional character based on many research data points, including user interviews, market research, and surveys. Their purpose is bringing research to life and creating empathy for your user.

Personas are multi-dimensional. They include user information like profession, tech use, favorite brands, and, most importantly, needs, goals, and pain points.

Today, compiling information and research into a persona is a lengthy process. Additionally, personas must occasionally be updated to keep up with current trends and usage patterns, even if the product hasn’t changed. Every time you update a persona, you have to start the process again.

My team in Microsoft Office uses personas on a daily basis, and you can often hear lively discussions like, “What would ‘Mike’ do in that situation? Is this the right scenario for his persona?” or “How would ‘Kat’ react? Does what we designed solve her needs?”

What if we could just ask her?

What if persona creation could be automated? What if you could take user interviews and information gathered from users’ digital footprints to infer behaviors? Instead of sampling from a small user set and manually creating a persona, we could generate it using data from millions of people, and that data could always be current.

Here’s a glimpse into an existing technology: Apply Magic Sauce is a powerful tool that gathers tons of information about users’ online behaviors and overlays that information with a psychological behavioral model (the Big Five). While each piece of such information is individually too weak to produce a reliable prediction, when thousands of data points are combined, the predictions become very accurate. To give you a hint of Apply Magic Sauce’s predictive power:

  • With 10 Facebook likes, their tool can predict a person’s personality and responses better than work colleagues.
  • With 70 likes, they can predict it better than their friends.
  • And with 300, better than that person’s partner.
How many likes does it take to know you?
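The statistical idea behind those numbers, many individually weak signals adding up to a strong prediction, can be sketched in a few lines. This toy model uses invented data and weights, not Apply Magic Sauce’s actual system: it scores a binary Big Five-style trait from simulated likes and shows accuracy climbing as more likes are observed:

```python
# Toy illustration of weak signals adding up: each "like" nudges a trait
# score slightly; hundreds of nudges become a confident prediction.
# Data and weights are invented purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_people, n_likes = 5_000, 300

# Each like carries only a faint correlation with the trait.
true_weights = rng.normal(0, 0.05, n_likes)
likes = rng.integers(0, 2, (n_people, n_likes))            # binary like matrix
trait = (likes @ true_weights + rng.normal(0, 0.5, n_people)) > 0

model = LogisticRegression(max_iter=1000).fit(likes, trait)

# Accuracy climbs as more likes are observed, echoing the 10/70/300 figures.
for k in (10, 70, 300):
    masked = likes.copy()
    masked[:, k:] = 0                                       # hide all but k likes
    print(k, round(model.score(masked, trait), 2))
```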

And don’t think that this is limited to Facebook. Cambridge Analytica, a controversial commercial company owned by right-wing billionaire Robert Mercer, used similar techniques to target UK and US voters and influence their voting behavior. In the UK, CA helped the Brexit “leave” campaign, and in the US they worked with the Trump campaign. But, for our non-political purposes, this kind of system could easily create personas that are reliable, accurate, and always fresh.

The next step is making personas interactive: relatable and capable of telling their stories. We can go beyond an AI that describes a persona in natural language. Wordsmith is already looking at integrating voice in its auto-insights tool. A “live” persona could speak about itself in a first person voice, and even answer questions.

Data+Insights+Predictions+Communication = Live persona

But the live persona could do even more to help designers and other stakeholders empathize with it. It could be a collection of different artifacts: the persona’s favorite playlist, a Facebook page, a Twitter account — all based on real data.
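What might a live persona look like to the systems consuming it? A speculative sketch: all the fields, example values, and the first-person introduction template are invented for illustration, and a real version would be refreshed continuously from aggregated research data.

```python
# Speculative sketch of a "live" persona: a data object that could be
# refreshed from aggregated research data and introduce itself in first
# person. Fields and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class LivePersona:
    name: str
    profession: str
    goals: list = field(default_factory=list)
    pain_points: list = field(default_factory=list)

    def introduce(self) -> str:
        """Speak about the persona in a first-person voice."""
        return (f"Hi, I'm {self.name}, a {self.profession}. "
                f"I want to {self.goals[0]}, "
                f"but {self.pain_points[0]}.")

kat = LivePersona(
    name="Kat",
    profession="freelance event planner",
    goals=["keep vendors and clients on the same page"],
    pain_points=["my schedule lives in five different apps"],
)
print(kat.introduce())
```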

Br[Ai]nstorming

We often use personas as part of our brainstorming. Brainstorming is like a ping-pong game: ideas form, then bounce around quickly. They are considered, reversed, manipulated, and quickly exchanged back and forth.

From: http://gph.is/2d91I1W

Alas, brainstorming alone is hard. Many designers find themselves working on their own with no one to brainstorm with but themselves. What if there was a companion AI that could help when you want to brainstorm and you are a team of one?

There are already dozens of idea generation techniques. Consider how an AI trained in these techniques could provide you with the right method or activity at the right time: provoking you, inspiring with words or images, asking the right questions to get your creative juices flowing. If it has access to live personas, it could even present you with real scenarios and pain points.
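Even a first version of such a companion could be useful. Here’s a deliberately simple sketch built on SCAMPER, a real ideation technique; the prompt wording and the random selection are my own illustration, whereas a real companion would pick techniques adaptively:

```python
# A deliberately simple brainstorming companion: it serves SCAMPER prompts
# (a classic ideation technique) applied to whatever you're stuck on.
# A real companion AI would choose techniques and provocations adaptively.
import random

SCAMPER_PROMPTS = {
    "Substitute": "What part of {topic} could you swap for something else?",
    "Combine": "What could you merge {topic} with?",
    "Adapt": "What existing solution could {topic} borrow from?",
    "Modify": "What happens if you exaggerate one aspect of {topic}?",
    "Put to other use": "Who else could use {topic}, and for what?",
    "Eliminate": "What could you remove from {topic} entirely?",
    "Reverse": "What if {topic} worked in the opposite order?",
}

def prompt_me(topic: str) -> str:
    """Return one randomly chosen provocation for the given topic."""
    technique, template = random.choice(list(SCAMPER_PROMPTS.items()))
    return f"[{technique}] {template.format(topic=topic)}"

print(prompt_me("onboarding new users"))
```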

Part VI: Final thoughts

I talked about a few opportunities where AI can enhance the design process. While many view AI as a threat, it can also be a gift: augmenting, complementing, and completing human abilities.

The successful collaboration combines the human as the creative force and the AI as the pragmatic partner.

The human defines the vision and the values. She brings creativity, unpredictability and inefficiencies that are part of any creative process, as well as an idea of the desired final result.
The AI deals with the data and the technical constraints: its value is efficiency in generating endless iterations based on the human-specified parameters. But let’s not confuse permutations with originality or good design — it’s the human who defines the parameters, while the AI executes.

Image: James Vaughan via Flickr https://www.flickr.com/photos/40143737@N02/4747873754

The AI-generated results may look different from what a human would have developed, but that’s because we have different problem-solving approaches. Machine output is always rooted in logic and repetitive permutation. The beauty of human creativity is that it understands logic, but often diverges from it just to experiment and explore. Although machines can generate endless variations, only a human recognizes the outstanding ones. By curating machine-generated designs, a human assigns value to results and artifacts that, to a machine, all seem the same. But design is not just generating: it’s paring down options and making decisions.

Given the right tools, the product design process could greatly benefit from coupling intuition with grit, empathy with pragmatism, and curiosity with efficiency. When you combine human with machine, you get something greater than the sum of its parts.

We are better together.

Join 30,000+ people who read the weekly 🤖Machine Learnings🤖 newsletter to understand how AI will impact the way they work and live.

Special thanks to Ming-Li Chai, Laura Neumann and Rolf Ebeling for their feedback, to the wonderful Kimberly Koenig @plethora_etc for her editorial advice and support. And to Erez Kikin-Gil, always.

