2018 Austin Startup Week: AI Sessions @ Galvanize

A Conversation about AI and Design

Thanks to Austin Startup Week and my panel host, Charlie Burgoyne, for posing these questions and providing a space for open conversation and imagination.

Jennifer Aue
IBM Design
Oct 4, 2018 · 10 min read

--

Q (Charlie): It’s 2038, the cover of Wired features a top designer. What do they say about how their work has changed over the last 20 years?

A (Me): Good design is good design, regardless of the medium. Bringing our understanding of concepts like hierarchy, contrast, grid, tension, pacing, etc. to a more abstract space won’t change that.

In 2038, design will be informed by data in ways we could never achieve before. Designers have the opportunity to explore new problem spaces alongside new collaborators: data scientists, data miners, data trainers, and data architects.

In that regard, the most immediate change will be a focus on data visualization. Once data is ready to be analyzed, how will we make the insights immediately accessible and enjoyable to understand? It remains to be seen where we’ll go beyond the foundations Edward Tufte laid, now that machines are analyzing multiple layers of inputs and queries.

Not only do we need to find ways of visualizing this complexity, we also need to reveal where the data came from, its reliability, its age, and its proportionate effect on the system’s recommendations.

Experiments in data visualization by artist Aaron Koblin.

At a deeper level, designers will be adapting their problem-solving and critical-thinking skills to discover new ways to use and combine data, uncovering insights that are meaningful to users.

We need to bridge the gap between the scientists writing the algorithms, the results users want, and the results they didn’t realize they needed.

Beyond that, design will shift its focus from creating functional forms and delivering Q&A-style transactions to building relationships. This will bring people onto our teams with backgrounds in linguistics, writing, journalism, and psychology, maybe even improvisational comedy or acting. It will force designers to start thinking in terms of space, time, and emotion.

What’s also interesting about your question is the date you’ve chosen, which puts this person in a new generation of designers who are just now entering the workforce. Specifically, Generation Z.

An unexpected observation I’ve made in working with these students is that their priorities, the way they relate to technology, and their ideas for where they want to see it evolve are different from, and better than, those of the generations working today.

Problems like bias and privacy that currently plague us are rooted in culture and history. Younger people are starting from a place of valuing equality and acceptance, and their ideas for where technology is heading directly reflect that. How they relate to their devices and apps is more fluid. They tend to set these issues aside as things of the past and focus on higher aspirations, largely on bringing deeper understanding between humans by enhancing how we communicate.

Q: Is a general AI possible and if so, what role does design play in testing it, i.e. should the Turing test incubate design?

By “general AI” you mean machines that can understand, reason, and learn, also known as strong AI: a machine that can basically do everything a human can do.

This is also referred to as the beginning of the singularity, which is estimated to happen within the next 45–50 years.

Ray Kurzweil: The Coming Singularity

Watson beat the best human players at Jeopardy! in 2011, and chatbots have since fooled judges in Turing-style competitions. But the Turing Test only measures how well a computer can simulate natural conversation. While speech is an important component of realizing what seems to be our cumulative vision for AI, a fully functioning personal companion and collaborator, there are many other challenges to tackle before that vision is realized. Namely, the qualities of cognition: the ability to reason, learn, and understand. Can it create an inside joke? Can it have empathy? Can it have its own sense of self?

These are problems of scale, of speed, of science, and of human nature. And we’re looking to nature to figure out how to solve them. What we’ve achieved so far, and where new discovery will come from, is derived from physics, chemistry, biology, and neurology. AI is perhaps the first time we’ve had to pull from all of these disciplines to solve for a single problem space.

Ray Kurzweil, “How to Create a Mind”

With AI we are trying to understand how the human mind works by reverse engineering it. In those terms, I see the tests for marking achievement being twofold.

The first is a measurement of architecture and engineering: measuring the machine’s ability to respond with better insights using less data and less time, likely with a “one algorithm to rule them all” solution. We’ll need to continue evolving the way we engineer computers, bringing them into our world of analysis across multiple dimensions simultaneously. This poses both a software problem, with neural networks and deep learning being the first step, and a hardware problem.

Intel presents neuromorphic computing at CES 2018: an advertisement for biologically inspired computer chips.

The second is a measurement of development: measuring a computer’s cognitive abilities against existing tests for human development. How far along have we brought machine cognition? Can it perform to the equivalent of a 3-year-old mind? A 10-year-old? Perhaps one day, a mind that surpasses what we’re able to achieve in our own lifespan?

Q: How does the general public’s perception of AI affect their expectations of product design? Do we need and expect smart blenders?

We expect what we see on movie screens and read in sci-fi novels, and I love that that’s where our minds are with this. The arts are, and should always be, where we envision new worlds and inspire innovation. Without those expectations, we’ll never reach such lofty goals.

I think the initial awe at technical achievements like Siri, Watson, AlphaGo, Pepper, and a long list of others immediately cues up those memories of sci-fi visions and leads to the question: ah, is this it? Do I finally get my own flying car or home robot or holodeck? And the answer is yes, those things are coming, because once humans imagine something, it’s simply a matter of time before we can make it, even if that period of time lasts hundreds or thousands of years.

Even though we’re experiencing a dramatic leap forward right now, we’re just at the beginning of understanding how to build the components of these visionary systems. We have a long way to go before we understand how to bring those components together to create complete experiences.

Do we need these things? We don’t need much of anything to survive. We need what’s outlined in Maslow’s Hierarchy. The question is more about quality and aspiration. Do we want to stop invention once we’ve fulfilled all the layers of the pyramid, or do we press on to discover new ways to experience life? I would hope that the answer is always to press on. So in those terms, yes, make a smart blender. The market will decide through natural selection whether that blender will survive as part of a longer term vision of life. It will succeed at providing valuable insight into what we create next, even if it fails as a product.

Maslow’s Hierarchy of Needs

Q: How does AI impact physical space design?

There are two sides to that question: how we design environments, and how AI integrates into them.

For the first one, I think one of the best use cases I’ve seen to date for AI is generative design: using machine learning to analyze potential solutions for a design, then recommending the best outcome based on specified parameters. For example, Boeing is using this technology to redesign airplane parts to maximize strength and minimize weight. The same technology is being used to design everything from bike helmets to bridges. The results don’t look like anything we’ve seen from human engineers. They’re organic, non-linear, and alien-looking. That’s the moment you realize that AI truly is a tool for improving our own work in ways we don’t expect and couldn’t achieve on our own, but desperately need.
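
To make that concrete, here’s a minimal, hypothetical sketch of the idea (this is not Boeing’s actual pipeline; the parameters, coefficients, and scoring are invented for illustration): generate many candidate designs, score each against the specified objectives, and recommend the best.

```python
import random

# Hypothetical design parameters for a structural part (invented for illustration).
PARAM_RANGES = {
    "thickness_mm": (1.0, 10.0),     # material thickness
    "lattice_density": (0.1, 0.9),   # how much internal lattice fill
    "rib_count": (0, 20),            # number of reinforcing ribs
}

def random_design():
    """Sample one candidate design from the parameter space."""
    return {
        "thickness_mm": random.uniform(*PARAM_RANGES["thickness_mm"]),
        "lattice_density": random.uniform(*PARAM_RANGES["lattice_density"]),
        "rib_count": random.randint(*PARAM_RANGES["rib_count"]),
    }

def score(design):
    """Toy objective: reward strength, penalize weight.
    A real pipeline would run physics simulations here, not arithmetic."""
    strength = (0.5 * design["thickness_mm"]
                + 2.0 * design["lattice_density"]
                + 0.3 * design["rib_count"])
    weight = (1.2 * design["thickness_mm"]
              + 0.8 * design["lattice_density"]
              + 0.5 * design["rib_count"])
    return strength - weight

# Generate thousands of candidates and recommend the best-scoring one.
best = max((random_design() for _ in range(10_000)), key=score)
print(best, round(score(best), 2))
```

Real systems explore the space far more cleverly than random sampling, which is part of why their outputs look so unfamiliar to us.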

I love this new area of design. Imagine what your office would look like if the architecture and ergonomics were informed by years of data on how people navigate, work, and live within that space. Straight hallways might morph into a simplified network of passageways. Stairs replaced with ramps or spirals. Desks become notches and nooks within walls. Whatever it is, it will be informed by human behavior and needs in ways we could never take into account to this degree before. Now extrapolate that out to a city, and combine it with data about accessibility and pollution and utilities. Everything, simply everything, will change.

As for how we will use AI in these spaces, this also poses new challenges for design. Everything we’ve ever designed up to this point has been about function and transaction. A chair does this. A website gives us that. AI puts a new dimension into the questions we need to ask, and the ways we can solve for them, because it’s a technology that fosters relationships.

We now need to consider, explore, and research exactly how we want those relationships to behave. There will be varying solutions for that based on users and purpose and the context of the moment. Do we want to just talk to a box on a table or do we want to look at a face? Should it even be a visible piece of hardware or something built into our surroundings—or our brains? Should it use gestures? Be witty? How does it earn our trust or should we even want it to attempt that and rather design it to always remind us that it’s a machine?

These are philosophical, moral, emotional, and psychological questions that we barely understand how to pose right now. But we can hack through how they might be solved and hopefully do enough work to be prepared for the time when technology needs the answers.

Example hack: Want to design an omnipresent Computer like the one in Star Trek? Give your friend access to Google, lock them in a closet, then try conversing with them for a while the way you imagine you would with an AI. Now think about how that felt vs. what you’d like it to feel like. Did you like talking to the air? Was it cool or annoying when they were sarcastic?

Q: How does adaptive design come into play and where do we see it today?

So, let’s be clear about what you mean by “adaptive”. I believe you’re describing a higher level of what today we call “responsive” design—design that can adapt to the device it’s being channeled through. For example, a website that changes from what you see on your laptop to an optimized view with prioritized functionality on your phone.

I’ll point again to the fact that AI is adding more complex layers of usability onto our formerly 2D and 3D digital world. Because the algorithms we’re building are layered with potentially hundreds of points of input (through sensors, profiles, news, locations, events, weather, and so on), we can generate countless possible responses. Those responses need to be designed to take into consideration the timing of delivery, the user’s proximity to their devices and to other people, safety, the tone and style of the response, expression, gesture, priorities, permissions, and of course all of the audio, visual, sensory, and robotic devices we can deliver these responses through.
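
As a thought experiment (the signals, names, and rules below are invented for illustration, not any product’s actual logic), choosing how to deliver a response might start as simply as scoring the current context:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A few of the many signals an AI system might weigh (invented for illustration)."""
    user_is_driving: bool
    others_present: bool
    hour_of_day: int

def choose_modality(ctx: Context) -> str:
    """Pick a delivery channel based on safety and social context."""
    if ctx.user_is_driving:
        return "audio"          # hands and eyes are busy: speak, don't display
    if ctx.others_present:
        return "notification"   # keep it discreet around other people
    if ctx.hour_of_day >= 22:
        return "silent_text"    # late at night: no sound
    return "screen"             # default: full visual response

print(choose_modality(Context(user_is_driving=True, others_present=False, hour_of_day=9)))
# -> audio
```

The real design problem is that the list of signals and channels is enormous and always changing, which is exactly why this can’t stay a handful of if-statements for long.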

On the flip side, we also need to be designing interactions that allow us to gather valuable feedback and data from users so the system can continuously adapt and improve upon its knowledge, and in turn, the quality of its responses. Connected devices, from edge to center, should be learning from users’ reactions to the outputs, as well as noticing everything that happened during that cycle, to improve their underlying knowledge.
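
A toy version of that feedback loop, with assumed names and no real learning library: nudge a per-response-type score up or down based on the user’s reaction, so the system ranks future responses differently.

```python
# Toy feedback loop (invented names): adjust response-type scores from user reactions.
# A real system would update a trained model, not a dictionary.
scores = {"joke": 0.5, "plain_answer": 0.5, "follow_up_question": 0.5}

def record_feedback(response_type: str, user_liked: bool, rate: float = 0.1):
    """Nudge the score toward 1 on positive feedback, toward 0 on negative."""
    target = 1.0 if user_liked else 0.0
    scores[response_type] += rate * (target - scores[response_type])

record_feedback("joke", user_liked=False)        # the joke fell flat
record_feedback("plain_answer", user_liked=True) # the direct answer landed
print(max(scores, key=scores.get))               # -> plain_answer
```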

Where I see this all headed is towards new fields of design. Design Choreography. Conversational Design. Relationship Design. I’m sure many more I haven’t even considered, but all representing new ways of thinking about how we design for relationships and machines we can cognitively communicate with.

Q: So what’s the most important thing designers can do right now to learn how to design for AI?

The human mind is the most sophisticated piece of technology on the planet. It makes sense that, as a society, we are determined to discover how it works and to imagine ways to turn those findings into new solutions. Luckily, every single one of us has our own testing lab sitting atop our shoulders. Every mind is uniquely formed and differently wired, and each can shed light on our quest to understand how the brain works.

If you want to design and build AI, begin by observing, testing, and understanding the inner workings of your own mind.

Jennifer Sukis is a Design Principal for AI and Machine Learning at IBM based in Austin, TX. The above article is personal and does not necessarily represent IBM’s positions, strategies or opinions.
