6. What’s Good? A Co-Design Experiment with AI

Building a Spectacle to Explore Human Feeling and Good Design

E.Louise Larson
How Might We…
8 min read · Dec 5, 2018


Abstract

This project proposes to explore the relationship between images, design, and autonomy through co-design. A co-designer will be built using a Bayesian Generative Adversarial Network (GAN). This algorithm will autonomously design advertising posters so we might observe what autonomously generated posters reveal about human aesthetics and the design process.

Introduction

Art and Spectacle

The idea of autonomously generated art is inspired by Situationist thinking and Avant-garde art. Not only were these movements related, but they both explored the relationship between identity and image.

Situationists explored the link between hyper-capitalism and social alienation. They adopted Karl Marx’s theory of alienation and believed that people felt socially alienated when they gave up living authentic lives in order to do more work.

Situationists were concerned that the more people wanted to buy, the less they wanted to live.

Avant-garde art is a broader movement than Situationism, but these artists too were concerned with living authentic lives. A consistent theme in the Avant-garde is questioning the relationship between consumer and artist. Illuminating this power dynamic called into question consumer desires and social roles.

These exploratory thinkers wanted people to contemplate how their identities were largely designed by the society they were born into. As a designer, I’m also interested in this dynamic. Specifically, I’m interested in the intentions and ethical implications of designed objects, experiences, and environments. Who are these things for? What is the relationship between these things and the larger contexts they exist within?

Science and Anathema

Computer scientists use simple math, like probability, to write complex algorithms for Artificial Intelligence (AI). The AI uses these algorithms to make decisions and follow steps to accomplish a task. For a human, this would be a bit like using a weighted coin flip to make every decision.
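To make that concrete, here’s a toy sketch in Python. The actions and probabilities are invented for illustration; a real AI would learn these weights from data.

```python
import random

# Toy decision policy: the agent "decides" by sampling from a
# probability distribution rather than deliberating like a human.
actions = ["move_left", "move_right", "wait"]
weights = [0.25, 0.60, 0.15]  # invented for illustration

decision = random.choices(actions, weights=weights, k=1)[0]
print(decision)  # e.g. "move_right"
```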

On its own, AI doesn’t do a very good job of learning. There is new research, however, exploring how robots can mimic embodied cognition and procedural memory.

Researchers from MIT have been working on a robot named Anathema that autonomously learns. Learning, however, isn’t like following a recipe. It depends on context and content. For this reason, learning is something science is still working to understand.

Even though learning can’t be prescribed with 100% accuracy, there are some general methods that can commonly be applied. Lectures, experiments, instructions, and practice are all variations on how people learn. Each of these kinds of learning has a different kind of associated reasoning. The most common is deductive reasoning. This kind of learning can be imitated.

To mimic human learning using deduction, these researchers taught Anathema to plan.

Planning is a process by which you use your senses to observe your current situation, frame the problem space, ideate potential solutions, and test what is possible. Sound familiar? Design Thinking is a kind of planning.
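As a rough sketch of that parallel in Python (all four function names are placeholders, not anyone’s real API), one pass of the loop might look like this:

```python
def design_thinking_loop(observe, frame, ideate, test):
    """One pass of a planning loop, showing the parallel with
    Design Thinking. All four callables are placeholders."""
    situation = observe()                       # sense the current state
    problem = frame(situation)                  # define the problem space
    candidates = ideate(problem)                # generate potential solutions
    return [c for c in candidates if test(c)]   # keep what holds up
```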

If this robot can learn to plan, can it learn Design Thinking?

Robo-planning

MIT’s planning research is a big deal for AI and robots. The paper, From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning, is listed in the bibliography below. To explain how the robot works, the authors use the example of a robot butler.

Let’s imagine that the family who owns this butler moves into a new house. In this new house, the robot has to learn a new floor plan in order to perform its tasks. If you’ve ever moved into a new home with a pet, you have probably watched the pet adapt to its new environment in a similar way.

A pet dog might run around the house sniffing each room until he finds a spot that meets his dog-requirements for a nap. Your robot butler doesn’t have a sniff sensor, so instead it uses vision sensors for data input. This “seeing” is like taking a 3D picture of a room. The robot uses this 3D snapshot to navigate, then works out how it might perform its task in this environment. This is how robot learning, or planning, happens.
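Here’s a minimal sketch of what that snapshot-to-map step could look like, assuming the robot’s depth sensor returns a list of 3D points. This is a simplification for illustration, not the actual MIT pipeline.

```python
import numpy as np

def occupancy_grid(points_3d, cell_size=0.1, shape=(50, 50)):
    """Project 3D sensor points onto a 2D floor-plan grid the robot
    can navigate with. Simplified for illustration."""
    grid = np.zeros(shape, dtype=bool)
    for x, y, z in points_3d:
        if 0.05 < z < 2.0:                     # skip the floor and ceiling
            i, j = int(x / cell_size), int(y / cell_size)
            if 0 <= i < shape[0] and 0 <= j < shape[1]:
                grid[i, j] = True              # something occupies this cell
    return grid

# A chair leg at (1.0 m, 2.0 m), half a meter off the ground:
print(occupancy_grid([(1.0, 2.0, 0.5)]).sum())  # -> 1
```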

If the robot is tasked with cleaning a glass, what does that planning process look like? In this scenario, the robot butler is in a new home. It doesn’t have a mental map of where the kitchen is. The robot first needs to interpret the data around itself to determine if it is already in the kitchen.

If the glass has liquid in it, the robot needs to pour the liquid into the kitchen sink, then place the glass in the dishwasher. What if the dishwasher door is closed? How does the robot need to adjust its patterns of behavior in a constantly changing environment?

Once the robot has identified the kitchen and dishwasher, it will follow a process like this:

What a human thinks of as a 2-step process takes a robot almost 10 steps.

This dishwasher example seems simple at first, but ends up being really complicated. The robot has a lot of data to process in order to perform simple tasks. But the research team working on this robot came up with a really clever system of reasoning for it.

When this robot has identified a kitchen, it also identifies all the data it is programmed to associate with kitchens. It assumes that a sink, oven, refrigerator, and dishwasher are all in the kitchen. If the robot is in a kitchen, there must be a sink. If the robot has a dirty glass, it must have some liquid that needs to be poured out. All liquid must be poured into a sink. All kitchens have a sink. All dirty glasses must be taken into the kitchen, then poured into the sink. These assumptions drastically shorten the butler’s process list.

In this second scenario, the butler is programmed to better plan how it will load the glass into the dishwasher. Many of the yes-or-no questions that complicate the first scenario are compressed in the second. This kind of thinking makes a big difference.
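In Python, a toy version of that assumption-based planning might look like the sketch below. The room defaults and step names are invented for illustration; the actual robot learns its symbols rather than having them hard-coded.

```python
# Default facts the butler assumes once it recognizes a room type.
ROOM_DEFAULTS = {
    "kitchen": {"sink", "oven", "refrigerator", "dishwasher"},
}

def plan_dirty_glass(current_room, glass_has_liquid=True):
    """Plan the dirty-glass task using kitchen assumptions
    instead of re-checking every fact with the sensors."""
    steps = []
    if current_room != "kitchen":
        steps.append("carry glass to kitchen")
    facts = ROOM_DEFAULTS["kitchen"]           # assume, don't re-sense
    if glass_has_liquid and "sink" in facts:
        steps.append("pour liquid into sink")  # every kitchen has a sink
    if "dishwasher" in facts:
        steps.append("open dishwasher if closed")
        steps.append("load glass into dishwasher")
    return steps

print(plan_dirty_glass("living_room"))
```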

The robot’s raw sensory data, like the 3D snapshot, is low-level input. The robot must analyze this input to determine what might be useful for the task at hand. Analyzing these complex situations to make process judgements is high-level planning. This high-level planning is similar to how humans reason.
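In code, that compression from low-level data to high-level facts might be sketched like this. The snapshot format and detection logic are stand-ins for real perception.

```python
def extract_symbols(snapshot):
    """Compress a raw 3D snapshot into the handful of high-level
    facts the planner actually needs. Illustrative only."""
    objects = set(snapshot["objects"])
    return {
        "in_kitchen": bool({"sink", "oven"} & objects),
        "dishwasher_present": "dishwasher" in objects,
        "dishwasher_open": snapshot.get("dishwasher_door") == "open",
    }

snapshot = {"objects": ["sink", "dishwasher"], "dishwasher_door": "closed"}
print(extract_symbols(snapshot))
# {'in_kitchen': True, 'dishwasher_present': True, 'dishwasher_open': False}
```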

Robo-reason

As I’ve discussed previously, human reason is fundamental to Design Thinking. Reason allows humans to use their senses, intuition, and best judgement to frame a design problem. This process requires balancing empathy and definition with testing and ideation.

High-level planning means choosing which bits of information the robot might need in order to accomplish a given task. This framing helps the robot make better decisions.

If robot butlers and complex AIs are learning to conduct high-level planning, how might they also learn abductive reasoning? If a robot can process millions of data points to make millions of frames in just a few seconds, wouldn’t these robots make excellent designers?

This line of inquiry involves many interdisciplinary fields, not the least of which is Design.

Research Hypothesis

A robot like MIT’s Anathema raises many questions at the intersection of technology, design, and ethics. This proposal outlines how to explore a subset of those questions using a Bayesian Generative Adversarial Network (GAN). This Machine Learning (ML) algorithm will use the Carnegie Mellon Swiss Poster Collection and a set of design rules in order to autonomously generate new Swiss posters.

There are three primary questions I’m asking in this experiment:

  1. What’s Good? What might humans interpret in these artificial simulations of Swiss posters? What might human interpretations of GAN-generated posters reveal about aesthetics?
  2. What does a Swiss advertising language look like? These posters use the national languages of Switzerland (German, French, and Italian). Presumably, the AI will generate a language of its own derived from the Swiss training data. Will human observers be able to tell what was GAN-generated?
  3. How might AI use data derived from Design (and therefore human sense perception, empathy, reason) to create “good” design? How might Design (and art) change the way an AI is trained to perceive the world and plan?

Swiss Posters

Swiss posters are a hallmark of 20th century design that continue to shape visual communication to this very day. Carnegie Mellon University has a collection of over 300 posters donated by Ruedi Ruegg.

http://luna.library.cmu.edu/luna/servlet/view/all?sort=date%2Cdesigner%2Cclient%2Ctitle

The CMU collection covers the 1950s to the present day. These posters are known for a clear and expressive style that specifically leverages grid, color, form, and typography. Posters were created for a variety of advertising purposes, from business communications to museum exhibit announcements. German, French, and Italian all appear frequently in the posters.

Materials and Methods

Generative Adversarial Networks (GANs) are algorithms that make and refine images. A researcher gathers strong examples of the kind of images the GAN should make and puts these images in a training data set.

The GAN uses this data set like an answer key. The generator starts from random noise and produces candidate images. These images are fed into the discriminator, which decides whether each image looks true to the answer key. Images that don’t look like the training set are marked “false,” and that feedback pushes the generator to improve. Images that look more like the training data are marked “true” and act as positive reinforcement.

The ideal output for a GAN is an image that accurately resembles the training data. That whole process looks like this:
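In PyTorch, the heart of that loop might look like the sketch below. The tiny fully-connected networks and image size are placeholders; a real poster GAN would use convolutional networks and far larger images.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: random noise in, candidate image out.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: image in, probability that it is "true" (real) out.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Teach the discriminator: training posters are "true",
    #    generated posters are "false".
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fakes.detach()), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Teach the generator: it is rewarded when its fakes are
    #    judged "true", the positive-reinforcement cycle.
    g_loss = loss_fn(discriminator(fakes), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```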

This project will use PyTorch to build the GAN. The training set will be made of Swiss posters scraped from Carnegie Mellon University’s special collection. The Bayesian analysis will rank grid, color, form, and typography in the Swiss poster training data.

The addition of these data points makes for a more complex GAN. Each GAN-generated poster has to be marked “true” for grid, then color, then form, then typography.

The below image demonstrates how the GAN uses “true” training data to generate grid-aligned Swiss posters. “True” grid posters would then have to pass the color check in order to continue the positive reinforcement cycle.
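A sketch of that layered judging is below. The attribute critics are untrained placeholders; in the real project, each would be trained on posters labeled for that attribute.

```python
import torch
import torch.nn as nn

def make_critic(img_dim=28 * 28):
    # One small binary classifier per design attribute (placeholder).
    return nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2),
                         nn.Linear(128, 1), nn.Sigmoid())

critics = {attr: make_critic()
           for attr in ["grid", "color", "form", "typography"]}

def passes_all(poster, threshold=0.5):
    """A generated poster continues the positive-reinforcement cycle
    only if every critic scores it above the threshold, checked in
    order: grid, then color, then form, then typography."""
    for attr, critic in critics.items():
        if critic(poster).item() < threshold:
            return False, attr    # rejected at this attribute
    return True, None

poster = torch.rand(1, 28 * 28)   # stand-in for a generated poster
print(passes_all(poster))
```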

Conclusion

This experiment is rooted in science but is meant to explore art, design, and psychology. What’s Good is a proposal to build a generative algorithm to create art. This project uses recent breakthroughs in artificial intelligence and computer science to explore how people might feel about autonomously generated design.

The below examples are the top Google Image results for “Bad Graphic Design.” These images may not be the most beautiful, but they were clearly designed with a specific intention.

Will humans value these designs more because they were created by another human? Presumably neither these posters nor the Swiss posters were made using the Design Thinking process as we know it today.

Design Thinking or not, humans use their senses, empathy, and reason to negotiate problem spaces. Can AI be trained to analyze problems in the same way?

Bibliography

Konidaris, G., Kaelbling, L. P., & Lozano-Pérez, T. (2018). “From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning.” Journal of Artificial Intelligence Research, 61.
