The Future of Design: Computation & Complexity.

A transcript of my SXSW 2019 talk

Stephen P. Anderson
Mar 10 · 37 min read
Cover slide!

Design is in the midst of a shift. A shift that will make many of our present skills obsolete, and demand we learn new skills, or become… irrelevant. That’s what I want to talk about today. Nothing too serious! [Laughter]

Before we dive in though, I want you to do two things:

[Aside] Yes, I’m a former teacher. Even in a keynote style talk, there’s gotta be some class activity, right? ;-)

First, make a list of all the things that you — as a designer — do in a week. Prepping screens for handoff. Exploring icon options. Customer interviews. Facilitating meetings. (And while we’re on the topic, I will be speaking primarily to designers today; I do take a very broad view of who is a designer, but this is aimed mostly at design practitioners).

[Pause]

Second, (and something a bit more fun) I want you to answer this question:

What stories (truth or fiction) have helped you reflect on the unintended consequences of tech?

Think Black Mirror, Blade Runner, Ex Machina, Her, Continuum, Little Brother, Fahrenheit 451…

Full credit to Kim Goodwin for this question

Take a couple minutes to answer these questions.

[Pause]

Okay. We’ll come back to both of these…

I want to kick things off with a statement:

Over the next 40 minutes, these are the questions I want to answer:

  1. What is the shift that design is going through?
  2. What new skills will we need to develop?
  3. What doesn’t change in all this?

Stated another way, here’s the emotional structure for the next 40 minutes:

  1. A splash of cold water
  2. A warm blanket
  3. Some comforting words

[Laughter]

Let’s start with a splash of cold water. Also, the first half of my assertion:

1. Design is in the midst of a shift

I want to roll the clock back a bit, to one of several defining moments for me, personally.

This one was in November 2014, the weekend before Thanksgiving. I had been at a growth hacking/conversion optimization retreat at Texel (pronounced ‘Tessel’), this really cool island north of Amsterdam. It’s one of those remote places you can only get to via ferry a few times a day.

Anyway, this was a Sunday afternoon, following the close of the event. A group of us, mostly the speakers and conference organizers, were chatting about the industry, while we waited for the ferry to pick us up. It was during this conversation that three worlds converged:

  1. First, the world of web page conversion and metrics, analytics, and so on. The stuff I’d been learning about all weekend long.
  2. Second, what I know about component libraries, design systems, style guides, and so on — how we’ve done a great job articulating and refining the bulk of UI design patterns.
  3. Third, what I was seeing begin to happen with AI and machine learning. (This was either shortly after, or just before The Grid was announced.)

Put these three things together, and it quickly became clear to me: Why would you ever hire a designer to design a custom web site, especially if your focus is on conversion metrics and click-through rates? An AI could test out 1,000s of combinations, optimizing for the one that performs best, all while a designer is still exploring three options in Sketch. Custom web site design is something that could be completely outsourced to technology, with occasional monitoring by a single, low-paid worker. (I’m not saying this is ideal, but hopefully you can see how this would be appealing to most businesses.)
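To make this concrete, here’s a toy sketch of the kind of optimization loop I’m describing: an epsilon-greedy “bandit” that mostly serves whichever page variant is converting best, while still exploring the alternatives. The variants and conversion rates are invented for illustration; real systems are far more sophisticated than this.

```python
import random

def run_bandit(conversion_rates, visitors=50_000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: mostly serve the variant with the best
    observed conversion rate, occasionally explore a random one."""
    rng = random.Random(seed)
    n = len(conversion_rates)
    shown = [0] * n      # times each variant was served
    converted = [0] * n  # conversions observed per variant

    def observed_rate(i):
        # Untried variants look maximally promising, so they get tried.
        return converted[i] / shown[i] if shown[i] else 1.0

    for _ in range(visitors):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                  # explore
        else:
            arm = max(range(n), key=observed_rate)  # exploit
        shown[arm] += 1
        if rng.random() < conversion_rates[arm]:    # simulated visitor behavior
            converted[arm] += 1
    return shown, converted

# Three hypothetical layout variants with "true" rates unknown to the bandit.
shown, converted = run_bandit([0.02, 0.03, 0.08])
```

Given enough traffic, the loop funnels almost all visitors to the best-performing variant — no designer in Sketch required.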

While I haven’t done web site design in a while, I saw the same thing coming for traditional software, enterprise applications, mobile app design, and so on. This stuff is far harder for a machine to take on, but it’s only a matter of time. In that moment, I realized I had built a career out of being an expert at user interfaces, a skill that — I believed — would soon be obsolete.

I could share more such moments, but this one should suffice. It was — for me — an early warning sign of what was (and is) coming. What designers do is changing.

Think of this shift as moving from a craft—such as pottery—where we have direct, hands-on control over the outcome, to a situation where we’re twisting knobs and dials, adjusting the parameters that might influence a change.

Let’s call this the shift from Design 2.0 to Design 3.0. Yeah, I know… Shoot me.

To adequately understand this shift from Design 2.0 to 3.0, we need to go back to the first big shift, from Design 1.0 to Design 2.0. This is the shift from print and industrial design to that of digital design. To be clear: (1) these shifts often happen over a decade or two — some of us still struggle with this first big shift. (2) A shift doesn’t mean the old ways go away — but the demand for those skills will shrink. (3) A shift is really a broader scope of concern.

If you came to digital design by way of print, packaging, or industrial design, then you know that one of the biggest changes was a loss of control.

With something like the design of a poster, an ad, or packaging, you had control. There were specified sizes and formats you could work with. With browsers, however, you have to think about system fonts, rendering at different browser sizes, all the quirks of different browser settings.

Then, there’s dynamic, personalized data. Not to mention plug-ins and other customizations that let users control how things are represented.

My version of Amazon.com is not the same as your version of Amazon.com. Not just because of personalized content, but because I have JavaScript plug-ins that further alter the page in ways that are personalized for me. Example: I use a plug-in called ‘FakeSpot’ to identify whether reviews are fake or real.

We could add to this loss of control a lack of finality. With a book, it’s done. A poster, it’s printed. With digital products, things are released, then iterated upon. This shift to ongoing, iterative design is one many teams are still struggling with.

But it’s more than a loss of control or lack of finality. If we look at something specific like designing search functionality, a design 1.0 mindset would focus on the iconography, the search field, or the page that displays the search results.

But if we wanted to really design the search experience, it meant we had to learn new skills. We had to go to that mysterious, other side: We had to learn how to manage data. I’m talking about metadata. Keywords. Search queries. Semantic structures. Ontologies. None of this is the sexy kind of design that drew many of us into this profession, but if we wanted to be good designers, if we wanted to create great experiences, we had to engage with how things actually get to the page in the first place. This gave rise to new roles and skills such as information architecture.

I include this search example for a reason: I think how we responded to this challenge is a good indicator of how we’ll respond to the shift from Design 2.0 to Design 3.0. Again, I see the biggest shift being from the design of products to the design of ‘parameters’. We’ve moved from a place of direct control, to a loss of control, to a place of tenuous influence and iterative testing.

And then of course, there’s the addition of all these new devices and touchpoints: VR, wearables, thermostats, voice assistants…

It’s no wonder we’ve incorporated tools like customer journeys and service blueprints into our design processes — they’re vital for orchestrating so many touchpoints.

And with all these touchpoints and integrated services, cross-functional teamwork is required. We have to work together.

In a sense, the shift from 2.0 to 3.0 is the same as 1.0 to 2.0, but more extreme and at an unprecedented scale.

Case in point: I love this photo from Josh Clark, surrounded by dozens of different mobile devices.

But what happens after this? What happens when there are thousands or millions of devices, all with different constraints? Think voice control, no screens, use of projections, and so on. We can’t account for all of these through breakpoints and page archetypes.

To counter this, I’ve recently seen more design teams move from designing experiences to defining rules and parameters that might govern such instances.

If you’ve worked with a tool like Intercom to set up automated messages, then you know what I mean. We create these elaborate mad-lib statements where we set up triggering events and subsequent actions.

This is design. This is also a different set of skills than we normally see discussed. But, as we begin to think about AI, this “rule-based decision making” is the first step into a larger world.
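A minimal sketch of this kind of rule-based decision making might look like the following. To be clear, this isn’t Intercom’s actual API — the events, conditions, and actions here are all hypothetical.

```python
def make_rule(event, condition, action):
    """One mad-lib statement: WHEN <event> AND <condition> THEN <action>."""
    return {"event": event, "condition": condition, "action": action}

def evaluate(rules, event, user):
    """Return every action fired by an incoming event for a given user."""
    fired = []
    for rule in rules:
        if rule["event"] == event and rule["condition"](user):
            fired.append(rule["action"])
    return fired

# Hypothetical rules a team might author instead of designing each screen.
rules = [
    make_rule("signed_up",
              lambda u: u["plan"] == "trial",
              "send_onboarding_email"),
    make_rule("viewed_pricing",
              lambda u: u["sessions"] > 3 and u["plan"] == "trial",
              "offer_live_demo"),
]

actions = evaluate(rules, "viewed_pricing", {"plan": "trial", "sessions": 5})
```

Notice that nobody designs the individual encounter; we design the parameters, and the system assembles each user’s experience at runtime.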

This is from a fabulous post, “Intelligent Things >> It’s all about machine learning.” I found it very useful.

And then there are web sites generated by AI. While it’s still early, at least five major companies are working on this technology, with analysts predicting “prime time” in two to three years.

Where does all this lead? What then is Design 3.0? I’m not entirely sure, though I’d say it’s concerned with:

  • the system behind the things
  • how things relate (seams and connections over nodes)
  • outcomes
  • design as facilitation, or creating virtual facilitating structures

If we step back, there are many themes we could pull out of this long arc, but let’s stick with a loss of control.

On this note, a good designer-technologist friend of mine (someone who is already living in this near-future) had this to say:

Brilliantly stated.


Now, a curious thing has happened as I’ve been sharing these ideas. In multiple conversations, when I suggested a near-future where no one needs an interface designer, folks responded in a way that surprised me.

While I’m thinking about designing the huge, invisible infrastructure that fuels an interface, others are thinking about new and different kinds of interfaces: Voice UIs and Virtual Reality are frequently mentioned.

To draw a sharp distinction between these two lines of thinking, let’s return to one of the questions I opened with:

What stories (truth or fiction) have helped you reflect on the unintended consequences of tech?

While I could share any number of answers, I want to share the one that has helped frame my thinking about what’s next. It’s this really creepy moment from the movie Elysium.

For context, our protagonist is checking in with his parole officer, and things go… badly.

I reference this scene for a couple of reasons.

First, because this is all too real, already. While we tend to think of Sci-Fi as predicting the future, it’s more true that Sci-Fi comments on the present. This scene is chilling because we’ve all experienced this. A phone call to customer service. Refilling a prescription at the pharmacy. Dealing with voice assistants. We can relate. And the tech? It’s already here, or in its infancy.

But there’s another reason I’m using this clip: It’s a great frame to consider how we define our role as designers. One is focused on craft and interface. The other, on outcomes and the overall experience. As a designer, what do you want to fix about this experience?

Is it the creepy face, or the robotic voice interface? This is the interface for the experience. There’s nothing wrong with caring about this — I’ve built a career arguing for making things more humane and relatable. There will always be a role for people to design the interface, whether that is on a screen, via goggles, or voice, or whatever. But even with a friendlier face or a human-sounding voice, what’s it all for? Why does this matter if the outcome is the same? The dance between machine and human goes deeper than interfaces and touchpoints. I look at this and see the design of the AI and the entire exchange as the bigger — and perhaps more honest — design challenge. How might we engage with the engine that drives such a negative outcome?

I’m not alone in this conclusion. The outrage over machine bias, combined with the rise in design ethics over the last few years suggests that many of us do care about what is happening, and how we can design better outcomes.

To make this new role for design even clearer, let’s return to our illustration, and shift our language, just a bit.

Design 1.0 is about Products.

Design 2.0 is about Experiences.

Design 3.0 is about Outcomes.

It’s really a shift from Products to Experiences to Outcomes.

I like how Sheryl Cababa visualized this in her talk at Interaction19 last month:

I like this view as it’s less suggestive of evolutionary “stages” and more about where our work is situated.

  • With a Product focus, we design things that are appealing to use.
  • With an Experience focus, we design touchpoints that create a desired experience.
  • With an Outcomes focus, we design experiences that contribute to positive societal outcomes.

Okay, but how? When it comes to a discussion of ‘Products’ or ‘Experiences’ we have no shortage of material. How do we design for Outcomes?

Here’s what I’ve concluded:

If we want to design experiences that contribute to positive societal outcomes, then we need to get comfortable with:

  1. Designing with Machine Intelligence
  2. Designing for Systems & Scale

But, what exactly does this mean? What does it look like to design with machine intelligence and design for systems and scale?

To be perfectly candid, this talk originated with these questions, questions I did not have the answers to:

  • How should designers be thinking about machine learning?
  • How will the design profession evolve to meet 21st century demands?
  • How do we develop our ability to think in systems and prototype possibilities?

I’d find myself watching documentaries such as AlphaGo, asking “what is the role of the designer?” on a team of engineers and data scientists writing machine learning algorithms.

Essentially, I wanted to know: What new skills do I need to develop? Not just for myself, but also for the design teams I support. I know design is vital to humanity, vital to our future. But I was struggling to articulate what, exactly, this looks like in a post-interface world.

Which leads us to the next section of this talk…

2. We need to develop new skills.

Before we look at what new skills we need to develop, let’s start with what doesn’t change in all this: People.

The Human Experience at the Center

I want to be very clear: What doesn’t change in all of this is a focus on the Human Experience. Research. Empathy. Consideration. Sustainability. Dignity. Accessibility. Mindfulness. Accordingly, this is the central, focal point in this model.

To this, you can add in interfaces, whatever they may be — a PC, mobile devices, VR headsets, wearables, IoT devices. Things we can’t even imagine yet! Like I mentioned before, there is always a place for the interface designer, whether that’s inventing new touchpoints or optimizing existing ones. For example, there’s a whole lot of work to be done with Spatial Design (for virtual reality), and we’ve barely started there. But there’s a bigger story than the touchpoint itself. What are the broader things we need to consider if we want to design for outcomes?

And this is where I was at nine months ago. And where many of us are presently. We know there’s a new set of things we should be learning, but it seems so… daunting? Perhaps scary? While the shift from GUI to mobile was challenging for many, at least these things were adjacent. This feels like we’re leaping out of the safety of the boat, for… what?

My hope is to shed some light on this darkness.

While I don’t have all the answers — indeed, this is a learning journey we’re all on together — I am excited to share where my thinking has led. Through conversations, research, testing things out, and so on, I’ve got a solid model to share that should pull all the pieces together in a cohesive way. What I’m sharing today is scaffolding, to help us think about and prepare for what’s ahead. Think of this as a learning map. A guide to what’s next.

Here’s where my research has landed.

I’ve organized the things we need to think about into 4 themes:

  • “Training the Engine” — this is all the scary AI/ML stuff that we’ve kept in a box
  • “Monitoring Outcomes” — this is something critical that few organizations do well
  • “Modeling Possibilities” — this is about using generative design and sandbox environments to explore “what if” and “what could be” questions
  • “Reframing Context of Work” — this is about designing for complex, adaptive systems and thinking about broader implications and future downstream effects.

Within each of these themes I’ve tried to identify the specific skills we should begin developing.

You should all have copies of this model. I thought a printed takeaway might be nice, to help organize all this! Also, I’ll share a link to this slide deck at the end, as there may be some links and references you want to dig into.

At a glance, this is what the model looks like:

Let’s dive in…

THEME: “Training the Engine”

Let’s start with what everyone is talking about, the Machine Learning stuff… Specifically, the “black box” engine that seems like the domain of engineers and data scientists. What is our role in all this? What skills do we need to develop? Let’s dive into this mysterious world…

So, how many of you have seen this video from DeepMind?

[NOTE: I let the video play, while I voiceover the next few paragraphs]

Basically, you’ve got an ML algorithm teaching itself how to play the Atari game Breakout. I want to stress this point about teaching itself. To be very clear: No one programmed the rules of the game. The machine had to learn how to play, every step, through trial and error.

As you’d expect, the first few rounds are uneventful. The AI player does nothing, and loses. But what’s amazing to watch is how it learns over time, until it invents an optimal strategy for beating the game.

While everyone talks about the machine learning parts of this, here’s where my mind went to: This works where there is a simple, clear goal. Indeed, while all of these video games challenge the AI in different ways, what doesn’t change is the goal: Win. Beat the system. Don’t lose your lives. Maximize the points.
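DeepMind’s Atari work uses deep reinforcement learning, which is well beyond a slide. But the core idea — learn from nothing but a reward signal — can be sketched with a much simpler cousin, tabular Q-learning, on a toy “corridor” game I made up for illustration. The agent is told nothing about the rules; it only discovers that reaching the rightmost cell pays off.

```python
import random

def q_learn(length=6, episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor. The only feedback is a
    reward of 1 for reaching the rightmost cell; the agent learns which
    move each cell favors purely by trial and error."""
    rng = random.Random(seed)
    acts = (-1, +1)  # step left or right
    q = {(s, a): 0.0 for s in range(length) for a in acts}

    def greedy(s):
        best = max(q[(s, a)] for a in acts)
        return rng.choice([a for a in acts if q[(s, a)] == best])

    for _ in range(episodes):
        s = 0
        while s != length - 1:
            a = rng.choice(acts) if rng.random() < epsilon else greedy(s)
            s2 = min(max(s + a, 0), length - 1)       # walls clamp movement
            reward = 1.0 if s2 == length - 1 else 0.0  # the single, clear goal
            q[(s, a)] += alpha * (reward + gamma * max(q[(s2, b)] for b in acts) - q[(s, a)])
            s = s2
    return q

q = q_learn()
# The learned greedy policy: which way to move from each non-terminal cell.
policy = [max((+1, -1), key=lambda a: q[(s, a)]) for s in range(5)]
```

Early episodes are aimless wandering; once the reward is stumbled upon, its value propagates backward until every cell “knows” to head right. All of it hinges on that one unambiguous reward.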

Here’s my question: What happens when there isn’t such a clear goal? What happens when there are many, competing goals? Think about your team or your division. Is there a singular goal? Does everything ladder up to a shared definition of success? How many large organizations have this kind of a shared purpose? Is the purpose about more than profits? And if you can answer yes, does human conscience, or unwritten implicit rules of conduct, temper this goal in some way? The real world isn’t so simple, and often has many competing goals and constraints. As a designer, what are you already doing to help define the objectives and goals for a project? As critical as this is to teamwork, it’s the single, driving factor for machines. We can — and should — absolutely lean into defining these things. This has always been important, but will be vital as we hand over more decisions to Machine Learning.

So this is my first challenge to all of us: We need to learn how to…

☑ Define Clear Goal(s) & Constraints

Double down on defining a clear set of objectives, goals, etc

Of course, the next best way we can influence the “engine” is by understanding the algorithms that shape and define how machines solve problems. This was the first avenue I ran down, mostly as it’s the one I find most daunting. I mean, I graduated high school in the 99th percentile for math. I’m good at math. But this stuff? I find this scary!

But you don’t have to go this deep; that’s the job of data scientists. Still, we should know how to speak this language. We should know the basic pros and cons, blind spots and strengths of different kinds of algorithms.

There are some ways to make this more approachable…

One, you’re not alone. Groups such as the d.school at Stanford are trying to translate this stuff for designers. They’ve identified the six ML algorithms that every designer should know.

Of course, I think there are more than just six algorithms, and I don’t like the idea of someone else figuring this stuff out for me. So, I’ve been looking into some books on the topic.

But, there’s another way to tackle this. Anyone use Pinterest? Yeah, I started a board for “Machine Learning, AI, and other Stuff”.

My Pinterest board for “Machine Learning, AI, Etc.”

This is fine for exposure, but it’s rather shallow for explanation or understanding. So, I reframed things to make this a fun learning challenge: Recall how I mentioned DeepMind using Breakout to train the algorithms? It turns out, video games are an incredibly popular way to test and train these algorithms. One, they all share a simple goal. That’s one less variable to worry about. But two, each game challenges the AI in different ways. Breakout. Space Invaders. StarCraft. Montezuma’s Revenge. Pitfall! Chess. Go. Each of these has revealed a different blind spot or weakness in the AI.

Trying to make sense of what’s challenging about these games, and how these challenges are being “solved”

The same AI that conquered Breakout struggled with Pitfall! Why? Reinforcement-learning algorithms work great in a positive reward environment such as Breakout or Space Invaders, but struggle with a game like Montezuma’s Revenge or Pitfall!, where there isn’t this immediate payoff. For games of this nature, you need an AI that keeps a ‘memory’ of previous encounters, and can see how that might be useful. Each new game reveals some new weakness of the AI that researchers then work to train out. While watching the documentary AlphaGo, I paused and took notes when they explained for a non-technical audience the “Policy Network”, “Tree Network”, and “Probability” algorithms employed in training the AI. A Google search on these terms led to several posts explaining each of these, and why the latest version only needs two of them. By understanding the role of games in training machine learning models, I’m learning a lot more about algorithms and developing my vocabulary. No ‘maths’ required!

So, challenge number two: Learn how to…

☑ Understand Algorithms

Understand the pros & cons of different algorithms; be able to monitor these metrics & dashboards

More to the point, how are YOU learning to speak the language of data science? For some, a course or training will work. In my case, I had to reframe this as a fun, personal learning challenge. However you tackle this topic, just do it.

But there’s something more fundamental here, that we should be engaging with already: Data. Whether we’re conscious of it or not, I think many of us assume there’s something objective about data. It’s data, after all. We have to start by dismantling this belief. It’s critical that we understand and review the input data. Consider the large companies that have had spectacular — public — failures with machine learning. Where does the data come from? How is it collected? Is this bundled data or raw data? These are fundamental questions we should be asking, right now. And trust me, I’ve had some funny looks when I asked to examine the data set and pore over a sample set with 1,000s of rows and 100s of column headers. “But, you’re a designer…” “Yes, yes I am”. As designers, we care about the outcomes. As one professor I spoke with emphasized, “Data is reductive and political”.

The very nature of what was collected, how it was collected, and what was not collected means data is never inherently unbiased. This same professor has her students go out into the world to research where and how the data is collected, before ever working with it.
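Even a crude audit script can start this conversation. This sketch (with a made-up sample of loan-application records) just counts what’s missing and how the recorded values are distributed — the kinds of questions worth asking before any model is trained on the data.

```python
from collections import Counter

def audit_column(rows, column):
    """Summarize one column of a raw data sample: how much is missing,
    and how the recorded values are distributed."""
    values = [row.get(column) for row in rows]
    missing = sum(v in (None, "") for v in values)
    counts = Counter(v for v in values if v not in (None, ""))
    return {"missing": missing, "distribution": counts.most_common()}

# A hypothetical sample. Skewed zip codes, blank fields: exactly the kind
# of quiet bias that flows straight into a model if nobody looks.
sample = [
    {"zip": "75201", "approved": "yes"},
    {"zip": "75201", "approved": "yes"},
    {"zip": "75210", "approved": "no"},
    {"zip": "", "approved": "no"},
]
report = audit_column(sample, "zip")
```

Nothing clever is happening here — and that’s the point. You don’t need data science credentials to ask where the blanks and the skew came from.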

So, next challenge, and one we can be doing already:

☑ Engage with Data

Understand and review input data; where did it come from? how was it collected? Data is reductive & political! Bundled data. Boundary objects.

At this point I want to be very clear about something: There’s a call for companies to be transparent with their data and algorithms. I fully support the spirit of this request. However, when it comes to many of these algorithms, even the makers don’t know what’s going on. We can tweak the data, the algorithms, the goals, and monitor the outcomes, but what’s going on “inside” is often a black box. Few companies have any idea how the systems they employ actually work.

Which leads us to…

THEME: “Monitoring Outcomes”

I’ll pick up the pace a bit here, as there’s less to unpack. But, vital to working with machines is setting up good feedback loops and ways to monitor what’s actually going on.

We have things like analytics and metrics, but to be honest, I’ve met very few companies who are doing this well. In fact, I’ve heard stories of changes to algorithms that lost companies money. How did they find out? In one case, it wasn’t through sales dashboards or any kind of digital real-time monitoring. No, it was from qualitative customer research that led the researcher to pass along the odd pricing information being generated.

Here’s the biggest mindset shift we — and everyone we work with — will need to make: As things get more complex and the scale increases, we have to design in real time.

Think of this like a sports game, or surfing, where we have to be “in the game” and in the moment, reacting as things come up. This sounds simple enough, but think about how much planning, research, and testing went into that last feature you designed, before it was released. Days? Weeks? Months? Now, what if I told you nothing could be designed that couldn’t be pitched, coded up, and released in an afternoon? Sound insane? I know of at least one company — the largest in their industry — that works this way. This is what I mean by designing in real-time and responding to what’s going on. And no, this doesn’t mean there’s not a place for thoughtful planning, but that place isn’t with the bulk of the work we do today.
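Here’s a minimal sketch of what closing the loop might look like in code: a rolling check that flags, say, an algorithmically generated price drifting far from recent history. The window size, warm-up length, and threshold here are arbitrary choices for illustration, not recommendations.

```python
from collections import deque

def make_monitor(window=50, threshold=3.0):
    """Flag a metric value that lands more than `threshold` standard
    deviations from its recent rolling mean."""
    recent = deque(maxlen=window)

    def observe(value):
        anomalous = False
        if len(recent) >= 10:  # wait for a small warm-up sample
            mean = sum(recent) / len(recent)
            var = sum((x - mean) ** 2 for x in recent) / len(recent)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) > threshold * std
        recent.append(value)
        return anomalous
    return observe

observe = make_monitor()
normal = [observe(100 + (i % 5)) for i in range(40)]  # prices hovering near $100
alert = observe(250)                                   # a pricing glitch appears
```

The point isn’t the statistics — it’s that the glitch surfaces in the moment, not months later via a customer researcher’s field notes.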

So, we need to learn how to…

☑ Monitor Results

Close the loop by monitoring what is released, user behaviors, and outcomes

☑ Design in Real-Time

Continuous learning with dynamic real-time data

There’s one other critical piece to this “monitoring” theme that (I believe) falls uniquely to designers: quantifying good intentions.

☑ Quantify Good Intentions

Values and principles must move from aspirational things to things that can be quantified and actively measured.

I don’t think it’s a stretch to say most of us care just a bit more about the human experience than anything else. We’re wired to focus on what we feel is right. Ideals, principles, and values are primary motivators for designers, ahead of other things like dates, adherence to process, or theoretical approaches. In short, we work from ideals. This is a good trait, and it creates a healthy tension that sometimes puts us at odds with the rest of the business.

Now, think of all the values statements, and work we do on design principles, to help keep teams and organizations from going “off the rails”. As much of this is scaled across organizations, and increasingly abdicated to machines, here is the new challenge:

  • How do we quantify things that are fundamentally intangible? (e.g. “trust” “understanding” “joy”)
  • Can we measure these things directly, or do we triangulate and infer based on other metrics? (e.g. Google’s “HEART” model)

This is vital for any organization, but especially for those that increasingly rely upon machines to make judgment calls. Accordingly, the end-game priority here is this: How do we translate values into metrics and goals for machine learning?
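To illustrate, here’s one way “trust” might be triangulated from proxy signals. Every metric name, weight, and number below is invented — the point is only that an intangible value becomes something a machine can actually be measured (and optimized) against.

```python
def trust_score(metrics, weights):
    """Weighted blend of normalized proxy signals (each in 0..1).
    No single signal measures trust; together they triangulate it,
    in the spirit of Google's HEART approach to 'happiness'."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical proxies a team might agree stand in for "trust".
weights = {
    "repeat_visits": 0.4,        # do people keep coming back?
    "permissions_granted": 0.3,  # do they share data willingly?
    "low_escalations": 0.3,      # few angry support tickets
}
metrics = {"repeat_visits": 0.72, "permissions_granted": 0.55, "low_escalations": 0.80}
score = trust_score(metrics, weights)
```

Once a value statement is expressed this way, it can sit on the same dashboard as revenue — and feed the same machine-learning objectives.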

I think this quote from Christian Beck is a fitting one, at this point:

Never before has technology allowed individuals to do more harm (or good) with such low effort.

— Christian Beck

[Dramatic Pause]

THEME: “Modeling Possibilities”

Let’s shift gears a bit, into something that might feel a bit closer to home: Modeling possibilities. This is the human working with machines to explore what is possible or what might be.

☑ Model Possibilities

“Design WITH Machines” Generative design, simulations, exploring what could or might be…

There are a variety of ways to tackle this topic. I’ll pick two: Generative Design and Explorable Explanations.

“Generative Design”

Projects such as Dreamcatcher from Autodesk demonstrate a very specific way we might work with machines to transform manufacturing. In the time it takes to create one idea, a computer will, according to Autodesk, “generate thousands, along with the data to prove which designs perform best.” You set the parameters — materials, manufacturing methods, cost constraints, weight, and so on — and the machine can generate and explore literally every possibility. Out of this kind of exhaustive exploration, you get things such as bridge joints and motorcycle parts that use fewer materials, are lighter, and are as strong as, if not stronger than, the parts traditionally designed by humans.

Using generative design tools, Arup has produced a structural node that is just as strong as its conventional counterpart (far left), but weighs 75% less and is only half as high (Source and an interview)

Play this out, and the role of the human shifts from that of a hands-on creator using software to render an idea, to that of a conductor (or curator, or cultivator?) working with software to explore possible options. In a sense, we develop a sort of symbiotic relationship with the machine; the machine generates possibilities that we then direct or tweak until arriving at an optimal solution. We see this playing out in nearly every industry, from manufacturing to the design of web sites to healthcare.
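Stripped to its essence, generative design is a search loop: the human sets the goals and constraints, the machine explores. This sketch uses made-up toy formulas for weight and strength (not real engineering models, and nothing like Dreamcatcher’s actual solvers) to find the lightest candidate that is still strong enough.

```python
import random

def weight_kg(thickness, cutouts):
    return thickness * 10.0 - cutouts * 2.5   # toy mass model

def strength_kn(thickness, cutouts):
    return thickness * 40.0 - cutouts * 12.0  # toy strength model

def generative_search(n=10_000, min_strength=120.0, seed=7):
    """The designer sets the constraint (min_strength); the machine
    samples thousands of candidate designs and keeps the lightest
    one that still satisfies it."""
    rng = random.Random(seed)
    candidates = [(rng.uniform(2.0, 10.0), rng.randint(0, 8)) for _ in range(n)]
    feasible = [c for c in candidates if strength_kn(*c) >= min_strength]
    return min(feasible, key=lambda c: weight_kg(*c))

best = generative_search()
```

The designer’s leverage is entirely in `min_strength` and the sampling ranges — the parameters — not in drawing any individual part.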

At Interaction19 last month, I saw another example of this from the world of city planning. The interesting contrast that speaker and designer Bilal Chaudhry brought up was how the complexity has shifted, from upstream problem solving, to selecting the optimal solution.

Here’s a question for you: How might machine learning alter or improve the daily work you do?

So that’s generative design. Let’s swing our attention to something completely unrelated, at least by keyword searches.

“Explorable Explanations”

How many of you have heard of Explorable Explanations?

This is the term for a type of modeling being done by Nicky Case and others, where traditionally challenging concepts become accessible through playful interactions. Example: With “The Evolution of Trust,” Nicky uses game theory to help players explore the very complex topic of trust and cooperation.

“I think game theory can help explain our epidemic of distrust — and how we can fix it! So, to understand all this… Let’s play a game!”
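The engine underneath “The Evolution of Trust” is the iterated prisoner’s dilemma. Here’s a bare-bones version — payoffs in the spirit of the game, tournament setup my own invention — showing why “copycat” (tit-for-tat) comes out ahead when there are enough copycats around.

```python
def payoff(me, other):
    # Cooperating costs you 1 coin and hands the other player 3.
    return (-1 if me == "C" else 0) + (3 if other == "C" else 0)

def always_cooperate(opponent_moves): return "C"
def always_cheat(opponent_moves): return "D"
def copycat(opponent_moves):  # tit-for-tat: start nice, then mirror
    return opponent_moves[-1] if opponent_moves else "C"

def match(s1, s2, rounds=10):
    """Play two strategies against each other; return their totals."""
    moves1, moves2 = [], []
    total1 = total2 = 0
    for _ in range(rounds):
        m1, m2 = s1(moves2), s2(moves1)
        total1 += payoff(m1, m2)
        total2 += payoff(m2, m1)
        moves1.append(m1)
        moves2.append(m2)
    return total1, total2

players = {
    "cooperator": always_cooperate,
    "cheater": always_cheat,
    "copycat_1": copycat, "copycat_2": copycat, "copycat_3": copycat,
}
scores = dict.fromkeys(players, 0)
names = list(players)
for i, a in enumerate(names):      # round-robin: every pair plays once
    for b in names[i + 1:]:
        sa, sb = match(players[a], players[b])
        scores[a] += sa
        scores[b] += sb
winner = max(scores, key=scores.get)
```

A dozen lines of rules, and suddenly an abstract argument about distrust becomes something you can poke at — which is exactly what makes explorable explanations so powerful.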

Parable of the Polygons, by Nicky Case

With “Parable of the Polygons”, a prize-winning research paper on desegregation has been transformed into a simulation, where through repeated play, you begin to recognize the powerful effects — good and bad — of even small changes to the system. These are difficult concepts that we struggle to recognize, let alone understand what we can do to change things for the better. But through play and simulation, you explore possibilities and get a deeper understanding of the issues.

The common denominator here is humans working with machines, to think about and model things we couldn’t do on our own. This is SXSW. Chances are, you already know about many of these topics. Here’s my challenge to you: What kinds of modeling are needed, as problems get more complex and tricky?

Imagine where these kinds of “games” might go, as we have the ability to ‘play’ with more and more complex topics, such as the effects of climate change or ending poverty. That’s a real design challenge. That’s design as facilitation.

Which leads us to our final section, on systems.

THEME: “Reframing Context of Work”

As with working from values, this is another topic I think the design mindset is uniquely wired for: Working with and within complex, adaptive systems.

☑ Design for Complex, Adaptive Systems

Articulate Broader concerns, unintended consequences, double-loop learning…

Up to this point, I’ve talked about these AI/ML challenges as if we’ve been handed a discrete challenge. This isn’t the case. It’s rare to find even a simple request that doesn’t beg us to ask — and answer — a broader set of questions.

What is new in all this is visibility into the scale and scope of problems we now work on — we have to ask questions about impact and outcomes. Facebook and Twitter are platforms that have changed the world. The addictive properties of Pinterest and SnapChat are changing human behavior and social interactions. We can’t treat these things like simple web apps. They aren’t.

What does this have to do with design?

A hallmark of good design is the ability to work in ambiguous situations, or work with partial information. We’re also good at pushing back and re-framing problems as handed to us. This, perhaps, is where circumstances are pushing us, whether it’s comfortable or not.

Earlier, I shared a visual that showed us shifting from Products to Experiences to Outcomes. This focus on outcomes — thinking about where all this leads us — has led to several shifts in my thinking.

This ‘outcomes’ thinking is making even the simplest of requests more challenging:

  • We can’t release a feature, it seems, without thinking about all the things that might go wrong. “How might we…” is replaced with “What might go wrong?”
  • Increasingly, it’s hard to focus on a ‘user’ when there are so many to choose from, each with competing goals. We naturally shift from human-centered to humanity-centered.

It is this focus on Outcomes that forces us to examine things well beyond the product or experience level. Indeed, I see design teams asked to do very ordinary tasks who are nonetheless asking the hard questions about context, motivation, and ultimate outcomes that business partners have trouble addressing. This is okay. There’s nothing territorial about this statement. This is affirming something many designers are good at: Thinking in systems. Of course, many of us aren’t trained in this, nor do we have the language for articulating things in this ‘systems’ way. I’m still learning myself.

There’s a lot we could unpack here. I’ll share just a few choice things I’ve picked up.

Simple rules give rise to complex behaviors.

In researching complex systems and how large groups of individuals work together, one noteworthy idea emerged: The power of simple rules.

Don’t go making sweeping changes. It rarely works. What does work is introducing and testing small changes. This is true in nature as well as programming AI. A few simple rules are all that’s needed to ripple throughout the system and give rise to complex behaviors.

Researchers have found that the flocking patterns of birds are governed by three simple rules.
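Those three rules (separation, alignment, and cohesion, from Craig Reynolds’ classic “boids” model) are compact enough to write down. Here’s a rough Python sketch; the weights and names are illustrative, not tuned:

```python
import math
import random

def flock_step(boids, sep_dist=1.0, neighbor_dist=5.0,
               w_sep=0.05, w_align=0.05, w_coh=0.01):
    """One tick of Reynolds-style flocking. Each boid is a dict
    {'pos': (x, y), 'vel': (vx, vy)}; every rule below is purely local."""
    new = []
    for b in boids:
        sep = [0.0, 0.0]    # Rule 1: steer away from crowding neighbors
        align = [0.0, 0.0]  # Rule 2: match neighbors' average velocity
        coh = [0.0, 0.0]    # Rule 3: steer toward neighbors' center
        count = 0
        for other in boids:
            if other is b:
                continue
            dx = other['pos'][0] - b['pos'][0]
            dy = other['pos'][1] - b['pos'][1]
            d = math.hypot(dx, dy)
            if d < neighbor_dist:
                count += 1
                align[0] += other['vel'][0]
                align[1] += other['vel'][1]
                coh[0] += dx
                coh[1] += dy
                if 0 < d < sep_dist:
                    sep[0] -= dx / d
                    sep[1] -= dy / d
        vx, vy = b['vel']
        if count:
            vx += w_sep * sep[0] + w_align * (align[0] / count - vx) + w_coh * coh[0] / count
            vy += w_sep * sep[1] + w_align * (align[1] / count - vy) + w_coh * coh[1] / count
        new.append({'pos': (b['pos'][0] + vx, b['pos'][1] + vy),
                    'vel': (vx, vy)})
    return new
```

Nothing in that code mentions a flock; no bird knows the shape of the whole. Run it on a few dozen randomly placed boids and coordinated movement emerges anyway, which is the point of the “simple rules” idea.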

Let’s take the idea of ‘personal kanban’, a way to manage work that is now “used in almost every vertical, globally”, according to creator Jim Benson. Behind personal kanban are two simple rules: 1. Visualize your work, and 2. Limit work in progress. That’s it. But, as Benson adds: “the simplest systems are the most flexible and the most universal”.

Much of the literature on formal systems thinking is dedicated to this topic of small changes. Reinforcing loops and balancing loops are the two foundational structures of systems thinking. The idea goes like this: Want to introduce a change? Don’t try to change the system (you can’t!). Instead, introduce a small change, or tweak an existing rule, then see what happens.

This is applicable whether we’re changing user behavior, corporate culture, or getting world leaders to work together. Powerful stuff.

In this respect, I can’t help but think of the differences between improvisational jazz and orchestral arrangements. With jazz, a group of performers who have never played together before can improvise before a live audience because they agree upon a few governing rules. Contrast that with the orchestra structure, where a conductor leads a group of performers through a prewritten musical score. Not that either approach is right or wrong — that depends on the situation. But, given a space where things are unpredictable and uncertain, a few simple “rules” that allow everyone to ‘jam’ together seem to be the better strategy.

In the realm of habit change, we see this same idea of “chaining” together small behaviors. Years ago, when I drank too much Dr. Pepper, I introduced a simple rule: I must drink a cup of water before drinking the soda. Funny thing, after drinking the water, I rarely went for the soda. Over time, I kicked the habit altogether. Simple rule. Big change.

So, next time you hear about a big, sweeping change, instead ask: What’s the smallest change we could make, or the simplest rule we could introduce, that would have the biggest impact?

What else have I learned?

Double-Loop learning helps reframe problems.

This one is relatively new to me, but should resonate with designers.

Consider how we traditionally approach a problem:

  1. We observe current customers.
  2. Assess possible corrections.
  3. Develop new strategies.
  4. Implement new actions.

Sounds good, right? This all assumes the problem as handed was on point.

With double-loop learning, you also step back and assess the current structure that led to that situation. Maybe solving the problem as defined is misguided? Maybe there’s a broader problem we should solve instead? Double-loop learning is a structured way to move from seeing change within the framework to seeing ways to change the framework itself.
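Chris Argyris’s classic illustration is a thermostat: single-loop learning closes the gap to a fixed setpoint, while double-loop learning also asks whether the setpoint itself is right. A toy Python sketch (the “occupied room” rule is an invented example of mine, just to show the second loop):

```python
def single_loop(temp, setpoint):
    """Single-loop: act to close the gap to a fixed goal (the thermostat)."""
    return "heat" if temp < setpoint else "idle"

def double_loop(temp, setpoint, occupied):
    """Double-loop: first question the goal, then act on the revised goal.
    Toy governing rule: an empty room revises the setpoint down by 6 degrees."""
    revised = setpoint if occupied else setpoint - 6
    return revised, single_loop(temp, revised)
```

Same observation (a cold room), different action, because the second loop reexamined the goal before correcting course. That, in miniature, is the reframing move.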

Of course, this kind of thinking should be nothing new to designers. We’re good at reframing problems and asking “Why?” at least five times.

But what double-loop learning does is start making what can seem like a fuzzy and ambiguous kind of reasoning more accessible to others. Moreover, this kind of formal structure is the kind of thing we could start engineering into our machine learning algorithms. Indeed, I’ve had conversations with folks working on the front lines who’ve discussed a ‘brain’, or meta-AI, that arbitrates between more discrete ML functions.

What else? I’ll share one more thing. A recent thought.

We need tools suited for the complexities of the 21st century.

The problems we face are increasingly complex.

  • Changing organizations.
  • Platforms with multiple customers, often with competing interests.
  • Getting teams of teams to coordinate their work.
  • Planning for the future, when whole industries can change with a single announcement.
  • Business models that are anything but straightforward.
  • How to design with and for increasingly sophisticated technologies.
  • Innocent-enough asks that require us to zoom up several levels to solve properly.

As designers, we’re more accustomed to ambiguity and complex situations, and likely more suited for these kinds of challenges. But it’s not about us. We have to work at a broader scale, which means we also have to work with others who bring their own unique — and valuable — mindsets. For these interactions, I’m increasingly looking for tools that help whole teams (across functional groups) and entire organizations (across silos) tackle complex challenges head on, together, without dumbing down the challenge. We have plenty of simple tools for simple problems. And I’m sure we’ve all experienced failed attempts to dumb down necessarily complex problems. We need tools suited for the complexities of the 21st century: activities that help us think in systems and work through complex, nuanced tensions. And we all need to be aware of these thinking tools, and able to use them when needed.

I’m currently looking for, curating, and sharing those tools and activities that might help us tackle our most pressing and complex issues.

In December, I wrote about one such tool, Polarity Mapping. I won’t go into detail here about this tool (you can read the article on Polarity Mapping), but it does one really good thing: It helps us facilitate a fruitful conversation around a tension where there is no clear “right” answer.

  • Should we do more Learning or start Building?
  • Should we focus on Innovation or Efficiency?
  • Should we prioritize Deadlines or Quality?
  • Growth vs. Consolidation?
  • Short-term Gains vs. Long-term Organic Growth?
  • Centralization vs. Decentralization?
  • New Features vs. a Stable Codebase?
  • Generalist or Specialist?

These kinds of conflicts are NOT problems to be solved, but rather paradoxes to be explored.

Unfortunately, this kind of thinking is not widespread enough. Traditional MBA programs are (a) reductive, and (b) analytical. Both good traits, except when they’re not.

I’m on the lookout for more tools like this. Backcasting, to handle competing visions of the future. Consolidated Flow Models, where there are many stakeholders with different concerns. Wardley Maps, for understanding market context. And so on.

My point? When we’re not busy cranking out solutions, we — as designers — are curators and facilitators of these kinds of artifacts. We love our artifacts. But the ones we rely on the most — I won’t name names — are a bit antiquated and aren’t suited for more complex challenges. Let’s up our game.

[Pause]

There’s one final thought, that I would have lumped into thinking about complex, adaptive systems, but a friend suggested pulling out: designing for the long arc. How things play out over time.

☑ Design for Long-Term, Downstream Effects

Scenario planning, Edges & exceptions, worst-case possibilities, bad actors

I want to return to that conference on growth hacking and conversion optimization that I mentioned. Before the event, when my host greeted me at Schiphol Station in Amsterdam, we had a conversation that has stuck with me. First though, you have to know I feel a bit icky about most conversations dealing with getting more signups. Especially in the US, we often seem very short-sighted. My host shared a story that also reflects the maturity of this discipline elsewhere in the world. He talked about a company he worked with, it may have been a bank. They wanted to run an A/B test to see if entering folks into a drawing for a free iPad would increase sign-ups. As you might predict, it did. Far more people in the iPad group signed up. The bank’s conclusion, as you might guess, would be to roll this out to everyone. But my friend asked the company to do something else: monitor the results over time. See what happens between these two different groups. They did. And do you want to know what the results were? Nine months later, not a single person from the iPad group had converted into a paying customer, in contrast with the control group, which had the usual conversion numbers.

My reason for sharing this? We have to get better at helping organizations think about, articulate, and make long-term decisions. Short-sighted decisions lead to downstream problems.

In the last year, we’ve seen some great conversations about design ethics, especially as it relates to technology and AI. The goal of these conversations is to get ahead of bad outcomes, before they happen. One of the better tools out there is The Tarot Cards of Tech. While small in size — there are only a handful of cards — each asks the right questions, the questions we should all be thinking about to avoid negative outcomes in the future.

The “Tarot Cards of Tech” from Artefact

Tools such as this, tools that help us peer into possible futures, need to become a routine part of our design practice.

As designers, we think about what might be. We are also good communicators. We just need to get better at combining these skills to plan for, anticipate, and communicate different possibilities to those we work with. Whether it’s a card deck or facilitating a workshop, there’s much we can do to anticipate what might come to be, and to facilitate longer-term thinking. This is true of sign-ups. It’s also true of getting governments and corporations to address climate change.

3. What hasn’t changed?

I stated earlier that “What designers do is changing.” I also want to say “What designers have always done is not changing.”

Huh?

What doesn’t change in all of this is what has always defined us: A mindset.

As designers, the things people see us do will change all the time. In college, I took an X-Acto knife to ruby paper to do layups. I doubt many of you below a certain age even know what I’m talking about!

What hasn’t changed in all this is how and why we do what we do. Remember the exercise I had you do at the beginning, writing down what you do in a week? I’ve done this with designers, and what I do next is have them run the list through two filters, or “meatgrinders” as I call them, to see what comes out the other end.

Meat-grinder #1: Remove anything at all related to screen interfaces.
Scenario: The demand for screen design has disappeared. Design systems and decent machine learning algorithms, combined with robust, metrics-driven business and marketing tools, have automated most of the work currently done by hand.

Meat-grinder #2: Remove anything that could be done reasonably well by any other member of your team.
Scenario: Research. Customer journeys. Copywriting. These are all things that can now be done reasonably well by most members of a cross-functional team. Set aside any concerns — however legitimate — about expertise and quality. If “good enough” from a non-trained expert is sufficient, then cut it from your list.

Now, the interesting part: What’s left?

These questions are artificial, and we could debate the validity of these scenarios — that’s not the point. The point is to confront those activities that define our profession, challenge these things, and examine what’s left.

When anyone can do good-enough design, and things like designing user interfaces are now a commodity — what does it mean for the designer? What future activities are still worthwhile to invest in, when so much of how we presently define ourselves might become automated or democratized?

My hope with this exercise is to separate the design activities we’re presently concerned with from the timeless ways of being a designer. My hypothesis? What’s left starts to get to the core of what it means to design and to be a designer.

So, where has this led me?

I’ve identified about 11 ways of being that describe design. I won’t go into them all — there’s a post coming for that. But you’ve heard me mention some of these:

  • Frame & Reframe Problems
  • Work from Principles & Values
  • Think in Systems and Contexts
  • Focus on Human Needs & Motivations
  • See Possible Futures (where Others See Present Realities)
  • Thrive on Ambiguity

This, to me, gets at what makes us designers. For the stuff that changes, this doesn’t. While we may see ourselves doing different things, this is what doesn’t change in it all.

[PAUSE]

A final note…

I want to close with something near and dear to my heart: typography. I love great typography, as I’m sure many of you do. But let’s play the “5 Whys” game with our love of typography.

Why do we love type?

We could talk about the balance and symmetry. Proportion. Legibility, of course. I know we’re all driven mad by bad kerning!

But why, why does this bother us so much?

Because it’s not right. It’s not harmonious. Or balanced. It’s ugly.

And why do we care about things being ugly or not?

Because we care about — actually, obsess over — aesthetic details.

And why do we care about these aesthetic details?

Because we love beauty. Because we want to make the world a better place, that we all enjoy living in.

And that, right there, is why we design.

To make the world a better place.

This may involve typography. This may lead us to fix creepy looking faces. This may lead us to engage with algorithms and mathematics. The way design is expressed varies and changes, but the desire that drives us all does not change. We design because we care. We care about making the world a better place, for all of us.

Thank you.

……………………………………………………………………………………

NOTES / REFERENCES:

Thanks to Jason Mesut.
