Design without Surfaces

Turner Carroll · Published in Originate · 10 min read · Dec 11, 2019

Making more human and more ethical products means getting your design and product teams more involved behind the scenes.

Two years ago I started giving a talk I titled “Design Without Surfaces” in which I argued that designers needed to embrace and advocate for their role in building the invisible features that power so many of the products we interact with.

I believe that it is no longer enough for designers to engage with the surface-level elements of a product, and that we must bring the environment of collaboration and critical thinking the design process fosters to the entire ecosystem of a project.

I bucketed things like algorithms, recommendation engines, search features, databases, APIs, and many more under this “invisible” moniker because they are vital to our products, yet go completely unseen by most of our users. I used examples like Amazon bridging the gap between digital and physical data tracking with their brick-and-mortar locations, YouTube Kids recommending violent videos of Elsa and Spiderman with machine guns to toddlers, and the facial recognition software most countries now employ in passport scans locking out people of colour because of biased training data.

A lot has changed since I first gave this talk, but my key thesis — that we must acknowledge and rethink the ways in which we build “invisible” products and features to be more usable and accessible or suffer the financial consequences — is more relevant than ever.

To underscore this point, consider that one of the biggest political stories of 2019 has been the increasingly critical lens placed on tech companies. We have Mark Zuckerberg testifying before Congress about how his platform harvests and uses its users’ private data; Jack Dorsey and Twitter unveiling a string of unsuccessful product features to try to make their platform less toxic; and YouTube facing uncertainty about how to curb the rising tide of extremism driven by the rabbit-hole effect of its video recommendations.

Left: Mark Zuckerberg (credit: Jim Watson/AFP/Getty Images); right: Jack Dorsey (credit: Bloomberg); bottom: YouTube (credit: Bloomberg)

My point here is not to have a partisan debate, but to point out the underlying truth that product design and development decisions have massive, often unforeseen consequences that extend beyond the intended impact, and that these consequences boomerang back onto the companies that fail to anticipate them. There is a very real possibility of antitrust legislation breaking up the tech giants within the next four years, but this was never a foregone conclusion. Facebook had ample opportunity to address its opaque data harvesting years ago and chose not to. Regardless of political motivations or affiliations, breaking up companies like Facebook or Alphabet would have massive consequences for the tech industry, and I would argue that we as an industry are on the precipice of something entirely new.

Now, I can’t predict what will happen politically in 2020, but regardless of what happens in Washington, there are very real changes coming that we as product builders must anticipate and adapt our ways of working to, especially with regard to these “invisible” components and features. I’ve pulled the key points of my talk into four main categories I want to explore a bit further.

These categories come from my own experience working on these products, and many teams already use these strategies effectively. They are by no means the only strategies for building better, more human products, but they are useful for design and product teams struggling to understand their roles in this process.

1. Data is a design problem

2. Building trust means being transparent

3. Uncertainty is a very human trait

4. Diversity is functionality

Data is a Design Problem

Data is everything when you’re building components like recommendation engines, search features, or algorithms into your product. But one of the biggest challenges I’ve watched teams face comes from treating that data as an inert component and never critically engaging with it as a design question, whether in how the data is collected, how it’s stored and accessed, or even what data you use.

I now ask all my teams to view data as a design problem that requires creativity and user-centered processes to solve effectively. For example, I recently worked on an app that wanted to use image recognition tools to identify postage stamps. The team’s assumption was that we would use images of stamps as our training data, because that’s what we wanted to identify, after all.

The problem was that the database of images we had access to was too small to train any kind of model with meaningful functionality within our three-month timeframe. At first the team wanted to build features into the app that would get users to submit more of their own photos, but that seemed clunky and time-consuming. So we had the team rethink its approach, and we very quickly realised we could train our model on images related to the content of the stamps: most obviously human faces, but also birds, planes, trains, and so on. It’s a simple example, but the outcome is clear: by confronting data as a design and product challenge instead of a static component, we made choices that saved a ton of time in getting our product live.
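To make that idea concrete, here is a minimal sketch of the general approach, not the actual app’s code: instead of training from scratch on a scarce stamp archive, you freeze a pretrained image backbone and fine-tune a small classifier head on plentiful ordinary photos of the subjects the stamps depict. The folder layout and class names below are illustrative assumptions.

```python
# Hypothetical sketch: train on photos of stamp *subjects* (faces, birds,
# planes), because those are far easier to source than stamp scans.
import tensorflow as tf

SUBJECT_CLASSES = ["bird", "face", "plane", "train"]  # illustrative labels

# A pretrained backbone already encodes general visual features.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze it; we only train a small classifier head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(SUBJECT_CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "subjects/" holds ordinary photos, one folder per class -- no stamps needed.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "subjects/", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```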

Another fantastic example comes from the study “Men Also Like Shopping” out of the University of Virginia, where researchers found that visual recognition software trained on data gathered from the web misgendered people in images based on a gendered assumption about the task the person was completing. The primary example: images of people cooking gathered online were disproportionately of women, leading the model to incorrectly label men in such images as women when it wasn’t sure, reinforcing and even amplifying a societal bias. Outside an academic setting you can easily infer the problems that would arise from models trained on unintentionally biased data, and it’s important to scrutinise your data for gaps, testing it as you would other product features.

Source: Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints, University of Virginia

My advice here is to approach all data as you would any other feature and ask: Is this the right data? Could we be using better or more comprehensive data? Could we be creative about where we get it? What are the anticipated holes in our data, and how will they affect the functionality? And, most importantly, will the way we harvest and use data be fair and transparent to our users?
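One lightweight way to start answering those questions is a simple audit run over your training data before any model sees it. The sketch below is a hypothetical illustration in the spirit of the “Men Also Like Shopping” finding; the labels and threshold are my assumptions, not anything prescribed by the paper.

```python
# Hypothetical audit sketch: flag labels whose gender distribution in the
# training data is heavily skewed, before a model can amplify that skew.
from collections import Counter, defaultdict

def audit_gender_skew(samples, threshold=0.7):
    """samples: iterable of (activity_label, gender) pairs from your dataset."""
    counts = defaultdict(Counter)
    for activity, gender in samples:
        counts[activity][gender] += 1
    flagged = {}
    for activity, genders in counts.items():
        total = sum(genders.values())
        top_gender, top_count = genders.most_common(1)[0]
        share = top_count / total
        if share > threshold:
            flagged[activity] = (top_gender, round(share, 2), total)
    return flagged

# Illustrative data: "cooking" is 80% women, mirroring the paper's finding.
data = ([("cooking", "woman")] * 8 + [("cooking", "man")] * 2 +
        [("driving", "man")] * 5 + [("driving", "woman")] * 5)
print(audit_gender_skew(data))
# {'cooking': ('woman', 0.8, 10)} -- a candidate for rebalancing or review
```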

Building Trust means Being Transparent

This makes a good segue to the second challenge I put in front of all my teams: are we building trust with our users? A decade ago, when I was working on products like Fitbit, this question meant asking whether we appeared to be a trustworthy company, mostly through branding and aesthetics, but that model no longer works. We’ve all seen Google quietly drop “Don’t Be Evil,” and users are rightfully more wary of the aesthetics of honesty. I believe that to build trust with our users we must now drop the appearances and start creating the functionality of honesty, and that means transparency, clarity, and user autonomy.

I now make it a point to ask my teams: how will a user of our products understand what happens to their data and how we use it? Can they control it? Can they delete it? Are these features easy to find and use?
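As a sketch of what the functionality of honesty can look like at the API level, here is a hypothetical pair of self-service endpoints: one lets users export everything you hold on them, the other honours a deletion request. The routes, user IDs, and in-memory store are all illustrative stand-ins.

```python
# Hypothetical sketch of GDPR-style self-service data endpoints.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real datastore; keys are user IDs, values are their data.
USER_DATA = {"u123": {"email": "ada@example.com",
                      "watch_history": ["intro-to-stamps", "bird-id-101"]}}

@app.get("/me/<user_id>/data")
def export_data(user_id):
    # Transparency: show the user everything we hold, in plain terms.
    return jsonify(USER_DATA.get(user_id, {}))

@app.delete("/me/<user_id>/data")
def delete_data(user_id):
    # Control: honour deletion requests immediately and verifiably.
    USER_DATA.pop(user_id, None)
    return jsonify({"deleted": True})
```

In a real product, deletion would also have to propagate to backups, analytics, and any third parties; the point here is simply that the controls exist and are easy to reach.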

Traces in the sand left by the stones — Racetrack Playa, Death Valley National Park, Inyo County, California, U.S.A.

Once again I’d point to the politics of this, with GDPR making some of these features a necessity for release in the European market, but I also believe this level of trust-building goes beyond the legal bare minimum and touches on a truth all brand designers know: if users trust your product they’re going to keep using it, and if they don’t they’ll quickly abandon it.

It is vitally important that your teams build trustworthiness into the product design phase and prioritise features that offer clarity and control to your users.

Uncertainty is a Very Human Trait

Part of working on invisible products is the reality that what they do and how they do it is completely inscrutable to your average user, and it can be incredibly frustrating when these things don’t work the way we assume they should. Often these products are built as black boxes that have an input and an output and nothing in between.

When I go on YouTube and my recommended videos are all video game trailers because my nephew used my account for a couple of hours, I have no way to tell that algorithm “please, show me something else” other than trying to retrain it by rewatching my own videos. I know that something is happening, but I can’t talk to the machine, and it never makes any effort to talk to me; it just does the simple job it was programmed to do.

My teams always focus on the ways these products — especially the ones that serve content or guide our users — can break, mislead, or misinform, and on how we can build a more human response into the machine when they do.

Source: https://hcri.brown.edu/

Uncertainty is a growing and viable product direction, in which the system admits it doesn’t understand or might not know, and asks for help. The prevailing attitude on many tech platforms has been infallibility (which I believe ties into the aesthetics of honesty), but that’s not how people interact with one another, and it’s not how they like their tools to work in practice. One great example comes from robotics and the work of the Brown University Humans To Robots Lab, where researchers taught a robot to ask for guidance when told to select a single spoon out of a whole row of spoons. When a person says “can you hand me that utensil?” a robot without an uncertainty protocol would select any given spoon from the lineup, while one with the protocol would ask “which one?” A simple change in how a tool communicates makes an interaction much more pleasant and successful, and it’s applicable in tons of situations, from error messages to search results.
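The core of such an uncertainty protocol can be surprisingly small. Here is a hypothetical sketch, not the lab’s actual code: if the top two candidates score too closely, the system asks a clarifying question instead of guessing. The margin and the candidate scores are illustrative assumptions.

```python
# Hypothetical "uncertainty protocol": ask instead of guessing when the
# top two candidates are too close to call.
def respond(candidates, margin=0.15):
    """candidates: list of (item, confidence) pairs, sorted descending."""
    (best, best_score), (runner_up, runner_score) = candidates[0], candidates[1]
    if best_score - runner_score < margin:
        # Scores are too close: admit uncertainty and ask for help.
        return f"I'm not sure -- did you mean '{best}' or '{runner_up}'?"
    return f"Handing you: {best}"

spoons = [("teaspoon", 0.34), ("soup spoon", 0.31), ("ladle", 0.20)]
print(respond(spoons))  # close scores -> asks "which one?"

knives = [("bread knife", 0.90), ("butter knife", 0.05)]
print(respond(knives))  # confident -> acts without asking
```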

Diversity is Functionality

Diversity gets a lot of coverage when we talk about tech, usually in terms of hiring practices or some other human element, and while that is a vital and meaningful conversation, it’s not exactly what I mean here. When I talk about diversity in product-building processes, I mean that it’s incumbent upon your teams to build diversity into your products and to account for the types of users you intend to serve. This is easy when you’re thinking about the interaction design of a screen, but becomes a bit more complex when you’re thinking about something like data.

YouTube Kids videos with adult content

An easy-to-see example is something I mentioned earlier: the YouTube Kids problem that arose from using a recommendation engine built for adults on children. Because kids (especially very young ones) have a fundamentally different psychology, an algorithm that took viewing frequency and served up more of the same created an environment where auto-generated videos, often with explicit adult imagery, could thrive. YouTube didn’t take into account the diverse needs of its users and created a one-size-fits-most solution that ended up being harmful.
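One way to avoid the one-size-fits-most trap is to make the audience an explicit input to the system rather than an afterthought. The sketch below is a hypothetical illustration, not YouTube’s system; the profiles, rating scale, and rules are all assumptions.

```python
# Hypothetical sketch: audience-aware recommendation filtering, so a child
# profile never inherits rules designed for adults.
AUDIENCE_RULES = {
    "child": {"max_rating": "G", "require_human_review": True},
    "adult": {"max_rating": "R", "require_human_review": False},
}

RATING_ORDER = ["G", "PG", "PG-13", "R"]  # illustrative rating scale

def allowed(video, audience):
    rules = AUDIENCE_RULES[audience]
    if RATING_ORDER.index(video["rating"]) > RATING_ORDER.index(rules["max_rating"]):
        return False
    if rules["require_human_review"] and not video["human_reviewed"]:
        return False  # auto-generated content never reaches children unreviewed
    return True

def recommend(candidates, audience):
    return [v for v in candidates if allowed(v, audience)]

videos = [
    {"title": "Elsa cartoon (auto-generated)", "rating": "G", "human_reviewed": False},
    {"title": "Nature documentary", "rating": "G", "human_reviewed": True},
]
print(recommend(videos, "child"))  # only the human-reviewed video survives
```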

The way I advise bringing in diversity is one designers are already familiar with: make personas, build journey maps, conduct research, and respond to the discovered needs of your users. Ask: how are my users’ needs different, and how might my product fail to meet one user’s needs while fulfilling another’s?

Ultimately, much of what I’m advocating here will come as no surprise to designers or product managers. These are the same questions we ask ourselves when we’re building conventional products.

What I would like to see is more of that process brought into the “back-end” world, and more people versed in the tools of product design giving input. The most successful products I’ve worked on have had engineers, business strategists, and stakeholders all working together, asking and answering these questions. Part of advocating for design to be present on these projects is bringing the mindset and tools of design thinking to our teammates, and leaning on their expertise to help us be more creative and more human-centered in our approach.

As the products we work on become increasingly complex with the addition of elements like machine learning and AI, it will be more vital than ever for all of us to communicate and collaborate in more critical and ethical ways, not just to make better products, but also to get ahead of the issues we know are coming from the impact of these technologies.

There is a snippet of a Paul Virilio quote I opened this talk with: “When you invent the ship, you also invent the shipwreck.” It’s something I always keep in mind as I’m working. It is incumbent on all of us to attempt to foresee the shipwrecks, and to do everything we can to make them as infrequent and as harmless as possible. That way, we can all spend more time celebrating the ship.
