Two (more) thoughts on design and AI

Abbey Kos
Published in Doteveryone
4 min read · Oct 31, 2017
Music via Google’s AI Duet.

Richard Pope’s recent piece on design and AI was well timed, at least for me. Along with our technical principal Laura James, I spent last Monday and Tuesday in Berlin with the other members of the Partnership on AI to Benefit People and Society (PAI, which we’re pronouncing to sound like “pie”).

PAI was founded by Amazon, Apple, DeepMind, Facebook, Google, IBM and Microsoft (which, wow), with Eric Horvitz and Mustafa Suleyman as founding co-chairs. The way I understand it, the goal is to use all this collective power and influence to meet the rise of AI with ethics, values and best practices — to build a world that’s a little less Wild West than the one we had when the World Wide Web or social media were in their infancy. Doteveryone, along with organisations like Amnesty International and EFF, is bringing a civil society voice to sit alongside the corporates.

Building on a couple of Richard’s points, here’s some Doteveryone perspective influenced by what we saw on the ground at PAI. I’m hoping this post helps blend our questions and our experiences together — a bit of shared thinking on where we are today and where we need to go.

There’s no single definition of “understandable” to design around.

One of the reasons we have clear food packaging and road signs is that food and roads tend to be static and straightforward. Even the most imaginative food can still be broken down into calories, fat, nutrients, etc.; even the sleekest road connects two places.

But understandability is wiggly, contextual and highly personal. Do I need to know how an AI functions? How it uses my data? What the safeguards are and where humans step in? Should I be able to know that the data it’s using is free from bias? And do my questions change based on the particular AI I’m interacting with? (I care more about the AI sentencing me to prison, let’s say, than about the AI recommending targeted browser ads.)

What’s more, road signs and food are regional. (Check out the American food section in your supermarket to see this in action — all the health claims are taped over because they don’t meet EU rules.) But AI is and will be a global phenomenon. How do we balance Chinese, British, American, etc., expectations around understandability?

This last point came up a fair amount at PAI. Despite the meeting being held in Germany, the crowd was solidly American. Everyone in the room knew the size and ambition of China’s AI programme, but there weren’t many Chinese voices, and there were no tangible plans for building truly cross-cultural guidelines and standards. That’s absolutely understandable, since the partnership’s still in its early days, but it’s a question that will need to be addressed sooner rather than later.

Doteveryone’s doing a lot of work on understandability at the moment (albeit from a British perspective), with more to come. Here’s a primer on where our heads are.

The burden of understandability can’t just be on designers.

Richard talks about designers needing to learn more about “new materials” like version history and software tests. And that’s a fair point — but it can’t be designers’ burden alone to make AI intelligible to the rest of the world.

We’re going to need a lot of people to get a lot smarter about AI: legislators, regulators, business leaders, academia, journalism, civil society. That whole thing I mentioned earlier about minimising the Wild West hinges on our collective ability to figure this one out.

It’s easy to be a bad actor when nobody gets what you’re doing. So as much as we need designers to help explain things to end users at the point of service, we need senior leaders across sectors to get clued into AI’s basics and implications. This not only reduces the pressure on designers themselves, but also emphasises our shared responsibility for making sure it’s not just the profiteers who know what’s happening.

Understanding AI cannot just be down to seeing something on a website paired with a bit of explanatory content. We need journalists to conduct investigations, senior leaders to know the implications of what they’re investing in and regulators to ensure the right standards are in place. This ties in closely with Richard’s fourth point about collective action — how trusted organisations can play a role in helping people know what’s what.

One of the most common topics of conversation at PAI was how to build this sense of shared responsibility. For instance, although self-driving cars are one of the “sexiest” and best-known applications of AI, we felt auto manufacturers may not yet be thinking of themselves as fundamental to these conversations. We need a seismic shift in thinking and leadership — not just among auto manufacturers, but across all organisations, so that even those that don’t make tech understand they still live and operate in a world full of it.

Doteveryone’s also thinking about (and piloting programmes around) digital leadership. Here’s Janet Hughes’s iconic post about building an organisation fit for the digital age, regardless of what industry you’re in.

There’s a prodigious number of ethical AI initiatives and events, as well as people ready to make billions off AI’s potential. Even though the partnership is still in its early stages, we’re glad to be part of one with the industry reach and the motivation to make real change.

We’re keen to see next steps, and to get a sense of the practical changes we as a group can move towards together. Thank you to PAI for having us on the team — and thank you to Richard for the original post.
