Ex Machina, directed by Alex Garland
Image: Allstar/FILM4/Sportsphoto Ltd./Allstar

Above & Beyond: Digital Ethics & Responsible AI with Schibsted

Identifying design practices as inspiration for tackling ethical issues such as bias within the field of AI.

Apr 17, 2020 · 5 min read


Above & Beyond is a new bi-monthly online talk where we (Above.se) bring together some of the world’s most curious minds to inspire, motivate, and jump-start ideas that will improve life for today and tomorrow. Following each talk, we publish a short report (a 5 to 8-minute read) like this one, summarizing the discussion with key “opportunities” for readers to take away.

Most recently, we were joined by Agnes Stenbom from Schibsted for a discussion on Digital Ethics & Responsible AI (artificial intelligence). Schibsted has spent nearly 200 years exploring new opportunities in media innovation, with its most recent interest being how machines can perform tasks that improve people’s lives through the products they use daily.

While machines can be trained to perform tasks that are draining and complex for humans, they don’t always behave as we predict. The real dilemma is not that machines are sometimes wrong but, rather, that the consequences of their inaccuracies can be problematic. For example, the datasets used to train a machine to perform a task, in a process known as machine learning, have been repeatedly criticized for creating racial, gender, and even political bias. Furthermore, the humans who create the algorithms and organize the datasets that train machines carry their own biases with them.
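To make that concrete, here is a minimal, hypothetical sketch (the “historical hiring” records and groups are invented for illustration): a naive model trained on skewed past decisions doesn’t correct the skew, it learns and reproduces it.

```python
# Toy illustration with invented data: a model trained on biased
# historical decisions learns to reproduce the bias, not to fix it.
from collections import Counter

# Hypothetical past decisions: (years_experience, group, hired).
# Group B candidates were hired less often at the same experience level.
history = [
    (5, "A", True), (5, "B", False),
    (3, "A", True), (3, "B", False),
    (8, "A", True), (8, "B", True),
]

# A naive "model": remember the outcomes seen for each group.
outcomes_by_group = {}
for _years, group, hired in history:
    outcomes_by_group.setdefault(group, []).append(hired)

def predict(group: str) -> bool:
    # Predict the majority outcome observed for that group.
    votes = Counter(outcomes_by_group[group])
    return votes.most_common(1)[0][0]

# Two equally qualified candidates get different predictions,
# purely because the training data encoded a biased pattern.
print(predict("A"))  # True
print(predict("B"))  # False
```

Real machine-learning systems are far more sophisticated than this majority vote, but the failure mode is the same: biased data in, biased predictions out.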

The design process itself also frequently encounters bias, so our colleagues at Above were able to offer Agnes insight into how we try to minimize design choices that might marginalize users. For example, our industrial designers and engineers work with established models and standards for ergonomics and usability when designing physical products, whereas our UX teams use immersive research to design with empathy and inclusivity.

Highlighted below are 🖐 five opportunities for using design practices as inspiration for tackling ethical issues such as bias within the field of AI.

1️⃣ Expanding perspective through fiction

During our talk, Agnes mentioned that science fiction can be a productive tool for keeping perspectives expansive, and therefore diverse, among those who work with AI technologies. Above has also worked with similar speculative design frameworks in past projects, creating products and services based on how we envision the future might look. The idea is that if you immerse yourself in far-fetched alternate realities, your perspective on your own reality might grow more diverse, something that can help minimize the aforementioned bias currently present in datasets.

2️⃣ Diverse perspectives over ‘diversity’

While the need for diversity in the tools used to train machines (e.g. datasets) is clear, what we mean by ‘diversity’ is less so. Diversity often tends to focus on fairly broad, categorical definitions like gender, race, social class, religion, and even educational background. But, much like the systems and tools used to train machines, the people who build them are complex too, having been exposed to a myriad of social conditioning factors, not just ones that can be neatly categorized. Both Schibsted and Above have taken a spectrum approach to building diverse teams of humans who create machine-based technologies. Schibsted brought on a philosopher to offer a new perspective on the societal impact of AI. Similarly, Above places value on the diversity of interests and personalities that make up our team rather than the demographic statistics we represent.

3️⃣ Diversity in user testing

A project manager at Above pointed out during our discussion with Agnes that the team developing any kind of product, AI or otherwise, will always have a limited perspective because they work so closely with said product. Accordingly, part of maintaining a diverse perspective means going outside your network and testing with external users. User evaluations can help remedy such diversity blind spots within an insular team by providing much-needed outside perspective.

4️⃣ Making AI approachable

How we talk about intelligent machine technologies needs to be more approachable for everyday consumers, a task that we believe can be accomplished through consumer-driven tools like UX research and empathetic storytelling. For example, when the average consumer hears the term AI, they usually think of some Terminator-esque robot that acts like a human and will destroy the world. What these consumers need to understand is that AI isn’t the Terminator come to life; it is more like the tool within Netflix’s platform that has been trained on large amounts of data to suggest a movie they might like because they recently watched Terminator. Explaining AI in real terms that real people can relate to makes it a lot easier to understand, which, hopefully, increases awareness of things like data privacy.
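For the curious, here is a minimal, hypothetical sketch of that kind of everyday AI (the users, titles, and viewing histories are invented): a recommender that suggests titles watched by users with overlapping tastes, nothing more dramatic than counting overlaps.

```python
# Toy illustration with invented data: the mundane "AI" behind a
# "because you watched..." suggestion. Recommend titles that other
# users with an overlapping viewing history have also watched.
watch_history = {
    "user_1": {"Terminator", "Alien", "Blade Runner"},
    "user_2": {"Terminator", "Alien", "Ex Machina"},
    "user_3": {"Notting Hill", "Love Actually"},
}

def recommend(user: str) -> set:
    seen = watch_history[user]
    suggestions = set()
    for other, titles in watch_history.items():
        if other != user and seen & titles:  # any overlap in taste
            suggestions |= titles - seen     # suggest what they saw and we didn't
    return suggestions

print(recommend("user_1"))  # {'Ex Machina'}
```

Production recommenders are trained on vastly more data, but the spirit is the same: a statistical tool making a guess, not a robot plotting world domination.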

5️⃣ Drawing the line of “responsibility”

As machine-intelligent technologies become increasingly present in everyday products, consumers must better understand how these technologies can be misused. A prominent example is the growing use of deepfake technology to create incriminating pictures or videos of public figures like Facebook’s Mark Zuckerberg. The growing exploitation of AI has created an ongoing debate about who is responsible for regulating such technologies. While our discussions with Agnes did not lead to any conclusive answers, some colleagues at Above who normally design physical products had great insights that can serve as a starting point for navigating topics of ethics & responsibility regarding machine-intelligent tools.

The physical product designers said they felt responsible for any design choices that caused accidental misuse, such as a user dropping a hammer on their foot because the designer chose a slippery material. However, they would not feel responsible if the product was grossly misused, say if someone used the hammer they designed as a weapon. They added that responsibility is further nuanced by factors like how products become available to users, something that designers & developers alike may not have actual control over.

This article was co-written by Renee Semko Gonzalez and Fanny Carlsson. A special thank you goes out to Agnes Stenbom and Schibsted Media Group. We hope to welcome you to Above again!

📣 Let’s continue the conversation: Share your thoughts on AI ethics or your business challenges with us: hello@above.se

✍️ And if you are interested in being part of the next Above & Beyond, send us a few lines here: zoey.tsopela@above.se.

In the meantime, follow us on Medium and on LinkedIn to stay updated on future Above & Beyond insights and reports✌️!
