Ethics in AI — a service design perspective

I wrote this to accompany a talk I’m giving with Hyper Island and R/GA in May. All views are mine and do not reflect those of my employer.

Artificial intelligence is a huge topic and any blog post can only hope to capture a small sliver of perspective. With that in mind, this post explores some of the ethical questions facing those implementing AI (in its broadest sense) within service-based businesses. I’m neither an expert nor an idiot; more an interested party wanting to spark informed debate on the topic.


How much do we need to understand the technology? Pt. 1

There is a lot of ignorance around the topic of AI; if you’ve been to an AI event or read the marketing guff around a new AI-powered product recently, you’ve probably encountered something similar to this:

the Chatbot experience gets smarter each time and evolves automatically to provide a service that is unique and tailored precisely to each customer’s needs

It’s meaningless rubbish that promises everything without explaining anything. Systems will not simply ‘get smarter’ on their own, and chatbots are more often than not elaborate decision trees, similar to those driving IVR phone menus. If you’re working to implement any kind of AI within your service, it’s your duty to understand it, at least at a basic level.
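To make this concrete, here’s a minimal Python sketch of the decision-tree pattern behind many ‘AI’ chatbots. The menu, wording and answers are invented purely for illustration; note that nothing in it learns anything:

```python
# A minimal decision-tree chatbot, much like an IVR phone menu.
# Every question, branch and answer is hard-coded; nothing "learns".
TREE = {
    "start": ("Do you need help with billing or delivery?",
              {"billing": "billing", "delivery": "delivery"}),
    "billing": ("Is your question about a refund or an invoice?",
                {"refund": "end_refund", "invoice": "end_invoice"}),
    "delivery": ("Is your parcel late or damaged?",
                 {"late": "end_late", "damaged": "end_damaged"}),
}
ANSWERS = {
    "end_refund": "Refunds take 3-5 working days.",
    "end_invoice": "Invoices are in your account area.",
    "end_late": "Late parcels can be tracked online.",
    "end_damaged": "We'll send a replacement.",
}

node = "start"
while node in TREE:
    question, branches = TREE[node]
    reply = input(question + " ").strip().lower()
    node = branches.get(reply, node)  # unrecognised input: ask again
print(ANSWERS[node])
```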

I’ve written about this before, but as a reminder here are some key terms to get started with:

  • Robot “is a machine — especially one programmable by a computer — capable of carrying out a complex series of actions automatically”.
[Image: a robot arm (left), Wit.Ai chatbot code (middle) and Nadine the robot receptionist (right)]
  • Machine learning is “the science of getting computers to act without being explicitly programmed”. An example is the algorithms that make Google search results better and better.
  • Neural network is “a computer system modelled on the human brain and nervous system”.
  • Deep learning “is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks”. Examples here are the algorithms powering things like self-driving cars.
  • Supervised versus unsupervised learning: “In supervised learning, the output datasets are provided which are used to train the machine and get the desired outputs whereas in unsupervised learning no datasets are provided, instead the data is clustered into different classes”. A minimal code sketch follows this list.
  • Big data is simply lots of data that can be harnessed using the technology above.
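To ground the supervised/unsupervised distinction, here’s a hedged sketch in Python using scikit-learn (my choice of library and toy data, not the post’s):

```python
# Supervised vs unsupervised learning in a few lines.
# Assumes scikit-learn is installed; the toy data is invented.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X = [[0, 0], [0, 1], [5, 5], [5, 6]]    # four input data points
y = ["quiet", "quiet", "busy", "busy"]  # desired outputs (labels)

# Supervised: we hand the machine both inputs and desired outputs.
classifier = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(classifier.predict([[4, 5]]))     # -> ['busy']

# Unsupervised: no labels; the algorithm clusters the data by itself.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clusterer.labels_)                # e.g. [1 1 0 0]
```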

How much do we need to understand the technology? Pt. 2

“If the human brain were so simple that we could understand it, we would be so simple that we couldn’t” — Emerson M. Pugh, quoted in George E. Pugh, The Biological Origin of Human Values

Once you’ve understood the basic terms, a bigger question arises. Imagine you’re providing advice to customers based on the output of some deep learning: how much do you need to understand about the deep learning itself? Do you need to understand the nuts and bolts of the algorithm and why it suggests X over Y?

The biggest advances in AI will no doubt be the most difficult to understand. So, as service providers, do we need to be open about this, knowing the limits of our knowledge and accepting the ‘black box’? To draw a parallel with another scientific advance: the mechanism behind aspirin, the most widely used medicine of all time, wasn’t understood for some 70 years.

There will be no binary way of approaching this, so I suggest some kind of sliding scale or method for assessing when deeper understanding is required. Maybe something that looks like this:
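Purely as a hypothetical illustration (the inputs, scores and bands below are invented, not taken from the talk), such a sliding scale might be sketched in code like this:

```python
def required_understanding(decision_impact: int, autonomy: int) -> str:
    """Hypothetical sliding scale: the greater the impact of a decision
    on a person and the more autonomously the system acts, the deeper
    the team's understanding of the underlying model needs to be.
    Both arguments are rough 0-10 scores; the bands are illustrative."""
    score = decision_impact * autonomy
    if score < 20:
        return "basic literacy: know the key terms and the data being used"
    if score < 60:
        return "working knowledge: able to explain inputs and outputs"
    return "deep understanding: able to interrogate and audit the model"

# A highly autonomous system making high-impact calls demands the most.
print(required_understanding(decision_impact=9, autonomy=8))
```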

For simple chatbots it’s easy to imagine giving users insight into how and why they operate as they do, but for anything involving deep learning this is problematic. At best a service can explain the input and then the output; the messy bit in the middle will be a mystery to all but the machine. We’ll need to address this by being more transparent than is normally comfortable about the surrounding areas of the service.
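One practical response, sketched below rather than prescribed, is to record everything that crosses the boundary of the black box, so the service can at least be fully transparent about inputs and outputs. The `predict` interface here is an assumption for illustration:

```python
import json
from datetime import datetime, timezone

def predict_with_audit(model, features: dict):
    """Wrap a black-box model so every prediction records what went in
    and what came out. The 'messy bit in the middle' stays opaque, but
    the boundary is fully auditable. Assumes `model` exposes a predict()
    method and that features and output are JSON-serialisable."""
    output = model.predict(features)
    audit_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": features,   # what the service fed the model
        "output": output,    # what the model decided
    }
    print(json.dumps(audit_record))  # in practice, append to an audit store
    return output
```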

How much do customers need to understand the technology?

A logical follow-up question is how much your customers or users need to understand the technology, and specifically how you’re using it to provide the service. We all have an idea that Google does some clever stuff to make search good, but do we need to know why search result 1 ranks above search result 2? Arguably not. But if you lost out on preferential health treatment to a seemingly identical patient because of some algorithmic calculation, you’d want to know why.

Services will need to decide how transparent to be with customers about why they’re making the decisions they are. This is not a new problem, but it’s made more complicated by the fact that much of what happens within something like a neural network is beyond simple human comprehension.

Again, a simple framework might help assess options:
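Again, purely as a hypothetical sketch (the impact categories and disclosure levels below are my invention, standing in for the missing framework):

```python
def disclosure_level(impact: str, model_is_explainable: bool) -> str:
    """Hypothetical framework for deciding how transparent to be with
    customers about an algorithmic decision. Categories are illustrative."""
    if impact == "high":    # e.g. health treatment, credit, legal outcomes
        level = "full disclosure of inputs and outputs, plus human review"
        if not model_is_explainable:
            level += ", and honesty that the model itself is a black box"
        return level
    if impact == "medium":  # e.g. pricing, queue prioritisation
        return "explanation of inputs and outputs available on request"
    return "lightweight notice that an algorithm is involved"

print(disclosure_level("high", model_is_explainable=False))
```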


At what cost?

In his excellent piece on the ‘myth of superhuman AI’, Kevin Kelly examines ‘wetware’, computing built from wet biological tissue rather than silicon:

The costs of creating wetware is huge and the closer that tissue is to human brain tissue, the more cost-efficient it is to just make a human. After all, making a human is something we can do in nine months.

There’s a parallel here with a trade-off currently playing out in every service-based industry — automation versus humans. Again, this is not a new thing (automation drove the industrial revolution), but AI brings inflated promises of superhuman abilities that threaten to do away with the need for humans altogether.

In pursuit of this promise, large amounts of money and time are being spent attempting to automate things we can already do very well: training algorithms on the work of humans, for example, so that the AI can replicate that task in future.

where an iPod was a better Walkman, a Kindle is not a better book — Ben Evans

AI is often seen as a solution for customer service, but ask any customer what they want and top of the list will usually be the ability to talk to a human being. Are chatbots placing technology before user need in the same way that the Kindle has (thousands of books on one device over the tactile feel of paper and the ability to lend books to friends)?

[Image: DoNotPay helping with a PPI claim]

The awkward truth for many organisations is that the most inefficient processes, and therefore those most ripe for automation or AI, happen far from the customer and would probably impact the very people tasked with making the decision to automate. The success of bureaucracy-killing bots like DoNotPay bears this out.

We as service designers will be able to influence a series of decisions where AI will be touted as an alternative to an existing human process. In answer to the question ‘at what cost?’, I’d encourage a mindset based on the following:


Conclusion

AI is developing at such a brilliant pace it’s hard to draw a meaningful conclusion other than the importance of remaining curious. While the biggest leaps in AI will be made by tech giants and (probably) secretly deployed by governments, we in the design community have immense influence over how it manifests in our daily lives. Our critical thinking around AI will determine how much control we retain over it, and our knowledge will ensure it doesn’t simply ‘happen’ to us.


Further reading and other random bits

If you’re interested in the topics above, I’d recommend the following:

  • AI lols from xkcd

Some other questions I’m wrestling with:

AI as colleague — when do we need to start considering AI as a colleague? A sensible point would be when our use of AI goes beyond our human comprehension and we need some kind of translation. How will already messy human dynamics like reward and collaboration cope when a new colleague joins the team who never sleeps, never demands payment and never shows emotion?

Uncanny valley — all the user testing I’ve done with chatbots suggests that the majority of people would prefer to know they’re talking to a robot rather than a pretend human. And yet so much AI conversation fixates on personality and other human characteristics. What if we focussed on function over form in the design of AI services, in the same way that manufacturing has done?

Computer says no — the incredible Tom Chatfield talks about AI as the inflexible partner in the relationship, forcing us humans into ever-more rigid patterns. When do we call this out and shift it to “human says no”?

AI as crutch — arguably humans were more capable in the stone age, able to hunt, gather etc. With more and more technology we devolve more and more responsibilities until we hit a WALL-E-type future. We need to stop and question each new ability-outsourcing and ask what we’ll lose as well as gain.


That’s it. I’d love to start some debates on this stuff so please let me know if you agree, disagree or just want to talk about anything here.