Designing ethical artificial intelligence

Design Voices · Jun 21, 2018

By Jivan Virdee and Hollie Lubbock

There are very different views on what the future of Artificial Intelligence (AI) looks like. The "celebrities" of the tech industry are often seen battling it out, but those of us whose daily lives focus on designing digital products should also get involved in the debate. We spoke at this year's Cannes Lions to share our thoughts and prompt discussion. If you missed it, here it is in a nutshell:

Surprise!

Data is shaping our digital services. We've known this for a good while now, but only recently have its potential pitfalls, dangers and shortcomings found the limelight. When people started using their platforms for nefarious purposes, the big tech companies were slow to respond: not because they don't care, but because such activities were unforeseen when their products launched, so there were no measures in place to deal with them efficiently.

Technology is moving so fast that legislation can't keep up with the rate of progress. As a community, digital designers can regulate ourselves far more quickly. We should do exactly that, and hold ourselves to a higher standard at the same time.

Human-centered and humanity-centered

Technology should be a force for good. It should be something that propels us forward to become the people we want to be and live the lives we want to live.

The key lies in being not only human-centered, but also humanity-centered. As we design a new service or product, we need to imagine the worst-case scenario for humanity if every single person in the world were to use it. Many of the negative consequences surrounding some of today's highest-profile services were unforeseen, so nothing had been put in place to mitigate them. We must now make it our business to design accountability measures into our products and services.

We should look at three key areas when designing with human values in mind:

1. Fair and transparent data science

How do we build transparency into our products so that people understand why decisions are made, and can then feel confident to question artificial intelligence results rather than blindly believing the “magic”?

2. Trust and human-machine collaboration

How do we build a foundation of trust, and design true collaboration between people and AI so that we can work together without feeling threatened?

3. Responsibility and accountability

Even when we work in hybrid human-machine systems, we humans bear the ultimate responsibility for ensuring they operate safely and for the common good.

Every system we create inherits our bias. Unless we assume that all systems are biased and actively look for it, we won’t discover it until it’s out in the wider world, creating issues. One way to reduce bias in the design process is to ensure the teams behind the service or product bring diverse attitudes, experiences and opinions.
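To make "actively looking for bias" a little more concrete, here is a minimal sketch of one common first check: comparing a model's approval rate and its true-positive rate across demographic groups. The function name and the audit data are hypothetical, purely for illustration; large gaps on either metric are a signal to investigate, not proof of fairness or unfairness on their own.

```python
# Minimal bias-audit sketch: compare outcome rates across groups.
# All names and numbers are hypothetical illustration, not a real audit.

from collections import defaultdict

def rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples,
    where predicted/actual are 1 (positive outcome) or 0."""
    stats = defaultdict(lambda: {"n": 0, "approved": 0,
                                 "qualified": 0, "qualified_approved": 0})
    for group, predicted, actual in records:
        s = stats[group]
        s["n"] += 1
        s["approved"] += predicted
        s["qualified"] += actual
        s["qualified_approved"] += predicted * actual

    report = {}
    for group, s in stats.items():
        report[group] = {
            # Demographic-parity check: how often does each group get a "yes"?
            "approval_rate": s["approved"] / s["n"],
            # Equal-opportunity check: of the genuinely qualified,
            # how many did the model approve?
            "true_positive_rate": (s["qualified_approved"] / s["qualified"]
                                   if s["qualified"] else None),
        }
    return report

# Hypothetical audit data: (group, model_decision, ground_truth)
decisions = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]

for group, r in rates_by_group(decisions).items():
    print(group, r)
```

Run on the sample data above, this reports an approval rate of 0.75 for group_a against 0.25 for group_b, exactly the kind of gap worth chasing down before a product ships.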

As Mo Gawdat has argued, we have to work to make artificial intelligence not just intelligent, but focused on the right data sets so that it is ethical and good. Think of artificial intelligence as a three-year-old child. If that child is regularly exposed to strong language, to violence, to biased views, how will they assimilate that information to shape their behavior? In the same way, we cannot expect AI systems to discern good from evil on their own; they reflect what we feed them.

The echo chamber

A close relative of bias is the echo chamber. Most of us no longer live in small communities; even those who physically live in remote villages are part of a huge metropolis online, with friends in far-flung places who communicate daily via social media. Yet even with easily accessible worldwide connections, our own circles are likely to expose us to a limited set of ideas and opinions.

Add to that the unseen and misunderstood algorithms that surface content they know aligns with our views, and we end up in a bubble (and we might not even know it). If we want to read opposing opinions, we have to know they're out there and go looking for them.

AI and jobs

Plenty of headlines have focused on the damaging impact artificial intelligence might have on careers. The fact is that technology will simply continue to do what it has always done to the professional landscape: drive a slow, continuous shift rather than an overnight shock.

We’re not talking about using AI to automate jobs across the board — we’re not even talking about automating one person’s entire job. We need to look at the areas that are most ripe for automation and consider what we’re doing for those whose careers will be affected.

As a start, are we educating the next generation in the fields that are most likely to lead to steady, long-term careers? What do machines do best? Where do they fall short? Crucially, machines struggle most with tasks that require creativity, empathy and judgment: these are the skills that will best equip our young people for AI-proof careers.

It’s up to us

The future of AI is something of a mystery, but those who have pioneered digital services and data science have exposed areas we can and should work on to ensure that technology contributes to our world in a positive and meaningful way. In the words of Dan Hon, “No one’s coming. It’s up to us.”
