Build a Minimum Ethical Product

Because if you don’t, it will kill you.

Catalina Butnaru
Applied Artificial Intelligence
7 min read · Sep 30, 2018


Imagine this. Your state-of-the-art entertainment AI is worth £21Bn, but it has turned against you. Within 30 days of using it, every person’s lifestyle starts to slowly degrade. The change is imperceptible at first, and by the time anyone notices it, it’s too late.

Is it too late to stop and fix what you’ve built?

A few users figured out that, through reinforcement learning, your algorithms had been encouraging people to adopt aggressive virtual avatars as alter egos, because those characters were more likely to stumble upon a myriad of rewards within the platform, which implicitly locked people into the habit of checking in online multiple times a day. As the line between alter ego and reality blurred, these avatars smudged real life with a chronic disinterest in normal activities and self-improvement.
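To see how mundane the mechanics are, here is a toy sketch in Python of the kind of feedback loop described above: a bandit-style learner that recommends whichever avatar style maximises engagement. Every name and number below is invented for illustration; the point is that if aggressive avatars happen to trigger more reward events, the learner steers people toward them with no malice anywhere in the code.

```python
import random

# Toy engagement-maximising recommender (illustrative only).
styles = ["friendly", "neutral", "aggressive"]
value = {s: 0.0 for s in styles}   # estimated engagement per avatar style
count = {s: 0 for s in styles}

def observed_engagement(style: str) -> float:
    # Stand-in for measured user behavior; in this fiction, aggressive
    # avatars happen to unlock more in-platform rewards.
    base = {"friendly": 1.0, "neutral": 1.2, "aggressive": 2.0}[style]
    return random.gauss(base, 0.3)

for _ in range(10_000):
    # Epsilon-greedy: explore 10% of the time, otherwise exploit.
    s = random.choice(styles) if random.random() < 0.1 else max(value, key=value.get)
    reward = observed_engagement(s)
    count[s] += 1
    value[s] += (reward - value[s]) / count[s]  # incremental mean update

print(max(value, key=value.get))  # almost always "aggressive"
```

Run it and the learner settles on “aggressive” nearly every time. The objective did all the work.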

Once this rumor got out, your competitors seized on it and drummed up people to close their accounts. 23% of accounts were erased overnight. Could you have prevented this?

The black swan of humanity

Outlier events, such as Facebook’s security breaches, Brexit, or Bitcoin soaring to $60,000, do not occur more frequently in some industries than in others. They are as improbable in finance as they are in the food industry. However, as much comfort as we can take in the fact that a black swan event can hit any market at any time, this is changing.

AI is different. Although it’s still considered a theoretical existential threat, DARPA is investing $2Bn in AI Next, the largest research program exploring ways to “enhance the security and resiliency of machine learning and AI technologies, […] and [to explore] the ‘explainability’ of these systems.”

You probably know this, but actual existential threats don’t look great on screen: they don’t have a square jawline, nor do they repel bullets with their beefed-up cyber-bodies. They don’t even go rogue.

They’re plain boring and could look like zip codes, time series, or ID markers. They don’t even have an agenda, just a higher-than-usual learning rate. They’re probably flying under the radar as we speak, while you keep your eyes glued to Sophia’s alleged Twitter feed.

Earlier this year, twenty-six experts co-authored a groundbreaking report and more or less agreed that the most likely malicious applications of AI, the ones that pose a serious threat, are…

“speech synthesis used to impersonate targets, finely-targeted spam emails using information scraped from social media, or exploiting the vulnerabilities of AI systems themselves (e.g. through adversarial examples and data poisoning).”

But

“The problem with experts is that they do not know what they do not know” — N.N. Taleb

And neither do I. What we do know is that AI is humanity’s most certain black swan. And you might be the only one able to stop it. Here’s why.

The black swan and the MEP

You’ve just closed a $2M round and finally hired one of the most brilliant minds who made waves at NIPS with his autonomous decision-making model.

Insurance Company X is ready to sign that check, and your only concern this year is whether transfer learning is good enough for building a behavioural model of every single customer of Company X.

What you don’t know is that next year you’ll have to make the decision to shut down your AI. It exceeded everyone’s expectations in its ability to predict financial shocks in the lives of middle-earning insurance subscribers, but it also started to hide income protection packages from their feeds. Unable to access emergency funds for more than a year, most of them took their lives.

If THAT gets found out, you’re dead too.

If you want to be smarter than that, start today. Get your Head of Product and your brilliant AI engineers in the same room with your sexiest new hire: the Ethical Design Lead.

And ask them to build an MEP, not an MVP.

Minimum ethical product

Buzzwords. Can’t stand them, can’t do without them. MEP (minimum ethical product) came to life earlier this week when I met the founder of Constellation AI. He promptly encouraged me to write about it. Thank you, Tom.

Before any product hits the market, founders usually decide what the MVP looks like. Well, since AI-powered products are expected to turn into black swans within the next few decades, my assertion is that an MEP is very much needed instead of an MVP.

Even if existential risks aren’t truly keeping you up at night, the anxiety of turning your dream and money into a huge failure should.

So please hear my MEP pledge.

Seriously though, MEP

A year ago, I started working on a framework that helps you integrate ethical thinking and decision-making into the agile delivery of AI products. That was too early; today, however, I see lots of tools and toolkits popping up out of the blue, from various institutions and consultancy boutiques. That’s not what I’ve done.

My priority is to maintain a neutral and unbiased stance by refraining from adding a commercial or advisory purpose to my framework.

While presenting this framework at We Are Developers and Codiax, I experimented with several ways of helping product owners, product managers, and engineers adopt it. There’s no single prescribed way to do this; what follows is just one way.

You’re free to use it with any Ethical Standard out there, and I’ve listed several such ethical principles here if you’re curious. It is not for me to decide whether you should use one Ethical Standard in AI over another. You might decide to use your own moral code, developed by your advisors and users, or embed one in the system and let it learn and evolve. You decide.

Oftentimes, if I so much as asked who takes responsibility for integrating Ethical Standards into AI, the air would quickly thicken with tension.

So let’s defuse that anxiety right now and agree on this: you, the CEO, get your best people in the room, some of whom might be engineers, others designers and user-research advocates, and together you become the Ethical Entity nudging your product either toward compliance with ethical standards or toward a more ethical version 1.0 before it hits the market.

Start with how your company is perceived

The first step towards applying Ethics to the delivery and design of your AI product is to answer the following questions:

How do people perceive your AI — as a system with embedded ethical rules or as a technology that mediates ethical or unethical behavior?

Embedded ethics refers to hard-coding moral agency or moral engines into AI. This could look as radical as defining moral limitations in autonomous weaponised AI, or as trivial as training your chatbot to identify aggressive, racist, or inappropriate behavior and respond with something firmer than “I didn’t quite get that”.
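As a rough sketch of what embedding a rule could look like, here is a minimal, hypothetical chatbot response path in Python. The keyword matcher is a toy stand-in for a trained toxicity classifier, and all the names are mine, not any real library’s:

```python
# Illustrative only: the ethical rule lives inside the system's response path.
AGGRESSIVE_MARKERS = {"idiot", "stupid", "shut up"}  # toy stand-in for a trained classifier

def is_aggressive(message: str) -> bool:
    text = message.lower()
    return any(marker in text for marker in AGGRESSIVE_MARKERS)

def generate_reply(message: str) -> str:
    # Stand-in for the chatbot's normal response model.
    return f"You said: {message}"

def respond(message: str) -> str:
    # Abusive input gets a firm, explicit reaction instead of a generic fallback.
    if is_aggressive(message):
        return "I won't engage with abusive language. Let's keep this respectful."
    return generate_reply(message)

print(respond("shut up, bot"))
```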

Applied ethics refers to developing powerful AI but limiting how it is used. The system does not have moral rules, nor does it make any decisions, but it can mediate ethical or unethical behavior. For example, an algorithm that predicts purchasing behavior before customers are even aware of their choice could be used to increase the frequency of transactions, or to help people manage their spending habits.
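The distinction fits in a few lines of illustrative Python: one model, two application layers. The model and numbers below are placeholders; the point is that the ethics live entirely in how the prediction is used.

```python
def predict_purchase_likelihood(recent_spend: list[float]) -> float:
    # Placeholder for a trained purchasing-behaviour model.
    if not recent_spend:
        return 0.0
    return min(sum(recent_spend) / (100 * len(recent_spend)), 1.0)

def nudge_to_buy(recent_spend: list[float]) -> str:
    # Use A: exploit the prediction to drive transaction frequency.
    if predict_purchase_likelihood(recent_spend) > 0.7:
        return "Flash sale just for you. Ends in one hour!"
    return ""

def warn_about_overspending(recent_spend: list[float], monthly_budget: float) -> str:
    # Use B: the same signal, turned toward the user's interests.
    likely_to_buy = predict_purchase_likelihood(recent_spend) > 0.7
    if likely_to_buy and sum(recent_spend) > monthly_budget:
        return "Heads up: you're on track to exceed your budget this month."
    return ""
```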

How do people perceive your ethical standards — as normative or descriptive?

Normative ethics refers to principles, rules, and values formally endorsed and imposed by auditors or by regulatory and legal institutions. Descriptive ethics refers to the body of values and ethical beliefs that make you who you are. For example, the life of Mahatma Gandhi is seen as the highest form of expression of ethical thinking in one’s lifetime, but nobody will fine you for not behaving like the spiritual leader.

What works for you won’t work for society

You’ve figured out where your company sits between normative and descriptive, and between embedded and applied. Now what?

Well, from here onwards it gets progressively easier, then progressively harder. If you are closer to embedded ethics, then reading IEEE’s Ethical Standards and seeking to comply with its transparency and explainability standards is the most obvious choice for you.
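What “seeking to comply” might look like day to day is less glamorous than it sounds. One example, purely illustrative and with made-up weights: log every automated decision together with the factors that drove it, so it can be explained and audited later.

```python
import json
from datetime import datetime, timezone

# Illustrative linear scorer; a real model would be more complex,
# but the audit-logging habit is the same.
FEATURE_WEIGHTS = {"income": 0.5, "claims_history": -0.3, "tenure_years": 0.2}

def decide_and_log(applicant: dict) -> bool:
    contributions = {k: FEATURE_WEIGHTS[k] * applicant[k] for k in FEATURE_WEIGHTS}
    score = sum(contributions.values())
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": applicant,
        "score": score,
        # Factors ranked by absolute contribution, for explainability.
        "top_factors": sorted(contributions, key=lambda k: abs(contributions[k]), reverse=True),
        "decision": score > 0,
    }
    print(json.dumps(record))  # in practice, append to an immutable audit store
    return record["decision"]

decide_and_log({"income": 1.0, "claims_history": 2.0, "tenure_years": 3.0})
```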

If you’re closer to applied and descriptive ethics, then developing an AI that learns what we humans care about through millions of hours of simulations of ethically controversial situations might be your best bet.

Whichever direction you take, you can use HAI, the framework for applying ethical thinking to the design and development of AI, to get you closer to that Minimum Ethical Product.

The only critical challenge you will face is deciding on whether your ethical thinking is more aligned with business goals or with society’s expectation and hope that AI will support human wellbeing and flourishing.

Where do you stand?

If you’d like to learn about my methodology, do get in touch. You have nothing to lose.

But if you don’t, your AI might kill you.

Catalina Butnaru
City AI London and Women in AI Ambassador | Product Marketing | AI Ethics | INFJ