Why Product Managers need to think about AI Ethics

Catalina Butnaru
Published in WomeninAI
6 min read · Sep 16, 2018

If you ritually include user feedback in your sprints and constantly worry about building “good” products, then Agile is your mantra. But if you’re deploying cognitive systems into production, user feedback, error rate, and performance are no longer complete measures of “good” AI-first products.

You also need AI Ethics.

“printed sticky notes glued on board” by Daria Nepriakhina on Unsplash

Product decisions and ethical consequences in the history of the Internet

I don’t personally know of a Product Manager who hasn’t once thought about how digital products change people’s lives, for better or for worse.

Let’s go through some examples, from companies you have surely heard of:

  • Reddit’s former SVP of Product Dan McComas believes his time at Reddit “made the world a worse place”. If you’re familiar with the incendiary accusations of censorship and online abuse brought against Reddit and its communities, you probably understand why he felt that way.
  • Online grocery leader Ocado includes a “Green van” option at checkout, giving people the chance to opt-in for environmentally friendlier delivery. 89% of people interviewed said they’d be willing to choose that option over faster deliveries. I personally do.
  • Both YouTube and Netflix have rolled out auto-play functionality on videos — TV series and videos that are part of a “Playlist”. It’s been proven that a significant proportion of people end up watching the entire video once it starts. This is, however, one of those “chocolate at the till” mechanisms that rewards and automates unhealthy addictions.

Behind every piece of code that drives our decisions is a human making judgments — about what matters and what does not. — Bidhan L. Parmar and R. Edward Freeman

But I’m just the Product Manager?

AI is technically built like any of the products you’ve created before, but it’s fundamentally different.

AI is built like a tool, but it behaves like a force, learning from behaviour to minimise entropy when interacting with a system.

For example, a marketplace is a system where information about goods is crucial to potential buyers. Under normal conditions, there is a high level of uncertainty (entropy) associated with purchasing behaviour.

With AI, one can design a recommender engine that learns how behaviours are associated with transactions, and starts suggesting more convincing information to customers.

The actual tool is the marketplace; the force is AI. That’s narrow intelligence, seeking to reduce the entropy of information.
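The entropy framing above can be made concrete with a short sketch. This is a toy illustration with hypothetical numbers, assuming Shannon entropy as the measure of uncertainty: before a recommender engine, buyers spread their purchases almost at random across products; after it, purchases concentrate on suggested items, and the measured uncertainty drops.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical marketplace with 8 products.
# Before the recommender: purchases are spread uniformly (maximum uncertainty).
before = [1 / 8] * 8

# After the recommender: purchases concentrate on a few suggested items.
after = [0.5, 0.2, 0.1, 0.05, 0.05, 0.05, 0.03, 0.02]

print(f"Entropy before: {shannon_entropy(before):.2f} bits")  # 3.00 bits
print(f"Entropy after:  {shannon_entropy(after):.2f} bits")   # lower
```

The specific probabilities are made up for illustration; the point is only that a system which successfully steers behaviour shows up, mathematically, as a reduction in entropy.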

As a product manager, it is your job to exercise good judgement in designing and engineering this online marketplace, such that both customers and sellers have the best possible experience.

It is also your job to have a fair response to ethical concerns coming from people about how AI is applied to influence customers’ behaviour (the system) to increase the number of transactions.

Why is it your job to build ethically aligned AI?

Chances are you know how to tell the good from the bad in the world of digital products, and I’m not referring to good design or engineering.

“The internet is a giant convenience store for human desire, which is good and bad. We desire and need lots of things, and the companies that tend to succeed are those that make it easier for us to get what we want; not much else. But that can also apply to higher needs: to get smarter or to learn”. — Evan Williams

If you happen to work for a company that thrives on doing a whole lot of good while retaining comfortable margins, then you’re either very lucky or haven’t hit the snooze button yet.

If you’re like most other Product Managers, you’ve been told that or you personally believe that “ethics don’t have an impact on the bottom line”.

I naively assumed that ethically minded product owners would readily volunteer to take responsibility for implementing ethics into their products. I was proven wrong.

While talking with product managers about their views on AI Ethics, I realised they aren’t resisting the thought of building ethical AI; instead, they worry about their increasing responsibilities.

Most product managers simply do not know how, or don’t have the time, to take on more responsibility, especially the kind they’re not comfortable with. That’s fair. However, if you don’t take the time to address ethics in building AI, you risk watching your product “behave” in ways people will not respond well to.

Where do Ethics come into play? It turns out that AI Ethics gone wrong could easily kill your product. Not because inherently evil products get shut down by law, but because people will refuse to use them. There will come a day when more and more users awaken to the realisation that somebody, at some point in the roadmap, could have said or done something. And that could’ve been you.

The fate of AI adoption is built on trust — especially in high-consequence systems. People expect you to make Ethics your priority, if not your job.

“white building with data has a better idea text signage” by Franki Chamaki on Unsplash

Product Managers ultimately want to make the perfect decision, which usually involves looking at a problem from multiple angles at once: product-market fit, roadmaps, processes, technical debt, commercial viability, available runway. Similarly, AI Ethics should be part of your responsibility, even if that means raising the question of who should play the role of an Ethical Lead, above and beyond GDPR compliance, security, and data ethics.

It is within your power to ask for an ethics board to provide guidance on AI Ethics, just as you’ve always been adamant about getting as much user feedback as possible on new features before sending them to production.

What does the future look like?

Traditionally, a product manager’s role was to influence one core business or product metric at a time: MRR, churn, adoption, sprint size and duration, resource management.

I believe that we will soon see product managers transition from managing linearly programmed models to cognitive systems slaloming through thick layers of data describing human behaviour. In that case, their responsibilities should broaden and change.

Product managers will have to make judgement calls that data scientists are not trained in, and that stakeholders are not preoccupied with. Some of those judgement calls will be about how consequential the use of AI should be.

Others will be about when to use AI. Some problems should never be solved by AI. Data isn’t always the best measure of what we want it to measure. Hence, using AI to solve a problem that is described with incomplete or poor data will only lead to disastrous results.

However, assessing AI readiness and preparing your company for applied AI is only the operational side of best practices in developing and applying AI. The ethical side of best practices is intertwined with the Agile delivery of AI-enabled products.

I’ve developed a method that can help you integrate ethics into the delivery and production of AI-driven products, from start to finish. It’s been designed to include several exercises, and to cover enough research into AI Ethical Standards, such that your product is less likely to fail when it hits the market. Write back if you found it useful!

“Do Something Great neon sign” by Clark Tibbs on Unsplash


Catalina Butnaru

City AI London and Women in AI Ambassador | Product Marketing | AI Ethics | INFJ