Developing Long-term Responsible AI

I’ve been working with multiple start-ups in fractional COO, CSO and CPO capacities. Each one is building amazingly unique AI solutions enabling circular economy and making our lives better, along with a profitable business model. It is an incredibly rewarding experience. And then, there are conundrums…

The common core value at these firms is actively ensuring long-term responsible AI solutions. We believe in pausing to evaluate how future generations will live with what we are building using present-day datasets. These datasets may have unconscious biases built right into them; however, they are also the best options available at this time. This means that any AI’s ability to self-correct and remain unbiased will be limited, at least until it collects and digests more multi-dimensional data. This implies we need to deeply ponder and strategically approach a key question:

Who is actually accountable for helping the AI course-correct and evolve, if needed?

During one of the ‘transform this’ TED Circles, we debated: “who should be accountable for a physical robot’s actions?” The most popular proposed option was: “there should be an accountable and liable human operator at all times.” Setting aside the bias towards trusting human operators over machines, this is definitely true of most solutions today because the underlying technology is at an early stage. Spot from Boston Dynamics is always monitored and operated by a human holding the wireless remote when it ventures out to paint the town red!

I wonder if AI solutions deviate from physical robots in this aspect. We develop AI solutions to automate, simplify and remove biases so that we do not have to micro-monitor process-driven tasks. Most of the time, these solutions are also invisible to the end-users, operating behind the scenes. The complexity rises quickly when we integrate various AI solutions, developed independently of each other, into a single end-to-end business use case. With a fully automated AI-enabled solution, it is possible that various AI components are talking to each other without direct intervention or observation from a human. The data humans receive will likely reflect overall use-case performance; this is what provides efficiencies and speed to a typically manual business use case. In this case, we need to clearly define accountabilities and governance to ensure issues, risks and biases can be quickly corrected before they compound, leading to possibly severe unintended consequences.

Human and manual systems are always going to be biased. It is the nature-gifted human condition to create trends in our minds using socially-accepted conclusions. We evolved this way to quickly identify potential failures or risks by generalizing them for the benefit of our immediate tribe. How much bias we carry is a factor of the exposure a particular group has had to different types of perspectives and situations. In the global context where we now wish to operate, there is a way to balance this with machine learning and human integration. Ultimately, it is more scalable to program bias out of a machine or dataset than out of a mind, because machines and data can be deployed multiple times. This can be achieved provided we have ironed out “a sufficiently unbiased MVP” as a starting point. In addition, we need active governance to identify and manage biases with accountability. Otherwise, inclusive innovation will remain out of reach because machines will simply amplify our existing biases rather than reduce them. What do you think:

Who should be ultimately accountable for an AI system’s actions?

Vote on our LinkedIn poll and share your thoughts. Also join us for a free live workshop on October 30, 2021, TED Circles: Optimism — Unbiased AI?

Sign up for live & on-demand workshops hosted by ‘transform this’.

Join the Super Community for a wealth of transformation and innovation on-demand workshops.

© 2021 Emerald Technology Group Inc. All rights reserved.

This newsletter and the entirety of its contents, including all images, text, frameworks, articles and operating names, are protected under copyright, trademark and other laws in Canada and/or foreign countries. For permissions, please contact:





Gunjan Syal

I build, innovate and scale success in a way that the results are visible, measurable and repeatable, through responsible innovation.
