Responsible AI: Everyone Knows Why, but No One Knows How!

You can develop and implement AI responsibly, but only if you understand the risks, and understand them well and specifically for your use case!

Anand Tamboli®
tomorrow++

--

Future tech always brings us two things: a promise and consequences. It is those consequences that responsible AI is about; and if it is not, then it should be.

Much of our daily life as consumers is intertwined with artificial intelligence. There is no doubt that artificial intelligence is a powerful technology. However, power comes with responsibility!

While many appreciate the ease and convenience of these solutions, AI is still an emerging technology, and we must approach its explosive growth with proper care and preparation. How do we tackle the challenges it presents, and how do we make sure that it does what it is supposed to do?

Enter responsible AI

Over the last couple of years, there has been some movement around making the use of AI technology responsible and avoiding ethical issues. The term responsible AI came into existence to capture this intent.

Now, almost everyone knows why responsible AI is needed, but how to achieve it is still not clear. The concept remains very abstract for most of us.

While speaking at a conference last week, I asked the audience what being responsible meant to them. Several responses were along the lines of being ethical, careful, controlled, cautious, reasonable, and accountable. All these answers point towards one common aspect: risk management, or risk-averse behavior. Rightly so, because being responsible in real life often means exhibiting sane, rational, and controlled behavior.

The problem is that if the risks of being irresponsible are challenging to quantify, they are difficult to control too.

If the risks of being irresponsible are difficult to quantify, they are difficult to control too!

The question, however, is whether it is possible to create responsible AI at all.

The fact is, you can develop and implement AI solutions responsibly. First, however, you must understand the risks better, and specifically in the context of your use case.

Repeat mistakes should be the first to go

In the past 10–15 years, technology has made much progress. Brick-sized phones are now as slim as a small stack of paper, and a phone call that once cost $20 now costs less than $0.20. Our laptops running at 2.9 GHz carry the computing power of at least 15 desktop computers that, about 20 years ago, ran at 240 MHz, and that too in Turbo mode.

However, from the perspective of technology adoption, not much has changed. We are still making the same or similar mistakes. Technology vendors are in a constant haste to push things out, at the expense of quality.

I have witnessed several technology failures in the last few years, and they all had something in common: there was a pattern of mistakes, and almost all of them were repeats.

A few of those repeated mistakes are:

  1. Flawed hypothesis: The majority of businesses want to use technology, but for the wrong reasons.
  2. Unattended near misses: When something goes wrong with a tech product or implementation, it is usually patched with a quick fix, but such incidents are seldom treated as near misses and eliminated at the root.
  3. Bad data: Need I say more? Bad data has become such an epidemic that we have become somewhat insensitive to it.
  4. Performance degradation: We often design systems as point solutions, and the broader ecosystem is ignored. When the ecosystem changes, the developed system drifts, and it keeps drifting as changes continue (see the drift-check sketch after this list).
  5. Poor design: System designs are often either accuracy-focused or speed-focused, but never both. The accuracy focus sometimes makes the solution too bulky to use in real-life situations. Conversely, speed-focused solutions often lack adequate coverage or accuracy.
  6. Human-machine interaction issues: Those who design systems do not understand how users use them, and those who understand how users use them do not design the systems. As a result, human-machine interaction is always a weak link.
  7. Input malfunctions: With the advent of the cloud, everything has become well connected. Contemporary systems are complex and intertwined, which means that making one system robust doesn't necessarily save users from failures. Upstream and downstream systems are still fallible, and any malfunction there can break the entire operation.
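
One way to catch mistake #4 early is to monitor production inputs for drift against the training-time baseline. Here is a minimal sketch of such a check; the two-sample Kolmogorov-Smirnov test, the synthetic data, and the alpha threshold are illustrative assumptions, not something the article prescribes.

```python
# Minimal data-drift check: compare live feature values against the
# training-time baseline using a two-sample Kolmogorov-Smirnov test.
# The synthetic data and alpha threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live sample likely drifted from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha  # small p-value => the distributions differ

# Baseline captured at training time vs. a recent production window.
rng = np.random.default_rng(seed=42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=1_000)  # the mean has shifted

if has_drifted(baseline, live):
    print("Input drift detected: retrain or recalibrate the model.")
```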

Nonetheless, the good part is that since these are repeat mistakes, we already know the solutions to them. If we address these issues, we can avoid several risks and the failures that follow; ergo, we can be more responsible!

If we avoid repeat mistakes, we can avoid several risks and subsequent failures, and become more responsible.

What will it take to streamline responsible AI?

Since we have established that being controlled, risk-averse, and sensible is the key to being responsible, making these attributes quantifiable must be our priority.

If we can’t measure it — we can’t improve it!

There is a need for a structured approach to AI design, development, and deployment. This approach not only needs to guide how to do it but must also provide quantification methods and metrics to track and improve it.

An excellent approach to problem-solving must cover three aspects.

Do the right thing: Solve the right, essential problem. Do not use technology just for the sake of using it.

Do the thing right: When solving a problem, solve it in the right way. No shortcuts, no unnecessary trade-offs or corner-cutting.

Cover all the bases: Make sure you have covered all the risks and assessed the impact on upstream and downstream processes and on the overall ecosystem.

The Leash System

To provide a structured and quantifiable mechanism for achieving responsible AI, I have developed a new methodology: The Leash System.

It is based on several expert interviews, a literature review, and experience with proven methodologies such as DFSS (Design for Six Sigma), cybersecurity, and process excellence. Moreover, the three foundational aspects of excellent problem-solving serve as the critical pillars of this system.

The Leash System is a ten-step methodology: a structured approach to achieving responsible AI.

The methodology's ten critical steps are grouped into three stages.

1. Problem validation

The first stage focuses on correctly linking the problem with its root cause. This is the most critical stage: if you work on the wrong problem or the wrong root cause, no AI can save you!

Without doing this, you may end up retrofitting your AI solutions and giving your customers a poor experience. Eventually, you will find yourself in a negative spiral of endless fixes.

Thoroughly finishing the first stage ensures that you are doing the right thing.

2. Solution validation

The second stage is to establish a link between the root cause and the AI solution. I often say that the solution need not address the problem; instead, it must address the root cause.

The problem, as such, is an abstract concept. The reality is the root cause(s); the problem is just a symptom. Root causes drive the problem, so fixing the root causes should stop the problem.

When done right, you will observe a definite uptick in your key metrics.

3. Control system

The third stage is to establish a rigorous testing and control system.

With a pre-mortem analysis, you get to peek into the negative future and work out all the failure scenarios. When you do that, you are in a better position to establish viable and relevant controls.

Moreover, since you will be able to quantify the risk level of each failure scenario, you can measure and track them for continuous improvement, as in the sketch below.
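
As an illustration of such quantification, here is a minimal sketch of a pre-mortem risk register. The likelihood-times-impact scoring, the scales, and the example scenarios are my assumptions for illustration; the article does not prescribe a specific formula.

```python
# Minimal pre-mortem risk register: each failure scenario gets a
# likelihood (0.0-1.0) and an impact (1-10); risk = likelihood * impact.
# The scenarios and scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FailureScenario:
    name: str
    likelihood: float  # estimated probability of occurrence, 0.0-1.0
    impact: int        # business impact on a 1-10 scale

    @property
    def risk_score(self) -> float:
        return self.likelihood * self.impact

scenarios = [
    FailureScenario("Training data goes stale", likelihood=0.6, impact=7),
    FailureScenario("Upstream API changes its schema", likelihood=0.3, impact=9),
    FailureScenario("Users game the model's inputs", likelihood=0.2, impact=8),
]

# Rank scenarios so that controls target the biggest quantified risks first.
for s in sorted(scenarios, key=lambda s: s.risk_score, reverse=True):
    print(f"{s.name}: risk score {s.risk_score:.1f}")
```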

It is also wiser to stress test your systems in-house before someone else does it for you, by which time it might be too late. This is usually best achieved with the help of a red team; a minimal sketch of such a test follows.
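
To make the red-team idea concrete, here is a minimal sketch of a stress test that feeds deliberately malformed inputs into a model's scoring function and records which ones break it. The predict() stub and the test cases are hypothetical placeholders, not part of the article.

```python
# Red-team-style stress test: throw malformed inputs at the model and
# record which ones crash it or produce an out-of-range score.
# The predict() stub and the test cases are hypothetical placeholders.

def predict(features: dict) -> float:
    """Stand-in for the real model; assumed to return a score in [0, 1]."""
    return min(1.0, max(0.0, 0.1 * float(features["age"]) / 10))

adversarial_cases = [
    {"age": -1},       # out-of-range value
    {"age": 1e18},     # absurd magnitude
    {"age": "forty"},  # wrong type
    {},                # missing field entirely
]

failures = []
for case in adversarial_cases:
    try:
        score = predict(case)
        if not 0.0 <= score <= 1.0:
            failures.append((case, f"invalid score {score}"))
    except Exception as exc:  # any crash is a finding, not a reason to stop
        failures.append((case, repr(exc)))

for case, reason in failures:
    print(f"Broke on {case}: {reason}")
```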

What about residual risk?

Once you have such rigorous problem and solution validation steps and a robust control system in place, you will be in control of your AI system.

However, there might still be a lurking, random, unexpected risk: a situation that is unknown and therefore can't be anticipated or controlled.

Just for this unknown-unknown situation, get AI insurance!

AI insurance is not a thing yet, and it will not be until someone leads the way!

The point is…

Responsible AI is not just a fancy term or an abstract concept. It means being ethical, careful, controlled, cautious, reasonable, and accountable, i.e., being responsible in designing, developing, deploying, and using AI.

Following the ten-step methodology of The Leash System is how you can make your AI system genuinely responsible.

I am quietly confident that The Leash System, being a comprehensive and structured approach, is a better tool for achieving responsible AI.

You can develop and implement AI responsibly…

If you understand the risks.

Understand them better…and specifically for your use case!

About the author: Anand is many things packed inside one person: a serial entrepreneur, an award-winning published author, a prolific speaker, a savvy business advisor, and an intense spiritual seeker.

If you liked this article, subscribe to my newsletter for more such articles and connect with me on LinkedIn.
