Here is a better way to fix AI risks

You can develop and deploy AI systems more responsibly if you better understand the risks specific to your use case, and "The Leash System" will help you do that effectively…

Anand Tamboli®
Jul 1, 2019 · 3 min read
Illustration by Nishant Choksi, for the New York Times.

AI is often presented as a panacea that can resolve almost any problem. As it stands today, that outlook is still highly debatable. AI has, however, become a real-world application technology, and it is being woven into the fabric of modern life.

However, the future of AI depends heavily on trust. If AI is to drive business and broader social success, it cannot hide in a black box. To have confidence in its outcomes, earn user trust, and ultimately capitalise on the opportunities, it may be necessary to open up that black box. Then again, this will not always be possible, for various reasons, and hence a pre-emptive approach is necessary.

When AI fails, the deed is done: the output is usually irreversible, and so is the damage it causes. How do you keep your AI under control and make sure that it does exactly what it is supposed to do and nothing else? How can you reasonably believe that AI will follow your command and not go overboard? How can you understand the risks involved before using any AI, and then mitigate those risks pre-emptively?


There is a better way to fix it

I am now in the last phase of developing this methodology, which I am fondly calling The Leash System. To a large extent, The Leash System can recommend several risk mitigation methods once the potential risks are identified. One of the key outputs of this methodology is a risk profile with standardised scores, which helps in objectively assessing AI systems from the perspectives of developers, integrators, and users.

Having an objectively defined risk profile is further helpful in devising a measurable and actionable control plan, thereby increasing the trust and reliability of AI systems.
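The article does not disclose how The Leash System actually computes its scores, but the idea of a standardised risk profile can be illustrated with a minimal sketch. Here, hypothetical risk categories are each assessed on likelihood and impact, then normalised to a common 0-100 scale so different AI systems can be compared objectively; the category names and the 1-5 scales are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical illustration only: The Leash System's real scoring method
# is not public. This shows one plausible way to derive a standardised
# per-category risk profile from simple likelihood/impact assessments.

@dataclass
class RiskItem:
    category: str     # e.g. "bias", "explainability", "data quality"
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)

def risk_profile(items):
    """Return a per-category score normalised to a 0..100 scale."""
    profile = {}
    for item in items:
        raw = item.likelihood * item.impact       # raw score in 1..25
        profile[item.category] = round(raw / 25 * 100)
    return profile

items = [
    RiskItem("bias", likelihood=4, impact=5),
    RiskItem("explainability", likelihood=3, impact=3),
    RiskItem("data quality", likelihood=2, impact=4),
]
print(risk_profile(items))
# → {'bias': 80, 'explainability': 36, 'data quality': 32}
```

A profile like this makes the control plan actionable: the highest-scoring categories are the ones to mitigate first, and re-scoring after mitigation gives a measurable before/after comparison.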

At the core of The Leash System is a SaaS-based AI risk-scoring platform, used in conjunction with a day-long live workshop with clients. The high-level process involves an initial workshop, a risk assessment exercise, a risk mitigation workshop, and finally scoring.

How is this different from other frameworks?

Existing frameworks do an excellent job of explaining to CEOs and boards the need for ethical and responsible AI design and use, and of convincing them of it. However, they fail to provide actionable steps or a framework that can be followed to achieve it effectively. The Leash System fills this void and provides a practical and actionable methodology that can be implemented by your teams.


Take the lead and get involved today!


tomorrow++

It is time we started thinking beyond tomorrow…
