Here is a better way to manage AI risks

You can develop and deploy AI systems more responsibly if you understand the risks specific to your use case, and The Leash System will help you do that effectively…

Anand Tamboli®
tomorrow++

--

Illustration by Nishant Choksi, for the New York Times.

AI is often presented as a panacea that can somehow resolve everything; as it stands today, that claim is still highly debatable. AI has, however, become a real-world application technology, and it is becoming part of the fabric of modern life.

However, the future of AI depends heavily on trust. If AI is to drive business and broader social success, it cannot hide in a black box. To have confidence in the outcomes, earn user trust, and ultimately capitalise on the opportunities, it may be necessary to open up that black box. Then again, opening it will not always be possible, for various reasons, and hence a pre-emptive approach is necessary.

When AI fails, the deed is done: the output is mostly irreversible, and so is the damage it causes. How do you keep your AI under control and make sure that it does exactly what it is supposed to do and nothing else? How can you reasonably believe that AI will follow your commands and not go overboard? How can you understand the risks involved before using any AI, and then mitigate those risks pre-emptively?

There is a better way to manage these risks

I have been a strong proponent of identifying and pre-emptively managing the risks of AI systems. Having witnessed a few spectacular failures of automation in the recent past, I decided to take the lead on this and have been working on a methodology that can identify risks quantitatively.

I am now in the last phase of developing this methodology, which I fondly call The Leash System. Once potential risks are identified, The Leash System can, to a large extent, recommend suitable risk mitigation methods. One of the key outputs of this methodology is a risk profile with standardised scores, which helps in objectively assessing AI systems from the perspectives of developers, integrators, and users.

An objectively defined risk profile is further helpful in devising a measurable and actionable control plan, thereby increasing the trustworthiness and reliability of AI systems.
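To make the idea of a standardised score concrete, here is a minimal sketch of one conventional approach: a likelihood-times-impact rating normalised to a 0–100 scale. The Leash System's actual scoring method is not public, so the categories, scales, and names below are hypothetical and purely illustrative.

```python
# Hypothetical illustration only: a conventional likelihood x impact
# risk score normalised to 0-100. This is NOT The Leash System's
# actual method; all categories and scales here are invented.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

def standardised_score(risk: Risk) -> float:
    """Map likelihood x impact (max 5 x 5 = 25) onto a 0-100 scale."""
    return risk.likelihood * risk.impact / 25 * 100

def risk_profile(risks: list[Risk]) -> dict[str, float]:
    """Build a profile: one standardised score per identified risk."""
    return {r.name: standardised_score(r) for r in risks}

if __name__ == "__main__":
    profile = risk_profile([
        Risk("biased training data", likelihood=4, impact=4),
        Risk("irreversible automated action", likelihood=2, impact=5),
        Risk("model drift in production", likelihood=3, impact=3),
    ])
    for name, score in sorted(profile.items(), key=lambda kv: -kv[1]):
        print(f"{score:5.1f}  {name}")
```

Whatever the actual formula, the value of standardising is the same: developers, integrators, and users can compare different AI systems, and different risks within one system, on a common numeric scale.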

At the core of The Leash System is a SaaS-based AI risk-scoring platform, used in conjunction with a day-long live workshop with clients. The high-level process involves an initial workshop and a risk assessment exercise, followed by a risk mitigation workshop and, finally, scoring.

How is this different from other frameworks?

AI ethics and responsible development have been hot-button topics these days. And yet, the majority of the frameworks that have been proposed or developed to address them are nothing more than high-level theoretical approaches.

These frameworks do an excellent job of explaining to CEOs and boards the need for ethical and responsible AI design and use, and of convincing them of it. However, they fail to provide actionable steps or a framework that can be followed to achieve it effectively. The Leash System fills this void: it provides a practical and actionable methodology that your teams can implement.

Take the lead and get involved today!

Since we are now in the last phase of development, we are accepting expressions of interest in using this system. If your company is interested in becoming an early adopter of The Leash System and would like to know more about it, express your interest here, today.

Original illustration by Nishant Choksi, for the New York Times; edited by Anand Tamboli.

--

Anand Tamboli®
tomorrow++

Inspiring and enabling people for a sustainable and better future • Award-winning Author • Global Speaker • Futurist • https://www.anandtamboli.com