Trustworthy AI

Nov 3, 2021


By Spiros Margaris (Margaris Ventures)

Before we explore the importance of trust in artificial intelligence (AI), we should first consider what is meant by the word “trust.” We commonly use this word when discussing people and things, but we rarely think to define or explain it because it is something we feel deep inside ourselves. Of course, we can — and do when we believe it is necessary — provide reasons why we trust someone or something, but in the end, trust is something we feel more than something we reason about.

While considering this matter, I came across one of Merriam-Webster’s definitions of trust: “Assured reliance on the character, ability, strength, or truth of someone or something.” This definition rings true to me, and I believe it applies to AI and the implications of its use.

But what do we specifically mean by “trustworthy AI”? I wholeheartedly agree with IBM’s explanation of this concept: “Trustworthy AI drives business transformation with responsible AI solutions that address human needs, safety, and privacy.”

To me, this description captures the essence of trust as well as the Merriam-Webster definition does. Both suggest that to trust something is to believe it is reliable and unlikely to bring us harm. In other words, trust is the feeling of having a safety net that will catch us if we fall.

AI, machine learning, and data have tremendous power to ease many of the daily challenges that we face in our lives and work. AI applications often work unnoticed behind the scenes. Nonetheless, they carry out whatever they are programmed to do as well as possible, in many cases faster and better than humans could.

I most appreciate AI when it augments human work and makes us stronger or more effective; when it performs tasks that we do not want to do or struggle to do, thereby freeing us to enjoy the activities we are good at and like. Remember how the introduction of the electronic spreadsheet did not make accountants or statisticians obsolete but instead gave them numerical “super-powers” that made their work easier and better? In many ways, AI does the same. It empowers people and businesses in their day-to-day activities.

However, we all know that our trust in someone or something can be shaken if we feel let down, and it is often hard for trust to regain its strong, magical power after that.

AI companies need to understand the trust placed in them by users of their technology. People can live with most AI disappointments — and, let’s face it, these will happen while the technology advances — provided they are not the result of deliberate misconduct or abuse, and as long as AI makes things easier and better overall.

Though we do not often think about it, there is no real way for us to avoid leaving a digital footprint in a digitalized world. However, many of us accept what happens to our data because we recognize it — consciously or unconsciously — as part of the price we pay for services that we want or need.

We provide a lot of personal data when looking for information on a search engine, and we pay for the service by seeing ads or results based on this data. We also get recommendations for movies or songs because AI has the power to recognize patterns in our data and identify what we might like, perhaps better than we can ourselves. In the fintech industry in which I work, many people are happy to share their data as long as they get clear financial benefits and better solutions.

However, when our data and the insights derived from it are used against us, we will have a real problem with the company or applications that have abused our trust in them. None of us likes to provide data in exchange for a service that, despite being initially beneficial, uses our data in a way we would never consciously agree to and, worse, may have a negative impact on us in the future.

That is why governments around the world have implemented robust measures to protect the personal data of their citizens and force companies to comply with privacy laws (for instance, the General Data Protection Regulation (GDPR) in Europe). Nevertheless, there is still leeway for AI to be used irresponsibly if a company or an application chooses to do so.

I want to state that governments cannot — and should not — regulate everything and, for the sake of allowing innovation, must avoid trying to do so. Mistakes happen, and they are part of the process of innovation, enabling us to learn and move forward.

But what is critical is that companies and consumers understand how an AI algorithm uses data to make decisions, so that all stakeholders are protected from disappointment and harm. This means AI’s decision-making process must be transparent to reinforce trust, fair to avoid bias, protective of data to ensure privacy, and vigilant against cybersecurity threats to prevent external abuse. In brief, AI must be used responsibly to build and maintain trust.
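To make the fairness requirement a little more concrete, here is a minimal, hypothetical sketch of the kind of check a team might run before deploying a model. Everything in it is invented for illustration (the function, the decision data, the two groups); real audits use richer metrics and real decision logs. It shows just one simple, common measure of bias: the gap in positive-outcome rates between two groups.

```python
# Hypothetical illustration only: one simple fairness check
# (demographic parity), not any specific company's audit method.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-outcome rates between groups "A" and "B".

    decisions: list of 0/1 model outcomes (e.g., loan approved = 1)
    groups:    group label ("A" or "B") for each decision
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# Invented audit data: ten model decisions, five per group.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Approval-rate gap between groups: {gap:.0%}")  # prints 20%
```

A large gap does not prove wrongdoing, but it flags a model for exactly the kind of scrutiny, and transparency, that builds trust.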

Beyond its negative implications for the consumer, the misuse of data by AI can tarnish a company’s brand, stripping away the trust that it has worked so hard to build over the years.

To end this article, I would like to consider some wise words attributed to Ernest Hemingway: “The best way to find out if you can trust somebody is to trust them.”

Metaphorically speaking, I believe trustworthy AI can be the “somebody” that Hemingway is referring to. So, let’s work hard to ensure that AI earns and maintains our trust. AI is here to stay, even if it sometimes disappoints us and betrays our implicit faith in it. However, its genuine capacity to change humankind for the better will be realized faster if we trust it.
