Trust

Sabin Dima
Sep 7

Artificial Intelligence has permeated the fabric of everyday processes to such an extent that it has reached a tipping point: we should now consider it a pillar as integral to the functioning of society as the economy and communication.

Although more pragmatic, abstract and tame (dare we say boring?) than the world-conquering, albeit fascinating, incarnation envisioned by the grandfathers of science fiction, Artificial Intelligence as we know it today is not without its concerns. These range from the ethical to the epistemological; in other words, from “is this right and when does it become wrong?” to “is it actually capable of understanding us and are we willing to delegate our decisions to an entity we might not understand?” The answers might lie in another question: “are we comfortable playing God with a technology we don’t understand?”

Concerns about building trust in AI

Artificial Intelligence is more commonplace today than we might think, from smart home devices and navigation systems to surveillance, medical, banking and business tools, and everything in between. It can process more data in a short time than is humanly possible, and it can find patterns and connections where we might not. It can undoubtedly be useful, but not without certain limitations and concerns.

One of those is that no AI is purely objective, for the simple reason that deep neural networks learn by studying human data, human actions, human patterns and, unavoidably, human biases and prejudices. The other is our ability, or more precisely our inability, to trust a decision-making algorithm that we don’t understand, or rather one that doesn’t explain its decisions to us.

Human-biased AI

Political philosopher Michael Sandel believes that “Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice, […] but we are discovering that many of the algorithms […] replicate and embed the biases that already exist in our society.” He goes on to say that “AI not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status”.

Other researchers, such as Karen Mills, quoted by The Harvard Gazette, believe that the panic over AI injecting bias into decisions that impact human lives is overstated. On the contrary, carefully calibrated and thoughtfully deployed AI tools minimize the potential for favoritism and allow more data to be processed.

This situation is easy to imagine with screening software for loan, job and visa applications. In a concrete example from 2021, as reported by the Wall Street Journal, there were fears that Diversity Visa winners could lose their chance to come to the U.S. because, due to pandemic restrictions, only 3% of the applications had been processed before the deadline. The use of AI in such a situation would lighten the burden on the reviewers and shift their focus to oversight and transparency, while giving applicants truly equal chances.

Ironically, it is the realization of human bias that defeats the age-old fear of sentient AIs acting against humans. By learning from human actions, including our biases, AI can be said to be, to some extent, human as well. And on top of that we superimpose checks and balances with human oversight. However you look at it, there is a human behind every AI decision.

Should AI be explainable or understandable?

And this brings us to our second concern: can we trust and approve AI decisions if we don’t understand the process?

If we entrust AI with important business, medical and security decisions, we have to be sure its reasoning is accountable and ethical. A business has to be able to explain the logic behind its decisions because they directly influence company revenue, just as a security department has to understand a threat assessment so as not to overlook risks, but also not to commit injustices.

The challenge is that the AI models that solve complex problems, such as deep learning models or neural networks, are “black box” models that rely on billions of complex, interwoven parameters to deliver outcomes that may seem wrong or counter-intuitive at first because of our limited capacity to comprehend them.

The conundrum is that, on one hand, complex problems require complex solutions, but on the other hand, with “human-centric AI, the imperative is to make sure the powerful capabilities that are promoted are also understood by the users, to be sure that the technology serves the right purpose” as stated by Patrick Couch, Business Developer AI & IoT at IBM.

This has given rise to a demand for eXplainable AI (XAI). As humans, we need to build trust and, as with every relationship, this happens over time, through interactions. We build trust not by knowing how another person functions on a biological level, but by learning what triggers them and how they react. Similarly, users do not build trust in AI by understanding all models and their parameters, because it would be impossible. They do not need to understand the intricacies of the entire model, but what factors can influence the output of that model. There is a significant difference between understanding how a model works and understanding why it gives a particular result.

“Trust is built because of explainability, but not because of understandability,” as stated by Anders Arpteg, Head of Research at Peltarion, cited in an article by Hyperight.
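To make that difference concrete, here is a minimal sketch in Python, assuming scikit-learn and a generic illustrative dataset (neither comes from the article or from any HUMANS system): the model is treated as a black box, and we only measure which input factors actually move its output, which is the kind of “why this result” explanation a user can build trust on.

```python
# Minimal sketch: explaining *why* a black-box model responds the way it
# does, without understanding its internals. Dataset and model are
# illustrative assumptions, not part of the article's system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_val, y_train, y_val = train_test_split(
    data.data, data.target, random_state=0
)

# The "black box": hundreds of trees, far too many parameters to read.
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Shuffle one input factor at a time and measure how much the score drops:
# a large drop means that factor strongly influences the model's output.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]:<25} {result.importances_mean[i]:.3f}")
```

The point is not the specific library: the user never inspects the hundreds of internal trees, only the handful of factors that demonstrably drive the prediction.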

Performance vs. Interpretability

But here’s the catch: eXplainable AI exists to put a human decision behind every AI decision in order to build trust in AI, and it does so by trading performance for interpretability. As Cassie Kozyrkov, Chief Decision Scientist at Google, explains, there is a trade-off between the two, and an issue arises only when users don’t know which business they are in: for research purposes, users do not need to know how the model works, while for consumer-facing institutions the process has to be explainable.
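As a rough, hypothetical illustration of that trade-off (again in Python with scikit-learn and a stand-in dataset, not a benchmark of any real system), compare a small decision tree whose entire logic can be printed and audited with a boosted ensemble that usually scores higher but offers no readable rules:

```python
# Rough illustration of performance vs. interpretability, using
# stand-in data and models (an assumption for the sake of the example).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Interpretable: a depth-3 tree whose full decision logic can be printed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))
print("interpretable tree accuracy:", round(tree.score(X_test, y_test), 3))

# Higher-performing black box: hundreds of boosted trees, no readable rules.
booster = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("black-box ensemble accuracy:", round(booster.score(X_test, y_test), 3))
```

Which of the two is “right” depends, as Kozyrkov says, on which business you are in: the readable tree for decisions that must be justified to people, the stronger black box where raw predictive performance is what matters.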

The Magic in Technology and a Need for Control

But do humans really need to understand AI to adopt it, or do they just need to be able to control it? And by humans we mean the general public, the users, both individuals and institutions, as opposed to developers and researchers.

Arthur C. Clarke, a founder of modern science fiction, said that “any sufficiently advanced technology is indistinguishable from magic”. It is enough to look at the stories that humans have created from the earliest times to the modern day to understand that all we want is superhuman, if not God-like, powers accessible with a word, gesture or token. Not to understand the intricacies of the universe, of unseen magic forces or the complexities of an AI model; simply to wield them with our will.

There is no difference between uttering “Abracadabra” or “OK Google” in our desire to wield greater forces with a simple voice command; waving a magic wand or our hand in front of a sensor feeds the same need to exercise control without having to understand how the system works; rubbing the lamp or swiping the credit card works the same way to fulfil a wish: we do not care how the genie is trapped in the lamp or how electronic payments work.

As for trust? We’ve always based that on having our human decisions dictate the actions of our fictional AIs. The Hebrew myth of the Golem, a clay figure animated by a tablet bearing its name, can be seen as a reflection of both an AI model and the human decisions powering it. The Golem has no will of its own, therefore it can do no intended harm. As explored by the sci-fi writer Ted Chiang in his novella “Seventy-Two Letters”, inspired by the ideas of the mathematician and philosopher Gottfried Leibniz in his opus “On the Combinatorial Art”, the challenge with animating golems is crafting the proper word (as in machine code). In our case, it is making the best decisions behind every AI.

A unifying AI model for creative purposes

We have wired ourselves culturally to accept a high-performance AI model over an explainable one, as long as we have intuitive control and our decisions matter, but the distinction might not be necessary after all. We believe there is a third purpose: creative.

HUMANS is a tech start-up that aims to democratize creation and scale human potential with Digital DNA, AI and NFTs. What this means is that anyone can store their skills, such as voice, gestures, likeness, as Digital DNA in a bank of NFTs, which content creators can mix, match and reshape into unlimited media products with the help of AI and a few simple input decisions.

Occupying a niche between competing AI models, the project embraces all the concerns regarding the use of AI and looks like a solid stepping stone toward building trust in AI. The concern of security and privacy is addressed by using an NFT model to store personal data. The introduction of biases is not an issue, because the output needs to be as subjective and unique as possible. Last but not least, users do not need to understand the intricacies of the system (also because it does not output potentially life-changing decisions), only that they are in control through simple actions.

Conclusion

Artificial Intelligence is already part of our lives and will be even more so in the years to come. There are valid concerns around it that raise valid questions which, in turn, help make AIs better at serving us. Some AI models will help us better understand the world, others will make us more productive with less effort, and others will help scale our human potential. Behind them all, we can be sure, there will still be humans.
