A very human learning on AI
Thoughts and considerations on AI, how it affects our lives, its state of the art and applications.
It is now difficult to get through a day without running into the term “artificial intelligence”. I must say it is often used rather inappropriately, like a key that opens the doors to the future, a deus ex machina producing a kind of magic that leaves us all astonished, reassured and, deep down, happier. Which is another way of saying that 99% of the time we use the term without having a clue about what is really being said. Our perception of AI swings between a giant shrug and HAL 9000.
Yet there is a lot of talk about it: in the last six years, the number of scientific publications has grown sixfold. Meanwhile, corporations are investing in AI studies and applications at a frenzied pace. In the last year alone, global private investment has grown by 10%, despite the pandemic. The opportunities are evident; the technology capable of turning them into profit, however, has yet to mature.
So what is AI really, how do we use it and what is it for in our lives as people, consumers, citizens and designers?
The “Artificial Intelligence Index Report 2021”, a weighty study published by the Stanford Institute for Human-Centered Artificial Intelligence, helps us along this path of understanding. It is currently the most complete and exhaustive compendium on the state of the art of the discipline and its applications. Not an easy document to read, but one capable of provoking numerous thoughts.
AI means that vast aggregate of technologies — from Machine Learning to Natural Language Processing — that allow machines to perceive, understand, act and learn.
Where are we with the technology?
First of all, where are we with the technology? Far from HAL 9000. We are still more or less at the beginning of large-scale applications: at the moment academia and industry are focusing on how to make silicon-based intelligence truly performant and fast. This is a prerequisite for many industrial uses, from medicine to automotive (if the AI in a self-driving vehicle can’t quickly recognize a person running at dusk, or the neighbor’s dog, it may not end well).
As technology progresses, concerns from academia and public opinion (excluding industry) about its perverse uses also increase, bringing the discussion into that rough terrain between ethics and research.
Nobody questions the power, speed and rigorous logic (given valid premises, ça va sans dire) with which an AI manages its processes (I still find it a bit difficult to use the term ‘reasoning’): this is why contests flourish in which artificial intelligences compete to solve mathematical dilemmas or to prove mathematical theorems. Nerdy stuff? Maybe.
Have you ever taken a look, even in passing, at Heisenberg’s matrices? Don’t worry, even Einstein couldn’t make head or tail of them, yet they are the basis of modern quantum physics and its extraordinary results.
But without going that far, its uses range from circuit design to validating the logical soundness of algorithms and their performance, and from there to all their applications: a mediocre algorithm will produce mediocre results, whose effects will in turn be even worse, and so on.
While some AIs focus on pure logic, others analyze language: its deep structure, but also its concrete expressions such as speech and writing. Natural Language Processing (NLP) systems can teach machines to interpret, manipulate and, ultimately, generate language.
It is possible to transform audio or video into text and vice versa, translate from one language to another with great accuracy, and answer questions. By now almost all of us are used to chatbots, those dull automatic responders that are usually the first interlocutors in customer care. We are less used to qualified virtual professionals — lawyers, consultants — who are starting to make their appearance (they work better in English; with other languages they still make a bit of a mess).
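To give a feel for the very first step a customer-care chatbot performs, here is a deliberately naive sketch: matching a message to an “intent” by counting keyword overlaps. Real systems use trained language models; the intent names and keyword lists below are invented for illustration.

```python
import re

# Hypothetical intents for a toy customer-care bot: each intent is a set
# of keywords we expect to find in messages about that topic.
INTENTS = {
    "billing":  {"invoice", "charge", "refund", "payment"},
    "shipping": {"delivery", "package", "tracking", "shipped"},
    "support":  {"broken", "error", "help", "crash"},
}

def classify(message: str) -> str:
    # Lowercase and split into words, ignoring punctuation.
    words = set(re.findall(r"[a-z]+", message.lower()))
    # Pick the intent whose keyword set overlaps most with the message.
    best = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    return best if INTENTS[best] & words else "unknown"

print(classify("I need a refund for this charge"))  # billing
print(classify("Where is my package?"))             # shipping
```

A production chatbot replaces this keyword lookup with a statistical model trained on thousands of labeled messages, but the routing idea is the same.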
Sometimes these characters take on the appearance, voice and manners of a human being, in some cases someone known or famous. We can all have fun on social media replacing our faces with those of famous actors and thus playing out scenes from vintage movies, but the applications can be far more sinister. Especially when perverse uses are combined with autonomously generated images (computer vision is another rapidly growing technological sector: it has enabled interesting applications in the automotive sector with self-driving cars, in medical analysis, in security and surveillance, and also in manufacturing).
We could have a very convincing Joe Biden announcing the nuclear apocalypse. A fake video from start to finish, but what effects could it have?
Certainly, the possibility of such events — even less dramatic ones, of course — motivates the development of artificial intelligence applications dedicated solely to Deep Fake Detection, overturning the practices of synthetic content generation.
Understanding the shape of life
I kept the best for last. We have all been amazed by the speed with which mRNA vaccines were developed to fight the SARS-CoV-2 pandemic, and this acceleration is largely due to the application of Natural Language Processing to the structure of proteins. Nature published an article on the subject last November, presenting AlphaFold, the algorithm developed by DeepMind that made this breakthrough possible.
As we are taught in school, DNA is the language of life: at some point, the information contained in the double helix is translated to produce amino acids and, from these, proteins. The translation apparatus is made up of ribosomes and strings of messenger RNA, a kind of tape that carries the key for decoding. In short, by applying NLP techniques to this particular language, it is possible to predict with sufficient accuracy the sequence of amino acids and therefore the shape of proteins. And then to provide the right instructions to produce the right protein.
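The “translation” step described above can be sketched in a few lines: the ribosome reads mRNA three bases at a time (a codon), and each codon maps to one amino acid. This toy uses only a four-entry excerpt of the standard genetic code (the full table has 64 codons), and of course predicting the protein’s 3D shape, AlphaFold’s actual job, is vastly harder than this lookup.

```python
# Excerpt of the standard genetic code: codon -> amino acid.
CODON_TABLE = {
    "AUG": "Met",   # start codon, methionine
    "GGU": "Gly",   # glycine
    "UCU": "Ser",   # serine
    "UAA": "STOP",  # stop codon: translation ends here
}

def translate(mrna: str) -> list:
    """Read an mRNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):     # step through codons
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGGGUUCUUAA"))  # ['Met', 'Gly', 'Ser']
```

The same sequential, symbol-by-symbol structure is what makes this “language of life” a natural target for NLP techniques.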
Last year we were inundated with talk about the spike protein, the one that allows the virus to enter our cells and which, present on its own without the virus, is completely harmless but induces the production of antibodies. Well, mRNA vaccines induce our cells to produce just the spike protein. It is truly the dawn of a new generation of drugs and medical applications (against cancer, for example, or to eradicate autoimmune diseases; it would be wonderful).
Environmental & social sustainability, and ethical dilemmas
First of all, we have a problem as big as a planet, ours: how much hardware, how many resources, how much energy does it take to make all this stuff work? Can we afford it, as an ecosystem? Are we really ready to pay the price? And do we really think that planting a couple of trees in some deprived corner of the world is enough to balance the books?
And let’s also talk about social sustainability for a moment: will access to these technologies and their uses be egalitarian or highly disproportionate? It is obviously a rhetorical question, to which we unfortunately know the answer all too well. And politics — understood as that apparatus capable of directing people’s behavior in relation to certain phenomena for the collective good — is struggling. Canada tried to provide a regulatory framework for the use of AI back in 2017 and, to date, 30 other nations have followed in its footsteps, but you can well understand that national legislation may not be adequate.
So, last year an intergovernmental working group was set up to define a common, widely shared regulatory perimeter.
The market generally concerns itself with profit, much less with questions of ethics and social justice; the great concerns of corporations therefore focus above all on the safety of this type of technology. And among all the possible risks, the market’s attention falls above all on cybersecurity and data protection.
On the other side, citizens and workers count trust among their main concerns about AI: do these technologies really take my well-being and my health into account (and in the workplace this is no trivial matter), or do they have other priorities (such as maximizing the machinery’s efficiency at the expense of my fingers, for example)?
Furthermore, the data reveals a major problem of inclusion. To begin with, there is a race issue: AI is largely designed by white Westerners. I imagine things will change quite quickly now that China and other Far Eastern countries are increasing their influence on research and the number of their experts in the field is growing.
Design naturally reflects the mental models and the cultural and behavioral conditioning of its designers, and therefore AI performs better when applied to white males living in the United States. Is this all right? Well, not so much…
As designers we have a responsibility: we have the duty to consider all these aspects when we find ourselves designing products, services or systems that involve any of these technologies.
Design must therefore be based on five principles:
- Guaranteeing the openness and completeness of information: all the information behind an AI/IV choice must be made visible to humans so that they can accept it.
- Humans have the right to choose freely: the most important choices must remain with humans; the final decision must be theirs, regardless of what the machine says.
- Creating a dialectic, collaborative and trusting relationship between humans and AIs: trust is built through a lasting relationship and a series of positive encounters; humans must be able to ignore the indications of the AI/IV, and it must adapt without losing its completeness.
- Quantifying the value of AIs: the value of an AI must be made tangible, and thus understandable and acceptable (by using this tool, how much money can I earn?).
- Protecting privacy: users must be sure that they can trust their computer counterparts. The information they exchange must be available only to the user and not to others.
Artificial intelligence opens the door to a deep and accurate understanding of the world’s mechanics. Faust’s dilemma remains: what to do with knowledge? But this is an ethical dilemma and there are no shortcuts or automations: it’s all up to human beings.