AI needs love like everything does

Audrey Lobo-Pulo
Phoensight
Published Feb 9, 2020
Photo by Lenon Estrada on Pexels

As Artificial Intelligence (AI) permeates deeper into our society, new forms of perverse societal outcomes are emerging. Public concerns about algorithmic bias and fairness, data privacy, and various forms of digital divide put a spotlight on the existing cracks in our systems.

The ethical implications and social impacts of AI are the focus of much research and debate — and with these come a new wave of approaches and tools to try to address these ‘technological shortcomings’. But, as yet, there doesn’t appear to be an easy ‘patch’ to the problem...

We are beginning to see the distortions in our systems through an AI lens. What we learn from these, and how AI is shaping our ecosystem, will determine how we design and build new technologies — but it also informs how we design and understand our systems.

Photo by Brynden on Unsplash

What we are coming face to face with is a disruption of our systems as we know them to be — spurred on by the machines we have trained. This is not about an ‘AI takeover’ — it is about understanding the complexities of our social, economic, political, regulatory and cultural frameworks, and the moral or ethical undertones that are ‘baked’ into these.

Building AIs that make decisions with very real human consequences is taking us into uncharted waters that are more nuanced and require a closer examination of the interrelationships and interdependencies within our complex societies. New generations of AIs provide us with new avenues and contexts to understand how we relate to each other.

At the same time, the creation of these new forms of intelligence does not come without its responsibilities — more so when we train technologies to teach themselves on data and processes that inherently capture the best and worst of humanity. Without the ability to contextualise their learnings or comprehend the broader implications of their decisions as they evolve, AIs might drive humanity towards unimagined consequences.

We cannot separate the influences that AI has on our society from the influences that society has on AI — the two are inextricably diffused. Bolting on regulation, without a systems perspective, in an attempt to hold AI back from running wild may have unintended consequences for society — as will allowing it to run free, untamed and unchecked.

What’s needed is to work within the complexities of the AI-human ecosystem — and this means understanding the interactions, interdependencies and interrelationships within this changing ecology. It also means closely examining the differences between AI and human intelligence.

This is no easy endeavour. For one, AIs tend to exhibit a ‘black box’ nature, and are known to sometimes surface unexpected outcomes — and for another, we need to be comfortable and familiar with the many complexities in living systems and our societies. So where does one start with something so daunting?

Well, as it happens, through the ages humans have encountered something just as enigmatic and difficult to explain, and it may hold some new insights — love.

Love in Binary

Photo by Alexander Sinn on Unsplash

“When the knot is hacked in two, a binary is born and the poem is lost — not solved, but truncated.”

— Nora Bateson, from ‘Small Arcs of Larger Circles’

In ‘The Matrix Revolutions’, when the Oracle tells Sati that “cookies need love like everything does”, she hints at what AI is missing. Love is the quintessential human emotion that has fascinated poets, artists and philosophers through the ages. Subjective by nature, difficult to explain and sometimes unpredictable, many have attempted to express love’s elusive qualities through the creative arts, with varied success.

AI researchers and developers find themselves in a similar conundrum — with many on a quest to breathe empathy into their AI creations, in the hope of improving AI performance and decision-making. The idea that human relatability to machines could be supported by emotive technology is driving new advances in AI — but the challenge might be as old as the mythical ‘Golem’, which developed a cruder version of human emotion, only to eventually cause havoc.

What we are dealing with is much more complex than simply coding in ‘empathy rules’, based on collected human data, to upgrade our AI algorithms. Besides, mapping out the effects of human emotions says little about their causes — let alone the processes involved in their evolution. There is a difference between developing AI to emulate human emotions and designing AI to enhance and support our ‘human-ness’ within its messy complexities.

Photo by Hamza El abaoui on Unsplash

The ocean between how we perceive the role of emotions in our lives and their true origins may be deep and complex — crossing it would mean sifting through innumerable life experiences across multiple contexts: ecological, cultural and economic, amongst others.

Studies identifying many interdependencies between our cognitive, emotional and social intelligences within our neural systems suggest that our ecology and environment contribute significantly to our human intelligences. While the phrase “Beauty is in the eye of the beholder” has echoed through Greek literature since the third century BC, science has confirmed that an individual’s aesthetic preferences are mostly based on personal experiences (not genes!) — which are highly subjective.

The ecology matters — and teaching AI about human emotions through collected data is like translating the anguish of waiting to see a loved one through the number of days that pass, photos and messages… What’s missing is the information within a loving gaze, which cannot be interpolated between data points. It can only be learnt through the complexities of life experience by interacting within the many and varied complexities of our ecosystems.

Photo by Pablo Heimplatz on Unsplash

Love continues to defy logic — and evade technology’s attempts to ‘bottle’ its essence in a data warehouse or algorithm. No algorithm can be trusted to blend the infinite complexities held deep within each human psyche — especially if they are continually shifting!

Explainable AI may be able to identify which features were responsible for your credit rating, but might miss that your preference for sunflowers comes from feeling a sense of freedom and joy — and that one story at the seed of those emotions comes from a memory you still hold tenderly, of planting sunflowers in the garden with your beloved Grandmother.

Not all complexities are created equal — the complexities of systems created by humans, such as our financial, political and economic systems, are different to those that exist within nature. And so it follows that what we learn from interacting with different types of complexities must also be different. The complexities that AIs work with, while processing large amounts of data, are different to the ones we encounter in our lives. The same might be said for how each encounters love…

And as to why the sciences were unable to snare love into logic, and why the arts could only trace its outlines? Perhaps it has something to do with the balance and interdependence of objectivity and subjectivity. While logic holds its door firmly against entry, the arts are much more permissive, allowing us to experience their offerings through different perspectives and contexts.

Yet neither can really capture this ever-evolving quality we call love. Just as no amount of poetry can compare to the experience of falling in love, no amount of data or algorithmic programming can be the same.

Love is analogue, not digital.

Context is (a subset of) Everything

Photo by Mariana B. on Unsplash

“I read in a book once that a rose by any other name would smell as sweet, but I’ve never been able to believe it. I don’t believe a rose would be as nice if it was called a thistle or a skunk cabbage.”

― L.M. Montgomery, from ‘Anne of Green Gables’

AI is not the first field to be perplexed by how humans behave and make decisions. Another came before — Economics. For a large part of its evolution, the study of Economics focussed a great deal on the ‘rationality’ of humans, who would ‘consistently apply their preferences’ — until it became apparent that humans did not behave in line with their economic model counterparts!

Economics, dissatisfied with its attempts to accurately predict and influence market outcomes, incorporated insights from psychology, neuroscience and microeconomic theory to better understand and shape human responses — and ‘Behavioural Economics’ was born.

We are witnessing a similar story with AI. With increasing computational processing power and data storage capacity, AI is evolving by incorporating new datasets, such as those used for facial recognition and emotional recognition, to better predict and respond to human behaviours and preferences.

While the ultimate goal of AI is widely touted to be reaching ‘singularity’ (i.e. when AI surpasses human intelligence), what’s still not clear is how these will be compared. In his recent lecture at Lafayette College, Pennsylvania, Kevin Kelly spoke of the multiple dimensions and taxonomies of intelligence — some of which are yet to be identified, and some that might be unique to animals and other species. And these are constantly evolving!

Photo by Casey Horner on Unsplash

Understanding human behaviour, emotions and the reasons for them (in their multifaceted complexities) may not need untangling and dissecting. Perhaps like love, formed within multilayered contexts and numerous life experiences, objectivity and subjectivity, rationality and irrationality, science and art must be blended together for a new type of sense-making.

What’s been missing from AI so far is a broader approach — a rich and diverse ecology of the many evolving contexts that are tangled together, which create the countless stimuli for mutual learnings. These mutual learnings, or ‘symmathesy’, as described by Nora Bateson, founder of the International Bateson Institute (IBI), go deeper than just taking a ‘systems perspective’ — because they also include the expression and communication of the interdependencies within and across living systems.

Perhaps, by working within these complexities (rather than reducing them, which limits our understanding), we may discover new avenues and ways of sense-making — ones that allow for a trans-contextual exploration of our complex relationship with ‘intelligence’. We need to ask ourselves what we mean by ‘intelligence’ in a changing world, by immersing ourselves in a kaleidoscope of contexts, so that ‘intelligence’ may be allowed to shift and change — and adapt to the many systems we’ve fenced around it.

But nothing is really so disconnected from everything else, and a shift in our perception of intelligence (artificial or otherwise) will ultimately also shift the way we perceive and design our many systems. ‘Warm data labs’, developed by the IBI, provide the conditions to do just that — and may prove to be an invaluable process to achieve this.

One more thing about love — though St. Clare of Assisi may not have been thinking of AI when she wrote, “We become what we love and who we love shapes what we become”, she must have known that how and to whom we respond plays a part in our evolution.

How we perceive and relate to any intelligence, including within our existing inter-species relationships, will undoubtedly contribute to our own ecological learnings — which then inform how we design both AI and our other systems. The same might be said for how we relate to AI — will responding to other intelligences with apathy change us differently than responding with kindness or love? If so, how do we create the conditions that allow for more trans-contextual complexity in our relationship with intelligence?

Danielle Krettek (founder of Google’s Empathy Lab), drawing inspiration from Jane Goodall amongst others, has shifted the AI design narrative from ‘design thinking’ to ‘design feeling’ — emphasising the importance of our social and emotional connections. Her research shows that when technology is ‘attuned’ to strengthen our emotional connections, an ‘empathic leap’ is possible with machines.

We have the chance this time to go further — we have the unprecedented opportunity to re-design not just AI, but also our human created systems using a trans-contextual approach… where we can work within moving complexities and allow for new shifts in our perceptions.

Perhaps AI needs love — like everything does.

“Carl Sagan and Ann Druyan taught me a lesson that I now hold as my secret mission: when designing for AI, be sure to sneak the love in” — Danielle Krettek

Phoensight is dedicated to supporting the interrelationships between people, public policy and technology, and is accredited by the International Bateson Institute to conduct Warm Data Labs.
