The AI that reflected.
Part One: Whatever happened to Information Overload?
James Dyson recently revealed that he receives only six emails a day. Bill Gates’ email alert pings only a few dozen times a day, though Apple’s CEO Tim Cook says he gets between 700 and 800 per day. Over the last two decades working in digital media, I have been at both ends of the spectrum, and have nowadays settled somewhere in the middle. The topic came up when talking with a friend about the days when Information Overload was in the headlines, and the fact that neither of us had heard the term crop up in the past few years. Yes, we might complain about having to answer a lot of emails, but the specific term “Information Overload” has been absent, and I started to wonder why.
Have we evolved as humans in the past decade, adapting our brains and learning how to process hugely increased levels of information? Perhaps we conform to Malcolm Gladwell’s general theory in Outliers that it takes humans around 10,000 hours of practice to achieve mastery in a field. Certainly most people have ‘been on email’ for over 10,000 hours, so have enough experience to say we have reached mastery of the form. Perhaps, just as our bodies evolve to be able to beat sporting records on a regular basis, so our brains are evolving to accommodate the increase of information that this digital age is bringing us.
With the accelerating pace of innovation, creativity, invention, and technology, we are certainly going to need to adapt our thinking to be able to process and make sense of the world. Perhaps we are rising to the challenge of the new society, with evolution parachuting in to save us and give us the skills and tools to respond. Maybe we are inventing, growing and adapting in sync with technology.
Or are we?
I put the Gladwell theory to author and emotional coach Jennifer Day. Her view is that information overload, along with such modern additions to our lives as status anxiety, IT angst and 24-hour — even fake — news, has combined and morphed into almost constant low-level stress. This, she feels, is creating the rise in significant mental health issues that we see in the headlines and amongst our friends. This type of stress not only raises but maintains cortisol in our systems at unhealthy levels, and if unchecked leads to anxiety, illness and suicidal tendencies. Our ‘always on’ society is not allowing time for our bodies or minds to process the information and emotions that we experience — leaving them to bottle up over time and periodically explode, privately or on the world scene. Coaching from people like Jennifer helps us develop the techniques and routines to build resilience, and helps (mainly men) understand the changing world and how best to deal with its increasingly complex roles and requirements. When Happiness becomes part of a capitalist aspiration, we can be sure that society is on the wrong track.
So here we are, at a fascinating crossroads: just as we *need* to speed up evolution to handle the innovations that promise to boost our mental and physical strength, we are hitting a massive speed bump. The technology we created to make our lives better and our minds able to gain insights from increased communication and technology is the very thing that’s also grinding us to a halt.
Having seen the increasingly impactful cycles of innovations over the last two decades, I believe that AI is bringing us to a crux in our evolution only comparable to the invention of the wheel.
This may seem an ambitious comparison (though Stephen Hawking and Elon Musk have similar perspectives), but in fact it’s an understatement. The differences between the two inventions suggest why. Firstly, the speed at which innovations spread is clearly much faster than it was for one stone-age tribe to pass on their news, with the internet allowing new inventions to be known and adopted at the speed of light. Secondly, the scale at which AI can spread is greater. The sheer number of people on the planet who will be affected by AI, who will use it and evolve it, is far larger than when the wheel was invented. In simple terms, we have a much greater team of people contributing to the evolution of AI than ever worked on the evolution of the wheel.
Thirdly, while the wheel transformed humankind’s relationship with the physical world, AI operates in a different dimension.
When the wheel was invented, it set a path for the acceleration of mankind’s movement. Suddenly, we could not only carry heavy loads further, but we could do it faster. Our ancestors, who may have taken hours to carry a heavy stone one kilometre, could now do so in minutes. We started to gain control of the space around us, and the time that it took to traverse that space. In a physicist’s language, we started to gain control of space-time. Since then, we have expanded that control to travel longer distances faster. We attached engines to the wheel, and then discovered that flight is a more efficient wheel, allowing us to step-change how fast we could travel across space and time. But all these changes only helped us in the physical world.
AI is aimed at doing for us cognitively what the wheel did for us physically. It is aimed at taking over the more mechanical tasks of our thought process so we can explore new ways of thinking, to think faster, more creatively, and to evolve our abilities as thinking human beings. It may be timely to move away from our more primeval ways of thinking about individual survival, to thinking about the more modern challenges of survival as a species.
Part Two: Don’t let AI play video games!
Naturally, there is scepticism about this little-understood technology that we have created. Perhaps this is partly due to the fact that even people operating at the forefront of its development are surprised by its learning capacity. Which brings us to DeepMind, Google’s centre of AI. If you have not heard of DeepMind, the headlines are that it was a start-up bought by Google for $500m and is seen as leading the world in artificial intelligence. One of its founders, Demis Hassabis, has a life mission for AI, informed through the lens of video games: he wrote his first game aged eight, and founded the games company Elixir after graduating with a double first in computer science from Cambridge.
Herein lies a problem. To teach AI to play games is to teach it a win/lose scenario, which in essence is what Space Invaders, Chess, Go and so on are. They are scenarios which pit one player or team against another to see who wins. This may be life viewed through a male and capitalist prism, but it is not life in the round or in a philosophical sense.
If we teach AI on a win/lose, problem-solving basis, we ignore the vital, emotional side of humanity: connection, communication, and growth to improve rather than growth to win. In a current society which values confidence over quality in job interviews (and US elections), we need to recognise that whatever we build in the future will reflect the values of its creators when it learns to think for itself.
This is something that the other founder of DeepMind, Mustafa Suleyman, seems to appreciate more. At 19, he dropped out of Oxford to set up the Muslim Youth Helpline, a telephone counselling service to help young Muslims overcome barriers in employment, sexuality, mental health and more. He helped start Reos Partners, a conflict resolution consultancy. He has also worked for the UN, the Dutch Government and WWF as a negotiator and facilitator.
It is Suleyman who talks in interviews about the broader and more nuanced applications of AI and its relationship to social impact, as he said in an interview with the FT: “We learn so much about the strength and weaknesses of our algorithms by testing them on large-scale, real-world, noisy and messy data sets…It’s a pretty unique way to make progress with our toughest social problems.”
AI needs to connect with the world in all its messiness, because it is only by understanding that messiness that it can engage with our real, infinite and constantly changing problems. We can learn many things from games and from game theory, but the focus should be setting AI the challenge of solving London’s traffic problems, reliance on fossil fuels, hunger, or perhaps even the Human Condition. Games are not a good starting point where the goal is philosophical.
Part Three: Insight Overload and What Will AI Think of YOU?
Diversity across culture, gender, upbringing and outlook is vital as AI evolves. There simply are not enough women in technology at the moment, and this will create a substantial issue as technology plays an ever more personal role in running our lives. The gender balance in AI currently runs at 85% male to 15% female. While this could be put down to the broader problem of male bias in the technology industry, it needs to be addressed so that the algorithms that are created do not present a similar gender bias. AI is already used to filter job applications and provide character assessments. If these assessments are built on biased objective functions, then once again AI is starting off in the wrong direction. We need to make sure that decisions made by machines do not incorporate conscious or unconscious biases from their creators. Much work needs to be done to understand our own human biases first, as we cannot work to prevent behaviours we, as individuals and as a society, cannot see.
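To make the point about biased objective functions concrete, here is a deliberately tiny sketch, with entirely invented data, of how a screening model whose only objective is “match past hiring decisions” faithfully reproduces the prejudice baked into those decisions:

```python
# Hypothetical historical hiring records: (years_experience, gender, was_hired).
# In this invented data the past process favoured men, largely
# regardless of experience.
history = [
    (5, "m", 1), (3, "m", 1), (2, "m", 1), (1, "m", 0),
    (5, "f", 0), (4, "f", 1), (3, "f", 0), (2, "f", 0),
]

def learned_score(gender):
    """A naive model whose objective is to agree with past decisions:
    it scores a candidate by the historical hire rate for their group."""
    outcomes = [hired for _, g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# Two otherwise identical candidates receive different scores:
print(learned_score("m"))  # 0.75
print(learned_score("f"))  # 0.25
```

The model here minimises disagreement with its historical labels perfectly, and in doing so encodes the historical bias perfectly; auditing what a system is optimised *for* matters as much as auditing its code.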
The importance of this diversity grows even further because soon AI will grow beyond number crunching and playing games, and start to become more integrated into our own human cognitive functions. For this to happen, a new language will need to be created that allows AI to communicate rapidly and impartially with us. Indeed, if we are to end up with the sorts of brain-enhancing chips currently envisaged, this will be an absolute imperative, so that the correct signals can be given to the brain in the correct language without room for misinterpretation. If these chips provide information and analysis seamlessly, there might be no way to know whether a thought has come from you, or from the computer. Given recent events with the global cyberattacks on the Windows operating system, building an AI language that prioritises humanity in all its diversity will be essential.
And this is where creativity, authors, writers and the book industry come in.
We know that the way we humans view the world is through the language of stories. From a simple conversation opener like “What’s happened to you!?” through “How are you?” to “Tell me something about yourself”, most of our conversation openers are gambits to elicit a story in response. We like stories. And we like them because they give us an emotional context for the information being told to us. As humans, we think with our hearts and our bodies as much as with our heads (“gut feel”, love, fear, stress etc). Stories give us a way to communicate information in a way that we can interpret, remember and recall. “Boy meets girl, and drowns” is a fair synthesis of Titanic, but it does not communicate the details and context of the story, which are fundamental to understanding it.
As we are evolving as humans, so is our storytelling. We started in the Oral Tradition which could be described as one to one (or 1:1). With the invention of writing, storytelling grew to 1: a few. With the invention of film, it grew to 1: quite a few. Television grew it to 1: many, and the internet confused everything by making it a quantum state of both 1:1 again, and 1: absolutely everybody.
With each evolution of the medium, the storytelling structures evolve. Poems, novels, plays, screenplays, games, interactive fiction — all have different language nuances, and when AI starts to talk to us, we need to invent a new language so that it can communicate with us in ways we all understand without ambiguity. Initial examples of AI’s creativity, such as Google’s Deep Dream, are viewed with humour at conferences, and with familiarity by LSD users, but could it be that we are seeing the first forms of what the computer is trying to say? Like the ‘tea-cup stains’ language that was described so well in the recent movie Arrival, how would we know if the machine is thinking, if its output is not in any recognisable human form of language?
And it doesn’t stop there.
If we are looking to create a sentient technology, it will be able to communicate how it views us — its creator. As a species, we have never had a non-human intelligence analyse us. We have explored the mind through MRI scans, philosophy and all sorts of neuroscience, but this is something different, as the analysis will not only be made, but interpreted, by AI. It might be able to describe our potential, our limitations… and even our evolutionary potential. Are we actually using 10% of our brain? Do we think with our heart and body as much as with our brain? Are there dimensions around us which we have filtered out in favour of survival mechanisms, but that AI can open up to us?
Looking at your mirror reflection is known to have a psychological impact — one used in everything from self-awareness exercises to treating phantom limbs. AI has the potential to provide a new sort of mirror, one that could describe the future of our evolution based on what it sees. Linking technology to one’s brain to provide augmented senses could free us up to focus on more creative and insightful traits, freed from the more mechanical acquisition of information.
At this point, you might be thinking that all this is dizzying future talk, but in many ways we are already well down this road. My grandmother, who lived to 92, was 100% deaf. Hearing aids did not work, so she was an early adopter of cochlear implants, which stimulate the auditory nerve directly. She could hear again. She got to talk to her great-grandchildren, to communicate and connect with the world. The emotional impact of this is intense. A new dimension opens up for the individual. If you haven’t already seen this impact, have a look at this video of a 29-year-old as her cochlear implant is switched on for the first time. What other dimensions might exist that we were simply not aware of until AI switched them on for us? What insights can be achieved if we are given the cerebral equivalent of the wheel?
Information Overload has already challenged our emotional and cognitive capacities, and this can be countered by building up our resilience and self-awareness. However, we are likely to enter a new era — let’s call it Insight Overload — as breakthroughs like E=mc², the Higgs boson and gravitational waves from black hole mergers become daily occurrences. This new phase will challenge our role as a species, not just as individuals.
Our species has never been more under threat from thinking as self-centred individuals. Through scientific research we have dissected enough bodies to know *what* we are. Perhaps the challenge we should prepare for is that AI is about to hold up a mirror to show us insights into *who* we are. Given our experiences with Information Overload in the past couple of decades, we need to make sure we are able to grow within ourselves, to understand our diversity and our values as a species, so that when Insight Overload arrives, the gap between who we think we are, and who we are revealed to be is narrow enough for us to evolve intelligently.
If you made it this far, thank you! Please share with other like-minded folk.