What does an artificial future mean for us?

Welcome to The Interface!

In this new series of articles from Brand Genetics, we explore the world of artificial intelligence and machine learning and ask the question: what does this mean for humans?

We begin this series by looking at the evolution of advancement, exploring how technology has changed and developed through the ages and what the so-called fourth industrial revolution, and a future of artificial intelligence, has to offer us.

In the second part, we shift focus and look at the importance of human-centric AI, where we think about the ways in which AI is designed and used through a human lens. Technologists tend to think of ‘users’, but does this adequately take into account human needs, behaviours and motivations? Finally, we look at a future where humans and machines learn together. We’ll explore the value of combining AI with behavioural science and psychology to create optimized, human-centric goods and services. What advantages does this offer and what benefits will we see in our daily lives?


THE EVOLUTION OF TECHNOLOGY

Arnold Schwarzenegger in The Terminator

The last eighteen months have seen a significant increase in artificial intelligence (AI) hype: the bubble has grown larger and larger, and millions of dollars have been pumped into the industry. However, according to Gartner’s Hype Cycle, most applications of AI currently sit at the “Peak of Inflated Expectations” stage. In other words, our expectations for the technology are riding high, and the longer-term reality is likely to be quite different.

We all know that AI is changing the way we work and the work we do. However, much of our understanding derives from the narratives of Hollywood sci-fi screenwriters, resulting in eye-catching, scaremongering headlines about an impending robot apocalypse. This makes it challenging for us not only to rationalise, but more fundamentally to prepare for, a future with intelligent machines. To complicate matters further, the pace of technological change has accelerated: it has never moved as quickly as it is moving now. Consequently, many of the concepts and themes that lie behind AI are poorly defined and misunderstood, even by the experts themselves.

A Brief History of Technology

When we think of technology, our mental model is usually computers, machines and smartphones, but the word actually derives from the Greek ‘techne’, meaning “art as human skill”. This implies that the concept of technology is neither new nor modern. Our ancestors have been using ‘techne’ biologically, psychologically and culturally for millions of years. Take, for example, Australopithecus afarensis, a hominin alive three and a half million years ago that was shaping technology just as technology was shaping it. These hominins built purposeful tools to keep themselves alive and out of danger, tools that predate, and helped pave the way for, critical human cerebral functions like co-ordination and language.

Australopithecus afarensis

We also see technological evolution at play with the development of connectivity. Sophisticated language emerged around 100,000 years ago, enabling us to communicate face-to-face and paving the way for the development of civilization as we know it. The creation of culture enabled social skills like collaboration, which played a fundamental role in driving technological development and innovation throughout the evolution of mankind.

The Four Industrial Revolutions

Around 1750, the First Industrial Revolution saw the development and wide-scale introduction of mechanical production, railroads and steam power. The second, from around 1870, saw mass production and electrical power, with innovators like Alexander Graham Bell inventing incredible devices like the telephone, enabling direct communication across the globe. The third revolution, also known as the digital revolution, arrived in 1969 with automated production, electronics and computers, followed by the introduction of mobile phone technology in 1979 and Tim Berners-Lee’s creation of the World Wide Web in 1989 (which did not reach the wider public until the mid-1990s). It is important to note that, in each case, there was a lag of around 30–40 years between the development of the nascent technology and the wide-scale impact of that technology on the broader population.

Now, we are confronting the Fourth Industrial Revolution, characterized by artificial intelligence, big data, robotics and quantum computing. The leading thinkers in the field argue that these technologies will create an unprecedented change in the way the world functions. Klaus Schwab, Founder of the World Economic Forum, suggests that this revolution is so significant because of its speed.

“Current breakthroughs have no historical precedent…the Fourth (Industrial Revolution) is evolving at an exponential, rather than a linear pace.”

This speed of change is evident in one of the most disruptive consumer innovations of the 21st century: the iPhone. Launched in 2007, the iPhone now has almost a billion users, and its maker, Apple, was recently valued at one trillion dollars. Whilst the aesthetic and design are significant innovations in themselves, the hardware inside the iPhone is quite extraordinary. For example, each transistor on the chip inside the iPhone X (the fourteenth iteration of the iPhone) is around 15 nanometers wide, small enough that roughly 450 of them would fit across a single red blood cell. This astonishing example illustrates the remarkable rate of technological advancement.

Explaining the rate of innovation

We have seen exponential growth in the power of computing chips over the last five decades, which many computer scientists attribute to Moore’s Law. Named after Gordon Moore, co-founder of Intel, the law observes that the number of transistors in an integrated circuit doubles roughly every two years. This is an important observation because it helps us to predict the future of selected innovations, in this case the future of AI. Although it has held broadly true for the past five decades, some experts believe Moore’s Law is slowing down. This may in fact be a good thing, as it encourages the jump from artificial machine intelligence (top-down, human-engineered) to natural machine intelligence (self-improving, through deep learning for example).

Moore’s Law and Eroom’s Law
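
To make the idea of doubling every two years concrete, here is a minimal Python sketch that projects transistor counts under Moore’s Law. The starting point (roughly the Intel 4004 of 1971) and the years shown are illustrative assumptions for the example, not a forecast of any particular chip.

```python
# Minimal sketch of Moore's Law: transistor counts doubling every two years.
# The baseline (2,300 transistors in 1971, roughly the Intel 4004) is an
# illustrative assumption for this example.

START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> int:
    """Project the transistor count for a given year under Moore's Law."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return int(START_TRANSISTORS * 2 ** doublings)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{projected_transistors(year):,} transistors")
```

Even from this toy baseline, twenty-five doublings take the count from a few thousand transistors to tens of billions, which is why exponential change is so easy to underestimate.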

What exactly is artificial intelligence?

Unhelpfully, there is no clear-cut, ready-to-go definition of AI, largely because we don’t have a clear-cut definition of human intelligence. Broadly, however, AI can be defined as the use of computer systems to make decisions similar to those that humans would make.

There are many different models for AI: some replicate the neural networks in the human brain, while others take entirely different approaches. AI is created through a number of different methods, including machine learning, deep learning, supervised learning and reinforcement learning. Machine learning is the most basic of these and involves the construction of algorithms which find patterns in data sets and make decisions based on them (a short code sketch of this idea follows at the end of this section). AI is a focused intelligence. Unlike human brains, which are complex machines thinking in many different ways, AI is a tool which can reduce complexity by making quick, replicable and accurate decisions. What is more, AI is self-learning, which means it gets better and better at making accurate decisions the more data (information) it receives and/or generates. This graph from Nick Bostrom’s “Superintelligence” gives a sense of the scale of the potential of self-learning AI in comparison to human intelligence.

From Nick Bostrom’s Superintelligence

As you can see, the potential for self-learning AI is quite remarkable. It is important to note, however, that AI does not have many of the cognitive capacities that humans do, such as empathy, emotional intelligence, ethics and the ability to reason or think laterally. What we can do is use AI to perform more basic, data-heavy tasks which require making multiple linear decisions.
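
As a minimal sketch of what “finding patterns in data and making decisions based on them” can look like in practice, the Python example below trains a simple classifier on a synthetic dataset of imaginary consumers and checks how its accuracy changes as it sees more examples. The dataset, the scikit-learn library and the logistic-regression model are illustrative choices for this sketch, not a description of any particular AI system.

```python
# Minimal sketch: a machine-learning model learns patterns from data and
# tends to improve as it sees more of it. The synthetic dataset stands in
# for real consumer data and is purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A toy dataset: 5,000 "consumers" described by 10 features, labelled
# 1 (bought the product) or 0 (did not).
X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on progressively larger slices of the data and score the model on
# consumers it has never seen.
for n in (100, 1_000, 3_000):
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_train[:n], y_train[:n])
    accuracy = model.score(X_test, y_test)
    print(f"trained on {n:>5} examples -> held-out accuracy {accuracy:.2f}")
```

The exact numbers do not matter; the point is the shape of the process: the algorithm is not told the rule in advance, it infers one from examples, and more information generally means better decisions.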

How does AI impact our industry?

The development of an artificial general intelligence will likely change the trajectory of humanity forever. We are already seeing its impact on the way we live our lives, and over the next decade it will completely disrupt the way we consume goods and services.

It is important to note that data analytics is not new: technology companies like Amazon and banks around the world have been using algorithms for years to find patterns and make predictions about consumer behaviour. However, machine learning and deep learning (built upon access to mobile networks and data) increasingly allow those patterns to be analyzed with greater accuracy, and even allow us to predict when the patterns will deviate. This technology is not only changing the demand side, as increasing transparency, consumer engagement and emergent consumer behaviors force companies to change the way they design and market their products; it is also allowing new types of services to come into being. We are witnessing the rise of new business models with the growth of an AI-supported sharing economy, as well as enhanced subscription, on-demand and personalization models, all of which offer consumers increased flexibility, accessibility and efficiency.
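
To give a flavour of what “predicting when those patterns will deviate” might involve, here is a deliberately simple Python sketch that flags weeks in which invented purchase figures drift well away from their recent trend. The figures, the rolling-average rule and the threshold are assumptions made purely for illustration; real systems use far richer models and data.

```python
# Minimal sketch: flagging deviations from an established pattern.
# The weekly purchase figures are invented for illustration.
weekly_purchases = [120, 118, 125, 122, 119, 124, 121, 180, 123, 60]

WINDOW = 4          # how many previous weeks define the "expected" pattern
THRESHOLD = 0.25    # flag deviations of more than 25% from that pattern

for week, value in enumerate(weekly_purchases):
    if week < WINDOW:
        continue  # not enough history yet to establish a pattern
    recent = weekly_purchases[week - WINDOW:week]
    expected = sum(recent) / WINDOW
    deviation = (value - expected) / expected
    if abs(deviation) > THRESHOLD:
        print(f"week {week}: {value} purchases, {deviation:+.0%} vs expected ~{expected:.0f}")
```

A real recommendation or demand-forecasting system replaces the rolling average with learned models, but the underlying logic is the same: establish the pattern, then act when behaviour departs from it.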


Clemmie Prendergast is a consultant at Brand Genetics, an agency specialising in human-centred insight and innovation. With a background in anthropology, she has a wealth of experience in behavioural science and psychology and has worked in strategy, insight and behaviour change.