I was invited to deliver the keynote at NEXT Conference, held on September 24th–25th 2015 in Hamburg, Germany. A busy mix of agency, marketing and business executives, it was an audience outside our usual network. And so it became an opportunity for Jon and me to present our thinking, our company and the research areas we want to work in to a new audience. The original talk was more like a performance, with a stream of videos and animations as visual stimulus, which you can view here.
This is Prof Ranjan. On a Sunday a few weeks ago he posted one of his regular selfies on Facebook. Three hours later he was no more. His untimely death has been a deep shock for the Indian design community and the wider world he touched. I am still not quite sure how to make sense of it. In many ways I am here today because of him.
I could not help but think of the fragility of life. In that moment of news, I had just finished recording a little video clip of my three-year-old son’s first encounter with a VR headset. As I looked at my son, that sudden loss made me consider my own mortality more than ever before. In the absence of a prolific collection of selfies, I considered writing a letter to my son — something he might read in the future, in the event of my own untimely death. But sitting down to contemplate what I would write, I was continually drawn to the question: in what future world would this letter find him?
As a designer my work often involves investigating potential futures, normally through the lens of a specific technology. But this raw personal emotion gave another level of poignancy to that moment’s consideration.
As we kick off this new version of NEXT with the theme ‘How We Will Live’, I cannot help but pose that as a question. HOW WILL WE LIVE? In the 30 minutes I have, I can barely even scratch the surface of this question, but I will try my best to highlight a few themes that can hopefully help widen the scope of conversations over the next two days. Starting with some fundamental questions: What future are we building for ourselves and our children? More importantly, how are our visions of the future shaped and formed? What impact do these visions have on our lives, and what power do we have to influence and change them?
Whilst we conference, the future is manifesting itself all around us through a notification, an update, a news headline, a product launch, a housing policy, a new voting system, a political party, a patent, a treaty. And even though it’s within reach, and all around us, we often struggle to make sense of it.
The popular term for this in psychology is ‘Selective exposure’. Simply put — we favour information that reinforces our pre-existing views and avoid information that might question our beliefs and attitudes. It was precisely this kind of confirmatory bias that was responsible for the 2008 bankruptcy of the Lehman Brothers Investment Bank, triggering the Global Financial Crisis. In the zeal for profit and economic gain, politicians, investors and financial advisors ignored the mathematical evidence that foretold the housing market crash, in favour of flimsy justifications for upholding the status quo.
In situations where we are confronted with ideas that conflict with our beliefs, we experience a form of mental discomfort. Leon Festinger called this ‘cognitive dissonance’. Stealing office supplies is one little example — if you grab Sharpies and Post-its from your office, whilst knowing that it does not feel right, you justify it by telling yourself that you are being made to work long hours, or that you are not being paid enough.
Scientists continue to warn us about global warming, and many of us are aware of it. But whether because of direct vested interests — say, being the chief executive of Volkswagen — or, for lesser mortals like the rest of us, because the burden of doing something about it — simply flying less, or having fewer children — is difficult, we seek out information that is more comforting, and find justifications for our actions.
These are just some examples of cognitive weaknesses that, as well as blindsiding us, are exploited by various industries to ensure we consume only specific information whilst ignoring the rest. Part of it is due to the media landscape we live in, which presents us with carefully curated, singular future visions full of shiny glass surfaces and effortless telepathy. Unless we understand these conditions, our capacity to create the future we want will slip out of our hands. So today, I want to go beneath the visions of the future we consume. As we look around at big trends like Big Data, Smart Cities or the Internet of Things, we realise they are not really so separate; each bleeds into the others.
So I’ll try and paint a bit of a big picture, starting with mundane activities such as socialising and shopping. Until recently, I was an Amazon Prime member, using it to the fullest, buying emergency nappies at 2 in the morning. They would be at my doorstep the next morning. Super convenient. Next time I was offered a combo deal with wet wipes. Of course. Duh. Then those books which promise the dream of sleep, then the cute onesies, then the pacifiers. Not to mention the free vouchers and coupons. I was hooked.
We become enveloped in habit-forming feedback loops, thanks to the convenience of it all — the fingerprint login, the one-click buy button, saved credit card details and so on. This system of cue / routine / reward, in which the brain converts a sequence of actions into an automatic routine, is called “chunking” by researchers at MIT. A simple example is driving a car. When first learning to drive, tremendous concentration is brought to bear; the brain is highly active. But after much repetition, much of the act of driving becomes “chunked” into a subroutine that is executed with far less brain activity.
These habit-forming digital activities turn us into what I call Chunkers: a new 24/7 labour class of data-producing workers. As Adam Curtis has written, giant computer servers accumulate the data and metadata of our most mundane everyday pasts — creating a historical universe that is constantly mined to find new ways of giving back to us today what we liked yesterday — with variations.
The inevitable effect of chunking is the narrowing and simplification of our experience, making it easier for ever smarter recommendation systems to predict what we might want next. This baffling patent diagram is one example of that. It’s Amazon’s patent for anticipatory shipping: things we haven’t yet ordered, but are very likely to, are already being shipped right now.
I have recently started confusing the Amazon algorithms by searching for all sorts of things that I absolutely don’t want. The recommendations are — well — outrageous, to say the least. I know I have probably dug my own grave of insufferable spam.
But jokes apart, when such systems start showing signs of autonomy, things can get eerily complicated. One small example is my latest stand-off with Siri. We are going to enter into ever more complex battles over autonomy and agency with the gadgets we use and live with.
Whilst concerns about robots taking over our jobs hit the headlines, I think we need instead to delve a bit deeper into how we are progressing or regressing human agency. How much capacity do we have to push ourselves to test the limits of our own abilities?
This becomes even more important to think about with the Internet of Things, as the cue / routine / reward system bleeds out into our physical infrastructure. Instead of you buying stuff online, the things all around you keep track of what you need, so long as you keep feeding them data.
Mimo, a smart baby monitor built into a onesie, notifies you when your baby wakes up or changes her breathing pattern. So when your baby is stirring, the lights turn on, coffee begins brewing and some Baby Mozart starts playing on the stereo. If you are a parent in this room, you might notice some problems with this product and its programmed behaviour. Most recently we heard of the smart fridge that leaves your Gmail account vulnerable, and the TV that listens in on your conversations.
Recently I worked with my team at Superflux on a project where we explored the frictions that arise between such one-size-fits-all IoT care devices and an elderly person’s habits and rituals. Thomas, aged 70, has been given smart devices by his children, who live far away, so they can monitor their father’s wellbeing. A smart fork to monitor his diet, a smart cane to monitor his exercise, and a smart mattress to monitor his sleep.
Finding the constant nudges of the devices intrusive, Thomas stopped using them, only to be nagged by his children. So he looks for ways to outsmart the devices. I am going to show just a small part of the film — it’s called Uninvited Guests.
The IoT is happening to us so much that we forget the technology is not the goal.
This is something my colleague Hugh Knowles constantly reminds us of. Instead of desperately hunting for the killer app in IoT, what would truly make it a killer tech is a human approach to the technology — ethical, sustainable business models that empower people, where we have the capacity and tools to make sense of the data we collect, and to decide what we want to do with it.
We consider startups and entrepreneurs to be disruptors and early adopters of such technologies. But ironically, nation states are often the ones who actually test these technologies and start using them in unexpected and unsettling ways.
The Chinese government recently announced that it is building a ‘Social Credit System’, where they will use big data and various surveillance systems to publicly rate its citizens on sincerity scores. People will know they are being watched and their standing in society will be affected by their behaviour. This shouldn’t be so shocking for us here in the West, where such systems have been operating by stealth for a very long time. And lest we forget, there are obvious icons that represent the power and struggles of such systems.
But beyond the obvious, if we look through the cracks and crevices we discover the less visible side of these infrastructures and big data systems. The same system that allows companies to collect copious amounts of our data and predict what we might need is also used for policing and crime prediction. “What Can We Learn from Wal-Mart and Amazon about Fighting Crime in a Recession?” ask law enforcement agencies, who are busy trawling your social media data for early warnings — such as postings of gigs, parties and the like. Once a computer identifies an area as a hot spot, it lowers the bar for what qualifies as suspicious behaviour, reinforcing stereotypes about certain neighbourhoods.
Another example mentioned by Adam Curtis is STATIC-99 — the most popular tool in the US for predicting whether sex offenders are likely to commit crimes again after they have been released. It is being used to decide whether to keep them in jail even after they have served their full sentence. But the fact is that there is no way the system can predict what an individual will do. A recent report on such systems found that the margin of error for individuals could be as great as between 5% and 95%. Yet people are being kept in prison on the basis that such a system predicts they might do something bad in the future.
And so we come full circle. From the simple act of shopping, or sharing insignificant details of our lives online, to future crime prediction — increasingly autonomous systems are constantly watching, tracking, logging, collecting and archiving every aspect of our lives. Anonymity is becoming a luxury in a world where it will never be possible to forget or get lost ever again. Finding patterns in these trillions of megabytes of data has become the biggest asset of the 21st century.
The future will be a landfill of GPS tags.
Although we might find the records and archives of our most mundane daily activities rather dull, these grey, boring blocks are the biggest centres of power today. Our futures are being built on these archives. If ever there was truth in the idea that your past will come to haunt you in the future, it’s NOW. You are only as much as your data karma will allow you to be.
This might all seem a tad dystopian, but it is a very real aspect of our technological landscape. I think one of the reasons we often overlook these more problematic aspects is because we tend to imbue technology with the ideals of the people who have created it, and the messages of those who market it. However, creators and marketeers only ever set the affordances and suggest a use case. A technology’s true impact will always be defined by those who use it. Whether that’s knitting groups or fascist regimes, technology becomes an amplifier and accelerator of the social, cultural and political values of the groups who use it, not those who made it. And it will continue to be used in ways you can never imagine.
For instance, this article has been doing the rounds recently. A lot of people have been very surprised that Syrian refugees have smartphones. I guess it’s interesting because we see the technology being used in unexpected ways, but I think a bit of context is needed.
This is Syria before the civil war broke out in 2011.
And here’s one of the beautiful squares of Damascus. Not one of the world’s richest countries, but doing well, with more than enough comforts to afford cheap, or not so cheap, smartphones.
Before this happened.
And whilst these are the popular images in the media, they are often devoid of even the most recent historical context.
As millions of Syrians flee their homeland, they grab the most essential items that will help them stay connected and chart their journeys forward. WhatsApp and maps take on a whole new meaning as they become a lifeline for hundreds of thousands of people.
Simultaneously that very same technology is being ingeniously exploited by oppressive forces. Soldiers at government checkpoints, as well as at Islamic State checkpoints, commonly demand Facebook passwords. They look at Facebook profiles to determine one’s allegiance in the war.
And then, altogether, just like that — these things can be turned off. Just a few weeks ago the Government of India turned off the internet for 63 million people in my home state of Gujarat. It was known as the ‘WhatsApp ban’, and was implemented because it could (apparently) ‘incite social unrest’. It was used as an instrument of control over the quick communication of ‘rumours’. A very stark reminder of where power lies, and how quickly something that we have come to rely on can be taken away.
And this is a small example of something history has repeatedly demonstrated. Those with the least power to participate in creating the future often suffer the worst consequences of its manifestations.
If I were to write a letter to my son, I would not fail to write to him about the incredible and powerful work of thousands of people across the world to build a more inclusive, plural, aspirational world. I would write to him about the world-changing projects brought about by the determination of a few. I would share with him a future of promises too. But I would also want to show him some other shades that might help him form a more nuanced worldview.
We are here to discuss the consumers of tomorrow. They are going to come from all groups, more diverse than you can imagine. They will include us, people from other parts of the world, extremist groups, fascist governments, activists, refugees, creditors and bots. So any future we consider and design for will need to be diverse, plural and inclusive.
And in what world will these varied consumers live? Let us, for a very brief moment, do a thought experiment: extend just one — probably populist — theory of the future right far out. Say 50 years from now: 2060–2070. Tim Urban’s brilliant ‘Wait But Why’ blog polls various experts in the AI field, who place their estimate of the birth of an artificial superintelligence (ASI) — one that exceeds human intelligence — at around 2060.
The kind of intelligence that we couldn’t understand any more. As Tim writes, “An ASI would be orders of magnitude more powerful than a human mind, and it would use its power to continuously improve itself even further. For a human to try to understand an ASI’s ‘mind’ would be as it would be for a spider to try to understand a human’s mind, concepts and culture.”
Couple this big technology trend with one of the biggest societal “trends” — or rather, concerns. Scientists at the UK’s Met Office and the Department of Energy and Climate Change warn that a 3–4°C rise in temperature could happen by 2060 without strong action on emissions, causing a global GDP loss of 0.7% to 2.5% and a 40% reduction in corn, rice and other agricultural produce.
People will be forced from their homes on a grand scale — from coastal areas because of rising seas; from areas no longer habitable due to high temperatures or drought; and by changing industrial and commercial practices. Maybe human society will be so displaced that it won’t be able to adapt. Just about the same time as we get artificial superintelligence.
What world will that be? Some of us might still be alive then. My son would be 63, and if he had children, my grandchildren could be just about my age. Suddenly time compresses and nothing seems that far off.
This might feel like a work of fiction, and it may well turn out to be, as the future remains uncertain. But its potential is very real. Let’s just have another look at the events unfolding around us today. The situation in Syria could be seen as a microcosm of this future. Researchers and scientists have been saying this for several years now: global warming intensified Syria’s worst-ever drought between 2006 and 2010, which destroyed the country’s agriculture.
This pushed 15 million people into cities already straining from an influx of refugees from war-torn Iraq, and from poor governance. Professor Seager of Columbia University says: “I think it’s only just beginning. It’s going to continue through the current century as part of the general drying of the Eastern Mediterranean — I don’t see how things are going to survive there.”
On the artificial intelligence front, we are already starting to see the impact of autonomous robotic systems.
So once again when we ask — How will we live? How will we survive, sustain, endure?
The problem with utopias and dystopias is that they demand fear or hope. Where fear often paralyses, hope fosters placid anticipation. Both can lead to inaction. When considering how to move beyond this dichotomy, I am reminded of a Gilles Deleuze quote: “There is no need to fear or hope, but only to look for new weapons.”
As entrepreneurs, marketeers, media agents, technologists, hackers and designers, you have amongst you a suite of sophisticated tools and clever tactics: social media, information access, language, human and machine resources, and so much more. You don’t need to go out on the streets and protest if that’s not for you; you can instead become stealth activists, to create the future we want. As Keller Easterling would say, “gossip, rumor, gift-giving, compliance, mimicry, comedy, distraction, hacking or entrepreneurialism” are all tools for the stealth activist.
So — what can you do?
This is just my quick, hastily cobbled, back-of-the-napkin list — certainly not an exhaustive one. The point is that you can, within your own contexts and environments, be tactical, creative and innovative in order to leverage power.
We all know Orwell for the bleak dystopia of his novel 1984. What is less well known is that he wrote it because he was full of hope. As Thomas Pynchon wrote in his 2003 foreword to the book: “Orwell remained confident in the ability of ordinary people to change anything, if they would. It is the boy’s smile, in any case, that we return to, direct and radiant, proceeding out of an unhesitating faith that the world, at the end of the day, is good and that human decency, like parental love, can always be taken for granted — a faith so honourable that we can almost imagine Orwell, and perhaps even ourselves, for a moment anyway, swearing to do whatever must be done to keep it from ever being betrayed.”