Culture Before Technology: Why We Need an Ethics Explosion

Felipe Kirsten · Published in Predict · Jan 30, 2020 · 5 min read

At this year’s World Economic Forum Annual Meeting in Davos, New York Times journalist Somini Sengupta sat down with Yuval Noah Harari and Tristan Harris to hear their shared diagnosis of a power asymmetry that may disrupt humanity’s most cherished narratives. The oldest principles of liberal democracy and free-market capitalism may soon falter because the algorithms of certain tech companies could already know many humans better than they know themselves. The suggested prescription includes global regulation, fiduciary and data-ownership laws, mandatory ethics training in STEM learning environments, and the societal cultivation of volition, values-based thinking, and open-mindedness. Crucially, humans will need to learn how to prioritise culture over technology.

In any public setting that allows for discourse between the two, Harari and Harris quickly find each other’s intellectual wavelength. Having previously spoken together in a WIRED interview and on Harris’ podcast, the pair this year delivered polished explanations of what Harris describes as an ‘asymmetry of power’, each complementing the other’s favourite examples. Harari, a macro-historian with a knack for futurology, and Harris, an ex-Google design ethicist who now directs the Center for Humane Technology, make a particularly potent team.

What is the Asymmetry of Power?

The ‘asymmetry of power’ discussed by Harari and Harris refers to a phenomenon unique in human history: an advanced algorithm can understand the patterns, perceptions, biases, and beliefs within a person’s mind better than that person can, and it can wield this dangerously tailored understanding across entire societies. Harari elaborates: young children do not know themselves better than their mothers know them, but as they grow into adults, the workings of their minds become so removed from any other human’s that they can safely assume they know their own minds best. The asymmetry arises when algorithms know a child better than the child’s mother does, and when algorithms know billions of adults better than they know themselves. Harris identifies how several tech companies use this unprecedented capability to exploit human psychology: their business models are designed to maximise profits from massive amounts of freely given behavioural data.

Liberalism on its Last Legs

Harari asserts that shared narratives which allow humans to cooperate flexibly in large numbers have been vital to our species thriving above all others for thousands of years. Such narratives include economic systems like free-market capitalism and political ideologies like liberal democracy. Over the past three hundred years, catchphrases that underpin these narratives have taken hold: “The voter knows best” and “The customer is always right”. Can these catchphrases still be believed? Do voters know best if they do not know themselves best? Are customers always right if their behavioural data reveals desires strikingly vulnerable to manipulation? If a narrative’s call to arms no longer rings true in the ears of its believers, shouldn’t a new narrative take its place? And does anyone know what that narrative could be? Harari and Harris give a clear warning: for liberal democracy and free-market capitalism to survive the 21st century in the imaginations of the people, these narratives will need to be reinvented.

Silicon Valley Has a Philosophical Bug

Harris claims there is a ‘philosophical bug’ in the metaphorical ‘Silicon Valley operating system’: tech companies cannot distinguish between a user’s values and a user’s vulnerabilities. The result is algorithms that make determinations based not only on what users want but also on what users cannot avoid looking at. Are you prone to clicking on gory animal-abuse videos and feeling an immense rush of anger afterwards? What about pimple-popping videos, or articles with click-bait titles that induce very different psychological effects? Perhaps radical political rhetoric? By the philosophical logic of Silicon Valley, this must be the kind of content you value; the companies fail to recognise what makes you vulnerable to clicking on it. And the more you click, the more the algorithm curates such content into your feed, interpreting it as something fundamentally important to your nature that you may have failed to notice yourself. As Harris aptly phrases:

“A human being in the attention model is worth more if they are addicted, distracted, outraged, polarised and disinformed than if they are a thriving human being.”
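To make the ‘bug’ concrete, here is a minimal, purely illustrative sketch in Python of an engagement-maximising ranker. Every name in it (NaiveFeedRanker, record_click, rank) is hypothetical, and the moving-average click model is an assumption for illustration, not any real platform’s algorithm. The point is structural: nothing in this code can tell whether a click reflects a value or a vulnerability, so compulsive clicks and considered clicks are rewarded identically.

```python
from collections import defaultdict

# Illustrative sketch only: a naive engagement-maximising feed ranker.
# All names are hypothetical; this is not any real platform's code.

class NaiveFeedRanker:
    def __init__(self):
        # Estimated click probability per topic, learned purely from clicks.
        self.click_rate = defaultdict(lambda: 0.1)

    def record_click(self, topic: str, clicked: bool) -> None:
        # Exponential moving average of observed clicks. A click made out of
        # outrage or compulsion updates the estimate exactly like a click made
        # out of genuine interest: values and vulnerabilities look identical.
        self.click_rate[topic] = 0.9 * self.click_rate[topic] + 0.1 * float(clicked)

    def rank(self, candidates: list[str]) -> list[str]:
        # Surface whatever the user is most likely to click, regardless of why.
        return sorted(candidates, key=lambda t: self.click_rate[t], reverse=True)

ranker = NaiveFeedRanker()
for _ in range(10):
    ranker.record_click("outrage_politics", clicked=True)   # compulsive clicking
ranker.record_click("long_form_science", clicked=False)     # valued, but skipped once

print(ranker.rank(["long_form_science", "outrage_politics"]))
# ['outrage_politics', 'long_form_science']: the feedback loop Harris describes
```

The loop is self-reinforcing: whatever gets surfaced earns more clicks, which raises its score and surfaces it more often still.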

On allegations that Facebook taps into your microphone to listen in on your conversations, Harris says a top executive at the company has assured him this does not happen. What is scarier than Facebook listening through your microphone is that it doesn’t need to: its predictions about what you would like to see next correlate with your in-person conversations at very close points in time, because the topics you talk about are precisely the ones the algorithm already expects to interest you.

The Need for an Ethics Explosion

In offering a prescription for this power asymmetry, Harari and Harris identify global regulation of artificial intelligence and laws establishing data ownership rights as top political priorities for world leaders. Harari warns that a technological ‘arms race’ that accelerates scientific progress even further would be an incredibly dangerous and volatile situation, one that would almost certainly assure the worst outcome. But solutions should not end with governments. Giving sociology, philosophy, and ethics real weight in the curricula of STEM learning environments would equip technologists and engineers to assess for themselves whether their inventions could threaten the balance of human societies.

To create a legal obligation for tech companies to protect their users’ data, Harris suggests applying fiduciary law, which he argues would immediately kill the business model that enables the exploitation of human psychology. Employees could also be rallied to pressure their companies from within. Furthermore, Harris believes that human values can be deliberately cultivated, as they were in the US and Europe during the 1940s in opposition to fascist thought. By cultivating a stronger culture of volition, values-based thinking, and open-mindedness, our minds would be better equipped to identify our own vulnerabilities and biases, a natural first line of defence against third-party manipulation.

Slow Down Technology, Speed Up Culture

Our technologies can be used to construct very different kinds of societies, and technologists must recognise this as their responsibilities grow: this century they will not merely be programming computers; they will be programming humans and societies. In the 21st century we will desperately need culture to embrace our Palaeolithic minds, to upgrade our medieval institutions, and to slow technology down so that we may find the wisdom to wield god-like technologies yet to be invented. Every other option flings us into the abyss of one of countless dystopias.
