Technology is Downgrading Humanity: Let’s Reverse That Trend Now
Summary: Today’s tech platforms are caught in a race to the bottom of the brain stem to extract human attention. It’s a race we’re all losing. The result: addiction, social isolation, outrage, misinformation, and political polarization are all part of one interconnected system, called human downgrading, that poses an existential threat to humanity. The Center for Humane Technology believes that we can reverse that threat by redesigning tech to better protect the vulnerabilities of human nature and support the social fabric.
THE PROBLEM: Human Downgrading
What’s the underlying problem with technology’s impact on society?
We’re surrounded by a growing cacophony of grievances and scandals: tech addiction, the outrage-ification of politics, election manipulation, teen depression, polarization, the breakdown of truth, and the rise of vanity/micro-celebrity culture. If we continue to complain about separate issues, nothing will change. The truth is, these are not separate issues. They are an interconnected system of harms we call human downgrading.
The race for our attention is the underlying cause of human downgrading. More than two billion people — a psychological footprint bigger than Christianity — are jacked into social platforms designed with the goal of not just getting our attention, but getting us addicted to getting attention from others. This is an extractive attention economy. Algorithms recommend increasingly extreme, outrageous topics to keep us glued to tech sites fed by advertising. Technology continues to tilt us toward outrage. It’s a race to the bottom of the brain stem that’s downgrading humanity.
By exploiting human weaknesses, tech is taking control of society and human history. As magicians know, to manipulate someone, you don’t have to overwhelm their strengths, you just have to overwhelm their weaknesses. While futurists were looking out for the moment when technology would surpass human strengths and steal our jobs, we missed the much earlier point where technology surpassed human weaknesses. It’s already happened. By preying on human weaknesses — fear, outrage, vanity — technology has been downgrading our well-being while upgrading machines.
Consider these examples:
- Extremism exploits our brains: Over a billion hours of YouTube are watched daily, and 70% of those hours come from the recommendation system. The most common keywords in recommended videos were “gets schooled,” “shreds,” “debunks,” “dismantles,” “debates,” “rips,” “confronts,” “destroys,” “hates,” “demolishes,” and “obliterates” (AlgoTransparency).
- Outrage exploits our brains: Each moral-emotional word added to a tweet raised its retweet rate by 17% (PNAS).
- Insecurity exploits our brains: In 2018, if a teen girl started with a dieting video, YouTube’s algorithm recommended anorexia videos next, because those were better at keeping attention.
- Conspiracies exploit our brains: If you were watching a NASA moon landing, YouTube would recommend Flat Earth conspiracies — millions of times. YouTube recommended Alex Jones (InfoWars) conspiracy videos 15 billion times (source).
- Sexuality exploits our brains: Adults watching sexual content were recommended videos that featured progressively younger subjects, from young women to girls to children playing in bathing suits (NYT article).
- Confirmation bias exploits our brains: Fake news spreads six times faster than real news, because fake news is unconstrained while real news is bound by the limits of what is true (MIT Twitter study).
The advertising business model is the cause of this human downgrading. Free is the most expensive business model we’ve ever created. We’re getting “free” destruction of our shared truth, “free” outrage-ification of politics, “free” social isolation, “free” downgrading of critical thinking. Instead of paying professional journalists, the “free” advertising model incentivizes platforms to extract “free labor” from users by addicting them to getting attention from others and to generating content at no cost.
Instead of paying human editors to choose what gets published to whom, it’s cheaper to use automated algorithms that match salacious content to responsive audiences — replacing newsrooms with amoral server farms. This has debased trust and the entire information ecology.
Now we see that social media has created an uncontrollable digital Frankenstein. Tech platforms can’t scale safeguards to meet these rising challenges across the globe: more than 100 languages, and millions of Facebook groups and YouTube channels producing endless hours of content. With two billion automated channels, or “Truman Shows” personalized to each user, hiring 10,000 people is inadequate to the exponential complexity — there’s no way to control it.
- The 2017 genocide in Myanmar was exacerbated by unmoderated fake news; Facebook had only four Burmese speakers to monitor its 7.3M users there (Reuters report).
- Nigeria had four fact-checkers for a country where 24M people were on Facebook (BBC report).
- India’s recent election spanned 22 official languages. How many engineers or moderators at Facebook or Google know those languages?
Human downgrading is an existential issue for global competition. Global powers that downgrade their populations will harm their economic productivity, shared truth, creativity, and the mental health and wellbeing of the next generations. Solving this issue is urgent if we are to win the global competition for capacity.
Society faces an urgent, existential threat from parasitic tech platforms. Technology’s outpacing of human weaknesses is only getting worse — from more powerful addiction to more powerful deep fakes. Just as our world’s problems go up in complexity and urgency — climate change, inequality, public health — our capacities to make sense of the world and act together are going down. Unless we change course right now, this is checkmate on humanity.
A PROPOSED SOLUTION: Catalyzing a transition to humane technology
Human downgrading is like the global climate change of culture. Like climate change, it can be catastrophic; but unlike climate change, only about 1,000 people need to change what they’re doing.
Because each problem — from “slot machines” hacking our lizard brains to “Deep Fakes” hacking our trust — stems from a failure to protect human instincts, designing all systems to protect those instincts lets us not only avoid downgrading humans, but actually upgrade human capacity.
Giving a name to the connected systems — the entire surface area — of human downgrading is crucial because without it, solution creators end up working in silos and attempting to solve the problem by playing an endless game of “whack-a-mole.”
There are three aspects to catalyzing humane technology:
1) Humane Social Systems. We need to get deeply sophisticated about not just technology, but human nature and the ways each impacts the other. Technologists must approach innovation and design with an awareness of the ways we’re manipulated as human beings. Instead of more artificial intelligence or more advanced tech, we actually just need more sophistication about what protects and heals human nature and social systems. CHT has developed a starting point that technologists can use to explore and assess how tech affects us at the individual, relational, and societal levels. For example:
- Phones protecting against slot machine “drip” rewards
- Social networks protecting our relationships off the screen
- Digital media designed to protect against Deep Fakes by recognizing the vulnerabilities in our trust
2) Humane AI, not overpowering AI. AI already has asymmetric power over human vulnerabilities: it can perfectly predict what will keep us watching or what can politically manipulate us. Imagine a lawyer or a priest with that asymmetric power whose business model was to sell another party access to exploit you. We need to convert that into AI that acts in our interest by making these systems fiduciaries to our values — which means prohibiting advertising business models that extract from that intimate relationship.
3) Humane Regenerative Incentives, instead of Extraction. We need to stop fracking people’s attention. We need to develop a new set of incentives that accelerate a market competition to fix these problems. We need to create a race to the top that aligns our lives with our values, instead of a race to the bottom of the brain stem.
- Policy and organizational incentives that guide operations of technology makers to emphasize the qualities that enliven the social fabric
- We need an AI sidekick designed to protect the limits of human nature and act in our interests, like a GPS for life that helps us get where we need to go.
The Center for Humane Technology supports the community in catalyzing this change. We can see how different groups can come together:
- Product teams at tech companies can integrate humane social systems design into products that protect human vulnerabilities and support the social fabric.
- Tech gatekeepers such as Apple and Google can encourage apps to compete for our trust, not our attention, and to fulfill our values — by reshaping App Stores, business models, and the interactions among apps competing on Home Screens and in Notifications.
- Policymakers can protect citizens and shift incentives for tech companies.
- Shareholders can demand commitments from companies to shift away from engagement-maximizing business models that are a huge source of investor risk.
- VCs can fund that transition.
- Entrepreneurs can build products that are sophisticated about humanity.
- Journalists can shine light on the systemic problems and solutions instead of the scandals and the grievances.
- Tech workers can raise their voices around the harms of human downgrading.
- Voters can demand policies from policymakers that protect kids from being downgraded.
There’s change afoot. When people start speaking up with shared language and a humane tech agenda, things will change.