Could AI ever “benefit all of humanity”?

Kentaro Toyama
Published in AI Heresy
4 min read · Jan 12, 2024


[Image: a poor man looking at a smartphone in a Dickensian alley. Created by DALL-E 3; not copyrighted, according to the non-copyrightability doctrine for generative AI.]

The kerfuffle around Sam Altman at OpenAI seems to have been about whether Altman was sufficiently focused on AI safety and the non-profit’s mission to build technology that “benefits all of humanity.” But is building more AI really the route to better use of AI? As someone who’s worked on dozens of digital technologies intended to improve human lives, I can say that the idea that building good technology ensures technological goodness everywhere, much less benefits all of humanity, is doomed from the outset.

To begin, I want to cast just a little doubt on the widely accepted narrative that the technology of the last two hundred years has categorically improved lives. Those centuries saw the development of pretty much everything we consider modern technology — from the medical to the agricultural, from the vehicular to the electrical. So, it would seem to take a real curmudgeon to doubt their impact. But, consider that in the early 1800s, per capita GDP for the approximately one billion people living then was around $1,390 in 2023 dollars.[i] Today, there are 1.8 billion people who live on about that much or less.[ii] So yes, technology has improved lives for many of us, possibly even for the majority, but if you’re one of those 1.8 billion living under the global poverty line today, you haven’t seen the benefit. To put it another way, nearly twice the entire population of 1800 is now doing worse than the average person of 1800. In effect, there’s a big difference between a great technology existing and our applying it, as a civilization, for the good of everyone. Benefiting “all of humanity” is not something you can bake into the technology.
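The arithmetic behind that comparison can be restated in a few lines. This is just an illustrative sketch using the article’s own rough figures (both are estimates, not precise data):

```python
# Restating the paragraph's arithmetic with the article's own figures.
population_1800 = 1.0e9       # roughly 1 billion people alive in the early 1800s
gdp_per_capita_1800 = 1390    # per capita GDP then, in 2023 dollars
below_that_today = 1.8e9      # people today living on about that much or less

# How today's poorest group compares to the *entire* population of 1800:
ratio = below_that_today / population_1800
print(f"{ratio:.1f}x the whole population of 1800")  # prints: 1.8x the whole population of 1800
```

In other words, the group living at or below the 1800 average today is itself almost twice as large as all of humanity was in 1800.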

If even straight-out life-saving technologies like vaccines and large-scale agriculture haven’t helped everyone, what then of digital technologies, which are seldom life-saving, and often bring their own problems?

In 2004, I moved to India to help Microsoft start a new computer science research lab. I was focused on exploring ways in which digital technology could support low-income communities in that country. We built new software, mobile apps, text-messaging services, and custom hardware — some applying the cutting-edge AI of the day — to support farmers, inner-city families, hospital patients, and government agencies. After 50-odd such projects, though, I saw time and again how even the best-designed technology failed to change the underlying social, cultural, and political causes of poverty and other social challenges. What digital technology did was to amplify existing human forces; and exactly where we most wanted to cause positive change, the underlying human forces were corrupt, dysfunctional, or indifferent.

In 2010, those conclusions led me to quit the tech sector and return to the United States, where I found that the lessons I learned in India applied just as much in a richer world. At the time, social media and smartphones were widely beloved; Mark Zuckerberg announced Internet.org, an effort to bring the internet (read: Facebook) free to more people in the world. But, all was not well in digital paradise. Bit by bit, stories accumulated about cyberbullying, revenge porn, online trolling, and eroding mental health. And, despite Silicon Valley’s economic halo, poor and working class Americans were not much better off with Amazon and Google and smartphones than without. Then came Cambridge Analytica and Russian interference in U.S. elections. Suddenly, we all woke up to the dark side of technology.

Those experiences seem to have inoculated many of us against the hype about AI. More of us are rightly concerned about the risks, and more of us are calling for regulation. Even the U.S. government, not generally known for its eagerness to regulate, has begun to act.

Regulation, incidentally, is the only way to rein in bad AI, if for no other reason than that governments are the only forces powerful enough to push back against billion-dollar corporations. And, a social movement big enough to affect global capitalism is the only thing that could radically alter the dynamics of inequality. The profit motive and market rivalries are too powerful to ensure that the world’s tech bros will do the right thing of their own accord.

If technologists were serious about regulation, they’d spend their considerable lobbying dollars pushing for AI regulation, and then pay for representatives of global civil society (not themselves) to shape it. For now, though, they seem to be focused on fighting technology with technology. On the one hand, they’re vocal about the dangers of AI. Sam Altman, Elon Musk, Sundar Pichai, and other tech titans issue frequent, dire warnings, and they often mouth calls for more regulation. On the other hand, these are the same people racing to build the most powerful AI systems on the planet. They’re not too different from arsonists calling in the blazes they started — they need the rest of us to restrain them.

Notes

[i] By one estimate: https://read.oecd-ilibrary.org/economics/how-was-life/gdp-per-capita-since-1820_9789264214262-7-en. Other estimates tend to agree to within 10% or so — most such estimates are based on a historical dataset built by economist Angus Maddison.

[ii] https://devinit.org/resources/poverty-trends-global-regional-and-national/


Kentaro Toyama

W. K. Kellogg Professor, Univ. of Michigan School of Information; author, Geek Heresy; fellow, Dalai Lama Center for Ethics & Transformative Values, MIT.