If you read the news, then you are confronted with daily updates on wars, diseases, and celebrity gossip. It’s hard to get a sense of how humans on this planet are actually doing. Are things getting better for the average person? Are they getting worse? You may have encountered articles like the BBC’s Five ways the world is doing better than you think, Cracked.com’s 18 undeniable facts that prove the world is getting better, and Why the world is better than you think in 10 powerful charts. Clickbait titles aside, these articles almost get it right. The data is correct, but the takeaway is dangerously misleading.
So is the world doing better than you think? Well, if you think people in the world have it worse than they did in the good ol’ days, then Hans Rosling is correct: things are better. Population growth is slowing, developing countries are getting wealthier, people are much healthier, girls are receiving better education, and the end of extreme poverty is within sight.
I could end this post right now if the question were: Is it better to be alive now than one hundred years ago? By almost any measure, yes. But why is it better? Technological development is the short answer. Far fewer humans get sick now thanks to vaccines, antibiotics, and clean drinking water. However, technological progress is not inherently tied to human well-being. Jared Diamond, author of Guns, Germs, and Steel, argues that hunter-gatherers might have been better off than agriculturalists. The agricultural way of life was certainly more efficient at extracting food resources, and thus supported a larger population, but greater efficiency at resource extraction is no guarantee of better living.
The average person* today is much better off than the average person 50, 100, or 1000 years ago. Should we expect things to keep getting better? Are we on an unstoppable trajectory to the elimination of global poverty, disease, and violence? It depends on whom you ask. If you ask Peter Diamandis, he would tell you The future is better than you think. Diamandis is a techno-optimist. In his book Abundance, he argues that exponentially improving technologies will enable innovators, with the help of billionaire philanthropists, to solve all of the world’s problems, especially those that affect the global poor. That’s a nice narrative, but is it true?
Yes… and no. Humans are capable of designing technology that would allow 11 billion people to have enough healthy food, clean water, and medical care. It is less clear that this future will come to pass. We currently produce enough food to feed the whole world, yet 795 million people do not get enough to eat. Global markets are powerful optimizers, but the selection pressures shaping the global economy do not perfectly align with human values. Technologically, an abundant future is very possible. If global development continues without any major setbacks, we will eliminate global hunger and absolute global poverty. Unfortunately, the same powerful forces driving global development are pushing us toward a major disruption, one that may result in human extinction.
International development would be impossible without innovations in the underlying technologies. The “world is better than you think” crowd is right to credit technological progress for our ability to feed most of the world today. Unfortunately, the catch-22 of development is dual-use technology. The Wikipedia entry on the topic has sections on missile, nuclear, chemical, and biological tech. If that sounds broad, it’s because it is. The same innovation processes that give us the Global Positioning System also give us intercontinental ballistic missiles. Technology has no inherent morality. It can be used to wipe out smallpox and polio, or to bomb civilization into oblivion.
Sir Martin Rees, cosmologist and president of the Royal Society from 2005–2010, sees the danger posed by technology as very real. He wrote one of the first major books on the subject: Our Final Century: Will the Human Race Survive the Twenty-first Century? He gives us 50/50 odds of surviving until the end of the century. One reason he cites is the persistent threat of global nuclear war.
Over the last sixty years, we have come disturbingly close to nuclear war. During the Cuban Missile Crisis, a Soviet nuclear submarine almost fired a nuclear torpedo, an act that could have initiated a nuclear war between the USSR and the United States. Cut off from Soviet command, two of the three senior officers decided to go forward with the launch. Only the strong disagreement of Vasili Arkhipov saved human civilization.
The end of the Cold War did not end the threat of nuclear war. There are now nine state actors in the nuclear club. Among them are India and Pakistan, who have a long history of conflict and border skirmishes. Two days before this article was written, North Korea conducted its fifth nuclear test, which follows its recent tests of long-range and sea-based missile technology. Tensions between the U.S. and Russia are the highest they have been since the Cold War. The Bulletin of the Atomic Scientists has been estimating nuclear risk since 1945. To illustrate the present risk of nuclear war, the journal displays the Doomsday Clock on its cover. In 1947 the clock showed 7 minutes to midnight. For the last two years, the clock has read 3 minutes to midnight, the closest it has been since the Cold War. Progress in arms reduction has ceased, and the nuclear powers are increasing their spending on nuclear weapons and developing new capabilities.
The state of the world is worse than global poverty and world health trends would suggest. It would only take one mistake to undo the benefits of development and reduce the world’s population to a fraction of its current size.
Concerned scientists like Martin Rees aren’t just worried about nuclear war. While nukes were the first invention capable of destroying human civilization, they likely aren’t the last. A lesser-known fact about the Cold War is the development of massive quantities of biological weapons. The Soviet Union’s secret bioweapons program employed more than 30,000 people to develop and produce biological weapons, including anthrax, Ebola, Marburg virus, plague, Q fever, Junin virus, glanders, and smallpox.
Historically, handling and producing weaponized pathogens has been quite difficult, which has limited the production of biological weapons to large militaries. This is changing. Recent advances in biological engineering techniques, such as CRISPR, have made bioengineering easier and cheaper. The field of synthetic biology hopes to revolutionize biotechnology by designing small, Lego-like modules that can be built into many configurations. Soon it might be possible to design a bacterium or virus on your laptop, compile your code into DNA, and print out the molecules using a desktop bioprinter.
How difficult would it be to engineer a pathogen to kill millions or billions of people? The short answer is that we do not know. Human immune systems have not evolved to deal with threats from precision engineering, so it might be possible to design pathogens with capabilities that couldn’t evolve naturally. As with all powerful weapons, nation states lead the way in bioweapon research, but the results are highly classified. As the Soviet Union demonstrated, bioweapon development programs are much easier to hide than nuclear programs, as they require no rare elements or massive centrifuges. North Korea is suspected of having a bioweapons program, but its capabilities are hard to estimate.
The risks from advanced technology are not limited to nuclear and biological tech. Climate change is likely to have destabilizing effects, and might compound other global risks. Future advances in the field of artificial intelligence** might lead to machines with superhuman capabilities and unpredictable reasoning. Finally, there are unknown unknowns. Humans at the turn of the 20th century were unable to predict the devastating capability of nuclear weapons and rocket technology. In the same way, we don’t know what new technologies will present new risks this century, in addition to the ones we can already identify.
Most people would like to see humanity flourish, not go extinct. I would very much like to see the future that techno-optimists envision. To make that happen, we are going to have to seriously confront global risks. The problems are complicated and the potential solutions are not simple, but the good news is that people are already working hard to understand the risks and possible mitigations. The organization 80,000 Hours provides some suggestions on leveraging your skill set and career path to mitigate global catastrophic risk. If you do not think you can work on the problem directly, you can support existing organizations like the Future of Life Institute, the Future of Humanity Institute, and the Global Catastrophic Risk Institute.
It’s uncomfortable to directly confront the idea of global catastrophic risk. I do not enjoy thinking about a world devoid of human life, our cosmic endowment wasted. However, averting our eyes from the problems will not change what is real. Humanity has made some great achievements, from vaccines and antibiotics to computers and space travel. Future technology could enable us to eliminate disease and greatly improve the quality and quantity of life. None of this will be realized if humans let technology prevail over wisdom. For the sake of future generations, for their very existence, we must learn to be wiser.
* While it’s true that humans are better off today, the idea that the world is much better off than it was a hundred years ago is a very anthropocentric view: if you’re thinking about the well-being of domestic animals, things have gotten much, much worse. For instance, if you considered the value of a domestic animal’s experience to be even 1/7th the value of a human experience, then the equivalent of the entire human population would be slaughtered annually (53 billion animals).
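The footnote’s arithmetic can be sanity-checked in a few lines. This is a minimal sketch using the post’s own figures (53 billion animals slaughtered per year, a hypothetical 1/7 moral weight); the ~7.4 billion human population figure is my assumption for 2016, not from the post.

```python
# Check: does 53 billion animals at 1/7 moral weight roughly equal
# the human population?
animals_per_year = 53_000_000_000    # slaughtered annually, per the post
moral_weight = 1 / 7                 # hypothetical weighting from the post
human_population = 7_400_000_000     # assumed approximate 2016 figure

human_equivalents = animals_per_year * moral_weight
print(f"{human_equivalents / 1e9:.2f} billion human-equivalents per year")
print(human_equivalents >= human_population)  # → True
```

At roughly 7.57 billion human-equivalents per year, the weighted total does slightly exceed the assumed 2016 human population, consistent with the footnote’s claim.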
** Many global risk experts believe advanced AI poses the greatest extinction risk to humanity. Unfortunately, the topic is far less straightforward than nuclear or biological technology, so I have chosen not to focus on it in this post. Please feel free to message me if you would like to learn more.
Originally published at globalriskresearch.org on September 11, 2016.