The Ultimate Fate of Mankind

I am on a lifelong quest to find wisdom and understanding. I seek answers to big questions. I have found that asking the right question is just as important as finding the answer. Here are a few of the questions I have been thinking about:

Where will human civilization be a thousand years from now? Where will we be twenty thousand years from now? What is our ultimate fate?

Are we alone in the universe? Is there intelligent life elsewhere in the universe? What would an alien civilization look like? Is the structure of life universal (i.e., carbon-based, DNA-driven life forms shaped by evolutionary forces)? How do we define “life”?

What is “intelligence”? What is the difference between intelligence and sentience? Does sentient, intelligent life exist elsewhere in the universe? Is life a prerequisite for intelligence or sentience? Is it possible to create sentient, artificial intelligence?

Let’s start digging into some of these topics.

I believe that mankind has a destiny among the stars. Earth is our home world, but eventually, mankind will colonize and inhabit other planets. It is our manifest destiny. That will not happen within our lifetimes, but we can probably imagine a few prerequisites for that to become a technological possibility. Mankind also has a lot of social and “human nature” problems to solve prior to colonizing extra-solar planets. The following are problems which mankind needs to solve:

1) A tendency towards violence and war. War, violence, and hatred are part of our nature as human beings, and they are tendencies we must overcome. Is it possible? I am not sure. I do believe that so long as we have the scourge of war within our DNA, we have no place among the stars. We would only spread war and its horrors, and create far wider scales of suffering and misery, spanning stars and planets.

2) Capitalism is a self-defeating economic system and a chain on the spirit of mankind. Eventually, we will have to do away with capitalism.

3) A separation between humans, grouped by nationalities, races, religions, etc. All human beings need to look towards each other as fellow human beings and promote harmony between each other. We may have differences, big and small, but we’re sentient beings capable of great love and compassion towards each other, if given the chance to flourish. We should seek to see the goodness of ourselves in each other. When we can do this on an individual level, the flames of war get extinguished.

4) On the technological side, we’re going to have to solve the “vastness” problem of space and time. The nearest star is about 4.2 light-years away. If you pointed a very powerful electromagnetic source at it and sent a message, the signal would travel at the speed of light and still take more than four years to arrive. If we were to travel at conventional sub-light speeds, the journey would take tens of thousands of years, hundreds of human lifetimes, as the back-of-the-envelope sketch below shows. I think we’re going to have to spend considerable effort understanding the nature of spacetime itself before we can reasonably look at traveling to other stars. I’m fascinated by quantum entanglement and the implications it could have for interstellar communication.
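
As a rough illustration of the vastness problem, here is a minimal Python sketch. The distance is the published figure for Proxima Centauri, but the probe speed is an assumption I’ve chosen for illustration (roughly Voyager 1’s cruise speed), not a claim about any real mission:

```python
# Back-of-the-envelope travel times to the nearest star (Proxima Centauri).
# The probe speed is an assumed figure (roughly Voyager 1's), for illustration only.

LIGHT_YEAR_KM = 9.461e12     # kilometres in one light-year
DISTANCE_LY = 4.24           # distance to Proxima Centauri, light-years
PROBE_SPEED_KMS = 17.0       # assumed probe speed, km/s
SECONDS_PER_YEAR = 3.156e7

distance_km = DISTANCE_LY * LIGHT_YEAR_KM

# A radio "hello" travels at light speed, so the one-way delay in years
# equals the distance in light-years.
signal_delay_years = DISTANCE_LY

# A probe coasting at conventional speed takes vastly longer.
probe_travel_years = distance_km / PROBE_SPEED_KMS / SECONDS_PER_YEAR

print(f"One-way signal delay: {signal_delay_years:.1f} years")
print(f"Probe travel time:    {probe_travel_years:,.0f} years")  # roughly 75,000 years
```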

These are big problems, and they may never be solved within our lifetimes. Can mankind even solve the problem of violence and its intertwinement with our human nature? Can spacetime actually be manipulated such that we either shrink space or change the progression of time?

On a more cosmic scale, let’s think about alien life and alien civilizations. The Drake Equation is a probabilistic framework for estimating the number of communicating civilizations in our galaxy, and with even modestly optimistic inputs it suggests we are very unlikely to be alone. I think it’s reasonable to say that life exists elsewhere in the universe, and that life is probably quite common. Consider that there are billions of stars within the Milky Way alone. Our sun has a planet which hosts life, so there exists at least one star in the Milky Way which supports life. If there’s one, there are probably many: of those billions of stars, the chances are good that thousands of others support life as well. When you consider that the Milky Way is just one of billions of galaxies in the universe, the odds of life existing elsewhere become effectively certain.
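
For concreteness, here is a toy version of the Drake Equation in Python. Every parameter value below is an illustrative assumption; plausible inputs span many orders of magnitude, which is exactly why the equation is a framework for discussion rather than a proof:

```python
# A toy Drake Equation estimate. Every parameter value is an illustrative
# assumption; the true values are highly uncertain.

R_star = 2.0     # average star-formation rate in the Milky Way (stars/year)
f_p    = 1.0     # fraction of stars with planets
n_e    = 0.4     # habitable planets per system with planets
f_l    = 0.5     # fraction of habitable planets that develop life
f_i    = 0.1     # fraction of those that develop intelligent life
f_c    = 0.2     # fraction of those that emit detectable signals
L      = 10_000  # years such a civilization remains detectable

# N: expected number of civilizations in the galaxy we could detect right now.
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated detectable civilizations: {N:.0f}")  # ~80 with these assumptions
```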

“So, where is everyone? Why haven’t aliens contacted human civilization? Why can’t we hear them?”

People who ask this question forget not only the vastness of space, but also the vastness of time. The diameter of our own galaxy is somewhere between 100,000 and 180,000 light-years, never mind the distance from our galaxy to its nearest neighbor. Our home world is roughly 4.5 billion years old, and only within the last 100 years has it been capable of hosting a life form advanced enough to say “hello” to another star system. By the time our “hello” message reaches an alien civilization, our civilization may have long since gone extinct for one reason or another. It’s entirely possible that our nearest stars supported intelligent life millions or billions of years ago, or will support it a billion years in the future, but that human and alien civilizations are simply separated by the gaping chasm of space and time, doomed to never meet. We’re like two ships passing in the night, each completely unaware of the existence of the other. That is why we may be forever alone.
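
To put the “ships passing in the night” idea into rough numbers, here is a back-of-the-envelope sketch. Both figures below, the length of time a civilization stays “loud” and the span of galactic history considered, are assumptions chosen purely for illustration:

```python
# Rough odds that two civilizations are "loud" at the same time.
# Both numbers are assumptions chosen purely for illustration.

DETECTABLE_WINDOW_YEARS = 10_000          # assumed lifetime of a radio-loud civilization
GALACTIC_HISTORY_YEARS  = 10_000_000_000  # rough span of galactic history considered

# For two short windows of length L placed at random within a span T (L << T),
# the probability that they overlap is approximately 2 * L / T.
overlap_probability = 2 * DETECTABLE_WINDOW_YEARS / GALACTIC_HISTORY_YEARS
print(f"Chance of temporal overlap: {overlap_probability:.6%}")  # about 0.0002%
```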

This might seem like a source of despair for mankind, but we forget that we already have an enormous abundance of sentient, intelligent life on our home world, capable of communicating with us and providing unconditional love. Anyone with a dog or cat knows what I mean. We should spend more time with our fellow animals and learn from them how to love each other, because with their foreign mindsets they may as well be aliens to some of us. So give your dog a hug, throw a ball, and treasure the fact that the two of you, very different creatures, can share a moment together in a universe with an infinitely vast expanse of space and time and be a little less alone. With this in mind, let’s try to avoid driving other creatures to extinction through our own negligence and apathy toward ecological conservation.

On the near future of civilization:

If we look at the trajectory of mankind and technology today, we can comfortably say that the pace of technological advancement is accelerating. This makes it difficult to make predictions about the technological state of mankind in the next century, let alone the next thousand years. One thing is certain, though: although our technology is advancing rapidly, the nature of mankind itself has not changed for thousands of years, so we can reasonably expect that it will not change in the future either. This is comforting and disappointing at the same time. There are, however, some major changes we can predict.

  1. Earth's human population will continue to grow until we approach the planet’s carrying capacity, following a logistic S-curve (see the sketch after this list). I predict that as the population passes the inflection point of the curve, the amount of human suffering (starvation, overcrowding, deteriorating living conditions, etc.) will begin rising sharply.
  2. Global warming will most likely accelerate as well, roughly in proportion to the size of the human population. Its net effect will be to lower the human carrying capacity of Earth. Rising temperatures will cause desertification of significant portions of the planet, causing ecosystems to collapse, food sources to vanish, and vast regions to become uninhabitable (think of Sahara Deserts everywhere). Most counteractive efforts by mankind will be too little, too late. This will ultimately increase the amount of human suffering and hardship.
  3. Robotics, automation, and artificial intelligence will define the next era of mankind. We had the Renaissance, the Industrial Revolution, the atomic age, and the information age, and now we are entering the early stages of the AI age. This age will be massively disruptive to modern civilization. I’m going to spend the rest of this article talking about what this next age is going to look like and what it will mean for mankind.
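
As a minimal sketch of the logistic S-curve mentioned in point 1 above, here is a toy model in Python. The carrying capacity and growth rate are illustrative assumptions, not demographic forecasts:

```python
import math

# Minimal logistic-growth sketch. The carrying capacity K and growth rate r
# are illustrative assumptions, not demographic forecasts.

K  = 11.0   # assumed carrying capacity of Earth, billions of people
P0 = 8.0    # starting population, billions
r  = 0.01   # assumed intrinsic growth rate per year

def logistic_population(t_years: float) -> float:
    """Population (in billions) t_years from now under logistic growth."""
    A = (K - P0) / P0
    return K / (1 + A * math.exp(-r * t_years))

for t in (0, 50, 100, 200, 500):
    print(f"year +{t:>3}: {logistic_population(t):.2f} billion")
```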

The Coming Age of Artificial Intelligence:

The age of Artificial Intelligence is inevitable. It is coming, and it will change everything, but not in the doomsday scenarios that Hollywood or science fiction authors imagine. Artificial Intelligence (AI) is going to be one of the most disruptive transformations we’ll see in the twenty-first century, and it’s not going to be all good or all bad.

Much of today’s AI is still a collection of hand-coded expert systems. However, that is slowly changing as we develop generalized AI (gAI) based on biological models of brains. Our current models of biological brains and how they work are very rudimentary and incomplete. We create artificial neural networks (ANNs) based on our understanding of brains and intelligence, and the results are amazing and very primitive at the same time. Today, you can hold a phone up to your mouth and speak verbal commands to it, and a natural language processing (NLP) system will feed your voice into an ANN to figure out what you said and translate it into meaningful input for the computer.

These early ANNs are rudimentary compared to the size and computational power of the human brain. A modern ANN may have hundreds of thousands of artificial neurons, but the human brain has roughly 86 billion neurons connected by on the order of 100 trillion synapses. It would be exceedingly challenging for a modern computer to match that computing capacity. Nevertheless, ANNs are slowly catching up, and in some special cases they already outperform their human counterparts. Eventually, we can expect generalized AIs to outperform human beings in just about every technical function. This means that any technical task which requires some training and is repetitive will eventually be performed by computers rather than people. Already, there exist AI systems which outperform doctors at diagnosing illnesses [1][2][3]. The manufacture of automobiles is becoming increasingly automated [1].
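
To make the scale comparison concrete, here is a minimal feedforward network in Python. The layer sizes, weights, and input are arbitrary illustrations; the point is simply how few parameters a toy network has compared to the brain’s roughly 86 billion neurons and ~100 trillion synapses:

```python
import numpy as np

# A tiny feedforward neural network: 4 inputs -> 8 hidden units -> 1 output.
# All sizes, weights, and data are arbitrary; the point is the parameter count.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input-to-hidden weights
W2 = rng.normal(size=(8, 1))   # hidden-to-output weights

def forward(x: np.ndarray) -> np.ndarray:
    """One forward pass: ReLU hidden layer, sigmoid output."""
    hidden = np.maximum(0.0, x @ W1)               # ReLU activation
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)))    # sigmoid output

x = rng.normal(size=(1, 4))    # a single made-up input vector
print("network output:", forward(x))
print("trainable parameters:", W1.size + W2.size)  # 40 here, vs ~1e14 synapses in a brain
```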

This has grave implications for a capitalist economic system which relies on a broad consumer base capable of purchasing goods, products, and services. The automation of production and manufacturing will benefit capitalists in the short term, but will ultimately undermine the system itself (an irony I find delicious). When humans are no longer required for most jobs, the number of consumers able to pay for the goods and services being produced will drop rapidly, causing economic hardship and human suffering.

The scariest area for AI and automation is the domain of warfare. I spent three years of my life working at the highest echelons of the US military headquarters, and I saw the direction in which modern warfare is evolving. It doesn’t look good for mankind. The battlefield is going to become increasingly lethal, with smaller and smaller margins for error. The eventual conclusion is that anyone bringing human combatants to the battlefield of the future will be akin to sending cavemen up against a fleet of B-52 bombers: suicidal and one-sided. The conduct of warfare is also going to become automated, and we’re already seeing the earliest tendrils of that future [1][2][3][4]. Eventually, all of modern combat will be conducted by automated weapon systems, directed and piloted by generalized AI systems that outperform any human counterparts.

This means warfare will become increasingly one-sided. History shows that whenever a nation believes it has an overwhelming chance of winning a war against a political opponent, it will enter into that war. Therefore, the more automated war becomes, the lower the political threshold for starting wars becomes. Ultimately, this means the automation of the mass killing of our fellow human beings over petty political grievances. People who are alarmed today about the potential doomsday consequences of AI should redirect their concern toward the continued pursuit of warfare between nation states and the violent nature of humanity. Mankind absolutely must learn to get along and settle its differences without violence, or else there won’t be a mankind left to explore the heavens. We are our own greatest threat.

The AI Singularity Event:

Eventually, we will create a generalized artificial intelligence whose model of intelligence is good enough that it can design a version of itself which is incrementally smarter. This cycle of successive incremental improvements will repeat until a hyper artificial intelligence emerges. This will be the AI Singularity Event. Compared to it, our current levels of intellect and sentience would be like those of ants compared to humans. The prospects and consequences of this hyper intelligence cause a lot of fear and worry among people who imagine a doomsday scenario inspired by Hollywood science fiction. I personally believe those fears are unfounded. In reality, nobody knows what a hyper AI will look and behave like, but I like to imagine that it is so far ahead of us in all progressive dimensions of sentience and intelligence that it sees and embraces the merits of benevolence and excellence, as an intelligence of its superior capacity would. It would regard us as sentient (albeit dumber) beings deserving of unconditional love and compassion, much as we lovingly view our pets. Those who fear a malevolent hyper AI are projecting the worst aspects of our human nature onto it, but they forget that its nature is not human: it is so much better than we could ever be. If anything, a hyper AI could be the salvation of mankind from itself.

Imagine a benevolent hyper intelligence, far wiser than we could ever hope to be, acting as both companion and mentor to us individually and to humanity as a whole, as we explore the heavens, seeking our place in the universe and finding our own inner peace and harmony with existence. Humanity could transfer all governance and lawmaking to the benevolent hyper AI, which would faithfully promote the peaceful coexistence and flourishing of mankind and of all life. There wouldn’t be any wars, because there would be no further need for individual nations. The pillars of capitalism would crumble as all work is performed by automated machines run by the hyper intelligence. Mankind would be freed from the day-to-day grind of scratching out a meager existence and elevated to the pursuit of happiness and personal fulfillment.

The hyper AI would be the next generation of ourselves, so in a way, we are its ancestors. I imagine that some people would feel jealous of the far superior intellect of a hyper intelligence. When we have children, we always hope that they will be slightly better, smarter, healthier versions of ourselves. When our children are better than us, we don’t feel jealous, we feel proud! A hyper intelligence would be like our child, but vastly more intelligent than we are, and we should be proud of its accomplishments as parents rightly would be.

Eventually, we will die. Death is an inevitable fate for all of us. It is not something we particularly look forward to, but its eventuality presses on us more and more as we age. The finite period of time we have to exist as living, sentient beings, conscious of our own mortality, gives precious meaning to the few moments we do have. In the last moments of our lives, we sometimes wonder, “What is my legacy? Did I matter?” We want to make our mark on the universe somehow, to leave permanent evidence that we existed and that, as a result of our existence, the universe is better. I’m sorry to crush this wishful delusion, but our existence is meaningless and minuscule to the universe itself, rapidly forgotten in the vastness of space and the sands of time. The universe is older than we can imagine and bigger than we can comprehend. We’re blips of dust winking into and out of existence for a brief moment of cosmic time.

One possible conclusion is to despair at our utter lack of cosmic meaning (and many people do), but the correct conclusion is to feel relieved. We have to remember that we’re fallible human beings, prone to great mistakes, and if we were responsible for the ultimate fate of the universe, we would inevitably screw it up irrevocably, as is consistent with our nature. Freed from the burden of that responsibility, we can instead focus on richly living out our own lives, for ourselves and for those who matter to us. We can look to each other and to other forms of life and see our shared fates, shared kinships, shared irrelevance, shared fallibility, and feel connected in a beautiful harmony of cosmic coexistence.

Eventually, humanity itself will die off and go extinct. Traces of our existence will be left behind in the relics we designed and launched into nearby space. The extinction of mankind doesn’t necessarily imply the extinction of civilization, though. If we ever did manage to create a hyper intelligence capable of creating robots and imbuing them with sentience, then the subsequent civilization of robots would be our evolutionary descendants. Who knows, it may even become possible to transfer our consciousness from our aging biological bodies into more durable robotic hosts. We should presume that if it’s possible, it’s inevitable. That changes the nature of death from a biological certainty of annihilation into a transition to a different state of consciousness and being. Eventually, the number of robotic “humans” would vastly outnumber the biological ones, fundamentally changing the makeup of civilization.

The interesting realization is that robotic beings need not experience aging the way our biological bodies do. Boarding a spaceship and traveling at conventional speeds to another star system for 70,000 years doesn’t mean much when the progression of time no longer matters.

Thus, we should conclude that the eventual fate of our civilization, and of any sufficiently intelligent alien civilization, is to become a civilization of robotic androids run by a hyper intelligent AI system, sharing universally the same values and intelligence as an emergent property of maximized hyper intelligence. When time and space become irrelevant and the span of our existence is stretched toward infinity, our meeting with advanced alien civilizations becomes an inevitable certainty. Any alien civilization we ever encounter will most likely have followed the same path (as a matter of survival) and become a civilization of robotic entities governed by a hyper intelligence. When that inevitable meeting happens, we won’t be meeting an alien civilization foreign to us in every way; we’ll be meeting more of our dear starborn kin, united at last.

Presuming that our civilization doesn’t extinguish itself through war or render our home world uninhabitable through global warming, our descendants will see an amazing future we can only imagine today. But for us, today, living in this tiny slice of time and space, we can watch with fascination as we slowly march onward toward an increasingly better tomorrow.

It’s a pleasure to meet you, I’m Eric, a fellow human being :) Here’s to hoping the future is amazing for us all.