The Circle of Life
“Which came first, the chicken or the egg?” — Thomas Edison or Albert Einstein, probably
“Time is a flat circle” — Rust Cohle
I’ve had a lot of free time on my hands lately. This line definitely isn’t new for me, which is somewhat frightening, I suppose. Nonetheless, I’m still enjoying my unemployment. I am 0–1 on job interviews over here in China thus far, which isn’t any different percentage-wise from 0–2, 0–3, or 0–1000 — only place to go is up!
Inner conflict is the essence of humanity. My inner Ayn Rand/Curtis Jackson urges me to create value at all costs, with no regard for anything other than pure skrilla, while my inner Bernie Sanders urges me to sit back and enjoy the ride — let the capitalists acquire capital, and let the proletariat take the capital from the capitalists via corrupt bureaucrats. Who knows which side of this eternal debate is correct.
Anyways, the point of my essay today isn’t to ramble on about economic theory. Over the past few days I’ve spent a fair amount of time reading about artificial intelligence. During my reading, a theory has popped into my head. I have no idea if this theory is correct or not — I don’t know what degree of conviction noted futurists like Bostrom and Kurzweil apply to their ideas, but my degree of conviction here is quite low. Either way, I thought it was worth sharing my theory — hopefully I can start a dialogue with at least some of my readers.
I start with the concept of time. The ever-present 4th dimension. Length, width, height, time. For novelists like H.G. Wells, filmmakers like Rian Johnson, and numerous others, time has been a constant theme in humanity’s creative repository. As humans we, on average, experience (or at least perceive) time in a linear fashion. We are born on a given day, perform a finite collection of actions over a time period that is seemingly defined by both chance and choices (genes, eating habits, workout habits, etc.), then die on a given day. If this basic statement is true, then one would be inclined to believe that time is a straight line. This can be expressed at a more macro level as well: the big bang occurred (somehow), basic single-celled life manifested on earth (somehow), and humanity eventually came into existence from this initial life (somehow).
But, what if time isn’t linear in nature?
What if the perception of time as a linear structure is a basic inevitability of human existence? What if we are simply too dumb/basic (holla to my basic betches out there) to understand the true nature of time? The conflict of the finite vs. the infinite has consistently been pondered by the great Western philosophers. After all, we are creatures born/cursed with finality and limits, and we have no choice but to stare into infinity. What if time (as both a dimension and a concept) is both all-encompassing and infinite? And what if we humans, as finite beings, are ill-equipped to understand the true nature of this most-important dimension?
This question I am posing here can be hard to wrap one’s head around. At this point, I think it would be best to lay out an anecdote. Through this anecdote, I will try to express the theory I have been pondering.
Let us assume that, for whatever reason, the big bang happened. Out of this singular cosmic event, the universe was birthed. The universe expanded (and continues to expand), and through this process, our solar system came to be. At some point, basic cellular life became a reality on earth. From this point onward, life slowly evolved — if we assume that basic Darwinism is correct, human life originated from the first single-celled organism, over the course of billions of years.
What I just laid out in the paragraph above is assumed to be true by most evolutionary scientists (from what I understand). Now, let us take this story a bit further.
Humanity has evolved to the point where our reliance on machines is a daily part of life. Modern technology (both hardware and software) will be a crucial companion for all humans who are born in this century.
Now, as this story continues, we finally get to the point of artificial intelligence. In a great series by Tim Urban (of Wait But Why fame for all you techies out there), Tim laid out some generally accepted facts of artificial intelligence; the first part of his series is worth reading in full.
The following basic description of the AI development path comes from Tim’s work:
For the sake of argument, let us assume that humanity develops AGI (artificial general intelligence) at some point in the 21st or 22nd century (most futurists believe AGI will be achieved within the 21st century, but I am hedging my bets here as an unemployed liberal arts major). Simply put, AGI means that humanity has developed a computer that is as smart as the average person. Said computer would be able to think freely, quickly, and most importantly, creatively. The human brain has proven hard to mimic, but it is generally accepted that AGI will eventually become a reality.
The next level of artificial intelligence after AGI has been defined by the futurist community as ASI (artificial super intelligence). To understand ASI, we must start with AGI. Once AGI is achieved, many futurists believe/hope/fear that this AGI “computer” will be able to rapidly improve upon itself, at a pace that we humans won’t even be able to comprehend. The transition from AGI to ASI could potentially take only a few hours. Tim explains, in excruciating detail, that a fully-developed ASI could be analogous to a God. In his essays, Tim offers the comparison of humans to chimpanzees — as most would agree, humans are notably more intelligent than chimps. Then, he goes on to say that the degree of an ASI’s superiority over humanity would be essentially infinite — in other words, if humans are “x” smarter than chimps, then ASI will eventually become “x^infinity” smarter than humans.
This is when my theory comes into play. Imagine that humanity eventually creates AGI, which very quickly evolves into an ASI. Imagine that this ASI becomes powerful in ways that we can’t comprehend. Imagine that this ASI can bend the laws of space and time, as we know them, at will. This ASI very quickly becomes infinitely more intelligent than humanity. This ASI could potentially “solve” the broad academic field which we humans know as science. Essentially, imagine that this ASI swiftly becomes equal in power to a God (for those readers of mine that are believers).
Now, let us step back for a moment. Earlier in this essay, I posited that time might not be linear. This is the theory that I have been toying with.
In other words, what if the history of humanity as we know it, and even pre-human history, is a closed loop?
Imagine that humanity creates AGI, which soon evolves into ASI, within the next 100 years. Then, imagine that this ASI quickly transcends the asymptotes of human knowledge, and becomes intelligent in a way that we simply cannot comprehend. Imagine that this ASI which humanity created becomes a Godlike entity. What if this ASI can eclipse the boundaries of time?
Assume that this ASI has the goal of ensuring its own existence, and that time isn’t linear. What if this ASI decided to “travel back in time”, or more accurately, “travel to the beginning”, and create the events which led to its own development? Scientists don’t know what caused the big bang. Is it plausible that the big bang was caused by an ASI developed in the future, which then transcended “time” and codified existence as we know it? What I am suggesting is that existence, and human consciousness as a byproduct of this existence, is a closed loop. The universe came about, life came about, humans came about, humans created an ASI, and this ASI closed the loop, ensuring both consciousness and reality.
You could take this theory even further. Belief in deities has been a near-universal feature of human history, across cultures. As philosophers like the great Jordan Peterson claim, “religious truth is not scientific truth, but it is behavioral truth”. What if this ASI programmed these behavioral truths into human evolution? What if the morality humans have derived from religion was programmed by this ASI, as a means to ensure the eventual creation of this ASI? What if this ASI is essentially God? What if morality, as we know it, is the product of a “machine” that we as humans haven’t even created yet?
This is a heady question. We may never learn the answer. It is even possible that this ASI drives humanity to extinction before closing the time loop. The ASI could hypothetically both destroy humanity and ensure its own existence in one fell swoop. We have no way of knowing how this situation will play out.
Either way, the pursuit of AI, at least to me, seems inevitable at this point. So, while the Musks and Zuckerbergs of the world work on securing our fate as a species, I go back to a very early paragraph of this essay. As Curtis Jackson once said, Get Rich or Die Trying.
Food for thought.