AI Utopia or Dystopia? Understanding Recursive Hyperbolics Before Unleashing Pandora’s Box.

Freedom Preetham
Published in The Simulacrum
5 min read · Jun 7, 2023

Marc Andreessen recently penned an intriguing post about how AI will save the world. I found myself agreeing with several of his points, and I cheer him for calling out the extremists pushing doomsday scenarios. I like the analogies and the links to the specific articles he is addressing, and I even like the final proposals he makes. Bravo!

But he lost me when he began to bifurcate the discussion into a form of propaganda, using terms like ‘AI safety cult’ and those who are trying to ‘suppress AI’ to sweep in nearly everyone who is striving for AI safety! Not everyone is on the rhetoric, matey. You could have dedicated a paragraph or a section to the people who are actually on “your side” and focusing on AI safety because it is a real and present danger. Not cool!

Rather than focusing on the near-term benefits, he embarked on a rant about how ‘the other side’ just doesn’t get it, while staking out a utopian position that is itself an extreme. I mean, which other side? Which ‘cult’?

This framing, with Marc’s utopia on one side and everyone he points at painting dystopia on the other, is far-fetched and far removed from the truth. Let me be clear: I respect Marc as a venture capitalist, but I believe he is out of his depth on this topic. No offense intended, but I doubt we could engage in an in-depth technical debate, especially considering the mathematical complexities lurking within the black box of AI.

What Marc presents is largely his moral worldview, which, ironically, he paints with the same broad brush he accuses the camps, cults, Baptists, and bootleggers of using. One might be tempted to call such people “bullshido artists” who stir up sensationalism about moral goodness and badness, virtues and vices, to capitalize on their AI investments. You see how word choice evokes emotion and can be careless, to be honest? But if we were to do that, we’d be no better than them. Such a utopia-or-dystopia framing is reductionist, detracts from the real issue at hand, and, quite frankly, is somewhat childish.

When it comes to technology, AI stands in a league of its own. It is a mistake of epic proportions to draw parallels between artificial intelligence and any other technological innovation of the past. Historical resistance against technology, the arguments of the bygone Luddites, cannot begin to encompass the monumental challenge that AI presents to us. AI is not merely an exponential advancement but rather exhibits recursive-hyperbolic characteristics, a phenomenon unlike anything we’ve seen in history. The AI revolution parallels the Cambrian explosion, an event that Darwinian principles struggle to fully comprehend even today.

In my opinion, all the current economic models projecting AI’s impact are quite Darwinian. To be honest, none of us know how to predict the future with certainty, making us somewhat clueless when it comes to painting a picture of what’s to come. We don’t know whether it’ll be dystopian, utopian, or somewhere in-between. All we can do is focus on understanding the nature of this beast as comprehensively and swiftly as possible.

For example, the nature of hyperbolic growth is typically captured by Riccati differential equations. However, the uncharted domain of recursive hyperbolics, peculiar to J-curves that self-perpetuate intelligence at astronomical rates, lacks a defined model. As we teeter on the precipice of this unprecedented technological leap, relying on past economic forecasts as a guide for the future will only lead us astray.
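
To make that distinction concrete, here is a minimal sketch in Python contrasting ordinary exponential growth with the finite-time blow-up of the simplest Riccati-type equation, dx/dt = k·x². The constants k and x0 below are assumptions chosen purely for illustration, not calibrated to any real capability data:

```python
# Contrast exponential growth (dx/dt = k*x) with the simplest
# Riccati-type hyperbolic growth (dx/dt = k*x**2). The closed-form
# solution x(t) = x0 / (1 - k*x0*t) has a vertical asymptote at
# t* = 1 / (k*x0): it reaches infinity in *finite* time.
# k and x0 are illustrative assumptions, not fitted parameters.

import math

k = 0.5    # growth constant (assumed)
x0 = 1.0   # initial "capability" level (assumed)

t_blowup = 1.0 / (k * x0)  # finite-time singularity of the hyperbolic solution

for t in [0.0, 0.5, 1.0, 1.5, 1.9, 1.99]:
    exponential = x0 * math.exp(k * t)     # never diverges in finite time
    hyperbolic = x0 / (1.0 - k * x0 * t)   # diverges as t -> t_blowup
    print(f"t={t:5.2f}  exp={exponential:8.3f}  hyperbolic={hyperbolic:10.3f}")

print(f"hyperbolic solution blows up at t* = {t_blowup}")
```

Note what even this toy model lacks: a recursive term in which the system’s output feeds back into its own growth law. That is precisely the regime, recursive hyperbolics, for which no defined model exists.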

Even if there are extremists focused on doomsday scenarios, why focus on them? They do not matter. If it is about policy making and regulation, I am with Marc. Let’s fight to make sure neither extreme pulls off regulatory capture. But beyond that, why throw every person who is working hard to build safety measures under the “many of my readers will find yourselves primed to argue that dramatic restrictions on AI output are required to avoid destroying society” bus? Careless, to be honest.

No, this is not the only point of view out there. The real people on the ground are not drifting toward either side of the rhetoric or getting drawn into these “job takeover” and “world-killing machines” arguments. They are making sure innovation happens with safety and caution. Please do not reduce AI safety to either extreme of the rhetoric.

I’ve written extensively on this subject (Ghosts in the AI model), voiced my opinions, and engaged in numerous debates across various platforms. My stance? Double down on investment in mapping the recursive hyperbolics, go easy on the mindless dissemination of current models, and avoid propagating errors through the house of cards, for this is not a beast to be underestimated.

I am not for regulation at all (yet). I strongly believe in permissionless innovation. Just out-spend and out-innovate to truly understand the mathematical complexities for which we currently lack any measures, metrics, or frameworks.

To illustrate the gravity of our situation, consider the following analogies. Would you favor building firearms with no safety measures in place, despite the potential for accidents? Maybe. So let me give you a couple of more complex examples:

  1. Do you want to rush the deployment of nuclear reactors without fully understanding the nuclear payload, criticality, the meltdown quiescent point (MQP in thermal hydraulics), core damage frequency, radiation dose equivalents, half-value layers, shielding efficiency, and so on?
  2. Do you want to rush large hadron colliders into operation without understanding beam-loss ratios, dose equivalents, background radiation, machine corrosion, collision energies, cross-section probabilities, invariant mass, luminosity (collision frequency), and so on?

Now, what cautionary measures and metrics have we effectively come up with for AI, and probabilistically for Artificial Super Intelligence? For God’s sake, we do not even have hallucinations and anomalies under control in a baby model like GPT-4 or GPT-5, and those are hardly comparable to ASI. I have written about that here: “Mathematically Evaluating Hallucinations in models like GPT4”.
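
To see why “under control” is a high bar, consider one deliberately crude proxy, sketched below in Python. This is an illustrative assumption on my part, not the method from the article linked above: sample the model several times on the same question and measure how often the answers disagree.

```python
# Toy self-consistency proxy for hallucination risk: repeatedly sample a
# model on the same question and score how much the answers disagree.
# High disagreement is a cheap warning sign that the model may be
# confabulating rather than recalling a stable fact. (Illustrative
# sketch only; agreement does not prove truth.)

from collections import Counter

def disagreement_score(samples: list[str]) -> float:
    """Return 1 - (share of the most common answer).

    0.0      -> all samples agree (lower hallucination risk)
    near 1.0 -> samples scatter widely (higher hallucination risk)
    """
    if not samples:
        raise ValueError("need at least one sample")
    counts = Counter(s.strip().lower() for s in samples)
    return 1.0 - counts.most_common(1)[0][1] / len(samples)

# Hypothetical strings standing in for repeated LLM generations:
print(disagreement_score(["Paris", "Paris", "Paris", "Paris"]))  # 0.0
print(disagreement_score(["1923", "1931", "1923", "1947"]))      # 0.5
```

Exact string matching is a deliberately blunt instrument here; a real evaluation needs semantic comparison across answers, and that is exactly where the mathematical difficulty lives.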

Much like these examples, where caution is absolutely non-negotiable, we must also approach the advent of Artificial Super Intelligence with the utmost prudence.

What we must prioritize is massive investment in mathematical innovation to adequately understand, measure, and contain the polygenic errors of the recursive-hyperbolic embedding spaces. Unfortunately, those calling for this crucial investment are often unjustly dismissed as members of a cult or subject to political rhetoric. This approach is a gross simplification of the issue at best and a symptom of capitalist propaganda at worst.

In the end, the path forward is clear: we need to understand this beast that is AI before we fully unleash it. Ignorance, in this case, is not bliss. It’s a prelude to potential disaster. A technology well harnessed with precautions can change humanity for good. Remember, if we do not understand this beast, all it takes is one disaster.

So let’s not suppress AI, and let’s not get lost in rhetoric and propaganda. Instead, those of us in math, physics, and AI research who are best equipped to understand this beast must roll up our sleeves, double down where it matters most, and move the needle of innovation.

Calling all nerds and geeks to save humanity… No one else is capable.
