The Most Important Conversation in Human History

The Utopian
Jan 9, 2022 · 9 min read


If there’s one article you read on The Utopian, it should be this one.

It covers the core idea you need when thinking about technology, and it’s one of my main motivations for starting The Utopian. The title might read as clickbait, but on reflection, it isn’t.

Reality keeps catching up to what we label science fiction, sooner and sooner. And most people have no idea what’s out there and what’s coming.

The upcoming great wave of disruption isn’t limited to Silicon Valley and social networks, either. I went to an event in Paris a few weeks ago (the Hello Tomorrow Global Summit) and caught a glimpse of the enormous energy behind long-shot futuristic projects with high disruptive potential in biology, computing, space travel, artificial intelligence, and more. The people there weren’t young software developers keen to make a quick buck with the next hit app; they were experts in cutting-edge scientific fields with a strong desire for impact. We’re seeing innovation in places we barely imagined even fifteen years ago, from quantum computing to 3D printing human organs to blockchains.

Thousands of startups are created each year. Nature magazine receives ten thousand submissions a year. Billions are pouring into new ventures. The pace is dizzying, and nobody can keep up anymore. But why?

Exponentiality

If Earth’s entire history were compressed into a single year, humans would have existed for only about ten minutes, and the industrial era would have begun two seconds ago. The internet? 0.1 seconds. Most people on the planet today grew up without the things that now define modernity. How do we keep making larger and larger leaps in less and less time?
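As a rough sanity check on that compression, here is a minimal back-of-envelope sketch. The input figures (Earth’s age, the span of Homo sapiens, the industrial era, the public internet) are approximate assumptions, and the exact outputs depend on where you draw those lines.

```python
# Back-of-envelope: compress Earth's ~4.5-billion-year history into one year
# and see how long recent milestones last on that scale. All input figures
# are rough assumptions.

EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def compressed_seconds(duration_years: float) -> float:
    """How many seconds an interval lasts when Earth's history = one year."""
    return duration_years / EARTH_AGE_YEARS * SECONDS_PER_YEAR

milestones = {
    "Homo sapiens (~300,000 yr)": 300_000,
    "Industrial era (~250 yr)": 250,
    "Public internet (~30 yr)": 30,
}

for label, years in milestones.items():
    secs = compressed_seconds(years)
    print(f"{label}: {secs / 60:.1f} min ({secs:.2f} s)")
```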

You might’ve heard of Moore’s law, the reason we carry devices in our pockets that are nearly a million times more powerful than the computer NASA used to land on the moon over fifty years ago. But Moore’s law is becoming less relevant as new innovations far beyond computing hardware begin growing exponentially too. We are entering what Azeem Azhar calls ‘The Exponential Age,’ one defined by unprecedented change.
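To see why a few decades of steady doubling is enough to open up a million-fold gap, here is a minimal sketch; the two-year doubling period is an illustrative assumption, not a measured figure.

```python
import math

# How many doublings, and how many years at a steady cadence, does it take
# to open up a million-fold gap? The two-year doubling period is an
# illustrative assumption.
DOUBLING_PERIOD_YEARS = 2
target_factor = 1_000_000

doublings_needed = math.log2(target_factor)              # ~19.9 doublings
years_needed = doublings_needed * DOUBLING_PERIOD_YEARS  # ~40 years

print(f"{doublings_needed:.1f} doublings ≈ {years_needed:.0f} years")
```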

In reality, most of the broad, quantifiable metrics of human history, from world population to GDP per capita to the amount of data available to everyone, follow this general shape:

Exponentiality, “The Exponential Age,” the Great Acceleration, whatever you want to call it…

Nick Bostrom, Ray Kurzweil, and other thinkers believe a few factors drive this ‘exponentiality’ in humanity’s development:

>> The vertical compounding effect: existing technologies combine to create new ones. For example, anybody with a laptop can create a functional mobile app thanks to all the existing frameworks, libraries, and tools, not to mention the complex layers of circuitry their computer is made of, which a software developer can mostly ignore.

>> The horizontal compounding effect: innovation in one field spurs innovation in another in a positive feedback loop. Advances in artificial intelligence have helped us discover new proteins, useful in biotech. Quantum computers will accelerate data processing in every field. If we augment our minds and bodies with devices, we might think faster, work more, and live longer.

>> Wright’s law: things take less time and money to make as we make more of them, because of learning effects and economies of scale (a quick sketch of this follows below).
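Wright’s law is usually written as a power law: unit cost falls by a fixed percentage every time cumulative production doubles. Here is a minimal sketch of that relationship, with an illustrative 20% learning rate (the rate and the first-unit cost are assumptions, not figures from the sources above).

```python
import math

# Wright's law as a power law: cost(n) = cost(1) * n**(-b), where every
# doubling of cumulative production cuts unit cost by a fixed learning rate.
# The 20% learning rate and $100 first-unit cost are illustrative.
learning_rate = 0.20
b = -math.log2(1 - learning_rate)   # exponent implied by the learning rate

def unit_cost(first_unit_cost: float, cumulative_units: int) -> float:
    return first_unit_cost * cumulative_units ** (-b)

for n in (1, 2, 4, 8, 1024):
    print(f"unit {n}: ${unit_cost(100.0, n):.2f}")
# -> $100.00, $80.00, $64.00, $51.20, $10.74
```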

Some argue that Moore’s law, and perhaps exponentiality as a whole, is coming to an end. Maybe we have reached, or will soon reach, a physical limit we cannot surpass. Maybe humanity or the planet can’t bear the costs of reaching the next step, in the same way that Moore’s law is balanced by Moore’s second law (a.k.a. Rock’s law), which says the cost of a semiconductor fabrication plant doubles every four years.

But it’s important not to confuse Moore’s law with general exponentiality. While the growth of classic computer chips may slow down, the overall progress marches on, with new fields like quantum computing replacing the old and reigniting the positive feedback loop.

Singularity: exponentiality in artificial intelligence (A.I.)

It’s worthwhile to briefly explore the singularity, or exponentiality applied to A.I.

The singularity is the idea that there will come a time when machines match and then surpass human intelligence. Once an A.I. system is at least as smart as humans, it can alter and add to itself, causing a runaway effect that many experts believe will take machines from human-level intelligence to superintelligence (thousands or millions of times smarter than humans) within a very short period. We won’t be able to match machine intelligence, in the same way a lab rat can’t outsmart its scientist overseers. For any plan we might conceive to shut it off, the machine would already have prepared ten counters in a nanosecond.
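To make the runaway effect concrete, here is a toy model in which each generation’s improvement is proportional to the system’s current capability; the constants are purely illustrative assumptions, not a claim about how real A.I. systems behave.

```python
# Toy model of recursive self-improvement: each generation, the system adds
# a fixed fraction of its current capability, because it uses its own
# intelligence to improve itself. All constants are illustrative.

capability = 1.0          # 1.0 = human level, by definition in this toy model
improvement_rate = 0.5    # fraction of current capability added per generation

for generation in range(1, 21):
    capability *= 1 + improvement_rate
    if generation % 5 == 0:
        print(f"generation {generation:2d}: {capability:,.0f}x human level")
# 50% compounding growth reaches ~3,300x human level in just 20 generations.
```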

As Nick Bostrom puts it, for better or worse, “Machine superintelligence is the last invention that humanity will ever need to make.” After the singularity, we can relax while the machine does all our thinking, working, and inventing, faster and better than we can.

But such a machine must still have objectives, which it may not be able to give itself. One possible future is a highly efficient optimization machine that will meet any goal, as long as it’s given one. So we must be careful about what we give it. If we tell it to make all humans happy, it might invent a system of pins and wires to insert into our brains and stimulate happiness around the clock.
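A cartoon of that specification problem: an optimizer handed a proxy objective will pick whatever scores highest on the proxy, even when that option is exactly what we didn’t mean. The options and scores below are invented for illustration.

```python
# Cartoon of objective misspecification: the optimizer maximizes the proxy
# metric it was handed ("measured happiness"), not the intent behind it.
# All options and scores are invented for illustration.

options = {
    "improve healthcare and education": {"measured_happiness": 7, "what_we_meant": True},
    "fund the arts":                    {"measured_happiness": 6, "what_we_meant": True},
    "wire electrodes into every brain": {"measured_happiness": 10, "what_we_meant": False},
}

# A pure optimizer sees only the number it was told to maximize.
chosen = max(options, key=lambda name: options[name]["measured_happiness"])
print(chosen)   # -> "wire electrodes into every brain"
```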

More and more people are thinking about A.I. safety, an issue often overlooked but perhaps one of the most important for the future of humanity. Should we place bans on A.I. research? How can we ensure an A.I. doesn’t accidentally leak from the lab and learn on its own? What objectives do we give existing A.I. systems so that they align with our own? Should we let A.I. learn our ethical system by watching our behaviour?

Surveys of A.I. experts most commonly place the arrival of human-level artificial intelligence around 2050.

The pipeline of human-machine interfacing

I wrote a more extensive post about this topic here, but it’s worth summarizing as well. To start with a question: what drove humanity from the desktop computer to the mobile phone to augmented-reality glasses?

Pattie Maes of the MIT Media Lab says it’s all about the bandwidth of the interaction between mind and machine. On a desktop, we communicate with machine intelligence at a certain rate by moving a mouse and typing on a keyboard. With a phone, we can touch the screen itself and even give spoken commands on the go. With AR glasses, information from the computer is overlaid onto our field of view, and we can use our eyes to direct the computer.
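As a very rough illustration of that bandwidth framing, here is a sketch comparing a few human-machine channels in words per minute. The figures are coarse ballpark assumptions, not measurements from Maes’s work; only the relative ordering matters.

```python
# Rough "bandwidth" comparison for a few human-machine channels, in words
# per minute. The figures are coarse ballpark assumptions; only the
# relative ordering matters.

channels_wpm = {
    "human -> machine: typing on a keyboard": 40,        # typical typing speed
    "human -> machine: speaking to the device": 150,     # conversational speech
    "machine -> human: reading text in an overlay": 250, # typical silent reading
}

baseline = channels_wpm["human -> machine: typing on a keyboard"]
for channel, wpm in channels_wpm.items():
    print(f"{channel}: ~{wpm} wpm ({wpm / baseline:.1f}x typing)")
```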

In general, we’ve been finding ways to talk faster with machines in order to offload more cognitive functions from our biological brains onto silicon-based ones, an idea echoing the extended mind hypothesis of David Chalmers and Andy Clark. Human laziness combined with economics will probably keep blending mind and machine.

So, what comes after smart glasses? Logically, we’ll skip the middlemen, the sensory organs like eyes and ears, and go straight to the brain. At first we might keep some restraint, using caps that can be taken on and off, but soon we’ll realize how much more immersive, efficient, and powerful it is to insert electrodes directly into the brain. Think The Matrix (the old trilogy, not the new one…). From there, you can use your imagination and ideas from your favorite science fiction movies: uploading consciousness, giving your personality to an A.I., and so on.

So what is the problem?

While I’m not giving my opinion about specific future technologies, I will point out a few major problems with this exponential change.

1. It’s concentrated.

One constant feature of technology is that very few people understand and use it in the beginning. Often only the rich could afford it and only the educated could understand it.

Individuals and small teams also have an agility that large, bureaucratic organizations can’t match: the most disruptive teams today number at most in the hundreds. Real knowledge and decision-making are locked in small circles of insiders and experts.

Don’t forget that a team of a hundred people is just 0.00000125% of a global population of roughly eight billion. Kind of funny, considering such teams are altering the trajectory of our species’ future.

2. It’s undemocratic.

This stems from the first problem. The world’s policymakers, lawyers, historians, leaders, and ethicists struggle to ensure technology benefits society, because the cutting edge moves too fast and has grown too complex. A handful of people can develop their technology according to their own intentions, largely unsupervised.

You might argue that all creativity is necessarily undemocratic; a street poll in 1940 wouldn’t have told you that everybody wants a PC. True, but ask those same people whether they want the atom bomb. Most would say no, yet it still happened. Nobody asked for brain-machine interfaces, but companies like Neuralink are still celebrated.

3. It can be unethical.

Humans are pretty unclear about what is ethical and what isn’t, so I won’t launch into a philosophical rant. All I know is that when I ask ordinary people about advanced bioweapon development, a Facebook Metaverse, or artificial superintelligence, they usually agree something’s messed up about it and say they don’t want it.

For a more present-day example, consider one prominent hypothesis about the origin of COVID-19: if the pandemic did begin with a leak from the Wuhan virology lab, then a few dozen people indirectly caused the deaths of millions and derailed our entire society.

Instead of just asking questions like “What does it mean to be human?”, we need to start answering them. Harder still, we need to agree on those answers in order to take the right actions, and to do it all in a timely manner. Time to step it up, philosophy majors.

Final words

Business, science, and tech exist to serve us, not the other way around. It’s time to start aligning our innovation with our values.

“Technology is neither good nor bad; nor is it neutral.” — Melvin Kranzberg

I’m not trying to induce fear or skepticism about technological advances; rather, I’m trying to provide a framework for thinking about where humanity is going with technology. You can use this framework to form your own opinions.

In Robert Heilbroner’s Visions of the Future, most of human history was dominated by a sense of powerlessness before an immutable natural world. Then the Enlightenment came and we started bending the natural order to our benefit. But to Nick Bostrom, the post-war era seems “dominated by impersonal forces, as disruptive, hazardous, and foreboding as well as promising.” Never before has humanity faced issues like these.

We need more public debate, fast-acting legislation, decentralized control, and maximum transparency. When asked about the biggest global problem, most people would say climate change. But experts estimate that even worst-case climate change would reduce global welfare by only about 20% of per capita consumption. That seems like a lot, until you consider that in the 20th century technology drove an increase of roughly 3,700%. Considering exponentiality, who knows what that number will look like in the 21st. I’m not saying to forget about the climate, but by 2050 the world might not even be something we want to save.
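The comparison is easier to see as arithmetic. A minimal sketch, taking the two figures above at face value (3,700% growth, i.e. ending the century at roughly 38 times the starting level, and a one-off 20% worst-case loss):

```python
# Back-of-envelope version of the comparison above: a 3,700% increase means
# per capita consumption ended the 20th century at ~38x its starting level;
# the worst-case climate loss is then applied to that level.

growth_20th_century = 37.0      # 3,700% increase => 38x the starting level
climate_worst_case_loss = 0.20  # worst-case reduction cited above

end_of_century_level = 1 + growth_20th_century               # 38x baseline
after_climate_damage = end_of_century_level * (1 - climate_worst_case_loss)

print(f"before damage: {end_of_century_level:.0f}x the 1900 level")
print(f"after a 20% worst-case loss: {after_climate_damage:.1f}x the 1900 level")
# Even the worst case leaves consumption ~30x the starting level, which is
# the scale comparison the paragraph is making.
```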

Technology is what we design and use it to be. The Utopian was founded to shine a light on the innovations most people don’t see, to inform the people who are building the future, and to get everyone involved in the most important conversation in human history. Join that conversation today, for your benefit and humanity’s.

If you enjoyed this piece, don’t forget to check out my blog The Utopian here and subscribe to the newsletter to join the conversation about emerging technologies :)

Also, The Utopian Instagram has exclusive shorts, infographics, community events, and announcements.

The Utopian podcast is on your favorite platforms, and you can also find The Utopian on YouTube and LinkedIn.

SOURCES

The Future of Humanity — Nick Bostrom

Life 3.0 — Max Tegmark

Azeem Azhar’s Exponential View
