A popular topic in Silicon Valley is guessing what year humans and machines will merge (or, failing that, what year humans will be surpassed by rapidly improving AI or by a genetically enhanced species). Most guesses seem to fall between 2025 and 2075.
People used to call this the singularity; now it feels uncomfortable and real enough that many seem to avoid naming it at all.
Perhaps another reason people stopped using the word “singularity” is that it implies a single moment in time, and it now looks like the merge is going to be a gradual process. And gradual processes are hard to notice.
I believe the merge has already started, and we are a few years in. Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think.
The algorithms that make all this happen are no longer understood by any one person. They optimize for what their creators tell them to optimize for, but in ways that no human could figure out — they are what today seems like sophisticated AI, and tomorrow will seem like child’s play. And they’re extremely effective — at least speaking for myself, I have a very hard time resisting what the algorithms want me to do. Until I made a real effort to combat it, I found myself getting extremely addicted to the internet.
I believe attention hacking is going to be the sugar epidemic of this generation. I can feel the changes in my own life — I can still wistfully remember when I had an attention span. My friends’ young children don’t even know that’s something they should miss. I am angry and unhappy more often, but I channel it into productive change less often, instead chasing the dual dopamine hits of likes and outrage.
We are already in the phase of co-evolution — the AIs affect, effect, and infect us, and then we improve the AI. We build more computing power and run the AI on it, and it figures out how to build even better chips.
This probably cannot be stopped. As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it.
More important than that, unless we destroy ourselves first, superhuman AI is going to happen, genetic enhancement is going to happen, and brain-machine interfaces are going to happen. It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves.
Our self-worth is so tied to our intelligence that we insist it must be something singular, not merely a point slightly above the other animals' on a continuum. Perhaps the AI will feel the same way and note that the differences between us and bonobos are barely worth discussing.
The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.
Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.
It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate—the most surprising thing I’ve learned working on OpenAI is just how correlated increasing computing power and AI breakthroughs are—and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.
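That last line is easy to state and hard to feel, so here is a toy calculation of the difference between exponential and double-exponential growth. The base and time steps are made-up numbers for illustration only, not a forecast of anything:

```python
# Toy illustration (not a forecast): compare plain exponential growth
# with double-exponential growth, where the exponent itself grows
# exponentially. All numbers are arbitrary.

def exponential(t: int, base: int = 2) -> int:
    # Plain exponential: quantity doubles every step.
    return base ** t

def double_exponential(t: int, base: int = 2) -> int:
    # Double exponential: the exponent is itself an exponential in t.
    return base ** (base ** t)

# After 5 steps the gap is already astronomical:
# exponential(5) = 32, double_exponential(5) = 2**32 = 4,294,967,296.
for t in range(1, 6):
    print(t, exponential(t), double_exponential(t))
```

The point of the sketch is just that a double exponential looks tame for the first few steps and then leaves the ordinary exponential behind almost instantly, which is why "sooner than most people think" is the right prior.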
It would be good for the entire world to start taking this a lot more seriously now. Worldwide coordination doesn’t happen quickly, and we need it for this.