Elon Musk’s A.I. Oedipus Rex Scenario
This is a repost of some commentary from my newsletter, InsideAI. Sign up if you want more content like this.
For a guy very much concerned with the dark side of A.I., Elon Musk keeps doing things that I worry might make his predictions come true rather than save us from a malevolent super A.I. I’ve written before about my concern that OpenAI might inadvertently cause the very problems it is trying to prevent. To reiterate that argument: we have to start by assuming we don’t know where the final breakthrough will come from that pushes A.I. across the line into superintelligence. By making so much A.I. research open, yes, we spread it around so that no one company dominates it, but we also increase the risk that some single person, maybe a college kid somewhere, makes the final breakthrough and now controls the first artificial general intelligence. OpenAI, by making all of its research public, decreases the likelihood that Google or Facebook solely owns A.I., but at the expense of increasing the chance that other countries or individuals end up holding the keys to the A.I. kingdom.
Now Elon is building Neuralink, which is an awesome idea. But again, it’s a tradeoff. Yes, our best chance against a super A.I. is to augment ourselves so that we too are superintelligent. But if you believe the odds are high that a super A.I. will arise and will want to control or destroy us, then it will be very convenient for such an A.I. that we all have electronic devices implanted in our heads and wired directly into our brains. Hacking humans may be easier in a Neuralink world than it would have been otherwise.
You probably remember the story of Oedipus Rex. When he was born, it was prophesied that he would kill his father and marry his mother. His father therefore told the mother to kill the baby. She couldn’t do it, so she sent a servant to do it. The servant left the baby for dead, but he was found and raised by a shepherd. Of course, you know how the story ends. I sometimes worry that Elon, in his hypervigilance to protect us from a dangerous A.I. future, is setting himself up to be Oedipus Elon: the guy who caused the very problem he was trying to avoid. It’s particularly troubling given that the evil-super-A.I.-destroys-us scenario is only one of many possible futures, and at the moment there is no reason to believe it is more likely to materialize than the alternatives. I think we should be making plans to prevent such a scenario, but moving so aggressively without fully understanding the problem and the possibilities may be a mistake.
