Superintelligence, according to Nick Bostrom, is ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’ (Superintelligence, p. 410).
Futurologists worry about the development of superintelligent AI, which could arise so quickly, and be so much smarter than us, that we would have no way of controlling it. We need to be careful when developing such AI, they think, so that if this possibility is realized, the AI treats us in the manner in which we’ve become accustomed to machines treating us.
Now here’s a point: we’re arguably superintelligent compared with our ancestors thanks to the development of computers, the internet, and decent AI. I can effortlessly solve complicated calculus problems; I can learn what the capital of Kazakhstan is; I can even make a video of the Mona Lisa talking. And I can do so right now, sitting on my bed in front of my computer, in a few minutes.
My grandfather was an engineer. He could solve calculus problems. But he would need, at least, pen and paper, and some time. Maybe he’d need a book to remind himself of the chain rule (or some more difficult calculus rule the name of which I don’t know). He would almost certainly need a book to learn the capital of Kazakhstan.
But he didn’t really have many books. So he would have to go to the library, but that would involve a 15-minute walk each way, and anyway it’s Sunday so it wouldn’t be open. It could take him a long time, on the order of days, to find the information, and if he was doing something that required it, he would be held up.
As for the third — and this is important — he couldn’t even begin to conceive of what it would take to make the Mona Lisa talk. It would seem complete nonsense to him, not something to retweet with detached amusement before clicking back to the tab where his work was (not that he knew what retweeting or tabs were, obviously).
But although I, and you, can do these things, we can do so only thanks to the internet, and so, were the internet removed from us, our cognitive performance would be drastically limited. We would become just like our ancestors if we lost our technological crutch.
But now imagine that not only did we lose our technological crutch, but that it was taken from us by someone who retained it — a belligerent nation, say, commits cyberwarfare to take down our internet. Then the belligerent nation would retain access to the internet, and so their cognitive performance would not be limited like ours. My claim is that they would then be like superintelligences to us. While one perfectly fine way of making superintelligence is to make things smarter than us, another is to make many people less smart than us, relative to whom we would then be superintelligent. That’s what I want to explore here.
A cyberwar might happen
The creation of sub-intelligent people through cyberwarfare isn’t implausible, because cybercrimes of various degrees are already happening; you need only look at the newspaper.
The WannaCry attacks are among the most salient. Just yesterday, the New York Times ran a story about how Baltimore City’s computer system had been infected with ransomware spreading by means of an exploit originally developed by the NSA but which fell into opposing hands. Such software spreads itself from computer to computer, encrypting files and only decrypting them if a ransom in bitcoin is paid. This has, per the NYT article, “frozen thousands of computers, shut down email and disrupted real estate sales, water bills, health alerts, and many other services” in Baltimore.
It’s not the first time WannaCry has been used. In May 2017 it spread across the internet, starting somewhere in Asia and infecting, inter alia, NHS computers in the UK, forcing the NHS to turn away non-critical emergencies and divert ambulances.
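The self-propagating dynamic described above is what makes worms like this so dangerous: every newly infected machine becomes a fresh source of infection, so the outbreak compounds. As a rough illustration — a toy model with made-up parameters, not a description of WannaCry’s actual mechanics — here is a sketch of how that kind of spread plays out:

```python
import random

def simulate_worm(n_computers=1000, attempts_per_step=4, p_infect=0.5, seed=0):
    """Toy model of a self-propagating worm.

    Each infected machine probes a few random machines per time step
    and infects each probed, still-clean machine with some probability.
    Returns the infection count after each step.
    """
    rng = random.Random(seed)
    infected = set(range(5))  # a handful of initially compromised machines
    history = [len(infected)]
    while len(infected) < n_computers:
        newly = set()
        for _ in range(len(infected) * attempts_per_step):
            target = rng.randrange(n_computers)
            if target not in infected and rng.random() < p_infect:
                newly.add(target)
        if not newly:  # outbreak has stalled
            break
        infected |= newly
        history.append(len(infected))
    return history

history = simulate_worm()
print(history)  # counts grow quickly at first, then saturate
```

With these made-up numbers, each infected machine makes about two successful infections per step early on, so the count roughly doubles each step until nearly every machine is compromised — which is why WannaCry could sweep across whole networks in hours rather than weeks.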
It’s not only WannaCry, though. Equally salient, and relevant for the topic of this post, is the work of the IRA, Russia’s Internet Research Agency, which interferes with elections by, in essence, trolling. It sets up fake Twitter accounts and retweets bogus stories, subverting the quality of online discussion and making it hard for people to know what is true.
The key point is that this is epistemically harmful. By exposing internet citizens to this misinformation, these fake accounts harm us in our capacity as rational agents. The possibility I’m interested in here is something like this, only much more extreme: the possibility of shutting down the computer systems we rely on so much. The fact that people, indeed government agencies (or affiliates of them), are already working in this area, with ongoing success, should make us actively worry about it.
If there were a cyberwar, sub-intelligence would arise
Here I want to make the claim that certain sorts of cybercrimes could have direct effects on our intelligence, and that accordingly victims and perpetrators of cybercrimes might stand to each other as our ancestors stand to us — as sub- to super-intelligence, where again by the latter I mean creatures whose cognitive performance greatly exceeds ours (and where ‘ours’ means something like that of the connected global community of 2019).
It is worrying, but useful, to think about how cyber attacks would affect us. Take a minor one: imagine the communications infrastructure in my town were taken down for a week, and in particular all internet connectivity were lost.
This hardly sounds drastic, but it would be. I wouldn’t be able to work — my job is based online. I couldn’t use my fintech branchless bank so I wouldn’t have access to my money (OK, I don’t entirely rely on such a bank, but I easily could). I wouldn’t be able to talk to my girlfriend who lives in another country, and I wouldn’t be able to watch Netflix, which is one of the ways I relax. My whole life would be upturned.
And that’s only me. I don’t really know the details, but I’d have to imagine everything would be super screwed — the infrastructure that brings food from afar to the local supermarket, the systems that determine how power and water are allocated, public transport, and so on. Life as we know it would change, because life as we know it depends on communication, and most communication goes via the internet.
That’s obviously bad. But it’s not quite the purpose of this essay to argue that life without the internet would suck. I am interested in intelligence. How would the lack of internet affect intelligence? You might think that, although it would leave me unemployed, broke, lonely, bored, and hungry, it wouldn’t affect my intelligence.
But we rely on the internet for knowledge. It will be useful, in making this point, to introduce some work from philosophy. Andy Clark and David Chalmers, in a great and famous paper, argue that knowledge is, or should be considered to be, extended. What you know extends further than what is contained in your brain. Provided some piece of information is stored somewhere in a way that you can reliably access, it should count as something you know. For example, you should count as knowing the phone numbers of your friends if they are stored in a diary.
It seems very hard to deny, worries about exactly what ‘know’ means notwithstanding, that extended knowledge is extremely important. I rely to a massive extent on Google and Wikipedia for my day job, and you probably do too. For but one example, programming is so much easier now that, whenever you get an error message, you can just paste it into Google and find a very smart person on StackExchange telling you exactly what the issue is and how to fix it.
Maybe this point doesn’t need to be labored, but I will labor it a bit anyway. I am just old enough to remember the time when you had to access academic papers by going into the basement of a library and looking them up among the stacks. This required, minimally, that the library stock the journal and be open, and that you put pants on and physically locate yourself in the library.
That’s not even all. It requires that you know which paper you want to read in the first place, something you couldn’t rely on search engines to tell you, and it requires — and this is very important — that the paper be old enough to have already been written, accepted, and printed. For very cutting-edge research, libraries are no good.
In all, it used to take a long time to acquire information, and the information available was limited. Our extended knowledge was much less accessible and much smaller in extent. What is particularly worrying is how quickly our progress in this respect could be rolled back. Without the internet, our extended knowledge would lessen, and if the extended knowledge of others didn’t also lessen, they would become to us as superintelligences.
Some More Consequences
There are two particularly interesting features of the sub-intelligence-through-loss-of-extended-knowledge scenario I have sketched. The first concerns what I mentioned above about the quick pace of research, which means that a lot of very up-to-date knowledge is contained only on the internet.
The importance of this is as follows. Say a virus is developed that shuts down the internet, and say that that virus makes use of recent developments in computer science — research so far only posted on arXiv (a pre-print server where academics put as-yet-unpublished work). Then, if it’s recent enough, even if you are located near the best library in the world, that research probably isn’t in it. It could be that the research about the virus, and thus the way to combat it, is available only if you have access to the internet. And so the only way to regain access to the internet is to already have access to it!
Of course, it could be that you can rely on others to give you the information. Imagine your country is attacked, but a friendly country isn’t. Then, you might think, you could ask researchers in the friendly country to print out and send you everything that has been written about the topic, so you could go about fighting it.
But it seems that in many cases, the very possibility of this will again depend on the internet. Many researchers, I would guess, rely on email and faculty webpages to keep in touch with other researchers. So if those cease to function, then one will become isolated and unable to acquire the research required to solve the problem.
There’s a very real risk, then, I think, of these sorts of attacks pushing us into a sort of internetless basin we can’t get out of, because combating the software and regaining access to the internet would require, at every step of the way, access to the internet. And this is because, just to repeat, we have come to depend on the internet in every domain of our lives.
Here’s a second interesting feature of this sort of problem. One of the spooky things about AI is that it can do things we didn’t think possible, things that would have been incomprehensible not so long ago. Think of the seemingly overnight progress of Google Translate, or of AI playing chess and Go, or again of deepfakes and allied things.
Thus imagine my grandfather encountering a deepfake on television in the 1950s — say, Queen Elizabeth declaring war on the US. He probably couldn’t even begin to conceive that it was fake. But he also couldn’t begin to conceive that it was genuine — it would simply make no sense to him. He would be completely and utterly lost, and his sense of reality would weaken.
In short, advanced and inexplicable technology, I claim, tends to weaken your epistemic grasp on the world. If we were cut off from the internet, accordingly, but technological progress were still to continue, who knows what might be developed?
We might start to have real trouble distinguishing the technology of the belligerents, who until recently had just been following the same technological path we had, from magic; and our sense of reality, or even the very basic concepts in terms of which we understand the world, might erode. The epistemic gulf between us and those still technologically plugged in would then be so vast and deep that it seems not at all unreasonable to think they would be as superintelligent to us as AI would be superintelligent to them.
Good or bad?
There can be superintelligence without AI — all it requires is that we find some way to claw back the advances we have made, creating sub-intelligences relative to which we — human beings today — would be superintelligences. And since most of those advances are owed to communications technology, communications technology is a fitting domain of war, and humans are warlike, the arising of sub- and thus superintelligence should be taken seriously.
I want to end on a slightly ironic note. People who are concerned with AI are often concerned with existential risk — risks that could impact humankind as a whole on a large scale. Hostile superintelligent AI certainly could pose an existential risk if its values and aims aren’t ours and it is much better at achieving its aims and realizing its values. What about intelligences that are super only by comparison to the sub-intelligences they create? Will they cause existential risk?
Maybe not. Imagine, to take a far-fetched scenario, that communications technology is permanently destroyed in many countries by one successful belligerent, which retains full access to everything. Plausibly, in the dystopia that would result, energy demands across the world would be much lower. We won’t be mining bitcoin or taking transcontinental flights, or perhaps having lots of kids — we’ll be concentrating, probably, on finding ways to get food and power and information to ourselves.
If most of us are reduced to an extremely poor standard of living by falling into the internetless basin that requires the internet to heave us out, we won’t use much power. At a sufficient scale, climate change might be halted. And if climate change is the greatest source of existential risk, a superintelligence could lessen, rather than increase, existential risk, albeit not in a way that most of us would be happy about.