# Arguments Against Artificial Superintelligence

- Energy Costs. It currently takes 24 MW to match the power of one human brain (according to the article). Let's assume the equivalent of 1 trillion humans is the cutoff for ASI. That would draw 24 MW × 10^12 = 2.4 × 10^19 W, i.e. 24,000 petawatts. World annual energy production is about 132 petawatt-hours, so an ASI at that scale would burn through a year's worth of the world's energy in roughly 20 seconds. Energy efficiency of processing would have to improve by several orders of magnitude (on the order of a millionfold, to fit within the world's ~15 TW average supply) before this is feasible. (ref: https://en.wikipedia.org/wiki/World_energy_consumption)
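A quick sanity check of the arithmetic above, using only the numbers stated in the bullet (24 MW per brain-equivalent, 10^12 brain-equivalents, 132 PWh/year world production; all figures illustrative):

```python
# Back-of-envelope check of the energy argument.
MW = 1e6     # watts
PW = 1e15    # watts
PWH = 1e15   # watt-hours

power_per_brain_w = 24 * MW            # per the article
brains = 1e12                          # assumed ASI cutoff
asi_power_w = power_per_brain_w * brains      # 2.4e19 W = 24,000 PW
world_annual_energy_wh = 132 * PWH            # world annual production

hours_to_exhaust = world_annual_energy_wh / asi_power_w
print(f"ASI power draw: {asi_power_w / PW:,.0f} PW")
print(f"Year of world energy exhausted in {hours_to_exhaust * 3600:.0f} s")
```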
- ASI will only show runaway progress on the subset of problems that can be solved with more computation and more data. How many real-world problems are actually like that? Innovation is a trial-and-error process, and the limiting factor in most scientific progress is time and resources, not computation. Experiments take time and money to run, and an ASI won't make them run faster.
- Are von Neumann architectures even good for solving creative problems? On such machines, parallelism only comes from adding more cores, all contending for the same memory. Do we need a better architecture first? Quantum computing?
- Computation is already no longer the limiting factor in most algorithms; I/O and networks are. Networks are limited by the speed of light. I/O has had one major advance in 20 years: hard drives to SSDs. Memory read/write speeds are not dramatically faster than they were 20 years ago, memory large enough to hold all this data is still expensive, and accessing it is time-expensive. If CPU accounts for only 10% of an algorithm's runtime, an infinitely fast ASI would still run it in 90% of the current time, a speedup of barely 1.1x rather than an exponential one. If we're talking about harnessing the whole internet's data, network latency becomes the limiting factor, and code that makes API requests is already ~99% network-bound, so faster computation would yield essentially no improvement there. (ref: https://en.wikipedia.org/wiki/I/O_bound)
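The 10%-CPU point is just Amdahl's law. A minimal sketch (pure Python; the fractions are the illustrative numbers from the bullet, not measurements):

```python
# Amdahl's law: if only a fraction p of runtime benefits from a
# speedup of factor s, the overall speedup is bounded.
def overall_speedup(p: float, s: float) -> float:
    """p: fraction of runtime sped up; s: speedup of that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

# CPU is 10% of runtime; make compute effectively infinitely fast:
print(overall_speedup(0.10, 1e9))  # ~1.11x overall, not exponential
# An API-request workload where compute is only 1% of runtime:
print(overall_speedup(0.01, 1e9))  # ~1.01x: essentially no improvement
```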
- What are the limits on recursive self-improvement for something like a neural network? For a fixed architecture and dataset, the loss being optimized is bounded below, so there is a global minimum; beyond that point, no further improvement is possible without changing the hardware or the problem itself. Will that global minimum be good and efficient enough to trigger runaway intelligence without the slow process of hardware improvement?
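A toy illustration of the global-minimum point: gradient descent on a simple convex loss stops improving once the minimum is reached, no matter how many more "self-improvement" iterations you run (pure Python, purely illustrative):

```python
# Gradient descent on f(x) = (x - 3)^2, global minimum at x = 3, f = 0.
def f(x: float) -> float:
    return (x - 3.0) ** 2

def grad(x: float) -> float:
    return 2.0 * (x - 3.0)

x = 0.0
for _ in range(1000):          # first round of optimization
    x -= 0.1 * grad(x)
before = f(x)

for _ in range(1000):          # 1000 more steps buy nothing
    x -= 0.1 * grad(x)
after = f(x)

print(before, after)  # both effectively 0: no headroom past the minimum
```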
- The solution to a lot of these problems could be quantum computing. Maybe that's where we should be looking? What are the limits of quantum computation (currently unknown)? Will that end up being the deciding factor in whether ASI is achievable?