Long Live Moore’s Law

In 1965, American engineer Gordon Moore observed that the number of transistors on a computer chip doubles roughly every 18–24 months. Ever since, the computer industry has treated this observation as a target, turning it into a self-fulfilling prophecy (Moore’s Law), which is why computing power has advanced so rapidly over the past fifty years. To appreciate just how remarkable this trend has been, compare the transistor count of the first microprocessor, released in 1971, with that of today’s most advanced chips. The very first microprocessor contained 2,250 transistors. That number, while impressive for the time, is nothing next to the top-of-the-line microprocessors of 2020, which pack roughly 40 billion transistors.

While it is undeniable that computers have seen impressive growth in capability over the past 50 years, what is concerning is that the rate at which they are becoming more powerful has slowed in recent years. One reason is the demise of Dennard scaling. Dennard scaling held that the power density of a chip (the power needed to run the transistors in a given area of silicon) would remain constant even as the number of transistors increased. Now that individual transistors have become so small, however, Dennard scaling no longer holds, which means each new generation of chips is becoming more power hungry. In other words, power consumption is limiting our ability to reap computational gains in a way it never used to.

The second reason for the slowdown is heat. Clock speeds (the number of instructions or operations a chip can execute per second) have barely budged since the mid-2000s because of the heat the chips give off. As the number of instructions executed per second rises, so does the heat that accompanies them. It isn’t yet clear how clock speeds can be pushed higher without melting the chips, which is why this particular dimension of computing has stagnated for the past 15 years.

The final reason for the slowdown is the economics of moving from one generation of chips to the next. As transistors get smaller, the cost of shrinking them further rises. In 2002, the design cost for the 65nm process was 28 million dollars. By 2020, the design cost for the 5nm process had reached 540 million dollars, an almost 20-fold increase. It is predicted that developing the next generation of transistors would require roughly 1.5 billion dollars in design costs alone. That figure doesn’t include the 3–4 billion dollars in process development, nor the 15–20 billion dollar cost of a fab producing 40,000 wafers per month. Because shrinking transistors has become so costly, the industry is waiting longer between shrinks than has historically been the case. In the near future, the industry may have to stop shrinking transistors entirely, because the investment will no longer be economically worthwhile.

So what is the computer industry going to do? This is an important question, considering the fate of the economy is at stake. According to economists, the only way to sustainably grow an economy in the long term is to increase either the size of the labor force or the productivity of each individual worker. Since growth of the labor force is slowing, and may even begin to decline, productivity growth (which is driven by technological progress) matters more than ever for overall economic growth.

Fear not! While many of these trends look bleak, the computer industry still has a few tricks up its sleeve that will help it reach the heights we’ll need in order to solve our most pressing problems.

Low-Hanging Fruit

  1. Shrinking Transistors — This historical approach has a little juice left. According to Intel’s roadmap, a 2nm transistor process is in the works. Considering how expensive any shrinkage beyond 2nm would be, I would be surprised if the industry decided to meaningfully shrink transistors past that point. From this approach alone, the industry could increase computing performance by roughly 50% over today’s cutting-edge machines.
  2. 3D Chips — Historically, our chips have been flat and two-dimensional. If the computer industry moves to multi-layered, three-dimensional chips (it has already started to do so to some degree), it is speculated that this could yield roughly a 1,000-fold increase in computing power.
  3. Reducing Software Bloat — Thanks to rapid hardware advances over the past 50 years, software developers have grown lazy about writing code that takes full advantage of the hardware. Going forward, the industry could increase the efficiency of code, reducing the work needed to complete the same task. This approach shows a lot of promise given certain case studies. One experiment found that simply rewriting a Python program in C increased its performance 47-fold. A follow-up found that further tailoring the C code to take full advantage of the hardware increased overall performance 60,000-fold! (A sketch of the kind of rewrite involved appears after this list.)
  4. Chip Specialization — Since the rate of general-purpose computational advancement is slowing, the chip industry plans to create specialized chips designed for very specific purposes in order to remain profitable. Tailoring a chip to a specific workload has been found to deliver anywhere from a 1.4–2.5x performance boost (let’s say 2x for simplicity’s sake).
  5. More Efficient Algorithms — An algorithm is a set of steps a computer follows to solve a problem. If a more efficient set of steps exists for a particular problem, the number of calculations needed to solve it falls. While this approach overlaps somewhat with point #3, higher performance isn’t just a matter of less software bloat (fewer steps); it is also tied to the quality and thoughtfulness of each step, as the search example after this list illustrates. As of now, algorithms appear to be becoming twice as efficient roughly every 16 months. In one particularly striking example, image-recognition A.I. in 2019 needed 44x less computing power to perform the same task as it did only seven years earlier, thanks to advances in software efficiency. Going forward, Silicon Valley is going to pour investment and research into algorithms so that software can compensate for the growing inadequacies of hardware.
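
To make the software-bloat point from item 3 concrete, here is a minimal sketch in Python of the kind of rewrite involved: the same matrix multiplication written as naive interpreted loops, then handed to an optimized native library. The timings it prints are whatever your machine produces; the 47x and 60,000x figures above come from the study cited in the text, not from this toy benchmark.

```python
# The same matrix multiplication two ways: naive interpreted Python loops
# versus NumPy's optimized native (C/Fortran) routines. The measured
# speedup varies by machine; this only illustrates the idea.
import time
import random
import numpy as np

n = 200  # small enough that the naive version finishes quickly

A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]

# Naive triple loop: every add and multiply is interpreted one at a time.
start = time.perf_counter()
C = [[0.0] * n for _ in range(n)]
for i in range(n):
    for k in range(n):
        a_ik = A[i][k]
        for j in range(n):
            C[i][j] += a_ik * B[k][j]
naive_s = time.perf_counter() - start

# Identical computation delegated to optimized native code.
An, Bn = np.array(A), np.array(B)
start = time.perf_counter()
Cn = An @ Bn
native_s = time.perf_counter() - start

print(f"naive Python: {naive_s:.3f}s, NumPy: {native_s:.5f}s, "
      f"speedup: {naive_s / native_s:.0f}x")
```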
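
And to illustrate the “quality of each step” point from item 5, here is a minimal sketch, again in Python, comparing two correct algorithms for the same problem: finding a value in a sorted list by scanning every element versus binary search.

```python
# Two correct algorithms, very different step counts. Linear search
# inspects up to n items; binary search halves the remaining range each
# step, needing only about log2(n) comparisons. For n = 1,000,000 that is
# roughly 1,000,000 steps versus about 20.
from bisect import bisect_left

def linear_search(sorted_items, target):
    # O(n): check every element until we find the target.
    for i, item in enumerate(sorted_items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    # O(log n): repeatedly halve the search range.
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(1_000_000))
assert linear_search(data, 999_999) == binary_search(data, 999_999) == 999_999
```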

These five approaches will provide the bulk of computational advancement over the coming decade. While the computer industry coasts on them, it will simultaneously be planning and developing alternative approaches to pick up the slack once all of the low-hanging fruit has been picked. These mid-hanging fruit approaches, while not paradigm-shifting, will offer significant advances in computing power that will help us keep progressing as we work on cracking the secrets of quantum computation (more on that later).

Mid-Hanging Fruit

  1. Switching from Silicon to Graphene — If computer scientists were able to successfully change the material of the chip from silicon to graphene, the performance of computers could increase a thousandfold. Graphene conducts heat about 10 times better than copper (the most commonly used conductor in chips) and conducts electricity some 250 times faster than silicon. Beyond that, if the computing industry got very clever, it could manipulate the properties of graphene to see a 1,000,000x boost in performance. A leap of this size is conceivable because researchers at MIT found that graphene can be used to manipulate the behavior of light, allowing electrons to flow through the chip at vastly higher speeds (around 1/300th the speed of light!).
  2. Neuromorphic Computing — Neuromorphic computing is a type of computation that works like the human brain: the hardware tries to create electronic equivalents of neurons and synapses in order to compute faster (a toy sketch of this neuron model appears after this list). Intel, which has already implemented this architecture in its chips, has reported roughly a 1,000-fold increase in performance. Most of the industry has yet to adopt this type of computation, which means we haven’t seen anything yet when it comes to its potential and its impact on society. When all is said and done, this type of computation could boost our collective ability to compute by 1,000–9,000x (let’s say 5,000x for the sake of simplicity).
  3. Laser-Light Pulsation — Current state-of-the-art chips can switch between a 1 and a 0 (the fundamental building blocks of computation) about a billion times per second. The faster electrons can be switched between these states, the faster computers can operate. Researchers found that if electrons are hit with laser pulses under very specific conditions, silicon chips could switch between a 1 and a 0 a quadrillion times per second instead of a billion. By integrating laser pulsation into our chips, we could see a million-fold increase in the level of computation we are able to do.
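
For the neuromorphic item above, here is a minimal sketch in Python of the neuron model at the heart of many such designs: a leaky integrate-and-fire (LIF) neuron. Real neuromorphic chips implement dynamics like this directly in hardware; the constants here are purely illustrative, not taken from any particular chip.

```python
# A leaky integrate-and-fire neuron: it accumulates input, leaks charge
# toward its resting potential, and emits a spike when it crosses a
# threshold. All parameter values are illustrative.
def lif_neuron(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Yield (membrane_potential, spiked) for each input sample."""
    v = v_rest
    for i in input_current:
        v = leak * v + i          # integrate input, leak toward rest
        spiked = v >= v_thresh    # fire when the threshold is crossed
        if spiked:
            v = v_rest            # reset after a spike
        yield v, spiked

# Feed a constant current and watch the neuron spike periodically.
for step, (v, spiked) in enumerate(lif_neuron([0.3] * 15)):
    print(f"t={step:2d}  v={v:.2f}  {'SPIKE' if spiked else ''}")
```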

The preceding three trends should take the computing industry by storm toward the latter end of the 2020s and into the 2030s. When these trends run their course and are fully realized, our ability to compute could be amplified by 5 quadrillion times. To put this insanely large number into perspective, these three computational trends would give us a performance boost similar to the one we would witness if we jumped from a 1940s-era computer to the computers we are using in 2020.
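
For the curious, that 5-quadrillion figure appears to be the product of the most optimistic estimate from each of the three items above (a back-of-the-envelope multiplication, not an independent measurement):

$$\underbrace{10^{6}}_{\text{graphene}} \times \underbrace{5 \times 10^{3}}_{\text{neuromorphic}} \times \underbrace{10^{6}}_{\text{laser pulsation}} = 5 \times 10^{15}$$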

With such radical increases in the level of computation (and that doesn’t take into account the performance boost from the low-hanging fruit technologies as well), I question whether we will even need quantum computers to solve society’s most pressing issues. Nonetheless, I wouldn’t be surprised to be proven wrong, given humanity’s tendency to take on harder and harder problems. I am sure many people in the 1940s would have thought that the level of computation we have today would be enough to solve any problem imaginable. And yet here we are, in the midst of complex challenges we still haven’t overcome: real-time language translation, driverless cars, accurate two-week weather forecasting, simulating the human brain in real time, cracking the secrets of the human genome, and so on. As computers become more powerful and we solve these problems, humanity’s ambition will lead us into other areas we can’t even begin to anticipate. Those areas are going to require the high-hanging fruit computational technologies listed below.

High-Hanging Fruit

  1. Optical Computers — An optical computer performs its calculations with photons instead of electrons. This would be a huge boost to performance, since it would let information flow through the chip at the speed of light. This technology could allow us to compute at speeds 300 times faster than graphene computers and 75,000,000 times faster than current technology.
  2. Spintronics — A spintronic computer takes advantage of a quantum property of electrons called spin in order to perform computation. This type of computing is far superior to what we currently have, since it requires practically no electricity to run calculations. In theory, this form of computation could approach quantum computers, since it exploits a related set of physics. While spintronics in practice isn’t likely to match quantum computation, it would deliver an unfathomable level of power compared to what we have today. So radical, in fact, that I don’t know if I can accurately quantify its total effect when all is said and done.

The Holy Grail

When it comes to computational might, nothing holds a candle to quantum computers. Quantum computers are unfathomably powerful thanks to their exploitation of quantum physics to perform calculations. To get an idea of how powerful these machines really are, you first have to know how they work. Classical computers run on bits (basic units of information), each of which is either a one or a zero (on or off). Quantum computers instead run on qubits, which can be on, off, or in a superposition that is simultaneously partly on and partly off. This is significant because it makes the capacity of a quantum computer rise exponentially with every qubit added to the system: one qubit can represent 2 states at once, 2 qubits can represent 4, 3 qubits can represent 8, and so on. Each qubit doubles the capacity of the system, so by the time you reach 50 qubits, your machine is on par with the most powerful classical computer systems in the world. By the time you reach 100 qubits, you have a computer many orders of magnitude more powerful than all of the classical computers on the planet combined. If humanity were to stop at computers composed of 100 qubits, it would already be a huge deal for progressing the species. But knowing humanity, we aren’t going to stop there. We are going to build quantum computers with millions, billions, and trillions of qubits (and yes, even more) to solve every problem that can be solved.
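
Here is a quick sketch in Python of the doubling described above; the comparisons in the comments restate the rough rules of thumb from the paragraph rather than precise benchmarks.

```python
# The state space of an n-qubit register doubles with every added qubit:
# n qubits can hold a superposition over 2**n basis states.
for n in (1, 2, 3, 50, 100):
    print(f"{n:>3} qubits -> {2**n:.3e} basis states")

# 2**50 is about 1.1e15; simulating that many amplitudes already strains
# the largest classical supercomputers. 2**100 is about 1.3e30, far beyond
# the combined capacity of every classical computer on Earth.
```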

We must be careful with this power, however, because while it could give our scientists the ability to crack the secrets of the universe, it could also give criminals the ability to execute the perfect crime. As humanity embraces this god-like technology, we must be diligent in creating quantum systems that can anticipate their own misuse. This technology is so powerful that it might make it possible to simulate the behavior and consciousness of each individual in real time, and to make highly accurate predictions about which specific person, in which part of the world, is using the technology for criminal ends. While a quantum system of this sort might be necessary to preserve humanity from itself, it opens up another can of worms. How will nations come to wield this kind of power, and what impact will it have on the individual’s ability to be free? With this kind of power, it would be easy for the political establishment to anticipate and eliminate all of its potential rivals before they even announced they were running for office. In more authoritarian societies, the government could predict where political adversaries were and use that information to kill or imprison dissidents, making political revolution impossible. This technology could take us to utopia, but it also has the very real potential to amplify our darkest and most destructive desires and impulses. If we cannot create quantum systems that preserve our freedom and guard against their own misuse, we will either destroy ourselves or live in the dystopia of Orwell’s 1984. It is imperative that, as the computer industry progresses ever closer to quantum computation, policymakers take this threat seriously and really think about what we can do to bring about (as Voltaire once wrote) the best of all possible worlds.
