Moore’s law is the rule of thumb that has been the guiding force of the technological age. Gordon Moore (co-founder of Intel) predicted in 1965 that the number of components in a given circuit would double roughly every year. This has been a remarkably accurate prediction for the last 50 years, and it has allowed us to create everything from the supercomputers that power the internet, to the device you’re reading this post on, down to the sensor that turns the light on when you walk into a public bathroom.

Talking to friends about Moore’s law is always fun, because it is surprisingly difficult for people to get their heads around what exponential growth actually predicts. Even people who understand exponentials often don’t understand the physical nature of Moore’s law, so it’s good to cover the basics of microchip production and the hard end to Moore’s law. However, there’s always a smart-ass in the group who will bring up the possibilities of quantum computing, so we’ll try to cover some of that here too so that you can shut him up.

Full disclosure, I’m not an engineer, a physicist or even particularly mathematical — so if you already understand how all this works, please feel free to check out some of my other posts that you might find more interesting. This is aimed at people, like me, who just want to have a conversation about these things and not get lost.


There is a group of people; let’s call this group “Most People”. Most People don’t understand quite how significantly technology will affect our lives in the coming century. In everyday life, we generally don’t have to worry about exponential growth. We have evolved to function as pattern recognisers and predictors of linear progressions. For example, if I’m hunting an animal and I observe it moving from point A to point B, it will probably take an equal amount of time to move an equal distance from point B to point C, allowing me to line up my arrow and hit the animal. If it suddenly moved twice as fast when covering the second distance, that would be very surprising, and would probably mean missing the shot. Computing follows exactly this second kind of progression, and that’s why we continuously underestimate how technology will change our lives in the future.
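The hunting analogy can be put in numbers. Here’s a minimal sketch (the starting value and step size are made up for illustration) comparing what a linear predictor expects against what a doubling process actually does:

```python
# Linear vs exponential growth over ten "generations".
# Illustrative numbers only: both processes start at 100 units;
# the linear one adds 100 per step, the exponential one doubles.
linear, exponential = 100, 100
for step in range(1, 11):
    linear += 100
    exponential *= 2

print(linear)       # 1,100 after ten steps
print(exponential)  # 102,400 after ten steps
```

After just ten steps the doubling process is nearly a hundred times ahead of the linear guess, which is exactly the gap between our intuition and reality.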

Processing power is a function of the number of components that you can fit on your chip. Broadly speaking, Moore’s law stipulates that you can fit roughly twice as many components on a chip every 18 months (or so). This has been a remarkably accurate prediction about the progression of computing, and it has tracked the power and speed of modern computers almost perfectly.

The problem is that we just aren’t very good at imagining what exponential growth means. As an example of how insanely powerful Moore’s law actually is, Intel CEO Brian Krzanich explained that if a 1971 Volkswagen Beetle had advanced at the pace of Moore’s law over the past 34 years, today “you would be able to go with that car 300,000 miles per hour. You would get two million miles per gallon of gas, and all that for the mere cost of four cents.”
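You can do the Beetle-style arithmetic yourself in a few lines. This sketch assumes a doubling every 18 months and starts from the roughly 2,300 transistors of Intel’s first microprocessor, the 4004, released in 1971:

```python
# How many doublings does Moore's law predict between 1971 and 2021?
# Assumes one doubling per 18 months, starting from the Intel 4004's
# roughly 2,300 transistors.
years = 2021 - 1971
doublings = years / 1.5          # 18 months = 1.5 years per doubling
transistors = 2300 * 2 ** doublings

print(round(doublings))          # about 33 doublings in 50 years
print(f"{transistors:.2e}")      # on the order of 10^13 transistors
```

Thirty-odd doublings turns a few thousand components into tens of trillions, which is why no linear intuition survives contact with this curve.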

It’s at this point that most people start to realise that they should probably strap in, because this is as slowly as tech will ever develop again… or so they tell us.


There is a second group of people; let’s call them “Tech Fanatics”. Tech Fanatics will tell you all about how Moore’s law works (they don’t fully get it), they’ll tell you that they understand what exponential progression predicts (they don’t), and they’ll tell you to strap in, because this is as slowly as tech will ever develop again… I know, because I am one of these people.

The unfortunate reality of the situation is that there is a physical limit to the number of components that you can fit into a given space. That limit is governed by the size of the atoms themselves: the components must be separated by enough atoms to block the flow of electricity.

Here’s the sciencey bit you need to know. Silicon has been the material of choice for microchip manufacturers for the last 50-odd years because it can be chemically treated to create regions of positively or negatively charged silicon atoms. The basic principle of a microchip is that a layer of charged particles prevents the free flow of electrons; apply an electric charge to it, and it becomes possible to turn that flow on or off, creating a simple switch. By combining a bunch of these switches in series and in parallel, you can build the logic gates that create computing power.
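The switch-to-logic-gate step is easy to see in code. This is a toy model, not how real chips are wired (real CMOS gates pair complementary transistors), but it shows how combinations of simple on/off switches yield Boolean logic:

```python
# Toy model of the switches described above: each "gate" is just a
# combination of switches that either passes current or doesn't.

def and_gate(a: bool, b: bool) -> bool:
    # Two switches in series: current flows only if both are on.
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    # Two switches in parallel: current flows if either is on.
    return a or b

def not_gate(a: bool) -> bool:
    # An inverter: output is on only when the switch is off.
    return not a

def nand_gate(a: bool, b: bool) -> bool:
    # NAND is "universal": every other gate can be built from it alone.
    return not_gate(and_gate(a, b))

print(nand_gate(True, True))   # False
print(nand_gate(True, False))  # True
```

Chain enough of these together and you get adders, memory, and eventually the device you’re reading this on.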

If you want more detail — there’s a great article by the guys over at where you can learn all about it.

Now that you understand how microchips are made, you can explain to the Tech Fanatics the root of their error. Currently, the best microchips stop the flow of electricity with a barrier of about 50 atoms blocking the electrons. Over the next 10 years, that number will continue to shrink until our pesky friend quantum mechanics starts to interfere with the switches, and Moore’s law will stop dead in its tracks.

You will never need to get into the specifics over a pint, but just in case you come across a Tech Fanatic who is also a physicist: the effect is called ‘quantum tunnelling’, and it’s a property of particles whereby, at the very (very) small scale, their position becomes ‘fuzzy’ and they can seemingly disappear from one position and reappear on the other side of a barrier they shouldn’t be able to cross. This usually doesn’t have any impact on anything, but when you’re talking about switches that are a few atoms in diameter, electrons would start tunnelling from one side to the other, effectively allowing electricity to flow and causing the chip to erroneously return an ‘on’ signal. This would render the chip useless, as it would be impossible to store or retrieve any meaningful data on it.

So there you have it: by around 2025, silicon microchips will have reached their physical limit and Moore’s law will end. Now, some smart-ass Tech Fanatic might speculate about an alternative material to silicon (and there are several promising contenders), but you can rest assured that this will only buy time, as the same effect will eventually stop Moore’s law, even if they find a way to create usable metallic hydrogen lattices.

The reason that this is the perfect pub argument topic is that when anybody tells you they can predict how technology will affect our lives in 10 years, you can tell them that they can’t, because nobody truly grasps exponential growth. If they concede that point and say that you can’t predict how technology will affect our lives in 20 years, you can tell them that you can, because Moore’s law will be over by then. You’ll always be able to argue, and you’ll always be able to win!


Some stuff that people might chuck at you –

Even after the end of Moore’s law, there’s nothing stopping us from continuing to develop computers in a linear fashion, building giant football-stadium-sized chips that we connect to via the cloud. It’s not that computing will stop growing; it will just stop growing exponentially. The exception to that is quantum computing.

Quantum computing is a completely different way of thinking about computation, which could be Moore’s last laugh and continue the exponential growth of computing power. Well, actually, now that I think about it, it’s more like the exponential growth of the exponential growth of computing power. Exponential squared.

Richard Feynman famously said ‘I think I can safely say that nobody understands quantum mechanics.’ And that’s bloody Richard Feynman! So what hope do we have? However, the basic principle of a quantum computer is that if you can organise (entangle) the superpositions of quantum bits (qubits), then you can hold exponentially more information in those bits than in a classical chip, because each qubit can be in a superposition of 1 and 0 rather than just one or the other.
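The ‘exponentially more’ claim is easy to make concrete. A back-of-the-envelope sketch (just counting, not simulating anything quantum): n entangled qubits are described by 2ⁿ amplitudes, while n classical bits hold just one of their 2ⁿ possible values at a time.

```python
# Size of the state description for n entangled qubits: 2**n amplitudes.
# A classical n-bit register stores exactly one of those 2**n values.
for n in (1, 2, 10, 50):
    print(n, "qubits ->", 2 ** n, "amplitudes")

# At 50 qubits you already need over 10^15 amplitudes, which is more
# than a classical machine can comfortably hold in memory.
```

That doubling-per-qubit is the sense in which quantum computing could restart, or even outpace, the old exponential curve.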

Currently, it is not thought that you will ever be able to wander round with a quantum computer in your pocket, not least because they require extreme conditions to work properly (including temperatures near absolute zero), but also because they aren’t particularly good for everyday usage. The primary function of a quantum computer will be to perform massive calculations in parallel, like giant database searches or breaking encryption. For the purposes of watching people slip up on YouTube, or poking your friends on Facebook, it’s probably not going to help, so you can tell whoever brought up quantum computing to shut up.

In hindsight, I’ll probably need to do a follow up on quantum computing…