Moore’s Law: The End is Nigh

Klaus Æ. Mogensen
Published in FARSIGHT
Feb 27, 2020

Moore’s Law has driven technological advances for half a century — but it is reaching a wall.

It all started in 1965, when Gordon Moore, who later co-founded chipmaker Intel Corporation, predicted in an article in Electronics magazine that the number of components in integrated circuits would double every year for the next ten years, a prediction that proved true. In 1975, he revised it: the density of transistors in integrated circuits would continue to double every two years. This revised prediction became known as Moore's Law.
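To get a feel for what doubling every two years means, here is a small back-of-the-envelope sketch in Python (my own illustration, not part of the original article). It compounds a transistor count under Moore's revised rule from an assumed 1971 starting point of roughly 2,300 transistors, about the size of Intel's first microprocessor.

```python
# Illustrative sketch: compounding under Moore's revised (1975) rule of
# doubling transistor density every two years. The 1971 starting figure of
# ~2,300 transistors (roughly the Intel 4004) is used only as a reference.

START_YEAR = 1971
START_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Transistor count projected by doubling every two years."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

for year in (1971, 1985, 2000, 2020):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Run forward fifty years, the rule predicts counts in the tens of billions, which is roughly where today's largest chips actually sit. That is how remarkable half a century of compounding has been.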

This rapid advance in digital technology has driven basically all technological progress since then: personal computers, the world wide web, smartphones, genetic science, weather forecasts we can (mainly) trust, GPS, artificial intelligence, and more. However, according to an article by David Rotman in MIT Technology Review, we are now close to reaching the end of Moore's Law. There are physical limits to how small the features of microchips can be made, and we are already coming up against them. The smallest features in today's integrated circuits are around 10 nanometres, only a few dozen silicon atoms wide.

We have seen the end coming for some time. The clock rate of chips levelled out nearly twenty years ago, and most advances in processor speed since then have come from adding cores rather than increasing the clock rate. To stave off the end, there are also plans to add capacity by building chips in three dimensions. Neither approach, however, is a real substitute for shrinking chip features: it is difficult for programmers to fully exploit parallel processing on multiple cores (see the sketch below), and stacking chips in three dimensions introduces growing problems with cooling. Cooling is especially a problem for portable devices; even today, high-performance laptops have issues with overheating.
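To see why extra cores are a poor substitute for faster ones, consider Amdahl's law: the speedup from parallel hardware is capped by the fraction of a program that must run serially. The sketch below is a simple illustration; the 10 percent serial fraction is an assumption chosen for the example, not a figure from Rotman's article.

```python
# Amdahl's law: speedup = 1 / (serial + parallel / n_cores)
# Illustrative only; the 10% serial fraction is an assumed figure.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup for a program with the given serial fraction."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

SERIAL_FRACTION = 0.10  # assume 10% of the work cannot be parallelised

for cores in (1, 2, 4, 8, 16, 64, 1024):
    print(f"{cores:>5} cores -> at most "
          f"{amdahl_speedup(SERIAL_FRACTION, cores):.1f}x speedup")
```

Even with a thousand cores, a program that is 10 percent serial can never run more than ten times faster, which is why piling on cores cannot fully stand in for faster transistors.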

We should not expect gains in computer performance to stop tomorrow, or next year, or even this decade, but we should expect diminishing returns, with each additional operation per second costing more than the previous one. At some point, consumers may no longer want to pay more for computers or phones with only slightly better performance, and without a large consumer base, chip manufacturers will not be able to afford research into further improvements. Even supercomputers will be built from more, rather than better, processors.

Perhaps this isn't altogether a bad thing. For one thing, consumers may be less quick to replace perfectly good devices with ones that are only a little better, which will benefit the environment. Research will also shift towards making chips cheaper and/or more reliable rather than better-performing.

There has been a tendency among programmers not to bother with optimising code, since computers tend to become faster anyway; but once computers cease to get faster, writing more efficient code may become fashionable again. As Rotman mentions in his article, it has been shown that with careful programming and the right choice of programming language, a program that took seven hours to run could be made to run in less than half a second, quite a gain (a sketch of the idea follows below). However, we should expect such efficiency gains to come with a high price tag. Where a gain in raw computer speed helps all programs, new or old, better programming only helps the software it is applied to, and it might not be possible to dramatically improve the software behind neural networks and machine learning, important components of artificial intelligence.
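Rotman's example concerns matrix multiplication rewritten from Python into highly optimised code. The sketch below is not that benchmark; it simply illustrates the same principle by comparing a naive pure-Python multiply with NumPy's optimised routine, with the matrix size chosen so the slow version still finishes in seconds.

```python
# Illustrative comparison (not the benchmark from Rotman's article):
# a naive pure-Python matrix multiply versus NumPy's optimised routine.
import time
import numpy as np

N = 256  # kept small so the pure-Python version finishes in seconds

a = np.random.rand(N, N)
b = np.random.rand(N, N)

def naive_matmul(x, y):
    """Triple-loop matrix multiplication in pure Python."""
    n = len(x)
    result = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += x[i][k] * y[k][j]
            result[i][j] = s
    return result

start = time.perf_counter()
naive_matmul(a.tolist(), b.tolist())
print(f"pure Python: {time.perf_counter() - start:.3f} s")

start = time.perf_counter()
a @ b  # NumPy dispatches to optimised, vectorised library code
print(f"NumPy:       {time.perf_counter() - start:.3f} s")
```

On a typical machine the NumPy call is several orders of magnitude faster, and the gap widens as the matrices grow, which is how hours can shrink to fractions of a second.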

This might be all for the good. If we are stuck with good old ‘artificial stupidity’, we will at least not have to worry about artificial intelligence running amok and taking over the world.
