How Moore’s Law Came to Be

Core+ · Apr 27, 2017 · 8 min read

Let’s begin with a conclusion and then tease out its meaning: Moore’s Law is the product of human imagination. “Moore’s Law” first came into circulation as a phrase in the mid-1970s, after a decade of publications and lectures by Gordon E. Moore on his understanding of the basic dynamics of and possibilities for manufacturing silicon microchips. During this decade, Moore, a Caltech PhD in physical chemistry, went from co-founding and directing R&D at Fairchild Semiconductor to co-founding and directing R&D at Intel Corporation, where he would eventually serve as CEO.

Gordon Moore. Courtesy of Intel.

In his publications and lectures, Moore developed an argument and made a prediction. His argument was that the silicon microchip could make electronics profoundly better and, most importantly, cheaper. With this, he saw that electronics would pervade all facets of society with “revolutionary” consequences. His prediction was that this revolution would happen in a particular fashion: constant, dramatic change in the nature of microchips and the reduced cost of improved electronics that they represented.

Fairchild had achieved a breakthrough in the manufacturing technology for making transistors (electronic on-off switches, the basic building blocks of digital circuitry) that set the stage for its pioneering of the silicon microchip as we know it. Simply put, microchips are made by chemically printing tiny transistors onto a piece of silicon crystal, along with the interconnections and other components needed to form an entire electronic circuit. Conventional circuits of the day were, in contrast, made by wiring together individual components.

Moore made several key contributions to this chemical printing technology at Fairchild and at Intel, and had a deep understanding of it. The silicon microchip was created initially for military electronics, where price was of little importance. More critical were considerations like miniaturization, power consumption, and reliability. Moore was perhaps the first to realize that Fairchild’s chemical printing approach to making the microchip meant that microchips would not only be smaller, more reliable, and less power-hungry than conventional electronic circuits, but could also be cheaper to produce. In the early 1960s, the entire global semiconductor industry adopted Fairchild’s approach to making silicon microchips, and a market emerged for them in military, particularly aerospace, computing.

By 1963, Moore saw that the possibility he had envisioned for the microchip had, in fact, come true. Fairchild’s simple digital microchips were cheaper to make than the set of individual components required to build the equivalent conventional circuit. The microchip had already become the cheapest form of digital electronics. As a scientist, Moore could see no fundamental barrier yet looming for the ongoing improvement of the chemical printing technology that underlay integrated circuit production. With the required investment of effort and money, the technology could be engineered to chemically print ever-finer features with great fidelity. With improved chemical printing, microchip makers would find their best competitive advantage by making microchips more complex; that is, by packing in larger numbers of transistors. And these more complex digital microchips would represent profoundly cheaper electronics.

In April 1965, Gordon Moore’s vision of this potential future reached its largest audience to date: the tens of thousands of readers of Electronics, a major weekly industry magazine. He wrote an article, “Cramming More Components onto Integrated Circuits,” presenting his view of the future of electronics and the microchip with a new twist: a numerical prediction. The view was, as always, about economics as much as technical possibility. He described how the chemical printing of microchips was, in effect, open-ended. If the investment were made, the technology would advance. Moore’s point was that such an ongoing investment would reward microchip makers handsomely. By shrinking transistors and putting more of them into microchips, everything got better: more complex chips would enjoy cost and performance advantages. As good electronics became less expensive, their use would spread. He described a world that he subsequently helped make real: “Integrated circuits will lead to such wonders as home computers — or at least terminals connected to a central computer — automatic controls for automobiles, and personal portable communications equipment.”¹

To underscore his message, Moore made a numerical prediction. From Fairchild’s chemical printing breakthrough of 1959 into 1965, he observed, the number of transistors on chips had doubled every year, going from a single transistor to a microchip containing around 50 transistors. To achieve the cheapest digital electronics, microchip makers had doubled the transistor count on their chips every year. With nothing on the horizon to trip up the technology’s development or the economics, Moore predicted that this dynamic would continue for the coming decade, to 1975. Microchip makers would continue to invest strongly in chemical printing technology, doubling transistor counts each year to gain the best economic advantage and to minimize the cost of digital electronics. The microchip of 1975 would contain not 50, but rather 65,000 transistors.
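To make the arithmetic behind the prediction concrete, here is a minimal sketch of the doubling model in Python. The function name and the starting figure are illustrative assumptions: Moore worked in powers of two, so taking on the order of 2^6 = 64 components in 1965 (the “around 50” above is a round figure) and applying ten annual doublings lands at the roughly 65,000 transistors he projected for 1975.

```python
# Illustrative sketch of Moore's 1965 doubling prediction.
# The ~64-component starting point for 1965 is an assumption for the example;
# the article above rounds it to "around 50."

def projected_count(start_count, start_year, year, doubling_period_years=1.0):
    """Project a component count, assuming one doubling per doubling_period_years."""
    doublings = (year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

if __name__ == "__main__":
    # Ten annual doublings: 64 * 2**10 = 65,536, i.e. the ~65,000-transistor chip of 1975.
    print(int(projected_count(64, 1965, 1975)))
```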

There is little evidence that Moore’s 1965 article made much of a splash. However, some influential members of the electronics community, like Caltech electrical engineering professor Carver Mead, picked up Moore’s message and predictions, and helped to spread awareness of them. Most importantly, Moore was certain of his view and acted on it. At his second company, Intel, he led by example. The company pursued, with extraordinary success, cutting-edge chemical printing and highly complex microchips, first for memory chips and then for microprocessors. In 1975, Moore — now Intel’s CEO — gave a talk that was quickly published. He returned to his 1965 prediction and found that it had been fulfilled. Transistor counts for microchips had indeed doubled every year. Microchips did contain 65,000 transistors each. From a niche military product, the microchip had come to dominate computing. Again, he could see no roadblocks to the continued development of chemical printing or to the economics of the microchip industry. However, Moore believed that the effort would become harder and more expensive. He predicted that in the coming decade his “annual doubling law” would shift to a doubling every year and a half, with the cheapest electronics of 1985 taking the form of microchips carrying 16 million transistors.

For the half-century from 1965 to 2015, this regular doubling of microchip complexity to minimize the cost of electronics and to maximize economic reward has been continually realized by the microchip industry and its suppliers of materials, equipment, software, and services. In a very direct sense, Moore’s Law has been the achievement of a wide community, a social production inspired by an imagined future and an experienced past. The development of chemical printing and the design of complex microchips have required many billions of dollars, and the coordinated effort of hundreds of thousands of people. As the ongoing effort became more extensive, social innovations were required: consortia like Sematech, programs like the US DARPA VLSI effort, and technology roadmaps. Indeed, Moore himself was instrumental in the creation of the first National Technology Roadmap for Semiconductors with the Semiconductor Industry Association.

As the transistor count on microchips has climbed past the billion mark, the cost to manufacture a transistor has dropped below a nanodollar, and the transistor on a microchip has become humanity’s most manufactured object. Estimates of the number of transistors produced in a single year now match, or exceed, estimates of the total number of all the grains of sand on all the world’s beaches. With computing devices made of microchips, the price of computing has fallen over a million-fold, while the cost of electronics has fallen a billion-fold. The microchip business has grown into a profitable, multibillion-dollar industry.

Moore’s Law has been the deliberate human creation of an unusually regular pace of unusually rapid change in the cost and capability of electronics, most notably computing. It may be unprecedented in the history of technology. And this regularity of revolutionary change has become so commonplace that many take it for granted. For decades it has been possible for system makers and consumers to simply plan on the fact that computing and microchips will become better for less at a steady rate. But this is changing.

In the 2000s, Gordon Moore himself wrote about the end of Moore’s Law. “No exponential change continues forever,” he wrote, “not even the transistor counts on silicon microchips.”² On the technical side, he saw that the atom itself presented a fundamental barrier to chemical printing: it would be impossible to print something smaller. In 2015, some features of the transistors on microchips are already just tens of atoms thick. But it was the economic side of Moore’s Law, in its way the most social part of this community production, that he believed most likely to disrupt the dynamic. The expense of the chemical printing technology, now conducted in factories that cost several billion dollars apiece, would change the economics and create uncertainty about the future of the microchip and, with it, computing. And it is precisely this uncertainty that the electronics and computing communities are starting to discuss ever more widely. Some look with excitement to possibilities beyond the traditional, such as novel computing architectures, quantum computing, and superconducting computers. Others look to promising materials like carbon nanotubes and graphene. Still others see a longer run for the silicon microchip, with layers of transistors atop one another, or pulses of laser light interconnecting them. Moore himself sees the glass as half full in the eventual shift in the microchip dynamic: “But even if the doubling-times stretch in the future, the rate of progress in the semiconductor industry will far surpass that of nearly all other industries. It is truly a revolutionary technology!”³

Notes

¹ Gordon E. Moore, “Cramming More Components Onto Integrated Circuits,” Electronics 38, no. 8 (1965): 114–117.

² Gordon E. Moore, “No Exponential Is Forever: But ‘Forever’ Can Be Delayed” (paper presentation, ISSCC, February 9, 2003, Session 1, Plenary 1).

³ Ibid.

“How Moore’s Law Came to Be” was published in the Computer History Museum’s 2015 issue of Core magazine.


Director, Center for Software History, CHM//Co-author, “Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary.” https://goo.gl/VgTCjc