How Gordon Moore Made “Moore’s Law”

The definitive story behind the rule that explained why our world changed — and is still changing — at a rate too awesome to grasp

David C. Brock
Backchannel
12 min read · Apr 16, 2015


By Arnold Thackray, David Brock and Rachel Jones

On April 19, 1965, chemist Gordon Moore published an article in Electronics magazine that would codify a phenomenon that would shape our world. At its core was a non-intuitive, and incredibly ballsy, prediction that with the advent of microelectronics, computing power would grow dramatically, accompanied by an equally dramatic decrease in cost. Over a period of years and decades, the exponential effect of what would be known as “Moore’s Law” would be the reason why, for instance, all of us carry in our pockets a supercomputer that only years earlier would have cost billions of dollars and filled many rooms. We call them “phones.”

In a new and definitive biography of Moore — called, naturally, Moore’s Law — authors Arnold Thackray, David Brock and Rachel Jones provide a thorough look at the man and his times. But perhaps its key section, printed below, tells the story behind the eponymous breakthrough that epitomizes the digital age — that fateful publication that still resonates a half century later.

Moore first began to develop his insight in 1959 when he worked for Fairchild Semiconductor, but he did not discuss the idea publicly for several years. In 1962 and 1963 he contributed some of his thoughts to, respectively, a science yearbook and a microelectronics textbook. But it was not until 1965, in that historic Electronics piece, that the world would see what became known as Moore’s Law: a regular doubling of computer power and halving of its cost.

Here is how Gordon Moore shared his “law” with the world. — Steven Levy

In February 1965, Gordon found his opportunity to engage directly with the wider electronics community: a letter from Lewis Young, editor of the weekly trade journal Electronics, asking for an in-depth piece about the future of microcircuitry. Electronics was well established and widely read, with a mix of news reports, corporate announcements, and substantial articles in which industry researchers outlined their recent accomplishments. It covered developments both within the semiconductor industry and in electronics more broadly, giving technology and business perspectives.

Young was planning a thirty-fifth anniversary issue, including a series titled “The Experts Look at the Future.” As the sole microchip expert in the issue, Gordon’s words would reach sixty-five thousand subscribers. It was the moment that he had been waiting for. He made a giant asterisk mark with his pencil at the top of Young’s invitation and underlined an exhortation to himself: “GO-GO.” Answering Young, he admitted, “I find the opportunity to predict the future in this area irresistible and will, accordingly, be happy to prepare a contribution.” Within a month he had drafted his manuscript: “The Future of Integrated Electronics.”

The piece reiterated much of what Moore had already written, but sought to be more engaging. Gordon’s confidence and comfort in his expert position shone through in his subtle use of dry humor and a clear, low-key style. His conscious attempt at warmth was designed to persuade readers both to buy into the future he foresaw and to help create it. Included, for the first time, were several explicit numerical predictions. He telegraphed the gist of his argument in a brief summary for the Fairchild lawyer who would review his draft: “The promise of integrated electronics is extrapolated into the wild blue yonder, to show that integrated electronics will pervade all of electronics in the future. A curve is shown to suggest that the most economical way to make electronic systems in ten years will be of the order of 65,000 components per integrated circuit.”

The claim was nothing if not bold. Sixty-five thousand transistors per silicon microchip (up from sixty-four in 1965) would be a remarkable level of complexity. These microchips with sixty-five thousand transistors would represent the most economical way to make electronic products. Gordon’s message was simple and stunning. Silicon microchips made better and cheaper electronics. Applications would widen throughout industry, technology, and society, and possibilities would emerge for computers to develop unprecedented capabilities.

In his opening paragraph, Moore set the tone: “The future of integrated electronics is the future of electronics itself.” Since the actual future lay beyond his reach, he aimed not “to anticipate these extended applications, but rather to predict for the next ten years the development of the integrated electronics technology on which they will depend.” Silicon microchips were now “an established technique.” Nowhere was this truer than in military systems, where reliability, size, and weight requirements were “achievable only with integration,” making silicon microchips mandatory. Beyond this, the use of microchips in mainframes was already surpassing conventional electronics in both cost and performance. Complex microchips of high quality would “make electronic techniques more generally available throughout all of society,” enabling the smooth operation of “many functions that are done inadequately by other techniques or not at all.” Existing technologies would be refashioned or replaced by electronics-based approaches, providing fresh technical, social, and economic functions.

The lower costs of systems, “from a readily available supply of low cost functional packages,” would drive this expansion.

He offered an impressive, visionary list of possibilities — “home computers,” “automatic controls for automobiles,” “portable communications equipment,” and the “electronic wrist watch” — a list that today seems conservative but in 1965 was startling, exciting, and provocative.

Silicon microchips provided a clear path to the realization of such futuristic, sci-fi possibilities. Here, indeed, was revolution.

Such change hardly seemed credible. IBM’s System 360 mainframe started at $113,000 (more than $1 million in today’s money). Fancier versions cost the equivalent of $7 or $8 million today. Less powerful minicomputers, like Digital Equipment Corporation’s PDP-8, cost the equivalent of more than $150,000 today; even minicomputers were as expensive as houses. The implication of and evidence for Gordon Moore’s argument — that microchips would bring “home computers” within reach of the ordinary buyer — was difficult to digest. He also made a more pragmatic point: in the near term, “the principal benefactors of the technology will be the makers of large systems.” Mainframes would become available at much lower cost and with much more computing power.

Thanks to his experience in Shockley’s Quonset hut [the humble facility where the famed, and infamous, William Shockley first recruited Moore to the world of silicon in 1956], Moore had been an active participant in developing the core manufacturing technology. He had built diffusion furnaces and “glass jungles” from scratch, before handing the job on to a technician. Now chemical printing technology was fully in place and was robust. As far back as 1962, in a note stuffed into his olive patent notebook, he had written, “There are no major problems left in silicon device technology.” For Moore, the technology was complete in the sense that he could grasp the fundamentals of its essential parts, but it was not limited by any immediate physical reality and was wide open to continuous development.

Uniquely among his peers, Moore predicated his vision on the idea that the manufacturing technology already had a trajectory of steady improvement. With intense effort and expensive investments, it could be remorselessly perfected to provide better yields of more complex microchips containing ever-smaller transistors. His philosophy of standard products for this future of complex microchips was central to his thinking, since such a future (with its ballooning costs for designing and developing ever more complex chips) was feasible only if large markets for high-demand, high-volume microchips were continuously developed. Meanwhile, he and his colleagues would improve each facet of the technology by concentrated hard work.

Photolithography could allow smaller patterns to be generated, with fewer yield-crushing defects. Better diffusion processes could improve chemical doping and reduce wafer damage. Epitaxy could produce better crystal layers, with fewer deformities. With novel materials and recipes to improve device stability and protection, oxidation could be refined. Contaminants could be more rigorously eliminated by cleansing of water, photoresists, acids, and gases. Larger, purer, and more perfect silicon wafers could be grown. Better metallization could provide more durable contacts and connections. Each aspect of the manufacturing technology could be enhanced to support steadily increasing miniaturization of transistors on microchips, to expand complexity and to improve yields. The power of Moore’s insight — still true a half century after his 1965 article — is that the revolution in electronics depended on improving existing silicon technology, not altering its essential character.

The simplicity of Moore’s belief allowed him to make the case for the future of microchips, electronics, and society. As chemical printing evolved, the economics of microchips would change. Over time, ever more complex chips would provide the cheapest electronics. To illustrate, he provided a plot:

The vertical scale shows the cost for making a transistor on a chip, with each increment representing a tenfold difference in cost. The horizontal scale shows the complexity of the microchip, as measured by the transistors it contains, with each increment representing a tenfold increase in complexity. The relationships represented by the curves are not linear but exponential: small changes have great effects. The three swoops illustrate the relationship between cost and complexity in 1962, 1965, and (hypothetically) 1970. In each case cost falls to a minimum and then rises with further increases in complexity; each successive curve is lower on the cost and higher on the complexity scale.

An astonishing change takes place: an 8-transistor chip, cheapest in 1962, changes to a 2,048-transistor chip, predicted to be cheapest in 1970.
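The arithmetic behind that jump is nothing more than repeated doubling. A minimal sketch in Python (the function name and the choice of language are mine, not anything in Moore’s article) reproduces both figures from the starting points given in the text:

```python
def transistors_at_minimum_cost(year, base_year, base_count):
    """Project annual doubling of the transistor count at which
    cost per transistor is lowest (Moore's 1965 trend)."""
    return base_count * 2 ** (year - base_year)

# 8 transistors were cheapest in 1962; doubling yearly gives 2,048 by 1970.
print(transistors_at_minimum_cost(1970, 1962, 8))    # 2048

# 64 transistors in 1965 project to roughly 65,000 by 1975.
print(transistors_at_minimum_cost(1975, 1965, 64))   # 65536
```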

Gordon combined the economics of integration with his philosophy of standard parts to assert that, in 1970, the transistor that could be made most cheaply would be on a microchip thirty times more complex than in 1965. The punch was in the tail: “The manufacturing cost per component can be expected to be at least an order of magnitude lower than it is at present.” In other words, standard microchips could make electronics ten times cheaper in five years. Moore and the industry at large could deliver, steadily, exponentially increasing electronics for the dollar. Revolution, indeed!

A second plot answered the question: “What will be the complexity for minimum cost over time?” Gordon’s answer, with a numerical prediction, strengthened the persuasiveness of his essay.

This time the horizontal axis is linear, showing each year from 1959 to 1975, while the vertical axis is exponential; equal increments represent the doubling of transistors on a microchip. Because Fairchild and its rivals had focused on minimizing the cost of electronics by investing in technology, “the complexity for minimum component costs has increased at a rate of roughly a factor of two per year.” Later this sentence would be seen as the first articulation of “Moore’s Law,” but the world’s attention would routinely rest on the “what” (the doubling of complexity) rather than the “why” (the minimization of cost by investing in advances in chemical printing in order to gain competitive advantage).

On a linear plot, Gordon’s graph would have been a hockey stick, typical of exponential growth.

Would the annual doubling trend continue beyond 1965? Moore’s second graph shows his affirmative answer — in 1975 the cheapest electronics would offer more than sixty-five thousand transistors to a microchip. In words, he put it dryly: “Certainly this rate can be expected to continue, if not to increase. The longer-term extrapolation is nebulous, although there is no obvious reason for stopping the curve before it intersects the top of the graph.” He closed with a quip as dry as the numbers themselves: “This curve was purposely plotted with a rather obscure unit as ordinate so that the logic of the extrapolation of the historical data might be appreciated without the confusion of the absolute numbers implied.”
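The contrast between the straight line on Gordon’s graph and the hockey stick it would become on a linear plot is easy to reproduce. Here is a small matplotlib sketch, under one assumption of mine: a starting value of a single component in 1959, chosen so that annual doubling lands at 2^16, about sixty-five thousand, in 1975.

```python
import matplotlib.pyplot as plt

# Annual doubling from one component in 1959 (starting value is an assumption,
# picked so 1975 reaches 2**16 = 65,536, matching the figure in the text).
years = list(range(1959, 1976))
components = [2 ** (y - 1959) for y in years]

fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(10, 4))

# On a linear axis the same data is the familiar hockey stick.
linear_ax.plot(years, components)
linear_ax.set_title("Linear scale: hockey stick")
linear_ax.set_xlabel("Year")
linear_ax.set_ylabel("Components per chip")

# On a log-2 axis it is a straight line, one doubling per year.
log_ax.plot(years, components)
log_ax.set_yscale("log", base=2)
log_ax.set_title("Log scale: straight line")
log_ax.set_xlabel("Year")
log_ax.set_ylabel("Components per chip (log scale)")

plt.tight_layout()
plt.show()
```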

In the remainder of his manuscript, Moore deployed skeptical questions to examine the “reasonableness” of 1975’s sixty-five-thousand-transistor chip. Could so large a circuit be made upon a single wafer? Yes, said Moore: there was plenty of room on an inch-diameter wafer to squeeze in sixty-five thousand transistors. The idea was just horrendously expensive in 1965; in another ten years, with improved chemical printing technology, it would be a different story. The crucial thing to remember was that “there is no fundamental reason why yields are limited below one hundred percent. Nothing exists comparable to thermodynamic equilibrium considerations, which often limit yields in chemical reactions.” To Moore, the physical chemist, perfect yield was simply a matter of massive investment. “Device yields can be raised as high as is economically justified. It is only necessary that the required engineering effort be committed.”

Then came a question about the power consumption of a functioning sixty-five-thousand-transistor chip. “Is it possible to remove the heat generated?” The concern was prescient. Four decades later, heat would become one of the semiconductor industry’s major worries. While Gordon foresaw this, he believed that the heat produced could be handled. Rather than glowing brightly like vacuum tubes, reengineered microchips would achieve an improved “power density,” offering ever greater speeds for less power.

Gordon closed his discussion with the observation that the huge design costs for such a complex chip must be minimized, either by amortizing the engineering “over several identical items” or by evolving flexible engineering techniques, “so that no disproportionate expense need be borne by a particular array.” This was his philosophy of standard products, honed in the earliest days of Fairchild Semiconductor. As with transistors, so for microchips: the best would be those achieving standard functions.

Moore submitted his manuscript to Electronics, where cuts and editing diminished his original clarity of exposition. Under a more awkward title, “Cramming More Components onto Integrated Circuits,” the piece appeared in the anniversary issue of Electronics on April 19, 1965.

[Photo: An inch-and-a-half-diameter wafer, with chemically printed microchips. Each patterned square holds the metal contacts and interconnections for a single microchip. There are approximately five hundred microchips printed on this wafer, each with dozens of transistors.]

Most of his key language made it into the published article, along with a cartoon indicating just how fantastic the idea of a home computer seemed, even to the editor of a main communication vehicle of the electronics community!

There is no evidence that the article made a splash at the time. It may or may not have been widely read, but it was not especially cited or republished.

Gordon’s published article in Electronics may have had little impact in the wider world in 1965, but it affected Gordon himself in a profound way. He had articulated his vision of a future that could be built. He now knew exactly where to go and took it upon himself to work to realize his vision and ensure that the prediction would come to pass. The understanding he had now fully achieved guided the rest of his career.

Through the later 1960s, Gordon expounded his analysis wherever he saw opportunity, in text and in oral presentations. He became as committed to persuasion as he was to developing the manufacturing technology itself: braving the limelight, trying to tame language (not his natural métier), developing rhetoric, and mastering presentation skills. Moore could diagram a sentence to perfection, but the art of persuasion depended on manipulating passions and emotions. It was not until the later 1970s that he began to make comments in his talks about the “sheer excitement” of his business. He was also, by then, perfecting the use of humor as a tool to disarm and engage his audiences.

Carver Mead — increasingly recognized as an expert in the field — began to take it on himself to promote Moore’s analysis. As time went on, Gordon’s numerical predictions came true. The doubling of circuit complexity, every year or so, slowly took on the name “Moore’s Law.” Even so, most discussions missed his underlying point that the doubling was not itself the fundamental dynamic. His breakthrough insight was that the pursuit of the best, lowest-cost electronics, motivated by economic competition, would necessarily create this doubling through an ongoing, extensive, and expensive social effort.

Excerpted from Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary, by Arnold Thackray, David Brock and Rachel Jones. (Basic Books.)

Cover image: courtesy Intel
