The Arc of Innovation

How Iowa corn, the germ theory, and Google Glass help explain how technology changes our lives

IN THE EARLY 1950S, A SECRET FORMULA SPREAD from the front lines of World War II to rural Iowa — and changed the way we think about technology and innovation.

2,4-Dichlorophenoxyacetic acid, or 2,4-D, was a synthetic plant hormone developed by British scientists during the war to boost production of food crops. A chemist at Iowa State University heard about it and thought it might be useful to local corn farmers as an herbicide. Using 2,4-D on weeds might cause the pests to, in effect, grow themselves to death.

The first test of 2,4-D was by a farmer — Farmer A — who lived not far from the university in Story County, Iowa, and was an acquaintance of the scientist.

That first year Farmer A enjoyed a banner crop of corn — a yield keenly observed by his neighbor, Farmer B. Soon, Farmer B began using the weed killer. When he, too, saw a better yield, word started to spread. The next year, four more Story County farmers adopted the herbicide, and by the following year, five others had as well. Within a decade, all thirteen farmers in the area had adopted the herbicide. It was a microcosm of the Green Revolution in action.

Iowa corn farmers may seem a long way from today’s technology vanguard, but the spread of 2,4-D was closely observed by Everett Rogers, a sociology student at Iowa State. Rogers was researching how ideas propagate, a process he famously described as “the diffusion of innovations.” Though the story of Iowa corn farmers is mostly forgotten, the way Rogers described the diffusion of 2,4-D has become indivisible from the way that technological innovation is discussed today. He described that first farmer, Farmer A, as an “early adopter,” a term that is often used to describe those who wait in line overnight at the Apple store for the next new gadget. And the farmers who waited years to adopt? Rogers described them as “laggards,” which sounds pejorative but in fact nicely describes those of us who bought the 4S when Apple introduced the iPhone 5. We laggards wait for the early adopters to work out the bugs.
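Rogers’s adopter categories trace the familiar S-shaped adoption curve: a few innovators, a surge of imitators, and then the holdouts. For readers who want to see the shape of that curve, here is a minimal sketch using the Bass diffusion model, a standard way of formalizing this dynamic; the coefficients p and q below are illustrative assumptions, not figures from Rogers’s Story County research.

```python
# Illustrative sketch only: a discrete-time Bass diffusion model, often used
# to formalize the S-shaped adoption curve Rogers described. The population
# of 13 farmers comes from the story above; the p and q coefficients are
# hypothetical, chosen simply to show the shape of the curve.

def bass_adoption(population, p, q, years):
    """Return cumulative adopters at the end of each year."""
    adopters = 0.0
    history = []
    for _ in range(years):
        remaining = population - adopters
        # p drives spontaneous ("innovator") adoption;
        # q drives imitation of those who have already adopted.
        new_adopters = (p + q * adopters / population) * remaining
        adopters += new_adopters
        history.append(adopters)
    return history

if __name__ == "__main__":
    for year, total in enumerate(bass_adoption(13, p=0.05, q=0.6, years=10), start=1):
        print(f"Year {year:2d}: ~{total:4.1f} of 13 farmers")
```

Run it and the yearly totals creep up slowly at first, accelerate in the middle years, and flatten as the last laggards come around, roughly the pattern the Story County farmers followed.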

Rogers’s explanation of how technology spreads remains the definitive articulation; it still informs how we think about technology today. But as useful as it is, Rogers’s construct is in some ways incomplete, because the impact of a technology isn’t measured only by how it spreads from one group to another — from scientists to farmers, say.

The full arc of innovation takes into account the essential role of discovery, on one end of the spectrum, as well as the impact on society, on the other.

It’s an even broader — and more challenging — process than the one that Rogers lays out.

The full arc of innovation begins with the invention itself — what had to happen back in some laboratory in the UK to create 2,4-D before the herbicide made its way to Iowa. This larger arc of innovation extends to what happens after the technology is first deployed, when the innovation resonates not only on an industrial scale but also on a social one. In the case of 2,4-D, its social impact can be measured by the Green Revolution of the 1940s through the 1960s, when industrial agriculture made farming dramatically more efficient and drove the price of food down precipitously, transforming our access to cheap food (as well as, perhaps, leading to the explosion in obesity in the U.S. and other countries). This is the full arc of innovation: from laboratory, to technology, to industry, and then to society at large.

For me, the seminal example of this larger process is the germ theory of disease. The story starts off in 1875, under the microscope of Robert Koch, then an anonymous doctor in rural Germany. Through sheer determination and tenacity, Koch created a chain of evidence that proved that germs could cause human disease. His discoveries defied centuries of medical orthodoxy and paved the way for modern medicine. But first, they had to be accepted by his fellow scientists.

Koch wasn’t the first to claim that germs caused disease; Louis Pasteur usually gets the glory here. But no one, not even Pasteur, had been able to muster the evidence to turn the radical theory into broadly accepted fact. It wasn’t clear whether germs caused disease or were merely its result. Koch, though, was the first to craft an experimental method that made the case for causation all but irrefutable. He had to invent new scientific methods in order to prove his science.

An illustration from an English newspaper of scientist Robert Koch at the peak of his germ-fighting fame.

Koch’s germ-theory discoveries — first of the bacterium that causes anthrax, then, most triumphantly, of the bacterium that causes tuberculosis, the greatest killer of his day, and later of the cholera bacterium — weren’t just good science. They were a quintessential scientific revolution, a historical embodiment of Thomas Kuhn’s 1962 masterwork, The Structure of Scientific Revolutions. Kuhn — who gave us the phrase “paradigm shift” to explain the impact of a new scientific understanding — is the classic navigator in plying the waters of scientific discovery. But not even a scientific revolution amounts to true innovation. Science is only the beginning; true innovation must be felt by society, not by scientists alone.

So we move from Kuhn back to Rogers. Robert Koch’s discovery would have to make the leap from a scientific breakthrough to technological and industrial impact. That leap took place in operating rooms around Europe. Koch’s evidence confirmed the hunch of England’s Joseph Lister and others who had begun to pioneer the sterilization of surgical instruments and the scrubbing of operating rooms as ways to reduce post-operative infections. Crazy as it is to imagine today, these practices were considered subversive in the late nineteenth century; Koch’s discoveries would provide the evidence needed to advance the cause of antiseptic surgery.

And ultimately, to qualify as a true innovation, Koch’s germ theory would have to resonate on the social landscape. This is a much higher bar, especially in the nineteenth century, when ideas traveled more slowly and science itself was largely inconsequential to most people. Breakthroughs like Newton’s physics and Darwin’s evolution didn’t impact the day-to-day lives of ordinary people. And so it seemed to be with germs. What could microbes and bacteria — “imps of the scientific imagination,” one wag called them — possibly have to do with people’s daily routines?

Quite a lot, it turned out. The germ theory didn’t just change medicine; it changed society, because it demanded a radical shift in everyday habits. If germs indeed existed, then people would have to wash their hands, bathe, and — horrors! — stop spitting in public. The old habits weren’t just filthy; thanks to Koch, they were now clearly understood to spread disease. And these diseases weren’t just bothersome; they killed by the millions, casting a pall on the cities of Europe and the slums of New York alike. Before the germ theory, they were viewed fatalistically, as unexplainable, unavoidable passages in human experience. But the germ theory made them something that humanity could fight against. They could be avoided. And this innovation led to the spread of many others: soap, for instance. In the United Kingdom in 1801, the citizenry used about three and a half pounds of soap per person. A hundred years later, with the germ theory established fact, sales had soared to 15 pounds per person — more than four times the amount.

The germ theory was perhaps the most significant innovation in human history since the development of agriculture.

Few breakthroughs have had such a profound impact on the quality and quantity of human life.

This arc of innovation is something that has played out many times since Robert Koch. The definitive example of our day is the computer, which was invented in the 1940s and which IBM president Thomas Watson apocryphally believed wouldn’t amount to much more than a marginal technology. “I think there is a world market for maybe five computers,” he supposedly said.

Watson’s prediction is often quoted in order to mock his prognostic sense. But when plotted against the arc of innovation, it’s fair to say that Watson was challenging the idea that the technology could make the leap from one level of innovation to the next, from scientific to industrial applications. The same can be said of Digital Equipment Corp. founder Ken Olsen, who made a similarly foolish-in-retrospect claim a generation later, in 1977. “There is no reason anyone would want a computer in their home,” he said. Easy to laugh now, but Olsen’s only sin was that he was invested in the industrial impact of computing; he failed to recognize that the technology might leap to the next stage of innovation and impact society as well. As it happened, that social shift was happening in Silicon Valley, at the meet-ups of the Homebrew Computer Club (where Apple Computer was born). It would take another generation, into the 1990s and 2000s, before computers became commonplace in the home.

In my days editing at Wired, we used this chain — discovery-technology-industry-society — as an internal shorthand to judge at what stage an innovation stood. We understood innovation as most expert observers of technology do: as a process, and more or less a linear one. But it’s more than that.

Innovation is, in fact, a progression from one stage to the next, higher one.

That is to say, every step is increasingly difficult: proving something under a microscope is one thing; convincing your peers is hard; turning it into a tool harder yet; introducing that tool into industry harder still; and making that discovery matter to the average citizen, well, that is the hardest step of all. But just as the transitions are more difficult, the rewards and impact are greater as well.

This framework also helps us see where — and sometimes why — various innovations have stalled out. Robotics has long captured the public’s imagination, but robots have been much more useful in industrial contexts (manufacturing, surgery, the military) than in any genuine social sense. Even those technologies developed with the explicit objective of social impact still have to work their way through the framework. For all the chatter about Google Glass, its true utility might be more suited for use by physicians or in job training than by the fashion-minded.

That’s not to say that there aren’t many legitimate discoveries with profound industrial implications — and potentially lucrative ones, too. But true innovation is a harder game. It requires a bit of serendipity, and it requires that a discovery matter enough, and resonate deeply enough, either to make people’s lives easier or to make the benefit of changing our lives unmistakably clear. In truth, innovation doesn’t happen all that often, perhaps once a decade or once a generation. But that, of course, is what makes it so special when it does happen.