Vertical Dis-Integration, Roadmapping, and Bat-Signals

This post is by Cyrus Mody, Professor and Chair of the Department of History of Science, Technology, and Innovation at Maastricht University.

My first reaction when I hear people talking about the 4th Industrial Revolution is that I only have the barest sense of what is meant by the 3rd. It seems to me that the Owl of Athena has only just left its perch on the 2nd Industrial Revolution, so it’s surely a bit premature to talk about the 4th. But premature talk is perhaps one of the distinguishing features of the 4th IR.

So let me try to put myself on firmer ground by using something I know quite a bit about — Moore’s Law — to understand the transition from the 2nd Industrial Revolution to the 3rd, and then to speculate from there as to what might be going on in the purported 4th IR. Now, as you may know, Moore’s Law is the rule of thumb that the most profitable number of components on a commercial microchip doubles roughly every two years. The doubling period was one year in 1965, when Gordon Moore, a co-founder of a start-up called Fairchild Semiconductor, first articulated this observation and predicted it would continue for the next ten years or so. With modifications, it’s held true for a half-century, though it may be slowing down now.
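To make the rule of thumb concrete, here is a minimal sketch in Python. The function and its numbers are mine, not Moore’s; the 64-component starting point is the figure commonly cited from his 1965 article, used here purely for illustration:

```python
def moores_law(n0, years, doubling_period=2.0):
    """Components per chip after `years`, assuming counts double
    every `doubling_period` years (Moore's rule of thumb)."""
    return n0 * 2 ** (years / doubling_period)

# Moore's original 1965 formulation doubled every year: starting from
# roughly 64 components, ten annual doublings give about 65,000
# components by 1975, which was his famous prediction.
print(moores_law(64, 10, doubling_period=1.0))  # 65536.0

# The later, canonical two-year cadence over the same decade:
print(moores_law(64, 10, doubling_period=2.0))  # 2048.0
```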

Gordon Moore, Fairchild Semiconductor, 1965 — for me, that’s the fulcrum when the global economy began to tip from the 2nd Industrial Revolution to the 3rd. To understand why I think Moore’s Law is one of the important moments of transition from the 2nd to the 3rd, though, I have to first explain what I mean by the 2nd and 3rd IR. So let’s begin at the beginning: the 1st Industrial Revolution, as usually invoked, was the late 18th and early 19th century emergence of a mechanized manufacturing economy. The 1st IR’s iconic technology was the steam engine, used to run locomotives, steamships, textile factories. Its iconic fuel was coal, though as Thomas Finger has pointed out, really it was fueled by wheat, which fed factory workers.

The 2nd IR has slightly less consensus behind it, but in general it’s taken to have begun in the late 19th century and to have run until, well, I’d say we’re still in it (but certainly it hit some kind of wall in the late 1960s). Its iconic technologies were the telephone, the internal combustion engine, the steam turbine, the electrical power plant, the airplane, the automobile, radio, etc. Oil partially displaced coal, and giant line-and-division monopolies partially displaced older family-owned businesses. You could say that Alfred Chandler’s The Visible Hand is the romantic epic of the 2nd IR firm.

And this period produced some gigantic Chandlerian companies. For instance, AT&T, the American telecommunications monopoly, employed a million people at its peak. These mega-corporations sometimes grew through diversification. Famously, the American electronics giant RCA owned rental car and frozen food subsidiaries, while its competitor General Electric was in everything from jet engines to television broadcasting. But 2nd IR companies also often grew through vertical integration, so that every piece of their operations, from raw materials to finished products, could be done in-house. Whether it counted as diversification or vertical integration was itself debated, but these companies often included units devoted to scientific research. The classic 2nd Industrial Revolution companies possessed giant corporate laboratories churning out discoveries. Many of those discoveries were given away, so that firms could claim to be operating for the public good and thereby legitimate their monopolies. A particularly notable example is the transistor, the building block of modern computers, which was invented at AT&T in 1947 and quickly shared with other companies to create what became the global semiconductor industry.

By 1965, there were (to a first approximation) two types of firms in that industry. On the one hand were established 2nd Industrial Revolution giants like AT&T, RCA, Westinghouse, IBM, General Electric, and Philips. On the other hand were smaller, usually younger companies trying to diversify their way into solid-state electronics as a way to join the giants: e.g., Fairchild Semiconductor (a subsidiary of Fairchild Camera), Texas Instruments (an oil exploration firm), Motorola (car radios), Sony. Naturally, there were a few contenders which didn’t fit this rough categorization (e.g., Philco, which wasn’t quite a giant but was an old, diversified company), but this broad distinction covered most of the industry.

Despite — or perhaps because of — the odds against them, the young start-ups were remarkably innovative. For instance, the integrated circuit — the basic way of making transistors in almost every mobile phone and computer today — was simultaneously invented at Texas Instruments and Fairchild. But small companies also benefited from the leakage of research from the giants. Firms like AT&T which built everything in-house took a long time to get new inventions into products. Firms like Fairchild which built components for other firms’ products just dumped stuff on the market as quickly as possible to ensure cash flow. So if a new idea came out of AT&T or IBM’s research labs, it was often commercialized by Fairchild or TI first.

However, small companies that make money soon suffer from big-company problems. As Ross Bassett and Christophe Lécuyer have shown, that’s what was happening to Fairchild at exactly the moment when Gordon Moore first articulated Moore’s Law. Its research lab was starting to go off in crazy directions and produce innovations that the manufacturing arm couldn’t turn into products, while the manufacturing arm was encountering problems which the research lab wasn’t interested in solving. Everyone at Fairchild with a good idea started to leave to found new firms. Silicon Valley people today speak with reverence about these “Fairchildren” — but losing talent means something’s not working.

So in 1968 Gordon Moore and Robert Noyce formed a new company, Intel — one of the first 3rd Industrial Revolution companies. Not that they called it that, but we can use that label in retrospect. What made Intel a 3rd Industrial Revolution company? Well, it was explicitly designed to be the opposite of a 2nd IR company: where a 2nd IR firm did almost everything in-house, Intel would do next to nothing in-house. Moore and Noyce decided, for instance, that Intel wouldn’t have a separate research lab. Instead they kept in-house research tied to the manufacturing line, while outsourcing long-range research to companies like IBM, universities, government labs, and industrial research consortia.

Over time, other firms adopted that model, and shed more and more activities to outside vendors. For instance, where 2nd IR companies like IBM and AT&T developed a lot of manufacturing equipment in-house, Intel and its peers instead relied on vendors to develop new tools. Even manufacturing itself was eventually outsourced. Today, Intel is actually an outlier in doing its own manufacturing; most firms outsource manufacturing to so-called “foundries” such as TSMC. Almost every function that went into making a product at a place like IBM is now chopped up and divided over dozens of firms. Apple is probably the paradigmatic firm of this type; very little that’s made by Apple actually goes into an iPhone — instead, the company’s role is to coordinate a network of vendors who each contribute a piece to the whole. Nor is this shift limited to electronics companies: think about how the big movie studios work today versus how they worked until, say, the release of Jaws and Star Wars; or how the big pharmaceutical firms work today versus how they did before the advent of biotech.

The big question for 3rd Industrial Revolution firms is: how do you know what to do? In a 2nd IR company, information moves up and down a chain of command. Everyone is supposed to know what their unit should do to move the company forward. You know where the components of your products come from — you make them yourself. You don’t have that luxury in a 3rd IR company — everyone operates in a sea of other companies swimming in every direction. How can you ensure that you get the parts, the technologies, the knowledge that you need when you need them? How can you ensure that you and your customers and vendors are on the same trajectory?

The answer, I think, is something like a bat-signal. You know how those work, right? You can’t rely on the police — in-house security is too slow and too conventional and upper management doesn’t know what it’s doing, all classic 2nd Industrial Revolution problems. So you outsource protecting the city to a private contractor, Batman. But no one knows who Batman is, so you can’t contact him directly to tell him what you need. Instead, if there’s trouble, you make him aware of it by beaming a signal into the sky that everyone can see and react to. That’s the 3rd Industrial Revolution way — no one tells Batman what to do, but he can see what you need him to do.

Here we return to Moore’s Law. When Gordon Moore first articulated it in 1965, it was just an observation about his industry that other people thought sounded right. People only started to call it Moore’s Law, and fold it into business planning, in the ’70s. They started to say things like, “transistors are this size today, what do we need to do to be ready for them to be half that size in two years?” At first this was an ad hoc way of organizing work, but in the ’80s firms began using Moore’s Law as a pacesetter to keep everyone in a supply network synchronized. By the end of the ’80s, Moore’s Law was formally enshrined in industrial “roadmaps” which operate just like bat-signals. Everyone in the industry or wanting to be in the industry can look at the roadmap and know just where their competitors and vendors will be two, five, ten years from now, and therefore what they need to do to survive over that timeframe.
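As a toy illustration of how a roadmap functions as a bat-signal, here is a short Python sketch. The years and feature sizes below are invented for illustration (they are not actual ITRS/IRDS targets), and the point is only structural: every firm consults the same public table rather than waiting for orders from anyone.

```python
# A toy roadmap: target feature size (nanometers) by year.
# These numbers are invented, not real industry targets.
ROADMAP = {2025: 2.0, 2027: 1.4, 2029: 1.0}

def target_node(year):
    """Return the feature size the roadmap demands by `year`.
    Every vendor and customer reads the same table, so the whole
    supply network stays synchronized without direct orders."""
    milestones = sorted(y for y in ROADMAP if y >= year)
    if not milestones:
        raise ValueError(f"roadmap does not extend to {year}")
    return ROADMAP[milestones[0]]

# A lithography vendor planning two years ahead:
print(target_node(2027))  # 1.4, the node its tools must support
```

Real roadmaps are thick documents rather than three-entry tables, but the coordinating logic is the same.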

Moore’s Law has survived much longer than anyone thought was physically possible, and for that reason semiconductor roadmaps have come to be seen as incredibly successful organizing devices. Thus, lots of other industries (and government agencies aiding those industries) now think roadmaps are the way to go. Every “emerging technology” gets its roadmap these days. Yet few people think these other roadmaps have worked as well as the semiconductor ones have. There are lots of reasons for that: for one, Moore’s Law keeps getting tweaked so that its apparent success is somewhat artificial; for another, the semiconductor industry is enormous and wealthy and seen by lots of countries as critical to economic and/or national security, so there’s lots of will to make that industry’s roadmaps work. Whatever the reason, roadmaps outside semiconductors just don’t seem to have the same traction. Yet other industries still have the same need for bat-signals.

So what to do? Well, for the past 20 years — especially the past 10 — we’ve seen the rise of powerful companies which aren’t 2nd Industrial Revolution firms but aren’t quite like most 3rd IR firms either — so maybe they’re the vanguard of a 4th. Maybe. If that’s really a robust way of putting things, then it’s important to note that this revolution will happen the same way as the rest: by building on the technological, social, administrative, cultural, and financial possibilities opened up by previous revolutions. The 2nd IR required the steam engines of the 1st; the 3rd required the transistors and computers invented by the 2nd; the 4th requires the networked communications and lean business models made possible by the 3rd. Companies like Amazon, Facebook, Uber, Twitter, and Google all assume that information moves fast, and in every direction at once, and that innovation is constant and ubiquitous thanks to advances made in the 3rd IR. But these companies have also returned to the 2nd IR model of expanding beyond their core competence — think about all the Google projects you know about, plus all the ones that we can be pretty sure they haven’t told us about yet, or all the different things Amazon does, from making TV shows to cloud computing to, still, selling books.

These companies still use bat-signals, but not the dry, technocratic roadmaps that 3rd IR companies employ. Instead, their leaders are constantly in public, beaming visions of the future into the sky so everyone knows what they want without their having to issue direct orders. Presumably they do issue orders within their companies, but the reason they project these bat-signals is that their desires are just as (probably more) likely to be met by external actors whom they can later buy out or form collaborations with. They’re also spurring their in-house teams by threatening them with external competition. If Jeff Bezos tells the world Amazon will make deliveries with drones, he’s surely got an in-house unit developing those drones, but he wouldn’t be sad if some start-up came to him with a better solution, because then he could buy that start-up out. That’s how these companies work.

Bat-signals are also necessary for these companies because they are so reliant on a steady stream of investment. Think about Amazon, which everyone wants to invest in but which still doesn’t really turn a profit, or Uber, which is basically a struggling taxi company but has a valuation larger than the GDPs of several EU members. You have to project a truly spectacular vision of your company’s place in the future for people to invest in companies that have no near-term plans to, you know, make money.

That’s perhaps why it doesn’t seem to matter that many of these bat-signals aren’t very tethered to a world in which actual people live. Most bat-signals projected by the captains of the 4th Industrial Revolution will turn out to be wrong. Some of them we know will be wrong just from back-of-the-envelope calculations (one version is sketched below): whatever Elon Musk says, we can be pretty certain that it is physically — not to mention economically and politically — impossible that more than the tiniest fraction of our species will ever walk on Mars. But right or wrong isn’t the point of a bat-signal — it’s a guide to action, not a declarative statement about reality. And to understand how it guides action, we need analysis like that provided by historians and sociologists of science and technology. For instance, when Jeff Bezos or Richard Branson or Elon Musk push visions of space tourism and (at least in Musk’s case) space colonization, we have only to read people like Patrick McCray and Asif Siddiqi to learn that they’re carrying forward ideas promoted by people like Wernher von Braun in the ’50s and Gerard K. O’Neill in the ’70s — ideas predicated on (A) siphoning a very large chunk of global GDP to develop space technology, and (B) securing assent from 99.9% of the human race to help the remainder inhabit a libertarian paradise in orbit.
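For the curious, here is what one such back-of-the-envelope calculation might look like, as a Python sketch. Every number below is a deliberately generous, clearly labeled assumption of mine, not a sourced projection:

```python
# Back-of-the-envelope: could more than a tiny fraction of humanity
# ever walk on Mars? All figures are illustrative assumptions,
# chosen to be generous to the Mars-settlement vision.
world_population      = 8e9
fraction_to_mars      = 0.001  # "a tiny fraction": 0.1% of humanity
passengers_per_flight = 100    # far beyond any crewed vehicle to date
flights_per_year      = 200    # roughly today's total orbital launch rate

people_to_move = world_population * fraction_to_mars     # 8 million
flights_needed = people_to_move / passengers_per_flight  # 80,000
years_needed   = flights_needed / flights_per_year
print(f"{years_needed:.0f} years")  # 400 years
```

Even granting absurdly generous capacity, moving a tenth of one percent of our species would take centuries, which is the sense in which the bat-signal fails simple arithmetic while still succeeding as a guide to action.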

Again, accuracy isn’t the point of these predictions. The point is that the captains of the 4th Industrial Revolution get to put those bat-signals up there so that the rest of us can see that’s a direction that other people might go. The bat-signal creates followers, and it creates people who are left behind. Now, some of these bat-signals are relatively benign; if Jeff Bezos thinks drones will deliver packages, well, there are worse things for drones to do. That’s not a future I mind. But other captains of the 4th Industrial Revolution are beaming up much more dubious signals. For instance, Peter Thiel’s penchant for seasteading looks to me like a ploy to destroy the idea of a public good — i.e., it promotes the aim of siphoning taxable wealth onto artificial islands populated by the rich and beautiful and leaving nation-states to starve.

Thiel’s an extreme example, but he’s hardly alone in projecting visions of a future which short-circuits democratic modes of governance. For instance, as the Guardian has noted, it’s a bit much when Mark Zuckerberg talks about a Universal Basic Income instead of, you know, paying his share of taxes and supporting a strong social safety net of the kind that we know works pretty well! Or when Richard Branson talks about a gang of rich guys paying for geoengineering schemes to prevent global warming, I just think, “Or, you could promote a carbon tax and invest your billions in a political organization which could seek democratic means for making carbon taxes a reality!”

I understand their urge to play both Batman and Commissioner Gordon — i.e., to call for help and then come to their own (and everyone else’s) rescue. But we should ask ourselves whether Gotham wouldn’t be safer with a functioning police force of its own — i.e., maybe our innovation system, and our politics more generally, would be healthier and we would all be better off if we had democratically-accountable ways of identifying social problems and a robust public sector to shepherd solutions into being, instead of relying on a small number of 4th Industrial Revolution firms to do it for us.

That was the motivation for groups like Science for the People and government efforts such as the Office of Technology Assessment and the National Science Foundation’s Research Applied to National Needs program (RANN) in the 1970s (and note that Science for the People has recently been reborn). Today, it’s the impulse behind projects like the “national science agenda” which the Dutch government sponsored a couple of years ago. I’m somewhat skeptical that such efforts do more than come up with an agenda which the organizers pre-cooked, but at least there’s some smidgen of democratic legitimacy there. So we have examples of how this could work, from both past and present. One way or another, we need to level the playing field — i.e., to figure out some way for ordinary people to beam their own bat-signals into the sky, so that each of us has some means of guiding action. The same technologies which have given rise to Bezos and Zuckerberg could, potentially, make that possible.

In any case, I think we stand in the same place relative to the 4th Industrial Revolution as our forerunners did in the years around 1970 relative to the 3rd. Those were the years in which firms like Intel arose via a critique of the 2nd Industrial Revolution. But there were lots of other critiques around in those years as well, from Ted Nelson’s to Mario Savio’s to Fritz Schumacher’s. Some of those other critiques — e.g., Nelson’s — eventually made common cause with the 3rd IR. Think about Apple’s 1984 ad, for instance. But the others, like RANN, OTA, and Science for the People, were marginalized. Today, the 4th IR is being put forward as a critique of the 3rd, but plenty of other critiques are available. Some are rapidly being appropriated by the Thiels, Musks, and Zuckerbergs of the world. But let’s try to ensure that the critiques which imagine a world the rest of us might actually want to live in have a chance.
