What is the fourth industrial revolution?

Kyle Libra
Apr 5, 2017

The first industrial revolution was the movement toward new means of manufacturing from the late 1700s to the mid 1800s. Inventions like the steam engine dramatically reshaped entire industries, ushering in staggering increases in productivity along with significant societal changes. When 90% of the population no longer needs to tend the fields all day just to produce enough food for society to eat, that’s a big deal.

This was closely followed by the second industrial revolution, which featured the mass adoption of these practices and the refinement of many of the key inventions that kicked off the initial revolution. These two eras mostly seem to be split up for easier historical categorization: phase one was initial adoption, phase two was mass adoption.

The third industrial revolution was the adoption of computers, first in manufacturing processes and then in all business practices, beginning in the 1980s. Just as the steam engine had done 200 years prior, the computer transformed everything it touched. Productivity skyrocketed, along with dramatic increases in the standard of living.

To quote a16z’s Benedict Evans: “Everyone here is a cell in a spreadsheet. The building is an Excel file. Every Monday they press F9 and recalculate.”

So then, what is the fourth industrial revolution? The rise of artificial intelligence and the widespread automation of work. It’s generally agreed that the transformation the world saw in the previous three industrial revolutions is about to happen again, but this time it will be even more dramatic than before. The primary disagreement is not about what, but when: how quickly is this actually going to happen? Much of the discussion also focuses on what this does to society and how we should try to prepare for it, if that’s even possible.

What does the world look like if this graph again doubles, but in half or even a tenth of the time? (Source: https://ourworldindata.org/economic-growth)

The most optimistic viewpoint goes something like this: applications such as Microsoft Excel boosted an individual worker’s productivity to the point where entire rooms of workers were no longer necessary, but that didn’t happen overnight, and neither will this. This new phase will take time and will really only amount to the knowledge worker getting much better tools. Maybe over time some positions will be eliminated, but that’s the price of progress. Just as some jobs didn’t exist ten years ago, new jobs will emerge in the future. While it might appear that some of these changes will happen suddenly and without warning, taking a step back would reveal the twenty steps that slowly got us to this point.

The most pessimistic point of view is far more interesting to consider. On this side of the argument, first we will see mass global unemployment as robots replace all human workers, followed by artificial intelligence that enslaves and destroys humanity. Imagine the Terminator movies, but in real life. Sam Harris has a great TED Talk on this subject, and Vanity Fair just profiled Elon Musk and his efforts in this area. It’s not just fringe thinkers who earnestly believe this is where we are headed; Bill Gates and Stephen Hawking are among the most outspoken critics of the rush to create AI.

So where does that leave us? Is this not worth worrying about or is humanity doomed? The truth is likely somewhere in between.

On one hand, Treasury Secretary Steve Mnuchin was recently quoted as saying “it’s not even on our radar screen…. 50–100 more years.” On the other, companies like Postmates are experimenting with deliveries via driving robots, and Amazon is actively trying to do the same with drones. For a government official, saying you see widespread job loss as a short-term likelihood would be irresponsible to say the least. And one robot being tested in the field is a far cry from every single delivery job at every single company being replaced by a robot tomorrow. Somewhere in between lies the truth.

There are studies like one from PwC saying 38 percent of U.S. jobs could be lost to automation in the next 15 years. Meanwhile, the World Bank estimates that two out of every three jobs in the developing world are susceptible to automation. At the World Economic Forum there are discussions on the need for a universal basic income to balance out the inevitable job loss from automation. In Kenya, there’s a group running an actual beta test of the concept.

For every doomsayer, there’s an eternal optimist trying to make the world a better place. No one has a crystal ball to tell them exactly what’s going to happen next. It’s easy to cherry-pick certain anecdotes and examples to support either case. When a tweet can instantly register as the number one news story across the entire world, it’s easy to lose perspective. Twenty years removed from the arrival of widespread broadband access, AOL still has millions of subscribers paying for dial-up, many of whom have yet to even discover Twitter. In trying to predict where advances in artificial intelligence will lead us, I’m reminded of a William Gibson quote: “The future is already here — it’s just not very evenly distributed.”

All of this is happening in the shadow of an increased spotlight on income inequality and a presidential election that suddenly shone a light on a stark divide in economic opportunity even within the United States. This angst and uncertainty around technological change isn’t new. Cars replacing horses didn’t usher in the apocalypse, and neither did spreadsheets replacing accountants. While it feels like this time is different, that’s probably what everyone felt in the previous industrial revolutions.
