IMO the author has spent so much time in the corporate forest that he fails to see the trees, or feel the wind that bends them.
The classic Porterian barrier/opportunity analysis rests on one large assumption: identical externalities. Yet virtually all of Robbie’s examples violate this assumption, mixing together technology trends, demand tailwinds, supply shortages and the like. Robbie claims that the control/centralization delineation is what matters most, but in reality those factors are often secondary or even tertiary to ye olde supply/demand paradigm. To put it bluntly, Uber can totally afford to be a badly managed company, while a startup making buggy whips will never become a unicorn no matter how many RFCs it writes.
An even more glaring example of the power of market externalities is IoT. For anyone not working on Tasman Drive, IoT never ‘gained and lost momentum’: it simply never existed outside Cisco’s PowerPoint decks designed to persuade people to buy more networking hardware. Anyone who can find their butt with both hands can reason a bit about sensor networks and realize that (a) they are not the Internet, because 99% of their data processing is done locally, and (b) robots tend to have extremely low bandwidth requirements in the first place, so even when the number of sensors is large, the cumulative traffic they generate is very small. So even though we can supply the technology for sensors to communicate, the demand for a humongous robot-to-robot communication web simply does not exist.
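To make the bandwidth point concrete, here is a back-of-envelope sketch. All the figures are hypothetical assumptions, deliberately chosen to be generous to the IoT case:

```python
# Back-of-envelope estimate of cumulative sensor-network traffic.
# Every number below is a hypothetical, generous assumption.

SENSORS = 1_000_000          # a very large deployment: one million sensors
BYTES_PER_READING = 100      # a fat telemetry payload per reading
READINGS_PER_SEC = 1         # one reading per second per sensor

total_bytes_per_sec = SENSORS * BYTES_PER_READING * READINGS_PER_SEC
total_mbit_per_sec = total_bytes_per_sec * 8 / 1_000_000

print(f"{total_mbit_per_sec:.0f} Mbit/s")  # prints "800 Mbit/s"
```

Even a million chatty sensors fit on a single gigabit link, i.e. less traffic than a modest office LAN, which is why the ‘sensor Internet’ never needed to be an Internet at all.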
To summarize: for a technology to become critical, it must exist on the short-lived edge between roaring demand and imperfect but functional supply. A dry (or slowly trickling) supply chain simply does not work, as it cannot appeal to a critical mass of adopters.
So what do these laws of supply and demand mean for AI?
There might be a huge upside in demand (virtually any non-creative labor is a potential victim of AI), which excites many VCs. Unfortunately, the supply side is virtually non-existent. Today’s AI amounts to technological refinements of algorithms from the 1960s that make non-linear classifiers work ‘well enough’. As a result, we are nowhere close to general AI, and we are seriously behind in robotics, the flesh and blood of physical AI platforms. In fact, it can even be argued that the technologies and companies that threaten job markets the most (like Google or Uber) are not reliant on AI at all.
So it would not be a large mistake to state that today’s AI is restricted to a very small domain of applications, such as NLP or visual scene recognition. It is not destined for self-awareness (even at the cockroach level), and it is not ripe for most commercial applications (e.g. Level 5 cars) that appear to sit in the ‘low hanging fruit’ domain. Moreover, given the non-linear history of technology breakthroughs, we cannot even be sure that AI (at least in its current form) will not be mothballed once again after running out of immediately reachable applications (remember fuzzy logic?). This has happened before, and it might happen again.
So no, AI is not different from the previous technology waves.
What is different is merely the set of people who think they are witnessing the birth of a technology that will trump all others. This set naturally replaces itself every 20 years or so.