Self-Driving Cars and the Machine Learning Conundrum

Lessons from leeching and proprietary ecosystems

Jeremy Liu
The Pointy End
8 min read · Dec 30, 2016


Image source: Uber

Almost every technology story you read about the ‘future’ mentions AI and machine learning. The development of artificial intelligence could be even more transformative than the smartphone revolution, once again shifting the balance of power upstream to new product components, away from apps and platforms and into the hands of firms that can aggregate and interpret data most effectively. It’s no surprise that the race for AI is technology’s next great paradigm.

I use the term upstream to reference a shift in the product components from which value is derived, but also to highlight that this shift in power is a recurrent phenomenon which aligns closely with the occurrence of ‘leeching’, described in my last story — ‘Leeching’ and the Strategy of Compatibility’. Put simply, leeching is a process through which firms utilise various platforms to deliver services but, because of their multi-platform compatibility, do not increase the defensibility of the individual platforms they use. Over time, leechers exercise unique capabilities derived from their multi-platform strategies to compete against the firms they initially exploited. For example, Snapchat, which uses mobile operating systems to deliver its services, is now positioning itself as a competitor to digital imaging devices and native smartphone cameras by introducing features such as a permanent camera roll and hardware products like Snapchat Spectacles.

In this discussion, the definition of platforms is not limited to software systems, as the word is often used. Here, a platform can be any entity on which another component in the product stack relies; a platform can therefore include hardware. A computerised product can typically be reduced to three fundamental components — hardware, platform, and apps. Hardware sits at the bottom of the stack as the most foundational element; operating systems, applications, and ‘soft’ components such as the cloud sit atop the stack. Thus, as the illustration below visualises, hardware is the platform for operating systems, operating systems are platforms for apps, and apps can be platforms for specific services such as social networking or multimedia production and consumption. The illustration also represents the flow in which leeching can occur throughout the stack.
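The layered relationship described above can be sketched as a toy model. The layer names and helper functions here are my own illustrative assumptions, not something from the original illustration:

```python
# A toy model of the product stack: each layer is the platform
# for the layer immediately above it. Layer names are assumed
# for illustration only.
STACK = ["hardware", "operating system", "apps"]

def platform_of(component):
    """Return the layer that `component` sits on top of (its platform)."""
    i = STACK.index(component)
    return STACK[i - 1] if i > 0 else None

def leech_target(component):
    # Leeching occurs between immediate increments in the stack:
    # a component competes against the platform directly beneath it.
    return platform_of(component)

print(leech_target("apps"))              # apps leech off the OS
print(leech_target("operating system"))  # the OS leeches off hardware
```

The point of the sketch is simply that leeching is defined relative to the layer directly beneath a component, which is why hardware, at the bottom, has nothing left to leech from.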

As noted in my previous story, many smartphone applications — Snapchat and Messenger most notably — are leeching off and ‘overthrowing’ their host platforms, attempting to overtake native smartphone photography and messaging respectively. However, leeching strategies can occur between OS and hardware components too. A great example is Microsoft during the heyday of Windows. As the pre-eminent PC operating system, Windows became available on the hardware of firms such as Sony, HP, Dell, Acer and Lenovo. The pervasiveness of Windows and its licensing model eliminated the defensibility of hardware businesses, effectively reducing PC hardware to a low-margin commodity. All the while, Microsoft did the least work and made the most money, and in recent years it has introduced its Surface line of PCs, pitting itself against the very firms it relied on for decades.

Although leeching specifically occurs between immediate increments in the stack, subsequent strategies to overtake platforms can have impacts that surpass immediately adjacent components. For example, Google, which offers many multi-platform applications such as Google Now, Allo, and Google Home, now sells hardware products such as the Google Pixel smartphone, the Google Home speaker, and Chromecast. As Ben Evans notes, smartphones have been swallowing physical products such as calculators, alarm clocks, and cameras for years, but recently many smartphone-first features have been unbundled into physical products. Google now finds itself competing against Apple and the very hardware firms, such as Samsung, which have, through their support of Android, been instrumental in enhancing Google’s competencies, particularly in data aggregation and machine learning. Although Google’s fundamental business model rests in the cross-platform application layer of the stack, it is able to leech off multiple components at lower levels. As such, this model suggests that the lower a component sits in the stack, the less ‘power’ it tends to have, and the more limited its potential for value generation.

History attests to this. Throughout most of the 20th century, before networked software, hardware was the most important (and only) component in the product stack. This played into the hands of firms such as Sony, which dominated consumer technology with a knack for design, miniaturisation, and engineering. However, as networked software and computing applications developed in the late 20th century, this power shifted upstream to OS platform vendors such as Microsoft. These days, although hardware remains the most foundational layer of the stack, it is essentially a commodity; it’s no surprise that Samsung appears to be the only Android OEM that actually makes money.

So what does this mean for machine learning?

The history of artificial intelligence is fascinating. This story in The New York Times recounts much of the history of AI development in great detail, tracking the progression of AI from a rules-based pursuit into something we now call ‘machine learning’. Simply, machine learning is an AI paradigm that allows artificial intelligence to draw conclusions by analysing large swathes of data. The example most commonly used to describe machine learning is how AI technology can identify animals, like a dog. Traditionally, rules-based AI would inscribe a definition of a dog, such as four legs and two ears, and trust the technology to recognise the animal based on those rules. Machine learning, on the other hand, simply gives the AI perhaps a thousand pictures of a dog and has it find the similarities and characteristics on its own. The advantages of machine learning are its accuracy, but also its natural ability to improve over time when given the opportunity to analyse more and more data at scale.
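The contrast between the two paradigms can be sketched in a few lines of toy code. The feature choices and the nearest-centroid ‘learner’ below are my own illustrative assumptions, not a real machine learning system:

```python
# Toy contrast: rules-based AI vs learning from examples.
# Features are assumed for illustration: (legs, ear_length_cm, weight_kg).

def rules_based_is_dog(legs, ears):
    # Rules-based AI: a hand-written definition of "dog".
    return legs == 4 and ears == 2

def train(examples):
    # "Learning": infer each label's typical features by averaging
    # its labelled examples (a nearest-centroid sketch).
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0] * len(features))
        sums[label] = [s + f for s, f in zip(prev, features)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(model, features):
    # Pick the label whose average features are closest to the input.
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

examples = [
    ((4, 10.0, 25.0), "dog"), ((4, 12.0, 30.0), "dog"),
    ((2, 2.0, 1.5), "bird"), ((2, 1.5, 1.0), "bird"),
]
model = train(examples)
print(classify(model, (4, 11.0, 28.0)))  # -> dog
```

The rules-based function only ever knows what its author wrote down, while the learned model gets sharper as more labelled examples are fed into `train` — which is exactly why access to data at scale matters so much in what follows.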

This matters because the lessons of the leeching continuum tell us that ecosystems are more defensible when they’re siloed. Siloed platforms develop first-party applications with deep and useful integration, and limit the infiltration of cross-platform third-party applications. If this strategy sounds familiar, it’s because Apple has been executing it with iOS for years. As a hedge against Google, Apple developed its own mapping service, Apple Maps, in 2012. Despite being mostly worse than Google Maps, Apple Maps is, amazingly, three times more popular than Google Maps amongst iOS users by virtue of being the default. Additionally, the recent release of Siri’s API, SiriKit, appears to have been intentionally limited to a select number of application types, reducing the potential effects of leeching. For example, Siri currently can’t access Spotify’s cross-platform music streaming app but can play music through Apple’s preferred and default music application, Apple Music.

The only time developing open ecosystems and platforms is a definitively sound strategy is when the platform itself isn’t actually a core element of that strategy. For example, Android OS is a mere cameo in Google’s core strategy of providing application-layer services.

With that in mind, the incentives all exist for platform players to maintain siloed ecosystems. On the other hand, incentives exist for application players, or ‘leechers’, to attempt to infiltrate these siloed ecosystems. Artificial intelligence and machine learning remain application-layer pursuits, resting on top of platforms. But if platforms are motivated to become walled gardens, how will AI applications obtain data at the scale necessary for machine learning to be as effective as it can be? Of course, Google has Android as a platform where it can prioritise its own applications for the sake of machine learning, but on iOS, Google’s hands are tied. Google doesn’t control enough touchpoints on iOS for its machine learning tech to gather the amount of data it would like, and the touchpoints it does control on competing platforms, such as the Google app on iOS, play second fiddle to first-party solutions such as Siri. Although Siri is arguably worse than Google’s own Google Now assistant, the friction of having to deliberately open the Google app should ensure that Siri remains the most popular digital assistant on iOS. Importantly, data gathered for machine learning through Siri is data Google doesn’t get; time spent using Siri is time not spent using the Google app.

But this conundrum extends further. As mentioned earlier, ‘platforms’ as structural components aren’t limited to software operating systems; they include hardware. And interestingly, as smartphones increasingly become ‘good enough’, much effort is now being directed at making other hardware items intelligent: cars, televisions, and home appliances. Self-driving cars are one of the more interesting new frontiers in technology. Cars, regrettably, haven’t seen a radical innovation since Henry Ford’s assembly line. Self-driving cars must rely on a lot of smarts to actually work, and much of that intelligence is derived from AI and machine learning.

The car is a platform: the physical car is the hardware, and the artificial intelligence engine enabling self-driving is, of course, the software. The conundrum is that car manufacturers have seen what happens when software firms leech off hardware. What Microsoft and Windows did to PC manufacturers, reducing them to unprofitable, low-margin commodity firms, is the perfect parallel. Unsurprisingly, they want no part of it. As such, Google, Apple and Uber may have an extraordinarily hard time convincing car manufacturers to adopt their respective self-driving platforms. Just as the future panned out poorly for Sony (which no longer makes PCs), IBM (which offloaded its PC business to Lenovo), Compaq (which was swallowed by HP), and the like, the future could look equally bleak for Toyota, GM, Honda, Kia and the rest should they cede the software layer.

Ideally, car manufacturers would develop their own machine-learning tech (a long shot) or strike partnerships with the firms already developing the brains for self-driving cars, that is, the Apples, Googles and Ubers of the world (much more likely). Essentially, we shouldn’t be surprised to see a lot of vertical solutions in the self-driving industry, whereby the hardware and software components of the stack are vertically integrated and protected. For the industry’s sake, I’m not convinced this is the outcome for the greatest good; after all, machine learning is heavily reliant on scale, and it’s incredibly difficult to achieve scale when an industry is fragmented into several proprietary standards and ecosystems. On the other hand, fostering cross-platform machine learning functionality might just be financial suicide for car manufacturers. That is the conundrum of machine learning, and it will be interesting to see how car companies respond.

Happy New Year!



I write about digital economics, technology, new media, and competitive strategies.