Removing the headphone jack makes sense

Gautam Mainkar
Published in Asymptotic Future · Oct 10, 2017 · 6 min read

In a move that has generated a lot of controversy, made all the more acerbic by the fact that Google mocked Apple for doing the same thing last year, Google has announced the Pixel 2 and Pixel 2 XL, and neither phone has a headphone jack.

[Video: Pixel 1 ad spot]

Google isn’t the first manufacturer to drop the formerly ubiquitous 3.5 mm connector, and it probably won’t be the last. The official explanations on offer mostly run along these lines:

  • It frees up space inside the phone for new chips and bigger batteries
  • It lets manufacturers make thinner phones
  • Wireless is the future anyway, so cut the cord now

That’s all plausible, but if this were only about getting a cumbersome piece of legacy hardware out of the way, it wouldn’t make sense that:

  • Apple has introduced the custom-made W1 Bluetooth chip to facilitate pairing, which lets its AirPods and Beats wireless headphones integrate more seamlessly with the iPhone
  • Google’s Pixel Buds have an Assistant feature that only works with Android devices and an in-ear live translation feature that only works with Pixel phones, so Pixel Bud owners will get a better experience than everyone else

Why try to lock users into their own ecosystems if this is only about retiring old technology? The obvious explanation is that both companies are trying to make more money by pushing proprietary hardware on users. The Verge said as much, fearing that the result will be walled gardens and the death of the audiophile headphone market.

The overall sentiment is that Google and Apple have killed the headphone jack prematurely and are just trying to milk customers for an extra $160 for proprietary headphones. But there is perhaps another long-term goal that could explain this.

The inexorable march of Moore’s law

The most famous law in technology predicts that the number of transistors that can be fitted on a piece of silicon doubles roughly every two years. The upshot is that we get more functionality from smaller chips.

[Image: More transistors on the chip means more processing power on a smaller chip]
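For concreteness, here is a minimal formalization of the law, assuming the commonly cited two-year doubling period (the period is my assumption; the article itself doesn’t commit to one):

```latex
% Transistor count t years from now, starting from N_0,
% assuming the count doubles every ~2 years.
N(t) = N_0 \cdot 2^{t/2}
```

Under that assumption, a decade compounds to 2^5 = 32 times the transistor budget in the same silicon area.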

In fact, Apple’s new A11 Bionic chip has blown the roof off mobile SoC benchmarks, offering a level of performance comparable to much larger laptop chips in some scenarios.

Computers have shrunk continuously since the rise of computing, from massive mainframes to desktops and laptops to mobile phones. With Intel confirming that Moore’s law will continue to hold at least for the foreseeable future, we can expect our chips (and the form factor of our devices) to keep shrinking.

The UI paradigm of our primary devices has already changed once in the 21st century, from keyboard-and-mouse PCs to touchscreen mobiles. The next paradigm may well be around the corner, and like the previous shift, it will improve the user experience immensely by making computing more personal and easier to use.

On-device machine learning

Both Apple and Google recently announced breakthroughs in on-device machine learning. Apple introduced a neural engine on its A11 Bionic SoC, which powers the new Face ID technology and Animoji, while Google announced a music identification feature on the Pixel 2 that runs pattern-matching algorithms against an on-device database to identify ambient music without ever reaching out to the cloud. Google also introduced the Clips camera, which uses on-device ML to decide what to capture, and when. Some of these applications may seem trivial right now, but the potential the technology unlocks is huge.
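To make the music-identification idea concrete, here is a toy sketch of the general approach: reduce audio to a compact spectral fingerprint, then match ambient sound against a small local database. This is emphatically not Google’s actual algorithm; every name in it (fingerprint, identify, the two-tone catalog) is invented for illustration.

```python
import numpy as np

RATE = 16000  # samples per second

def fingerprint(samples: np.ndarray, bands: int = 32) -> np.ndarray:
    """Collapse a mono clip into a coarse, volume-independent spectral signature."""
    spectrum = np.abs(np.fft.rfft(samples))
    banded = np.array([band.mean() for band in np.array_split(spectrum, bands)])
    return banded / (np.linalg.norm(banded) + 1e-9)

def identify(clip: np.ndarray, database: dict) -> str:
    """Return the catalog track whose stored fingerprint best matches the clip."""
    fp = fingerprint(clip)
    return max(database, key=lambda name: float(np.dot(fp, database[name])))

# Hypothetical on-device catalog: two "songs" (pure tones, for simplicity).
t = np.arange(RATE) / RATE
tracks = {
    "song_a": np.sin(2 * np.pi * 440 * t),
    "song_b": np.sin(2 * np.pi * 660 * t),
}
db = {name: fingerprint(audio) for name, audio in tracks.items()}

# "Ambient" audio: song_b heard quietly over background noise.
rng = np.random.default_rng(0)
ambient = 0.5 * tracks["song_b"] + 0.2 * rng.standard_normal(RATE)
print(identify(ambient, db))  # expected: song_b
```

The point is the shape of the computation: the lookup is a handful of dot products, which is exactly the kind of work a small on-device chip can do without the cloud.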

Both Siri and Google Assistant use voice recognition algorithms to understand and interpret what we say to them. Voice recognition is fundamentally the same kind of problem as face recognition or music recognition, and Google Home can already identify different users on-device. It seems reasonable that the on-device ML infrastructure being built by both companies can power more intelligent, capable, and independent voice assistants.
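The claim that these are “the same kind of problem” has a concrete shape: embed the input as a vector, then find the nearest enrolled vector; only the embedding step differs by domain. A minimal sketch, with random vectors standing in for real learned embeddings and all names hypothetical:

```python
import numpy as np

def nearest(embedding: np.ndarray, enrolled: dict) -> str:
    """Nearest-neighbor lookup over enrolled embeddings via cosine similarity."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(enrolled, key=lambda name: cosine(embedding, enrolled[name]))

# Hypothetical enrolled users, each represented by a stored voice embedding.
rng = np.random.default_rng(1)
users = {name: rng.standard_normal(128) for name in ("alice", "bob")}

# A new utterance embeds close to its speaker's enrolled vector.
utterance = users["alice"] + 0.1 * rng.standard_normal(128)
print(nearest(utterance, users))  # expected: alice
```

Swap the voice embedding for a face embedding or a song fingerprint and the lookup is unchanged, which is why one on-device ML stack can serve all three.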

Moore’s law + On-device ML = Her

Spike Jonze’s Her showed us a vision of the future in which near-sentient personal assistants with the witty repartee of Scarlett Johansson sat in everyone’s ears. It feels realistic because it’s an intuitive user interface that just makes sense: there’s nothing more natural or simple than saying what you want and having it understood and done for you.

The combination of smaller chips and on-device ML creates a path to a revolutionary new UI paradigm: smaller SoCs can be put into devices small enough to fit in the ear, or over it. Voice recognition powered by on-device ML (free from the vagaries of cellular signal strength or Wi-Fi availability) can bring the UX even closer to the end user.

Mobile apps are much simpler than PC software and light-years away from the command-line interfaces of the dawn of personal computing, but they still need to be installed and configured to work. And while our phones are always on our person, using them still requires the physical act of taking them out of our pockets, unlocking them, and navigating the UI to find the app we need.

A voice-based UI removes even that last bit of friction and makes the system even more accessible. It’s the future, because it simply makes sense.

[Image: “It just works”]

Lock-in for the future

To come back to the original point, The Verge is right: the end goal is a closed ecosystem, with headphones becoming a more independent device in the years to come, and perhaps even the dominant device at some point in the future. Getting users off third-party products and into the headphone ecosystems both companies are promoting gives them the room to slowly transition to the voice-driven UI paradigm they can undoubtedly see coming.

The alternative for both companies is to go the way of Microsoft in the mobile era: left dominating a UI paradigm (the PC) that plays a smaller and smaller role in most users’ lives. Microsoft actually developed an extremely capable and good-looking mobile OS, but it didn’t matter. The first-mover advantage had gone to Apple in some markets and Android in others, and Microsoft was never able to catch up. There were many lessons to be learned from watching the dominant player of the PC era stumble, but the most important was perhaps this: once the technology can support it, the transition is inevitable. Better to be the one controlling how it happens than to be left behind by it.

Getting consumers used to buying expensive proprietary headphones, and increasing those headphones’ capabilities with every new release, is much easier than developing the technology in a vacuum and then convincing users to abandon a familiar paradigm. It also reduces the risk of leaving the door open for a new entrant into this space, although one other company has already made a grand entry.

Amazon has poured a sizeable portion of its immense resources into developing its Echo ecosystem, offering a variety of devices addressing diverse market segments. Having lost out on mobile with the failed Fire Phone, Amazon has firmly set its sights on the Next Big Thing. Both Apple and Google know from their own past experience the threat a newcomer poses in a new paradigm.

The demise of the headphone jack is a real loss for consumers today, who must rely on either a confusing array of dongles or the frustrations of Bluetooth audio to get their music fix. They have certainly not been shy about voicing their discontent, and both companies stand to lose some of their mass-market users (who will look to cheaper alternatives with headphone jacks) as well as the niche audiophile market (who will prefer something like the LG V30 to plug their high-end headphones into). But for both Apple and Google, that seems to be an acceptable (but still courageous) sacrifice to prepare for the arrival of a new era in computing.

[Image: Phil Schiller, SVP of Marketing, Apple]
