3 Clues for an Apple AR Ecosystem

What if 2017 brought a compelling new AR ecosystem, one with a simple in-the-ear Conversational UI, a powerful AR device you already own, and a robust AR commerce and machine learning backend?

Here are three clues for an Apple AR ecosystem that’s voice first, in your pocket, and useful every day.

1. AirPods

The first clue comes from the announcement of the AirPods, which are for more than just listening. Apple’s AirPods are the equivalent of the earpiece in the movie Her: slip them in and just start talking to Siri. The small size, sophisticated microphones, and automatic activation when placed in the ear make them the perfect device for a Conversational User Interface (CUI).

Apple AirPods
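
Siri itself isn’t open to arbitrary conversations from third-party code, but the listening half of a CUI is already buildable. Here’s a minimal sketch, assuming Apple’s Speech framework and omitting the authorization prompts, that streams microphone audio (AirPods included) into a live transcript:

```swift
import AVFoundation
import Speech

// Stream mic audio (e.g. from AirPods) into live speech recognition.
// Authorization requests are omitted for brevity.
let engine = AVAudioEngine()
let recognizer = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()

// Tap the input node and feed each audio buffer to the recognizer.
let input = engine.inputNode
input.installTap(onBus: 0, bufferSize: 1024,
                 format: input.outputFormat(forBus: 0)) { buffer, _ in
    request.append(buffer)
}

// Partial results arrive as the user speaks; a real assistant would
// hand the transcript off to an intent/NLU layer from here.
_ = recognizer?.recognitionTask(with: request) { result, _ in
    if let text = result?.bestTranscription.formattedString {
        print("Heard: \(text)")
    }
}

engine.prepare()
try? engine.start()
```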

2. iPhone is a Perfect AR CPU/GPU

The second clue is Robert Scoble’s speculative post on the 2017 iPhone. I don’t agree that the iPhone will be clear and used as the actual AR optics; that seems unworkable, and besides, there’s already a great camera that “sees” behind the phone.

But looking beyond the literal interpretation, Scoble shows how the phone is the perfect computational device for doing the work of AR. Why wear that on your head? You already have a powerful 3D-capable computer in your pocket or bag. And the 2017 iPhone’s A11 processor could easily include custom AR-optimized silicon.

Clear iPhone Concept by Robert Davis

3. Siri is an AR Companion

The third clue is Brian Roemmele’s tweet about this Apple patent, which outlines how Siri becomes an interactive participant in conversations, contexts, and transactions. This is key to compelling AR. Passive AR has limited applications, but AR that responds to voice for queries, changes, and actions becomes highly engaging and opens up opportunities that build on the AirPods/Siri strengths: hands-free use, private audio, natural language recognition, task completion, and commerce.

Plus. Edge Machine Learning and 3D Sensors

Additional indicators for this ecosystem are the recent rapid advances in Machine Learning (ML) along with vastly better and cheaper sensors. In 2013 Apple purchased PrimeSense, the company whose 3D sensing technology powered the original Microsoft Kinect. Sensors like these now have machine learning computation built into the sensor itself. This is edge computing, which pushes computation as close to the action as possible, reducing upstream bandwidth and computational demands.

The ability to quickly 3D-image one’s surroundings and perform object recognition using ML makes AR vastly more interesting and usable. These sensors keep shrinking; the latest Intel RealSense module, for example, is just a quarter inch tall and a few inches long.

Intel RealSense 400
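
For a sense of what edge recognition looks like in code, here’s a minimal sketch using Apple’s Vision and Core ML frameworks; the “MobileNet.mlmodelc” file is a hypothetical stand-in for whatever classifier an app would bundle, and the camera frame is assumed to come from the capture pipeline:

```swift
import CoreML
import Vision

// Load a compiled Core ML image classifier. "MobileNet.mlmodelc" is a
// hypothetical stand-in for whatever model the app would bundle.
func makeClassifier() -> VNCoreMLRequest? {
    guard let url = Bundle.main.url(forResource: "MobileNet",
                                    withExtension: "mlmodelc"),
          let mlModel = try? MLModel(contentsOf: url),
          let vnModel = try? VNCoreMLModel(for: mlModel) else { return nil }

    // Inference runs entirely on the device: camera pixels never leave
    // the phone; only a label and a confidence score come back.
    return VNCoreMLRequest(model: vnModel) { request, _ in
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("Saw \(top.identifier) at \(Int(top.confidence * 100))% confidence")
        }
    }
}

// Classify one frame; `cgImage` is assumed to come from the live
// camera capture pipeline.
func classify(_ cgImage: CGImage, with request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```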

What Does All of This Add Up To?

Put together AirPods for a CUI, a powerful AR-optimized iPhone, an AR-oriented Siri, and PrimeSense-based 3D sensors, and you have a compelling Augmented Reality ecosystem that could have a significant impact on everyday life and much broader reach than VR.

Just point your phone

Imagine you just purchased a new IoT thermostat and are installing it. You slip an AirPod in your ear, point your phone at the device, and say “Hey Siri, how should I connect the wires to this Ecobee?”

The phone displays the live camera feed of the actual Ecobee overlaid with a 3D diagram showing the correct wiring. This diagram constantly updates perspective as you move your phone around the device. Once you’ve hooked up your wires, you ask “Does this seem right?” and Siri might say, “It looks like you put the ground wire one slot below where it should go — see that highlighted in the diagram?”
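
The visual half of this scenario is close to buildable with ARKit-style world tracking. Here’s a rough sketch; automatic recognition of the thermostat is out of scope, so a tap stands in for it, and the “wiring_diagram” image asset is hypothetical:

```swift
import UIKit
import ARKit
import SceneKit

// Rough sketch: pin a wiring diagram onto a real-world object and let
// world tracking keep it registered as the phone moves.
class WiringOverlayViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(placeDiagram(_:))))
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // A tap stands in for automatic recognition of the thermostat.
    @objc func placeDiagram(_ tap: UITapGestureRecognizer) {
        let point = tap.location(in: sceneView)
        guard let hit = sceneView.hitTest(point, types: .featurePoint).first else { return }

        let plane = SCNPlane(width: 0.15, height: 0.15) // roughly thermostat-sized, in meters
        plane.firstMaterial?.diffuse.contents = UIImage(named: "wiring_diagram") // hypothetical asset
        let node = SCNNode(geometry: plane)
        node.simdTransform = hit.worldTransform
        sceneView.scene.rootNode.addChildNode(node)
        // World tracking keeps the node fixed to the physical spot, so the
        // overlay re-renders with correct perspective as the phone moves.
    }
}
```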

Full immersive experience with AR glasses

For a more immersive experience, you put on AR glasses, which can be very light because you don’t need a CPU/GPU on your head. The phone stays in your pocket or bag and talks wirelessly to the glasses, providing all the computational heavy lifting: stereo video, 3D rendering, object recognition, the CUI, spatial audio, and so on. The glasses only need translucent display lenses and sensors for 3D object recognition (including of your hands). And of course, the AirPods provide a microphone for the CUI and spatialized audio to help you hear where things are in the virtual space.
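
That last piece, spatialized audio, is already within reach of today’s APIs. Here’s a minimal sketch using AVFoundation’s AVAudioEnvironmentNode, with a made-up “alert.caf” sound and source position:

```swift
import AVFoundation

// Minimal sketch of spatialized audio: a mono source placed two meters
// ahead and slightly to the right of the listener, rendered with HRTF
// so it sounds like it comes from that direction.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

engine.attach(environment)
engine.attach(player)

// The environment node only spatializes mono inputs.
let mono = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
engine.connect(player, to: environment, format: mono)
engine.connect(environment, to: engine.mainMixerNode, format: nil)

environment.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0)
player.position = AVAudio3DPoint(x: 1, y: 0, z: -2) // meters; -z is ahead
player.renderingAlgorithm = .HRTF

// "alert.caf" is a hypothetical bundled sound.
if let url = Bundle.main.url(forResource: "alert", withExtension: "caf"),
   let file = try? AVAudioFile(forReading: url) {
    player.scheduleFile(file, at: nil)
    try? engine.start()
    player.play()
}
```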

With your glasses on, you start a meeting with two colleagues in person and another who’s participating remotely (but who appears holographically in the room). You are all working on an architectural model, part of which is physical (representing an existing structure) and part of which is virtual (the new structure). In this “merged reality” the virtual and physical blend seamlessly as you make changes, both by “touching” the model with your hands and by interacting with Siri, who is another participant in the discussion: “Hey Siri, show us four stories instead of three, but with the same square footage.”

Bringing Personalities to AI inside of AR

To add my Animistic Design take on AI and AR, I believe we’ll start seeing the development of a kind of animistic, AI-based, virtual Internet of Things inside of AR. In our architecture example, there could be AI-driven 3D actors (not skeuomorphic humans, but odd characters) who are content experts in areas such as energy use, cost estimating, and urban planning, and who make suggestions, move and change virtual objects, and participate in the conversation based on their own goals, points of view, and expertise. An animistic actor could also serve as a proxy for a human team member who can’t attend the meeting, prepared by that person to represent their interests with a curated version of their point of view, goals, and suggestions.

You can imagine a thousand other applications in vertical contexts: a factory worker assembling airplane parts with real-time instructions, a team of college students experimenting with different product design ideas across continents and cultures with an AI “teaching assistant” advising and interpreting, transportation planners working on mobility solutions with sophisticated AR simulations purchased as add-ons, a teenager interactively building a new Nike shoe with their connected friends.

“My own view is that augmented reality is the larger of the two, probably by far, because this gives the capability for both of us to sit and be very present talking to each other, but also have other things visually for both of us to see” — Tim Cook

Tim Cook has repeatedly signaled that Apple is interested in AR, and got even more specific as recently as September. Perhaps this ecosystem is where Apple is going. If not, someone else will.
