Notation, AR, and Sib Mahapatra

Welcoming our first Venture Fellow, and sharing our deep dive into the AR Ecosystem

Notation · Jan 31, 2018

Late last year, we decided to run a new experiment at Notation and welcomed Sib Mahapatra as our first “Venture Fellow.” Sib had an impressive tenure at Redfin in Seattle, where he helped launch their New Ventures Group, and he recently moved to NYC. He’s an incredibly smart and passionate technologist, so we explored creative ways to work together while he considered his next full-time endeavor. That led to our first Venture Fellow position, during which Sib has focused his research on the emerging AR ecosystem. We’re publishing his recent internal memo below, along with some of his initial findings, to highlight our growing interest in this area. We realize it’s a tad long, but for those interested in AR, we’re curious to hear your thoughts and feedback, and to trade notes.

Many thanks to Sib so far, and please welcome him to the Notation crew!

Notation Capital: Investing in Augmented Reality — January 2018

Why AR Matters

In 2018, Notation plans to make meaningful investments in the Augmented Reality (AR) space. We believe augmented reality could succeed mobile as the dominant platform in computing over the course of the next decade. Unlike virtual reality, which requires users to immerse themselves in enclosed worlds distinct from their surroundings, AR integrates virtual elements — graphics, objects, data, even audio and haptics — into our real-time experience of the world around us.

The technologies underlying AR are still in their infancy, and we’re likely more than a year out from the launch of a consumer-focused wearable, but we believe this is a particularly interesting moment to make core investments in this category out of our funds. Within the past eighteen months, smartphone-based AR has become accessible enough for developers, and usable enough for consumers, that a mass audience is starting to pay attention. We believe it’s unlikely that AR will achieve paradigm-altering traction on the smartphone form factor, but mobile provides a fertile development environment for core AR infrastructure and pioneering applications to finally get built in the near term.

Combining our digital and physical worlds will unlock user experiences that feel like magic. The first wave of use cases for augmented reality will involve adding to the physical environment: overlaying a construction site with blueprints for a foreman, superimposing a treasure hunt on a playground, or generating a virtual TV. As AR gains adoption and trust throughout society, a second wave of experiences will expand the scope of AR from information consumption to interaction. We believe augmented reality will become a universal remote, generating on-the-fly interfaces for connected devices and giving users control of both the physical and virtual components of their environment.

How AR Works

How quickly will we get to the future described above? The quality of the AR experience will ultimately determine the speed of this progress. Augmented reality combines innovative approaches to software and hardware development with new forms of content production, which means that the quality of AR is a function of both what you see and how you see it.

At its core, the content creation process for AR is similar to making content for desktop, mobile, or virtual reality platforms: developers create assets in game engines such as Unreal and Unity, or use techniques like photogrammetry to generate 3D textures and meshes that represent real objects.

An AR-enabled device then transforms those visual assets into AR experiences by handling two essential functions. First, in order to blend virtual content with the physical world, an AR device needs to understand its environment by creating a map, and to understand its own position within that map through a process called localization. This combined capability is known as simultaneous localization and mapping (SLAM), and it improves on older approaches to AR, which relied on physical markers to position virtual objects.
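To make this concrete, here is a minimal sketch (our illustration, not any particular SDK’s API) of why mapping plus localization is enough to pin virtual content in place: SLAM expresses both the device’s pose and every virtual anchor in one shared world coordinate frame, so rendering reduces to a relative transform between the two.

```swift
import simd

// A rigid-body transform (rotation + translation): six degrees of freedom.
typealias Transform = simd_float4x4

/// Returns the anchor's transform as seen from the camera, given both poses
/// in the shared world frame that SLAM maintains. Because the anchor is
/// fixed in world coordinates, the virtual object it carries stays locked
/// in place no matter how the device moves.
func anchorInCameraFrame(anchor: Transform, cameraPose: Transform) -> Transform {
    return cameraPose.inverse * anchor
}
```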

Next, the device needs to display the experience to the user. Microsoft, Apple, Google, Snap, and well-funded startups like Meta and Magic Leap are hard at work on hands-free AR devices, and smartglasses like the reborn Google Glass and headsets from ODG and Vuzix are already shipping to enterprise customers. Standalone AR “hero devices” from Apple and Microsoft are likely to hit the consumer market in 2019.

Why Now

Though we’re years away from the launch of a flagship wearable, over the past eighteen months, mobile AR has expanded the addressable user base of AR and lowered the barrier to entry for developers who want to make AR applications. As AR makes the transition from technological novelty to platform with real utility, it’s important to note four key developments that we believe are responsible for the maturation of mobile AR (today and tomorrow):

1. ARKit and ARCore solve SLAM with VIO

The vital technical advance enabling mobile AR is a solution to the SLAM challenge mentioned above. Apple’s ARKit launched in the summer of 2017 (helped by the acquisitions of Metaio and Flyby), and Google’s ARCore arrived shortly afterwards. Both use a technique called visual-inertial odometry (VIO) to integrate data from an RGB camera, accelerometer, and gyroscope, tracking position in six degrees of freedom (x/y/z translation as well as pitch/yaw/roll rotation). Along with tools that detect horizontal surfaces and estimate lighting conditions, these SDKs endow tens of millions of high-end smartphones and tablets with latent AR capability.
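To show how little ceremony this now requires, here’s a minimal ARKit sketch (our own illustration, not Apple sample code) that turns on VIO-based world tracking with horizontal plane detection and light estimation, then reads the six-degree-of-freedom camera pose on every frame:

```swift
import ARKit
import UIKit

final class ARViewController: UIViewController, ARSessionDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.session.delegate = self

        // World tracking via visual-inertial odometry, plus the two extras
        // these SDKs ship today: horizontal plane detection and ambient
        // light estimation.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal
        config.isLightEstimationEnabled = true
        sceneView.session.run(config)
    }

    // Called every frame: the camera transform is the 6DoF pose
    // (x/y/z translation plus pitch/yaw/roll rotation) estimated by VIO.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pose = frame.camera.transform                 // simd_float4x4
        let brightness = frame.lightEstimate?.ambientIntensity
        _ = (pose, brightness)
    }

    // Called when plane detection recognizes a new horizontal surface.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Detected plane with extent \(plane.extent)")
        }
    }
}
```

ARCore’s Java/Kotlin API follows the same shape: create a session, configure it, and read the camera pose and detected planes from each frame.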

On their own, the rudimentary experiences these platforms enable are not persistent, not shareable across users and devices, not robust to bad lighting conditions, and not deeply interactive with their environment. Nevertheless, they have lowered the barrier to experimenting with AR, spurring tremendous interest from developers who want both to build front-end experiences and to extend this native infrastructure toward more expansive AR capabilities.

2. Content generation is getting easier

Facebook has invested heavily in mobile AR, but its Camera Effects platform lacks the hardware and OS access that would allow it to easily implement VIO. Nonetheless, Facebook is making important contributions to mobile AR content creation: AR Studio and Frame Studio let creators with no special experience build simple AR face and world filters that work on any phone running Messenger. Just a few weeks ago, Snap announced Lens Studio, its own open platform for comparable AR content creation.

Other new tools that lower the barrier to experimenting with AR include Google Poly, a free library of 3D objects, and Amazon Sumerian, a browser-based Unity competitor that aims to make building 3D applications accessible to creators without specialized programming or graphics experience. By democratizing the creation of AR content, these tools make it even easier for developers to get started.

3. Better mobile hardware is coming

Using VIO to solve SLAM is an impressive technical feat that expands the immediate reach of mobile AR experiences, but it has real limitations. Google Tango, for example, can fully reconstruct scenes in 3D, enabling features like occlusion and true interaction between virtual and physical objects (i.e., mixed reality), but those features require more sophisticated depth and fisheye cameras than the single RGB camera VIO relies on.

Academic and industry research on generating 3D geometry from a single RGB camera is ongoing. In the meantime, Apple’s embrace of the TrueDepth camera on the iPhone X, which already powers AR experiences like Animoji, is a good signal that upcoming phones may ship hardware (like the Intel RealSense camera) that supports more robust AR features. We discuss this in more detail below.
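To make the limitation concrete: without depth hardware, a mobile AR app can’t hit-test against true scene geometry, only against the flat planes tracking has inferred. A minimal sketch of that pattern, using ARKit’s plane hit-testing API (our illustration):

```swift
import ARKit

/// Places an anchor where a screen tap intersects a detected plane.
/// With no depth camera there is no full 3D mesh to ray-cast against,
/// only the planes that VIO-based tracking has already found.
func placeAnchor(at point: CGPoint, in sceneView: ARSCNView) {
    let results = sceneView.hitTest(point, types: .existingPlaneUsingExtent)
    guard let hit = results.first else { return }
    sceneView.session.add(anchor: ARAnchor(transform: hit.worldTransform))
}
```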

4. Mobile AR experiences can work today!

Ultimately, mobile AR’s role in the ecosystem may be to demonstrate that AR experiences can be valuable to end users (and companies) right now. The popularity of Pokemon Go, as well as Lenses on Snapchat and Instagram, is a strong signal that even rudimentary mobile AR experiences can be compelling in their own right.

Mobile AR gives the big tech companies and platforms a playground in which to start building out their ecosystems today: core infrastructure and developer SDKs, content creation tools, and defensible data moats that will shape the market of tomorrow, before standalone AR hardware is mature enough for mass-market adoption.

External Catalysts

Beyond advances in the core mobile AR technologies described above, it’s also worth noting adjacent trends that could accelerate the development of compelling AR experiences:

  • Virtual reality: AR and VR share foundational technologies, such as content generation pipelines and eye tracking. We expect investment and research in these areas to continue, benefiting both VR and AR.
  • Voice: User input for standalone AR devices remains a largely unsolved problem; the solution will likely combine improvements in eye tracking, gesture recognition, and voice control. We believe that as voice-powered user interfaces continue to proliferate, they will also lead to better AR user experiences.
  • IoT: The number of connected devices worldwide is projected to double from 15B in 2015 to 30B in 2020. More smart devices, particularly cameras, will expand the scope of AR use cases, turning more objects into beacons that can assist visual SLAM algorithms with tasks like mapping and occlusion. This will be key to AR ultimately becoming a “universal remote” that can interface with both virtual and physical objects.

There are headwinds to consider as well. Battery life is a major technical concern: playing Pokemon Go for an hour drains a third of an iPhone 6S battery, and higher-quality experiences are even more energy-intensive. Regulatory and cultural risks also remain, especially in the U.S. market, where AR wearables tend to be viewed as intrusive and predecessor tech like the original Google Glass was poorly received by consumers.

What’s Next

Although there have been a number of exciting AR innovations in recent years, we still consider AR to be “frontier technology.” Pokemon Go taught us that a well-executed basic AR app can capture millions of users today, but it’s also worth noting that no standalone app has achieved similar traction since. If the promise of AR is seamlessly blending our virtual and physical worlds, the state of the art remains far from seamless.

In evaluating both horizontal AR infrastructure and application verticals, we plan to pay particular attention to products and services that can build defensible, platform-agnostic moats durable enough to survive the transition from mobile to wearable platforms.

Horizontal Opportunities

Given our strong preference at Notation for scalable, capital-efficient software companies, as well as the steep competition among AR incumbents and startups building display, compute, and sensor hardware, focusing on platform-independent horizontal software and data startups fits well with our interests and experience.

The following chart is a non-exhaustive list of core capabilities that we believe are critical in creating seamless AR experiences, as well as where they are in the current development cycle. Capabilities in green are supported and widely deployed, capabilities in orange exist as proof of concept but are rudimentary or not widely available, and capabilities in red have yet to come to market. Some of these capabilities are natural feature extensions for existing startups and companies, but others may be good candidates for new startups.

Core AR Capabilities

The chart above focuses on technology that we believe will power the core initial experience of augmented reality. It does not cover the horizontals that may power the evolution of AR from informational overlay to fully interactive UX layer.

Capabilities in the proof-of-concept stage (orange) are largely limited by the current state of hardware in the market. For example, vertical plane detection and spatial mapping (understanding the 3D geometry of a scene) are achievable today by phones that support the deprecated Google Tango standard and carry RGB and IR depth cameras. But most mobile devices on the market lack depth cameras, and they lack the processing power to support the computer vision algorithms that could generate spatial maps from a single RGB camera. The iPhone X and the upcoming Samsung S9 both include depth-sensing hardware, and wearables are being designed with these capabilities in mind.

Startups offering software implementations of the hardware-limited AR capabilities listed above will likely come to market, but we expect these features to be commoditized over time as depth sensors and tightly calibrated IMUs become standard issue on phones and wearables alike. Instead, we plan to focus on startups that can build persistent moats through differentiated go-to-market strategies for acquiring developers or consumers as users. For example, crowdsourced spatial mapping will play a vital role in accurately localizing shareable AR experiences, and could generate a significant data moat at scale.

Vertical Opportunities

The AR market has seen an explosion of applications in both consumer and enterprise over the past year, with the long tail driven by ARKit and ARCore. Aside from Pokemon Go, no AR-first app has come close to matching the penetration of AR experiences that live within social platforms like Snap and Facebook. Super Ventures has put together an outstanding market map segmenting the first wave of AR applications. Below, we discuss some of our vertical areas of interest in the coming months and years.

Consumer

Seven of the top 10 grossing AR apps on iPhone are games; games account for 35% of ARKit-only apps, 53% of downloads, and 62% of revenue worldwide.

Of the top non-game apps that use ARKit, TapMeasure, IKEA Place, and HomeCraft are the most popular, signaling that real estate and retail use cases are already compelling with existing technology. Last month, Snap launched a sponsored lens with BMW that reached 13 million users in Europe with an AR model of an X5 SUV. Amazon recently entered the fray, offering an iOS shopping app called AR View that lets users view thousands of items in AR.

Gaming, media/messaging (for example, Giphy World) and social applications all take advantage of the inherent immersiveness of the AR experience, and so we expect these categories to be important in the years to come. But regardless of vertical, we believe the key to long-term sustainability will be for these products to generate a deep enough network, data or user moat to survive the likely shift from mobile AR to wearables.

Enterprise

According to head-mounted display manufacturer ODG, 44% of Fortune 500 companies are setting aside budget for augmented reality in 2018. The manufacturing industry has already started to embrace AR, purchasing wearables from companies like ODG, Google, Meta, and Vuzix to give workers real-time instructions and guidance. These organizations turn to companies like Upskill and Scope AR, which provide platforms for custom implementations.

Construction and medicine are other markets that have started to embrace AR technology on the front lines. Given enterprises’ early adoption of wearables, one major upside to investing in enterprise platforms is their head start in developing best practices for user interaction with wearable AR technology.

Retail — both physical and ecommerce — is one of the largest opportunities for enterprise AR. The enduring slump in physical retail means that stores are hungry for customer acquisition tools that provide more “experiential” shopping to potential customers. Startups serving retailers of products that rely on visual presence could be an interesting early AR use case — Get The Look, for example, is an app that lets users try on virtual makeup!

Concluding Thoughts

In many ways, AR is the archetypal example of software eating the world. It’s one platform with the potential to unify our interaction with the two worlds we inhabit every day: the realm of virtual data, objects and experiences, and our physical reality.

In the past few years, we’ve been hesitant to make significant investments in AR due to technical constraints as well as consumer hardware flops like Google Glass and Spectacles, although it’s worth mentioning that we have made a few investments in computer vision products and companies. We acknowledge we’re still in the very early days of AR, but given the massive value created during major platform shifts (and we believe this is one of them), we don’t need to time adoption perfectly, and we don’t expect to.

Starting this year, we’re primarily looking for opportunities in what we believe could be foundational AR tech, focusing on software and infrastructure layers that are platform agnostic and power the core user experience. We also realize that many vertically focused consumer products are coming to market in messaging and social, gaming, retail, real estate, and entertainment broadly, so we plan to keep our eyes and ears open for opportunities in these verticals as well. Finally, we’re beginning to take a closer look at enterprise platforms that have deployed head-mounted wearables and are developing best practices in user input and interaction.

What do you think about augmented reality? What did we miss? If you’re tinkering with ideas in AR, we’d love to hear from you!
