Sharing in AR, Today and Tomorrow

The last post considered “AR as speech” and the sharing controls that will be necessary for a harmonious, shared augmented reality.

Platform Selection

Discounting AR for a moment, when users decide to share something online, they make a decision about which platform to use: Facebook, Instagram, Medium, Reddit, Pinterest, Yelp, or Twitter — each designed to enable a specific type of authorship and sharing. Each platform has its own set of sharing controls, social norms, and user management challenges. This trend of platform multiplicity will continue, especially if the trend of adding AR as a feature to existing apps also continues.

“AR View”, Amazon’s AR feature

In the short term, established platforms experimenting with AR features will be a crucible through which we all collectively learn what works and what doesn’t in AR. This will be a messy process, with lasting benefits that take time to emerge, and during it sharing controls will vary widely. If history has anything to teach us, soon after this fragmented experimental period we should expect mergers, acquisitions, and copycat implementations that turn AR into a more uniform commodity technology. Once the AR industry turns this corner, interoperability, open standards, and winner-take-all incentives will make an integrated, general purpose augmented reality possible: one that allows discovery of nearby services and simultaneous viewing of AR content from disparate sources.

Let’s look at the types of sharing controls which make sense in a general purpose AR.

Saying Things Online

While the “grammar” of systems may vary from platform to platform, high level concepts like author, recipient, subject tags and other metadata become the dimensions upon which sharing controls can operate.
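These dimensions can be pictured as fields on a post record. The sketch below is purely illustrative; the class and field names are invented for this post, not any platform’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """Illustrative record holding the dimensions sharing controls act on."""
    author: str                                           # who said it
    recipients: list = field(default_factory=list)        # empty list = public
    subjects: list = field(default_factory=list)          # tags / topics
    metadata: dict = field(default_factory=dict)          # location, timestamps, etc.

    def is_public(self) -> bool:
        # A post with no enumerated recipients is visible to everyone.
        return not self.recipients

# A public, tagged post versus a private message:
tweet = Post(author="alice", subjects=["ar"])
dm = Post(author="alice", recipients=["bob"])
```

Sharing controls then become rules evaluated over these fields, which is the framing the rest of this post uses.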

Let’s look at the most relevant ones:


Content on the internet is generally attributable to an author. The term we use may vary depending on the medium (poster, user, composer, creator, etc.), but the concept applies universally to websites, mobile applications, gaming platforms, and so on.

Like the Internet itself, potential uses of AR are diverse enough to warrant many different user schemes, including strong, provable identities and persistent pseudonyms. However, the integrated nature of any “AR browser” requires that pseudonymous and anonymous content cannot simply be ambiguously intermingled with verified, strongly identified content or bad actors will be able to impersonate authoritative sources and mislead others. (A subsequent post will be dedicated to identity management as it applies to AR.)

As with the rest of the Internet, the creation of untraceable anonymous content is simply not compatible with a general purpose AR platform.


As with an author, many online platforms require one to specify a recipient. Any private form of communication, be it a DM, privmsg, whisper, or email, works this way. However, on fully public posts, such as tweets, no recipient specification is necessary (though tagging usernames can call others’ attention to a tweet).

Between fully public and fully private postings lies group sharing, where users can post to a subset of users, either by enumerating a list of recipients or by addressing users by a shared attribute. Mailing lists, Facebook Groups, private subreddits, and group chats exemplify this model.
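Both addressing modes, enumeration and shared attributes, can be captured in one visibility rule. A minimal sketch, with invented field names:

```python
def visible_to(user, post):
    """True if the user is an enumerated recipient, or carries an
    attribute the post is addressed to. Field names are illustrative."""
    if user["name"] in post.get("recipients", []):
        return True
    shared = post.get("recipient_attributes", [])
    return any(attr in user.get("attributes", []) for attr in shared)

# A post addressed to anyone in a (hypothetical) local history group:
post = {"recipient_attributes": ["local-history-group"]}
erin = {"name": "erin", "attributes": ["local-history-group"]}
frank = {"name": "frank", "attributes": []}
```

Attribute-based addressing is what lets a group keep sharing with its members without re-enumerating them on every post.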

Private, public, and in-between — all will apply in AR.


Categorizing content also varies widely from platform to platform. A few examples of subject specification include:

  • free text fields (such as an email subject line)
  • a nested hierarchy of subjects (such as Usenet or a bulletin board)
  • a flat namespace of subjects (such as Reddit)
  • hash-tags (such as Twitter or Instagram)

Note that the line between subject and recipient is blurred in systems where recipients can be defined as those people interested in a specific subject. For example, a Reddit post or Facebook group post has no explicit recipients; the members of the forum are the presumed recipients, who have opted in based on their interest in that community’s focus.

Beyond today’s world of fragmented, special purpose AR applications, filtering content by subject is a baseline requirement for integrated, multi-purpose AR tools. If we are ever to achieve broad utility comparable to a web browser, robust search and live content filtering are necessary. There is no scalable version of a persistent AR world in which all nearby virtual content is shown to the user simultaneously. I’ve seen this filtering referred to as “lenses”, though “AR lens” means different things to different people.
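In this framing, a “lens” is just a subject filter applied to whatever augmentations are nearby before anything is rendered. A toy sketch (all field names are invented):

```python
def apply_lens(nearby_content, active_subjects):
    """Keep only augmentations tagged with at least one active subject."""
    active = set(active_subjects)
    return [c for c in nearby_content if active & set(c["subjects"])]

# Three nearby augmentations; the user's lens only shows navigation aids.
nearby = [
    {"id": 1, "subjects": ["navigation"]},
    {"id": 2, "subjects": ["advertising"]},
    {"id": 3, "subjects": ["navigation", "transit"]},
]
shown = apply_lens(nearby, ["navigation"])
```

The important property is that filtering happens per viewer: two people standing in the same spot can see entirely different sets of augmentations.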


The content itself can also be the basis for sharing controls. Examples include limiting image/attachment sizes, blocking blacklisted words, auto-detecting inappropriate content, and even human moderation of each post.
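The automated checks in that list amount to simple predicates run before a post is accepted. A minimal sketch, with a hypothetical word blacklist and polygon budget:

```python
BLOCKED_WORDS = {"spamword"}   # hypothetical blacklist entry
MAX_TRIANGLES = 10_000         # hypothetical polygon budget per creation

def passes_content_checks(post):
    """Reject posts containing blocked words or oversized 3D models."""
    text = post.get("text", "").lower()
    text_ok = not any(word in text for word in BLOCKED_WORDS)
    size_ok = post.get("triangle_count", 0) <= MAX_TRIANGLES
    return text_ok and size_ok

ok_post = {"text": "Welcome to the museum", "triangle_count": 2_000}
bad_post = {"text": "buy spamword now", "triangle_count": 50_000}
```

Human moderation slots in after these cheap automated checks, as it does on most mature platforms.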

User-created AR content does not yet undergo any sort of moderation, but other mature, scalable 3D content platforms (such as Second Life) do impose polygon limits on user creations.


Many social networks already incorporate location, especially “mobile first” apps like Snapchat and Instagram where location data is readily available.

Snapchat Map publishes to a map view

Compared to AR, this style of post annotation is focused on the map view of the world. (It’s worth noting that there is no widespread anxiety about vandal Snapchatters placing photos and videos on someone else’s property; yet we do see this problem in AR when content attracts too many people to a location with insufficient consideration for the impact.)

As we established in the previous post, it isn’t feasible to prevent other people from creating augmentations on one’s property. More importantly, while the conflict surrounding AR rights issues is currently modest compared to other Internet speech issues, bear in mind how small the audience of AR users is today. Any platform that permits content to be placed without moderation or accountability features will not survive scaling up to millions of users.

Public Spaces

The first appearances of persistent multi-user AR suggest a completely public augmented world. Any augmentation, created anywhere, is visible to whoever else is nearby, often summarized on a map view so others can find it more easily.

Publicly sharing content in Mirage

This unrestricted model of sharing is a natural first step, but it contributes to some of the misunderstanding and anxiety surrounding AR. The most obvious negative consequences of unrestricted AR are virtual graffiti and vandalism, but the lack of filtering, moderation, and management features would also cause a slow loss of usefulness as virtual objects accumulate and clutter high traffic locations.

This gets worse as more and more users begin experimenting with persistent AR applications. Yet high traffic public places stand to benefit greatly from augmented public signage, landmarks, and other interactive smart city features.

Private Spaces

As multi-user persistent AR matures into something with contemporary online sharing controls, being able to share private augmentations with specific individuals will become more useful, particularly in our homes and private spaces. Granting friends access to the AR controls for your smart home appliances or inviting them to join you in a virtual tabletop game could be accomplished this way.

Simulating in home IoT AR from I. Yosun Chang of permute.xyz, presented at SIGGRAPH2018

Completely private single-user AR features are possible without sharing controls, as they could be stored entirely locally, on-device. However, any persistence, backup, or sharing will require storage in a cloud-based geospatial index.
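One simple way such a cloud-side geospatial index could work is to bucket augmentations into grid cells and answer “what’s near me?” by checking neighboring cells. The sketch below is a toy under assumed parameters; the cell size, class, and method names are all invented:

```python
import math
from collections import defaultdict

CELL_DEG = 0.001  # roughly 100 m per cell at the equator; a toy resolution

def cell(lat, lon):
    """Map a coordinate to its grid cell key."""
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

class GeoIndex:
    """Toy server-side index: bucket augmentations by grid cell."""
    def __init__(self):
        self.cells = defaultdict(list)

    def put(self, lat, lon, augmentation):
        self.cells[cell(lat, lon)].append(augmentation)

    def nearby(self, lat, lon):
        # Check the containing cell plus its 8 neighbors.
        ci, cj = cell(lat, lon)
        found = []
        for i in (ci - 1, ci, ci + 1):
            for j in (cj - 1, cj, cj + 1):
                found.extend(self.cells.get((i, j), []))
        return found

index = GeoIndex()
index.put(37.7749, -122.4194, "coffee-shop-menu")
found = index.nearby(37.7749, -122.4194)
```

Production systems would use proper spatial indexing rather than a flat grid, but the shape of the query is the same: content keyed by location, retrieved by proximity.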

Group Spaces

Between fully public spaces and private spaces are our shared spaces, home of some of the most valuable use cases in AR. Workplaces are a perfect example where non-public augmentations would benefit from being shared between many users. Industrial facilities such as factories, warehouses, and refineries stand to gain tremendously from AR technology, as do smart-office applications for indoor navigation, room reservation, and facilities management. Use cases with strict security requirements may justify distinct standalone infrastructure, but the eventual advantages of integrated, multi-use AR will place pressure on those systems to behave similarly to the public AR platforms.

Workplace AR

Putting It Together

By assembling author, recipient, subject, location, and content together, we can craft AR sharing controls that make these use cases possible:

  • Alice shares a board game session with Bob in her dining room.
  • Charlie shares the virtual HVAC sensors with Daniel, a new hire at the University.
  • Erin builds a 3D model of a long demolished mansion on the site of its former location and shares her creation with her local history group.
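Combining the dimensions into a single visibility check might look like the sketch below. Every field name here is illustrative rather than any real platform’s API, with Alice’s board game from the first example above as the test case:

```python
def can_view(viewer, augmentation):
    """Combine author, recipient, subject, and location checks.
    All field names are invented for illustration."""
    a = augmentation
    if viewer["id"] == a["author"]:
        return True                       # authors always see their own content
    if a["recipients"] and viewer["id"] not in a["recipients"]:
        return False                      # enumerated recipients only
    if a["subjects"] and not set(a["subjects"]) & set(viewer["interests"]):
        return False                      # viewer's lens must match a subject
    return viewer["location"] == a["location"]  # toy same-place check

board_game = {
    "author": "alice", "recipients": ["bob"],
    "subjects": ["games"], "location": "alice-dining-room",
}
bob = {"id": "bob", "interests": ["games"], "location": "alice-dining-room"}
carol = {"id": "carol", "interests": ["games"], "location": "alice-dining-room"}
```

Each of the three example use cases reduces to a different combination of these same checks: recipients for Bob and Daniel, a shared group attribute for Erin’s history group, and location for all of them.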

As explained previously, these examples are first going to appear in separate applications, but the inherent advantages of AR integration and interoperability will eventually follow, especially when AR moves into the headset in the form of AR glasses.

What is Underway Now

Ubiquity 6

Since the last post, Ubiquity 6 has surfaced as an AR platform for sharing and editing AR content. Having just launched, the company has released scant technical details; nevertheless, it is exciting to see AR growing in this direction.

Ubiquity 6 Launch Demo

It’s not realistic to expect Ubiquity 6 (or any other nascent AR platform) to come into the marketplace having already solved all these problems. Watch this series for details on their strategy as it emerges.


VERSES

Just last week VERSES left stealth mode and launched a new blockchain token expressly for AR applications: indexing spaces and smart objects. Blockchain, and more specifically smart contracts, represents an entirely new family of geospatial indices, one that trades the advantages of decentralization against the challenges of blockchain adoption. Watch this space for more detailed information on this strategy as it emerges.

Unity Project MARS

Even better than a global AR sharing platform would be developer tools and services that allow for AR sharing across otherwise fragmented applications.

Just a few weeks ago Timoni West, the Director of XR Research at Unity, presented one of the first detailed looks at Unity’s Project MARS (Mixed and Augmented Reality Studio). Launched at Unite Berlin in June of 2018, Project MARS is a broad set of tools for Unity developers targeting some of the larger problems every MR/AR developer will soon face, including early proposals for UX modalities and, most relevant to this post, AR permissions. West expresses her permission model as a matrix for describing application states:

The Project MARS permission matrix

The entire presentation was captured at a Mozilla event, and can be viewed below. The discussion on AR permissions begins at 23:40.

Timoni West presenting on Project MARS. (AR permissions begins at 23:40)

The Mixed Reality Service

Looking back to 2016, the first proposal of a general purpose geospatial index for augmented reality was Mark Pesce’s Mixed Reality Service (MRS). The MRS proposal was many years ahead of its time: before any commercialization or traction of any kind, on any AR platform, Pesce drafted a protocol to allow diverse providers to publish AR services in a discoverable manner.

Better still, unlike Unity’s Project MARS, the MRS specification is an open standard, and a working group within the W3C has been created to oversee its development, though little progress has been made since the most recent draft was published almost two years ago. One would hope that the imminent success of AR hardware and persistent AR platforms will soon attract more interest from the open standards and web development communities.

Tony Parisi presenting Mark Pesce’s Mixed Reality Service

In its current form, the MRS proposal is not yet complete, but serves as a stake in the ground around which proponents of an open AR web can gather.

What’s Missing?

Tune in next time for a closer look at the gaps and obstacles in front of these AR Cloud platforms and standards.
