AR Contextual Messaging (the UX)

Lewey Geselowitz
3 min read · Jun 29, 2018

(original article, part 2 of AR series)

What is an AR Contextual Message?

Messages that are made or viewed within the context of a matching location or object.

Image: a contextual image of someone placing cookies in a display case. When viewed by another, the image shows where the capture was taken and includes a summary of where, when, and what was said.

How are AR Contextual Messages made?

They are typically made by users who want to share content related to the surrounding area or to similar objects. Much like common messaging on smartphones today, contextual messaging is the most direct and useful form of AR, and it works naturally with existing messaging systems.

Image: a sharing interface for contextual messaging includes the contextual overlay and dictation preview, while automatically tagging by the location and object.
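The auto-tagging described above can be sketched as a simple data model. This is a hypothetical sketch, not from the article: the `ContextualMessage` type, its fields, and `capture_message` are assumptions about what such a message might carry.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Tuple

@dataclass
class ContextualMessage:
    """A message anchored to the place and objects it was captured with."""
    text: str                      # what was said (e.g. via dictation)
    location: Tuple[float, float]  # (latitude, longitude) of the capture
    object_tags: List[str]         # objects recognized in the frame
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def capture_message(text, location, recognized_objects):
    """Build a message, auto-tagging it by location and detected objects."""
    return ContextualMessage(text=text,
                             location=location,
                             object_tags=sorted(set(recognized_objects)))

msg = capture_message("Fresh cookies in the case!",
                      (47.6062, -122.3321),
                      ["display case", "cookies", "cookies"])
```

Note that the user only supplies the text; the location and object tags come from the capture itself, which is what makes the message "contextual".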

How are AR Contextual Messages brought up? (SEARCH)

Visual search is a fundamental feature of AR (it is inherent to world tracking, for instance); therefore, search will likely be the primary interaction for finding content. In this design the first result shows up in preview mode, while the others are indicated with lists or icons, because there is no unique answer for any particular search query (the idea of automatic “triggering” rather than search is what limits many existing AR applications to a single narrow use case). Search results can be easily tailored using voice or other filters.

Image: AR search results shown in context. The top result is previewed in place, lists and icons are shown for alternate results, and the user has options such as voice for search refinement.
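The preview-first presentation above can be sketched as a small ranking step. This is an illustrative assumption, not the article's implementation: the results and scores stand in for whatever a real visual-search backend would return.

```python
def present_results(results, scores):
    """Rank results by score; preview the top one in place and
    show the rest as a list/icons, since no query has a unique answer."""
    ranked = [r for r, s in sorted(zip(results, scores),
                                   key=lambda pair: pair[1],
                                   reverse=True)]
    if not ranked:
        return None, []
    return ranked[0], ranked[1:]  # (previewed in place, alternates)

# Hypothetical results for a query made while looking at a bakery case.
preview, alternates = present_results(
    ["bakery review", "cookie recipe", "store hours"],
    [0.4, 0.9, 0.2])
```

Voice or other filters would simply re-run the ranking over a narrowed result set.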

What does an AR Contextual Messaging App look like?

Based on the theory that searching and sharing are the fundamental pieces of functionality required (see part 1 of this series), and that the hardware steps to perform both are similar right up until the end, it is likely that the ideal AR app will simply search automatically, and then optionally let you share that capture, thus covering both use cases:

Image: Summary of the interaction flow, see part 1 in the series.

Image: FINAL APP: automatically searches, allows voice for recording and/or refinement, and has a single button for sharing that capture (some people will even set sharing to automatic, making every moment a search query and a statement of their lives). This is likely to be THE app used in AR, especially in collaborative environments, allowing content to be shared and received in context; just as communication was originally meant to be.
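The combined flow can be sketched as one loop: every capture triggers a search, and sharing is an optional step on the same capture. A minimal sketch, assuming hypothetical `capture`, `search`, and `share` callbacks standing in for the real hardware and network steps:

```python
def ar_loop(capture, search, share, auto_share=False):
    """One iteration of the ideal AR app's flow:
    every capture is searched automatically, and sharing of that same
    capture is optional (or automatic, for users who opt in)."""
    frame = capture()        # grab the current view (stubbed here)
    results = search(frame)  # search runs on every capture
    shared = False
    if auto_share:
        share(frame)         # every moment becomes a statement
        shared = True
    return results, shared

# Stub dependencies to illustrate the flow.
shared_log = []
results, shared = ar_loop(
    capture=lambda: "frame-001",
    search=lambda f: [f"match for {f}"],
    share=shared_log.append,
    auto_share=True)
```

The key design point is that search and share branch from the same capture, so covering both use cases adds only a single button (or setting) on top of the search flow.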

Tune in next time for more on the art and practicalities of AR UI layout, and the living sculpture of volumetric ecosystem modelling.

- Lewey Geselowitz
