Developing A Mixed Reality App for Meta Quest 2 — Pt. 3: Finishing Touches

Jad Meouchy · Published in badvr
11 min read · Dec 29, 2022

What if a mixed reality application could give you a magical superpower? Ok, we can’t quite do that, BUT we can let you see all your wireless networks at a single glance and interact with the data in real time. Close enough, right?

Welcome to Part 3 of a series about building a mixed-reality application (called SeeSignal) for the Meta Quest. This post covers the third and final sprint to the store, applying finishing touches and making plans for the future.

Download SeeSignal today — via the Meta Quest App Lab!

Sidenote: If you are new to this series, be sure to check out part one and part two first!

Recapping the Journey

The first article — Getting Signals — covered the early spark of inspiration that kicked off this adventure: discovering a new ability to see the invisible wireless radio signals all around the room, and reaching out and tapping on holograms for detailed information.

The second article — User Interface — covered the challenging design process for the interactive control elements, including the Heads-Up Display and graphical widgets. There were some rather unconventional solutions, like using a 3D printer pen to mock up elements in real life. Here are the big takeaways:

  • Keep all the user interface elements within reach, and try a “split” design for wide panels
  • Consider wrist-mounted buttons for convenient mode switching
  • Persist the state between sessions so the user never has to re-enter settings

Finalizing Design

After a round of preliminary user testing, several minor design changes were proposed and made, specifically in the HUD, signal popup, and settings panel. These edits were meant to reduce the number of “clicks” needed to complete common tasks, provide better indication and awareness of application status, and polish the visuals.

HUD Revision

First, the HUD was adjusted to give a stronger shape and include the icons in a more relevant area. Further, the little tab on the top section was rigged to show the current signal strength where the user is standing, with the tab color matching the color of the nearby sticks. The Minimap and Finder gadgets continue to be anchored to the sides but have been slightly redesigned for better symmetry and balance. They were also slightly angled inward to feel more cohesive.

Adding ‘arms’ to the HUD to intensify the visual perspective while keeping the central area clear
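To keep the tab and the sticks in agreement, it helps to route both through a single strength-to-color mapping. Below is a minimal sketch of that idea; the gradient asset and the RSSI normalization range are illustrative assumptions, not the shipping values.

using UnityEngine;

// Illustrative sketch: one shared strength-to-color mapping used by both
// the HUD tab and the signal sticks so their colors always agree.
// The gradient asset and the RSSI range here are assumptions.
public class SignalColorMap : MonoBehaviour
{
    [SerializeField] private Gradient strengthGradient; // e.g. red -> yellow -> green

    // Map a raw RSSI (roughly -90 dBm weak to -30 dBm strong) to 0..1, then to a color.
    public Color ColorForRssi(float rssiDbm)
    {
        float t = Mathf.InverseLerp(-90f, -30f, rssiDbm);
        return strengthGradient.Evaluate(t);
    }
}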

The animation state machine for the HUD is shown below, which highlights the architecture of how multiple layers are used to manage parallel states of things like visibility, Wi-Fi presence, controller type, etc. The Base Layer is used for general HUD visibility and only has two transitions for switching between Active and Inactive based on the Visibility boolean parameter.

Animation controller with multiple layers and parameters
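For context, the Base Layer’s Visibility parameter is driven from a small script. Here is a minimal sketch of that pattern; the parameter name comes from the animator above, while the class and method names are hypothetical.

using UnityEngine;

// Illustrative sketch: drives the HUD animator's Base Layer by toggling the
// "Visibility" boolean parameter described above. Only the parameter name
// comes from the animator; everything else here is hypothetical.
public class HudVisibility : MonoBehaviour
{
    [SerializeField] private Animator hudAnimator;

    private static readonly int VisibilityParam = Animator.StringToHash("Visibility");

    public void Show() => hudAnimator.SetBool(VisibilityParam, true);
    public void Hide() => hudAnimator.SetBool(VisibilityParam, false);

    public void Toggle()
    {
        bool visible = hudAnimator.GetBool(VisibilityParam);
        hudAnimator.SetBool(VisibilityParam, !visible);
    }
}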

Status icons were moved and enabled along the front tips of the gadget arms. Through these, one can quickly understand network connection state, hands/controller status, whether relative mode is enabled, and other quick-glance information. When the user switches to hands, the gadgets still undock, but the status icons remain where they are.

Signal Popup Improvement

When tapping on a stick, a panel of information appears to indicate the values associated with that point in space. The new design makes the information more understandable and usable by a wider base of users. Strength is now shown on a familiar five-star scale; depending on a player setting, either the raw RSSI value or a human-readable label (e.g. good/great) is displayed alongside frequency and confidence.

Balancing color, contrast, and information density

Subtle colored edges were added to the top and bottom of the panel that match the color indicator of the stick value, making the whole design feel cohesive and consistent. Compared to the original concept, this one is cleaner and more refined.
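For illustration, the strength formatting described above might look something like the sketch below; the five-star conversion thresholds and the good/great labels are assumptions, with the raw-versus-readable choice gated by a player setting.

using UnityEngine;

// Illustrative sketch of the popup's strength formatting: a five-star rating,
// plus either the raw RSSI or a human-readable label depending on a setting.
// The thresholds and label text are assumptions, not the shipping values.
public static class SignalFormatting
{
    public static int StarsForRssi(float rssiDbm)
    {
        float t = Mathf.InverseLerp(-90f, -30f, rssiDbm); // 0 = weak, 1 = strong
        return Mathf.Clamp(Mathf.RoundToInt(t * 5f), 1, 5);
    }

    public static string StrengthText(float rssiDbm, bool showRawValues)
    {
        if (showRawValues)
            return $"{rssiDbm:0} dBm";

        if (rssiDbm >= -50f) return "Great";
        if (rssiDbm >= -65f) return "Good";
        if (rssiDbm >= -80f) return "Fair";
        return "Poor";
    }
}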

Unified Settings Panel

A major piece of critical feedback received during the test period concerned the display parameters in the settings panel. Adjusting Minimap Zoom and HUD Distance was nice to have, but there was no way to preview a newly selected value without exiting the settings altogether.

A “ghost” visual feedback mechanism was engineered by listening for activity on those settings sliders and then temporarily drawing an alternate visual version of the Minimap and HUD frame. Now, when moving your fingertip to change those values, you see a live animated preview that lingers for several seconds.
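In Unity terms, that boils down to subscribing to the slider’s value-changed event, repositioning a faded copy, and hiding it again after a short delay. Here is a minimal sketch; the field names and the linger time are illustrative.

using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch of the "ghost" preview: while a settings slider is being
// dragged, show a faded preview object and hide it a few seconds after the
// last change. Field names and the 3-second linger are assumptions.
public class GhostPreview : MonoBehaviour
{
    [SerializeField] private Slider hudDistanceSlider;
    [SerializeField] private GameObject ghostHud;      // semi-transparent HUD copy
    [SerializeField] private float lingerSeconds = 3f;

    private void OnEnable()  => hudDistanceSlider.onValueChanged.AddListener(OnSliderChanged);
    private void OnDisable() => hudDistanceSlider.onValueChanged.RemoveListener(OnSliderChanged);

    private void OnSliderChanged(float distance)
    {
        // Reposition the ghost to preview the new distance in front of the camera.
        Transform cam = Camera.main.transform;
        ghostHud.transform.position = cam.position + cam.forward * distance;
        ghostHud.SetActive(true);

        // Restart the linger timer on every change.
        CancelInvoke(nameof(HideGhost));
        Invoke(nameof(HideGhost), lingerSeconds);
    }

    private void HideGhost() => ghostHud.SetActive(false);
}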

This was a major usability improvement and will represent a principle of UI/UX that we intend to maintain going forward: WYSIWYG (“What you see is what you get”). Watch the animations below to see it all in action.

Live editing the minimap zoom level as well as the HUD depth distance

HUD distance can be adjusted from close to far through a range of reasonable values, and a semi-faded “ghost” HUD will appear to show a real-time preview of the exact positioning. The same goes for the Minimap which has multiple zoom level options, all of which can be easily previewed through the live interaction. Hopefully, this helps the application become easier to use and more intuitive so that one doesn’t need to read an instruction manual to get up and running!

Integrating a Tutorial

SeeSignal and live data visualization are novel concepts in both signals analysis and mixed reality, and the Oculus Store is still primarily filled with games (though it’s becoming a hub for immersive productivity). Therefore, most new users won’t know what to expect after downloading this application. The solution is a tutorial or onboarding type experience, to introduce people to the idea slowly and manage expectations.

The first step was brainstorming the ideas and general story structure. This involved drawing illustrations, writing voice-over dialogue, and assembling a storyboard template document. See the original tutorial sketch below for an idea of one way to plan out a scripted storyline.

Early storyboard sketch of the pace, flow, and script for the onboarding experience

After many revisions, we boiled down the entire app experience into a single journey that started with the basics of wireless signals, explained the idea of using stick visualizations, and ended with user interface mechanics. Rather than recording this as a traditional 2D video playing in a window, we turned the tutorial into an interactive experience using Unity’s Timeline component.

Challenge: How do you tell a scripted story to a viewer who can look or walk away at any time?

The key for timelines and general scripted content in AR/VR is to make the action revolve around the user. Keep the pace consistent, add slight interactivity every 10–15 seconds, and use spatial audio to carry their attention through different parts of the view. Don’t make everything static in front of them. Don’t make the content so long or slow that they start wandering. Keep the user engaged through a combination of color, sound, and interesting content.

Animation timeline made of smaller component-level clips, with the ability to skip forward and backward to different chapters

One critical feature of a tutorial is the ability to skip it! Some users are very savvy or prefer to learn on their own. Other users might want to replay sections of the tutorial in case they missed something. Adding previous and next buttons is critical to maintaining a good user experience.

In our case, we rigged these skip buttons to jump to different markers in the Unity Timeline. This required a little special code to build a list of all the markers in the timeline and then navigate between them. The following code snippet shows the navigation part: it simply iterates over the chapter start times until it finds the ‘next’ one and jumps to it. Could this be written more simply with a C# LINQ query?

public void PlayNextChapter()
{
    // Current playhead position on the master timeline
    float currentTime = (float)masterTimeline.time;

    // chapter_start holds the ordered start times of each chapter;
    // jump to the first one that begins after the current time.
    for (int i = 0; i < chapter_start.Count; i++)
    {
        if (chapter_start[i] > currentTime)
        {
            SkipToTime(chapter_start[i]);
            return;
        }
    }
}
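To answer that question: yes. Below is a sketch of the same navigation as a LINQ query, plus one way the chapter start list could be built from Timeline markers. It assumes masterTimeline is a PlayableDirector, chapter_start is a List<float>, and each chapter begins at a marker placed on one of the timeline’s tracks.

// Additional usings needed at the top of the file:
// using System.Linq;
// using UnityEngine.Playables;
// using UnityEngine.Timeline;

// Illustrative sketch: the same "find the next chapter" step as a LINQ query.
public void PlayNextChapterLinq()
{
    float currentTime = (float)masterTimeline.time;

    // First chapter start time greater than the current time, or -1 if none remain.
    float next = chapter_start.Where(t => t > currentTime)
                              .DefaultIfEmpty(-1f)
                              .First();
    if (next >= 0f)
        SkipToTime(next);
}

// Illustrative sketch: building chapter_start from markers placed on the
// timeline's tracks (assumes one marker per chapter).
private void CollectChapterStarts()
{
    var timelineAsset = masterTimeline.playableAsset as TimelineAsset;
    chapter_start = timelineAsset.GetOutputTracks()
                                 .SelectMany(track => track.GetMarkers())
                                 .Select(marker => (float)marker.time)
                                 .OrderBy(t => t)
                                 .ToList();
}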

Adding Spatial Anchors

The loss of tracking, even momentarily, is devastating for any application that depends on accurate world positions. Once the spatial reference is lost, the confidence in world-anchored data is lost and the session might need to restart. While the Quest does have excellent visual tracking, certain lighting and environmental conditions can trigger position loss. Fortunately, there is a new experimental solution called spatial anchoring to help solve this problem.

Similar concepts exist for other platforms, including Microsoft’s Azure Spatial Anchors, ARKit’s ARAnchor, Magic Leap’s Persistent Coordinate Frame, etc. These modules use a variety of secret and not-so-secret methods to figure out where the user is, maximizing the stickiness of user-placed holograms.

Here are quick snippets of code showing how SeeSignal listens for OVRManager’s SpatialEntity events to learn when existing position anchors are found, then realigns the data into that new world construct. In our case, several components need to be moved and recalibrated (once) after an anchor is detected.

void OnEnable()
{
#if OCULUS_QUEST
    // Subscribe to anchor events in OVRManager
    OVRManager.SpatialEntityQueryResults += SpatialEntityQueryResults;
    OVRManager.SpatialEntitySetComponentEnabled += SpatialEntitySetComponentEnabled;
#endif
}

void OnDisable()
{
#if OCULUS_QUEST
    // Unsubscribe from anchor events in OVRManager
    OVRManager.SpatialEntityQueryResults -= SpatialEntityQueryResults;
    OVRManager.SpatialEntitySetComponentEnabled -= SpatialEntitySetComponentEnabled;
#endif
}

Technical Challenge: The Guardian

Regardless of whether anchors are used, a user can change their Guardian settings at any time and indirectly trigger a loss of position reference. This is unlikely to happen while the app is in use as it would require the user to temporarily exit, run a guardian setup process, and then re-enter. Still, there is an expectation that the app be smart enough to know when it’s entered a bad or non-functional state.

To mitigate this issue, special code was added to monitor and detect changes to the Guardian geometry and positions. There are both Oculus and OpenXR versions of this, and we stuck with the Oculus ones because they were easier to use and better documented. By tracking the Guardian geometry, we can more accurately communicate to the user when a new session is required, and soften the impact by attributing it to their Guardian-related action. This both reduces the perception of bugginess in our app and teaches the user that making such a change while the app is running may cause disruptions in the future.
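As a sketch of that monitoring, the snippet below periodically samples the Guardian play-area geometry through the Oculus Integration’s OVRBoundary helpers and flags any change; the polling interval and comparison tolerance are illustrative.

using UnityEngine;

// Illustrative sketch: periodically sample the Guardian play-area geometry and
// flag a change so the app can tell the user a new session is needed.
// The polling rate and comparison tolerance are assumptions.
public class GuardianWatcher : MonoBehaviour
{
    [SerializeField] private float pollIntervalSeconds = 2f;

    private Vector3[] lastGeometry;

    private void Start()
    {
        lastGeometry = SampleGeometry();
        InvokeRepeating(nameof(CheckGuardian), pollIntervalSeconds, pollIntervalSeconds);
    }

    private void CheckGuardian()
    {
        Vector3[] current = SampleGeometry();
        if (GeometryChanged(lastGeometry, current))
        {
            lastGeometry = current;
            // Surface a friendly "your Guardian changed, please restart the session" prompt here.
            Debug.Log("Guardian geometry changed; world-anchored data may be invalid.");
        }
    }

    private Vector3[] SampleGeometry()
    {
        return OVRManager.boundary.GetConfigured()
            ? OVRManager.boundary.GetGeometry(OVRBoundary.BoundaryType.PlayArea)
            : new Vector3[0];
    }

    private static bool GeometryChanged(Vector3[] a, Vector3[] b)
    {
        if (a.Length != b.Length) return true;
        for (int i = 0; i < a.Length; i++)
            if (Vector3.Distance(a[i], b[i]) > 0.05f) return true;
        return false;
    }
}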

Designing the Hero Image

After looking at other banners in the Store and App Lab, we noticed specific trends. To both blend in and stand out, we decided to match the general composition of these images but use a stylized photograph rather than an illustration. See the list and concept banners below for the thought process that drove the final image.

  • Logo in the center
  • Vector-style illustration
  • Minimal text beside the logo
  • Human or character’s face visible
  • One strong visual element taken from user interface

Conceptualizing the primary image for the App Lab store listing

To actually build the image, we first went into a nearby startup’s office and politely asked to borrow their largest room for a single photo. We knew exactly what we wanted: the simplest characterization of the app’s primary feature, a person seeing and grabbing a signal stick.

After snapping the photo, the background (including the busy person behind the subject) was desaturated so the foreground colors and skin tones would pop. Rendering the world in black and white also better aligns with the Quest’s passthrough feature, which is currently monochrome. We suspect that passthrough will be full color in the next version of the Quest (Pro?), but this concept is still sound.

The desaturation and blurriness of the background help create separation from the foreground subject. The stick is green because that’s the color of strong signals, and that’s what a user is figuratively and literally reaching for when using SeeSignal. The next step is adding the logo and painting several assorted red and yellow sticks around the user to fill the room with signals and ultimately draw attention to the single green stick in the center.

Final backplate concept; the logo and additional signals will be worked in later

Conclusion

Through the long design phase of the marketing imagery for the app, many specific questions and comments came up that didn’t have the most intuitive answers. In fact, for many questions the answer is to just wait and see how the users respond.

  • Will people think SeeSignal is a game where you hunt for red and green crystals and tap to collect them?
  • How do we convey that this is a mixed-reality-only application?
  • Should any user interface be shown in the images, and is it really relevant for communicating the concept? If it changes later, does the image need to be updated?
  • Who exactly is the audience for mixed reality home productivity?

During the course of the upcoming beta testing and launch, and in coordination with other app developers, we hope to answer these questions in a strategic plan. The mixed reality world is new and unpaved, and nobody really knows what it’s going to be when it grows up. Hopefully, with applications like SeeSignal, we can learn what to do (and not do) through hands-on experience and experimentation.

Download SeeSignal today — via the Meta Quest App Lab!

What’s Next

In the next major app release, we hope to address the following ideas and are seeking feedback on the general direction of this app. Don’t hesitate to reach out directly to leave comments on the blog posts. The whole BadVR family looks forward to engaging with members of the community so we can build cool and useful stuff and have fun while doing so!

  1. Implement shared spatial anchors, when available, so multiple people can discover signals simultaneously. Would you use this?
  2. Do you have any specific ideas about where to take this?
  3. Should there be a “Pro” version with extra features?

Useful Links

Interested in reviewing previous posts, or sharing them with your friends? Direct links to all 3 posts in our series about developing VR apps with Meta’s Mixed Reality API are below:

Part 1 — https://medium.com/badvr/mixed-reality-meta-quest-getting-signals-f5c53579b73c

Part 2 — https://medium.com/badvr/mixed-reality-for-meta-quest-user-interface-3b9a084198c3

Part 3 — https://medium.com/badvr/mixed-reality-for-meta-quest-finishing-touches-ffdc54590311
