Developing A Mixed Reality App for MetaQuest 2 — Pt. 2: User Interface

Jad Meouchy
Published in badvr · 11 min read · Dec 29, 2022

What if a mixed reality application could give you a magical superpower? Ok, we can’t do that, BUT we can let you see all your wireless networks at a single glance and interact with the data in real time. Close enough, right?

Welcome to Part 2 of a series about building a mixed-reality application called SeeSignal for the Meta Quest. This second article digs into engineering challenges, outlining critical issues and their detailed solutions. The next and final article in this series will chronicle the finishing touches as the app makes its way into the App Lab store.

Download SeeSignal today — via the MetaQuest App Lab!

Sidenote: If you are new to this series, be sure to also check out the other posts: part one and part three!

Modes of Operation

The SeeSignal app is exploratory, requiring you to get up and walk around to find signals in space. Along the way, there are built-in tools to help, like the Minimap and the Finder. There are also many options for customizing the look and feel of signals, to accommodate a variety of spaces and preferences.

Originally, the app had three main operating modes that could be toggled by pressing a button on the wrist, just below the palm. The app would default to Explore, where all the visual gadgets were shown at all times.

  • Explore shows the heads-up display (HUD) with gadgets docked inside
  • Edit reveals the user interface console for changing visual settings and hides the HUD
  • Minimized hides both the HUD and the console for a clean view

After testing and feedback, the Minimized mode was found to be confusing, niche, and effectively unnecessary. Instead, toggling the heads-up display was moved into a setting inside the existing Edit mode. While this made hiding the HUD a “two click” operation, it simplified the app from three states down to two.

Technically speaking, there is also a user-invisible mode called Onboarding that manages the interactive help experience. All these modes were implemented using a coding pattern called a State Machine where, in this case, an enumerated variable that stores the current active mode is persisted across multiple invocations of the program. The persistence is achieved using Unity’s simple PlayerPrefs mechanism.

public enum SSQuestMode : int
{
    /// <summary>
    /// interactive training mode
    /// </summary>
    Onboarding = 0,

    /// <summary>
    /// regular operation mode
    /// hud is visible by default
    /// </summary>
    Explore = 1,

    /// <summary>
    /// settings panel visible
    /// </summary>
    Design = 2
}
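As a rough sketch of how that persistence might look, the snippet below saves and restores the active mode with PlayerPrefs. The preference key, class, and method names here are hypothetical stand-ins; only PlayerPrefs and the enum come from the project above.

using UnityEngine;

public class SSModeManager : MonoBehaviour
{
    // hypothetical preference key; the real key name may differ
    private const string PREFS_QUEST_MODE = "quest_mode";

    public SSQuestMode CurrentMode { get; private set; }

    private void Start()
    {
        // restore the last saved mode, defaulting to Explore on first launch
        CurrentMode = (SSQuestMode)PlayerPrefs.GetInt(PREFS_QUEST_MODE, (int)SSQuestMode.Explore);
    }

    public void SetMode(SSQuestMode mode)
    {
        CurrentMode = mode;

        // persist the selection across sessions and headset reboots
        PlayerPrefs.SetInt(PREFS_QUEST_MODE, (int)mode);
        PlayerPrefs.Save();
    }
}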

Wrist and Controller Buttons

Since this app supports both controllers and hands within the same play session, activation of the mode change needed to be simple, easy, and intuitive. On the controllers, it’s just a matter of pressing the “Primary” button on either controller: A on the right, X on the left. When showing the controller inside the application, those buttons are painted with the menu icon while the other buttons are left blank, providing a subtle yet clear clue as to which one should be pressed.
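For reference, a minimal sketch of that primary-button check using the Oculus Integration’s OVRInput API; the ToggleMode handler and the surrounding component are hypothetical:

using UnityEngine;

public class ModeButtonInput : MonoBehaviour
{
    private void Update()
    {
        // Button.One maps to A on the right Touch controller and X on the left
        bool pressedA = OVRInput.GetDown(OVRInput.Button.One, OVRInput.Controller.RTouch);
        bool pressedX = OVRInput.GetDown(OVRInput.Button.One, OVRInput.Controller.LTouch);

        if (pressedA || pressedX)
        {
            ToggleMode(); // hypothetical handler that switches between Explore and the settings mode
        }
    }

    private void ToggleMode()
    {
        // hook into the app's mode manager here
    }
}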

Within hand tracking, there aren’t really any defined standards of interaction. Some applications use wristwatch style menus, some use buttons above or below the eyeline, some use gestures where you have to position your fingers at just the right angle to activate the function.

Animated button below the palm for switching operating modes

Initially, this application used a wristwatch-style button, and it made for a great screenshot. But people found it distracting and asked us to move it to the underside of the hand, so that activating its function requires more intent. The drawback is that if someone launches the app without knowing there’s a button there, they may never notice it.

There’s no way to completely eliminate a learning curve, so we opted to make the button fade in based on its angle and then play a little animation once fully visible. This keeps the button from interrupting normal activity but also catches the eye once activated.
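A minimal sketch of how such an angle-based fade could be driven in Unity is below; the transform references, axis choice, and threshold are assumptions rather than the shipped implementation:

using UnityEngine;

public class PalmButtonFade : MonoBehaviour
{
    public Transform palm;            // hand anchor transform
    public Transform head;            // center eye / camera transform
    public CanvasGroup buttonGroup;   // controls the button's visibility
    public Animator buttonAnimator;   // plays the attention animation

    [Range(0f, 1f)] public float showThreshold = 0.6f;
    private bool fullyVisible;

    private void Update()
    {
        // how directly the underside of the palm faces the headset (1 = facing, 0 = perpendicular)
        // which local axis points out of the palm depends on the hand rig, so treat -palm.up as a placeholder
        Vector3 palmDown = -palm.up;
        Vector3 toHead = (head.position - palm.position).normalized;
        float facing = Mathf.Clamp01(Vector3.Dot(palmDown, toHead));

        // fade the button in proportionally to how much the palm faces the user
        buttonGroup.alpha = Mathf.SmoothStep(0f, 1f, facing);

        // once fully visible, play a small animation to catch the eye
        if (facing > showThreshold && !fullyVisible)
        {
            fullyVisible = true;
            buttonAnimator.SetTrigger("Pop");
        }
        else if (facing <= showThreshold)
        {
            fullyVisible = false;
        }
    }
}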

Minimap

As covered in the previous post, SeeSignal is best visually characterized by filling the room with “signal sticks” that indicate the presence and relative strength of the active wireless network. To help maintain bearings while exploring, and to aid navigation, a small map gadget was added. It takes a subset of the nearby full-size sticks and presents them in a miniature 3D grid anchored to the HUD or the physical controller.

The Minimap reorients to the user’s direction and, when using controllers, can be moved, pivoted, and even inspected up close. It shows the same red, yellow, and green colors, providing situational awareness similar to what is found in popular video games. When using hand tracking, the disc-shaped stick volume is docked into the HUD.

Miniature map shows a little version of the big stick field
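Conceptually, the Minimap just samples the sticks nearest the player and remaps them into a small, anchored volume. Here is a rough sketch of that sampling step with hypothetical types and field names; the real stick data structures differ:

using System.Collections.Generic;
using UnityEngine;

// hypothetical record for one room-scale signal stick
public struct SignalStick
{
    public Vector3 worldPosition;
    public Color color;
}

public class MinimapSampler : MonoBehaviour
{
    public Transform player;        // headset / player position
    public float sampleRadius = 3f; // meters of room-scale sticks to include
    public float miniScale = 0.02f; // shrink factor from room space to minimap space

    // keep only sticks near the player and remap them into minimap-local space
    public List<SignalStick> SampleNearbySticks(IReadOnlyList<SignalStick> allSticks)
    {
        var samples = new List<SignalStick>();
        foreach (var stick in allSticks)
        {
            if (Vector3.Distance(stick.worldPosition, player.position) <= sampleRadius)
            {
                samples.Add(new SignalStick
                {
                    worldPosition = (stick.worldPosition - player.position) * miniScale,
                    color = stick.color
                });
            }
        }
        return samples;
    }
}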

Building the Minimap was an adventure in engineering and shader code. The initial thought was to copy the big room-size sticks and shrink them down. However, this would effectively double the triangle count of the whole scene and could hurt the framerate. So the mini-grid of sticks was re-engineered as a grid of quad rectangles using a “geometry shader”.

Honestly, this was more of an excuse to learn geometry shaders, which are a little beyond the scope of this article. Here’s a snippet of the tricky part, but read below to find out why we ended up scrapping this!

[maxvertexcount(4)]
void geom(point v2g i[1], inout TriangleStream<g2f> tristream)
{
    // billboard axes: face the quad toward the camera
    float3 to_camera = normalize(i[0].worldpos - _WorldSpaceCameraPos);
    float3 up = float3(0, 1, 0);
    float3 right = cross(up, to_camera);

    // make it rectangular like a stick
    up *= i[0].psize * _PointSize * 1.25;
    right *= i[0].psize * _PointSize / 3.0;

    g2f o;
    UNITY_INITIALIZE_OUTPUT(g2f, o);
    UNITY_SETUP_INSTANCE_ID(i[0]);
    UNITY_TRANSFER_INSTANCE_ID(i[0], o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);

    // shared attributes for all four corners
    o.color = i[0].color;
    o.psize = i[0].psize * _PointSize;
    o.uv = i[0].uv;

    // emit the four corners as a triangle strip
    // top-left
    o.pos = mul(UNITY_MATRIX_VP, float4(i[0].worldpos - right + up, 1));
    o.uv_quad = float2(0, 1);
    tristream.Append(o);

    // top-right
    o.pos = mul(UNITY_MATRIX_VP, float4(i[0].worldpos + right + up, 1));
    o.uv_quad = float2(1, 1);
    tristream.Append(o);

    // bottom-left
    o.pos = mul(UNITY_MATRIX_VP, float4(i[0].worldpos - right - up, 1));
    o.uv_quad = float2(0, 0);
    tristream.Append(o);

    // bottom-right
    o.pos = mul(UNITY_MATRIX_VP, float4(i[0].worldpos + right - up, 1));
    o.uv_quad = float2(1, 0);
    tristream.Append(o);
}

Suffice it to say, the shader worked great… until it didn’t. Generating triangles on the fly involves some fiddly math and geometry, and it means adding another stage between the vertex and fragment shaders in a standard HLSL shader, a stage that could definitely be better documented. The big problem we could not resolve was the requirement for multi-pass rendering.

Something about how geometry shaders work on this platform makes it difficult to implement single-pass rendering. Everything worked great in the default multi-pass mode, but during performance tuning we moved toward Oculus’ highly performant Multiview rendering, and then the geometry shader was only rendered in one eye. This wasn’t an Oculus limitation specifically, but it did cause confusion for a while when the only thing we had changed was a small dropdown box inside the build player settings.

Make sure to test thoroughly after choosing this option

We debugged, traced, read the manual, etc., and ultimately gave up, reimplementing the feature as a variant of our existing stick visualizer. We suspect the proper way to implement geometry shaders with single-pass rendering might be to detect which eye is currently being rendered and effectively emit the procedural geometry twice, but this revelation came a bit too late, as our patience had expired. Please chime in if you face a similar challenge and are able to resolve it!

Heads-Up Display (HUD)

We’ve all seen movies about fighter jets and pilots, and been collectively fascinated by the digital holographic overlays that show altitude, bearings, and even the anxiety-inducing red “missile lock”. Conceptually, the HUD is quite brilliant, bringing information into the field of view that would otherwise be distracting. Some modern automobiles are starting to offer this option, and it can reduce driver distraction while increasing awareness of surroundings and speed.

Inside SeeSignal, we fully embrace the philosophy of immersion. Radio frequency data is traditionally displayed in flat, two-dimensional interfaces like charts and graphs. But signals actually exist in 3D, all around us at all times. Thus, painting signals directly into the field of view can help contextualize the information better and increase understanding. Ambient information like connection status, quantitative signal quality, and other details are also brought into view through a persistent overlay that can be toggled on and off at the user’s whim.

Designing a HUD is challenging. You want just the right amount of information without blocking the view, with the whole interface almost becoming perceptually invisible, so the information gets into your brain without you having to actually look at it.

Sketches of potential HUD configurations with various modular components
Physical model of HUD made with a 3D printer pen, for testing depth perception and general shape

To get a sense of how the HUD would physically feel, we bought a 3D printer pen and made a few plastic versions in real life. Then we held the pieces up to our faces and cameras to gauge depth and distance and to fine-tune the animation. 3D printer pens are often forgotten, but they serve a great purpose for quickly prototyping concepts. In this case, the subtle angling and the puzzle-piece fit of the design only became obvious once the physical mockup was created.

Functional mockups inside Unity, with opening and closing animations, and live data connection

When holding the plastic HUD out with our hands, we tried different positions to find the most natural and comfortable movement of the top and bottom elements. The end result was an almost jaw-like opening and closing where the bottom would drop and fold down and the top would curve away slightly afterwards. Hopefully, this attention to detail paid off! Regardless, the process was fun, and we plan to incorporate 3D pens into future design sprints. Let us know in the comments if you find any great examples of this methodology.
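For illustration only, the jaw-like motion could be approximated with a short coroutine that rotates the two halves over time; the pivots, angles, and timing below are hypothetical:

using System.Collections;
using UnityEngine;

public class HudJawAnimator : MonoBehaviour
{
    public Transform topPanel;      // pivots near its lower edge
    public Transform bottomPanel;   // pivots near its upper edge
    public float openAngle = 25f;   // degrees the bottom half folds down
    public float duration = 0.4f;   // seconds for the full opening motion

    // usage: StartCoroutine(hudJawAnimator.Open());
    public IEnumerator Open()
    {
        float t = 0f;
        while (t < duration)
        {
            t += Time.deltaTime;
            float eased = Mathf.SmoothStep(0f, 1f, t / duration);

            // bottom drops and folds down; top curves away more subtly behind it
            bottomPanel.localRotation = Quaternion.Euler(eased * openAngle, 0f, 0f);
            topPanel.localRotation = Quaternion.Euler(-eased * openAngle * 0.5f, 0f, 0f);
            yield return null;
        }
    }
}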

Player Settings

Empowering the user with customization was a foundational part of this application. Signals are unpredictable and can vary wildly across residential, commercial, and industrial settings, even outdoors. While the app is only recommended for indoor use, allowing the display to be adjusted to any environment ultimately makes it more versatile and useful.

Below is the initial concept sketch of a split-style settings panel, showing system-level and display-level parameters. On the right, the sticks themselves can be changed to different shapes, including cubes and dots. Can you think of any other shapes that might be nice to add?

Concept art for a split panel layout of settings and options

When we implemented this concept in an initial prototype, the pivoting animation and sizzle factor were strong, but something felt off. It almost felt like peeking into the cockpit of a plane: lots of cool-looking blinky stuff, but where would you even start? Then there were the two info buttons on either side that would activate popup tooltips. So many buttons and things to click on; the interface was frankly intimidating and would only get worse if more options were added.

Animating the split panels into view from both sides

We decided to opt for a tab-style navigation interface. This would be familiar to people and was similar to what is built into the Oculus operating system. As covered in the previous blog post, our preference is for physical touch buttons over laser beam pointers, so the user has to actually reach out to interact with this panel. In that sense, it still feels like a physical control panel, but it is a lot easier to approach. Different options are grouped into categories, and at any given time there aren’t many buttons/sliders/checkboxes visible, so the interface does not feel cluttered.

Eventually moving to tab-style navigation to simplify the interaction with each group

At a technical level, implementing the settings panel was challenging. There were many hooks into the application’s functionality, as well as the requirement that settings persist between sessions and headset reboots. Everyone expects that when they launch an app, it will just pick up from where they left off. This is not an expectation in gaming, but it is in productivity applications. Imagine if the email app on your phone lost its place every time you switched back… so the application had to be “smart” and remember all of the user’s preferences.

Again using PlayerPrefs, blocks of code like the following were added into the Start method of the main scene manager script. These functions would listen for delegates/events fired when the user moved the sliders in the settings panel, and then kick off the adjustment to the relevant data structures. In this case, the stick structure is called stickMeshDrawer and has a member variable named nodeScale that controls stick spacing. Note the mismatch between double and float for the delegate parameter, an oversight that will be corrected in a future code cleanup pass.

stickSizeSlider.OnSliderChange += (double value) =>
{
    stickMeshDrawer.nodeScale = (float)value;
    PlayerPrefs.SetFloat(PREFS_NODE_SIZE, (float)value);
    PlayerPrefs.Save();
};
stickSizeSlider.SetValue(PlayerPrefs.GetFloat(PREFS_NODE_SIZE, (float)stickSizeSlider.value));

The density slider worked in much the same way, setting parameters on all the different stick and Minimap visual renderers to trigger reallocation and realignment of the visual objects. Stick drawing scripts follow an invalidation design pattern: they poll for changes to their public member variables (like nodeScale) inside their Update functions and then trigger a rebuild of the procedurally generated geometry. Fun.

stickDensitySlider.OnSliderChange += (double value) =>
{
    ...
};
stickDensitySlider.SetValue(PlayerPrefs.GetFloat(PREFS_NODE_DENSITY, (float)stickDensitySlider.value));
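To make the invalidation pattern above concrete, here is a minimal sketch of a drawer that polls its public fields in Update and rebuilds only when something changed; the class and member names are hypothetical stand-ins for the real scripts:

using UnityEngine;

public class StickDrawerSketch : MonoBehaviour
{
    public float nodeScale = 1f;    // stick spacing, set by the settings sliders
    public float nodeDensity = 1f;  // stick density, set by the settings sliders

    private float lastScale = -1f;
    private float lastDensity = -1f;

    private void Update()
    {
        // poll the public fields; if anything changed since the last build, invalidate and rebuild
        if (!Mathf.Approximately(nodeScale, lastScale) ||
            !Mathf.Approximately(nodeDensity, lastDensity))
        {
            lastScale = nodeScale;
            lastDensity = nodeDensity;
            RebuildSticks();
        }
    }

    private void RebuildSticks()
    {
        // hypothetical: reallocate and realign the procedurally generated stick geometry
    }
}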

The end result was a rewarding ability to live-edit the stick parameters and see the results of the customization in real time. Press your finger into or across the slider, and the field of sticks instantly adjusts to your preference. Everything felt fast and smooth, and yielded a clean user experience.

Live editing stick parameters including spacing, size, and density

Feedback

  • Do you like the split design or the tab design better?
  • When/where would you use the stick spacing and size adjustments?
  • Have you ever used 3D printer pens before?

Download SeeSignal today — via the MetaQuest App Lab!

What’s Next? Part 3!

Interested in continuing to follow our development journey? Follow the link below to read on:

Part 3 — https://medium.com/badvr/mixed-reality-for-meta-quest-finishing-touches-ffdc54590311
