SeeSignal: Technical Challenges of Visualizing Invisible Network Data

Brian Wong
Published in badvr
Oct 2, 2019 · 8 min read

We often think of our human senses as all-encompassing, an absolute representation of the world. In reality, the visible spectrum is only a small sliver of the whole story. With BadVR’s SeeSignal app, we set out to give your visual system a major upgrade! In keeping with our overarching mission to visualize the invisible worlds that persistently surround us, we chose to explore the three signals most relevant to modern life: WiFi, Bluetooth, and cellular.

This is a technical breakdown of the challenges we encountered, the lessons we learned, and the decisions we made while developing SeeSignal.

Spatial Computing: New Unique Challenges

The traditional approach to analyzing a network is to wave a radio-frequency receiver around and read the signal strength at a single point in space. This method gives very limited insight into a network, so we set out to visualize full networks in their entirety. We knew from the beginning that if we wanted to create something compelling, useful, and intuitive, we would need the perfect device and platform for our purposes. Magic Leap and Spatial Computing ultimately became our sandbox.

In both SeeSignal and Spatial Computing, the real world is just as important as the digital layer. Visualizing network data requires the context of its environment: to understand why signals are degrading or interfering with each other, it helps to know the geometry of the space around them. With a full spatial understanding of a network, complex patterns become obvious. This is exciting for room-scale and larger networks, and it has interesting implications for 5G and our growing digital infrastructure.

Keeping The Action Within View!

One of the most prominent challenges in Spatial Computing is working within the relatively narrow field of view (FOV) of current hardware. We had to keep this in mind throughout our design process and lean into the benefits of keeping content in front of the user. We learned a few technical and creative best practices that, applied in the right situations, reduced the perceived constraints.

The first FOV “hack” we discovered was to use an unlit black vignette on the camera. This is simple to do and creates an incredibly effective illusion on Magic Leap. Black is the absence of light, so it can’t be rendered using light. Luckily, this can be used to a developer’s advantage in many ways, particularly with occlusion. An unlit black vignette softens the perceivable edges of the screen and creates a fading illusion, which allowed us to put more content in the user’s peripheral vision. A rough sketch of the setup is below.
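Here is a minimal sketch of how such a vignette can be set up in Unity: a quad parented to the headpose camera, sized to fill the frustum, and rendered with an unlit transparent material whose texture fades from clear at the center to opaque black at the edges. The material setup and field names are assumptions for illustration, not SeeSignal’s actual implementation.

```csharp
using UnityEngine;

// Sketch of the "unlit black vignette" trick, assuming a vignette texture
// (transparent center, opaque black edges) on an Unlit/Transparent material
// assigned in the Inspector.
public class CameraVignette : MonoBehaviour
{
    [SerializeField] Material vignetteMaterial; // Unlit/Transparent, black RGB
    [SerializeField] float distance = 0.4f;     // just past the near clip plane
    [SerializeField] float overfill = 1.2f;     // slightly overfill the FOV

    void Start()
    {
        // Parent a quad to the headpose camera so it always covers the view.
        var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        Destroy(quad.GetComponent<Collider>());           // purely visual
        quad.GetComponent<MeshRenderer>().material = vignetteMaterial;

        Camera cam = Camera.main;
        quad.transform.SetParent(cam.transform, false);
        quad.transform.localPosition = new Vector3(0f, 0f, distance);

        // Size the quad to fill the camera frustum at that distance.
        float height = 2f * distance * Mathf.Tan(cam.fieldOfView * 0.5f * Mathf.Deg2Rad);
        quad.transform.localScale = new Vector3(height * cam.aspect, height, 1f) * overfill;
    }
}
```

Because the display is additive, the black edges of the vignette simply dim content toward the borders of the view, so objects appear to fade out rather than being clipped by a hard edge.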

Another trick for keeping content in front of the user is to use headpose and its forward direction as a reference point. This is especially helpful for user interfaces. Unlike traditional 2D screens, where the edges of the display are obvious, Spatial Computing can use the entire room, and it isn’t always clear where to place content. One approach we learned was to move content in front of the user at a readable distance. However, having a main menu constantly floating in front of you isn’t comfortable or a good user experience. So we move content into the user’s view and lock it in place; once the content drifts out of the user’s sight, it glides back into view and locks into its new relative position, as shown in the sketch below. This was helpful for main menus and message panels.
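Here is a minimal sketch of that behavior, assuming the headpose camera is the scene’s main camera; the field names and thresholds are illustrative, not the values we ship.

```csharp
using UnityEngine;

// "Move into view, then lock in place" panel behavior. The panel stays
// world-locked until it drifts far enough out of the user's view, then glides
// back to a readable spot in front of the headpose camera.
public class LazyFollowPanel : MonoBehaviour
{
    [SerializeField] float distance = 1.2f;  // readable distance in meters
    [SerializeField] float maxAngle = 35f;   // how far off-center before re-centering
    [SerializeField] float moveSpeed = 3f;

    Vector3 targetPosition;
    Quaternion targetRotation;

    void Start() => Recenter();

    void Update()
    {
        Transform head = Camera.main.transform;

        // Only pick a new target once the panel leaves the comfortable view cone.
        Vector3 toPanel = (transform.position - head.position).normalized;
        if (Vector3.Angle(head.forward, toPanel) > maxAngle)
            Recenter();

        // Glide toward the locked target; once reached, the panel stays put.
        transform.position = Vector3.Lerp(transform.position, targetPosition, Time.deltaTime * moveSpeed);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, Time.deltaTime * moveSpeed);
    }

    void Recenter()
    {
        Transform head = Camera.main.transform;
        targetPosition = head.position + head.forward * distance;
        targetRotation = Quaternion.LookRotation(targetPosition - head.position, Vector3.up);
    }
}
```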

What Does Position Really Mean?

In a completely virtual world, global position is well defined. An origin point is designated and all virtual objects reference it. However, when you bring in the context of the real world, as Spatial Computing does, how do we define that origin point? Is it the center of your room? The center of the virtual world? The center of the Earth?

It turns out that for Magic Leap One, the origin of the virtual world is defined as the device’s position at the moment you turn it on. This makes perfect sense, since the device has no context of the room at boot-up. From a Creator standpoint, this makes defining positions of virtual objects a little trickier than placing them by “world position”. The best solution is to position objects by referencing either the user’s head position or anchors in the room.

I often find myself re-centering content relative to the user’s headpose at scene start. A tip I often give to Creators developing demos for conference show floors is to make re-centering a public function that can be triggered with Magic Leap Control input. This makes curating an experience much easier in a busy and unpredictable environment. A minimal sketch is below.
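The sketch below shows the general pattern. To stay SDK-agnostic, a keyboard key stands in for the Control input event; in a real project you would wire Recenter() to whichever Control callback your input system exposes.

```csharp
using UnityEngine;

// Public re-centering hook for show-floor demos: snaps the content root to a
// comfortable spot in front of the user's current headpose.
public class ContentRecenter : MonoBehaviour
{
    [SerializeField] Transform contentRoot;   // parent of all scene content
    [SerializeField] float distance = 1.5f;

    void Update()
    {
        // Stand-in trigger; bind this to a Control button event in practice.
        if (Input.GetKeyDown(KeyCode.R))
            Recenter();
    }

    // Public so UI buttons or controller callbacks can call it directly.
    public void Recenter()
    {
        Transform head = Camera.main.transform;
        Vector3 flatForward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        contentRoot.position = head.position + flatForward * distance;
        contentRoot.rotation = Quaternion.LookRotation(flatForward, Vector3.up);
    }
}
```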

Another useful reference point, particularly when you’re interested in persistence and multi-user experiences, is anchors in the environment. On the Lumin platform, these are called persistent coordinate frames (PCFs). Multiple anchors can exist in an environment, and they remain consistent between app reboots and even device restarts, as long as the landscape is recognized.

The Wild West of Interface

I often think to myself, “What is the optimal interface for Spatial Computing?” For PC, it’s well established that keyboard and mouse is the best way to interface with screens. In recent years on mobile, tap and swipe became the preferred interactions. For Spatial Computing, it’s still to be determined, which makes it an exciting time to be in the field. All we know for certain is that it will be immersive! Luckily, the Magic Leap platform provides a suite of sensors and inputs (e.g. hand tracking, eye tracking, the 6DoF Control). When these are integrated with each other and with digital content, there are limitless possibilities.

For SeeSignal, we wanted the user to be able to directly interact with their signal. So the first interface decision we made was to utilize hand tracking. There is something magical about grabbing a digital object with your bare hand and having it immediately react. Specifically for our app, we allowed users to get more details about their signal strength at that point in space, roughly as sketched below.
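A rough sketch of that grab interaction follows. The hand position and pinch state are assumed to be fed in each frame by the hand-tracking API, and ISignalNode is a hypothetical stand-in for SeeSignal’s own signal-point component.

```csharp
using UnityEngine;

// Hypothetical interface implemented by a signal sample point in the scene.
public interface ISignalNode { void ShowDetails(); }

// Proximity-based grab: when the tracked hand pinches near a signal node,
// surface that node's details (e.g. a dBm readout at that point in space).
public class SignalGrabber : MonoBehaviour
{
    [SerializeField] float grabRadius = 0.08f;   // meters
    [SerializeField] LayerMask signalNodeLayer;

    public Vector3 handPosition;   // set from hand tracking each frame
    public bool isPinching;        // set from hand tracking each frame

    void Update()
    {
        if (!isPinching) return;

        // Find a signal sample point within reach of the hand.
        Collider[] hits = Physics.OverlapSphere(handPosition, grabRadius, signalNodeLayer);
        foreach (var hit in hits)
        {
            var node = hit.GetComponent<ISignalNode>();
            if (node != null)
            {
                node.ShowDetails();
                break;
            }
        }
    }
}
```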

We wanted to make the inputs as robust as possible, so we also included point-and-pull functionality with the Magic Leap Control. This was done using a raycast, the Magic Leap trigger-down event, and a beam with a cursor for visualization, roughly as sketched below.
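Here is a minimal sketch of that flow, assuming controlTransform is driven by the Control’s 6DoF pose; Unity’s legacy “Fire1” button stands in for the Magic Leap trigger-down event.

```csharp
using UnityEngine;

// Point-and-select: a ray from the Control, a trigger-down check, and a
// beam/cursor for visual feedback on where the ray lands.
public class ControlPointer : MonoBehaviour
{
    [SerializeField] Transform controlTransform;  // pose driven by the Control
    [SerializeField] LineRenderer beam;           // visual beam
    [SerializeField] Transform cursor;            // small cursor at the hit point
    [SerializeField] float maxDistance = 5f;

    void Update()
    {
        Ray ray = new Ray(controlTransform.position, controlTransform.forward);
        bool hitSomething = Physics.Raycast(ray, out RaycastHit hit, maxDistance);
        Vector3 endPoint = hitSomething ? hit.point : ray.GetPoint(maxDistance);

        // Draw the beam and place the cursor where the ray lands.
        beam.SetPosition(0, ray.origin);
        beam.SetPosition(1, endPoint);
        cursor.position = endPoint;

        // Stand-in for the trigger-down event from the Control.
        if (hitSomething && Input.GetButtonDown("Fire1"))
            hit.collider.SendMessage("OnSelected", SendMessageOptions.DontRequireReceiver);
    }
}
```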

One of the most exciting interface systems we included was our “Gadgets”, which give users more technical ways to interact with their signals. We tethered these objects to the Control so we could take advantage of its precise 6DoF tracking. For our initial release, we included a Styler, a Signal Meter, and a Signal Finder.

The Styler allows the user to change the color scheme of the signal visualization with the touchpad. The Signal Meter lets the user determine the signal strength at its current position. The Signal Finder functions like a 3D compass, letting the user follow its arrows to the best or worst signal in their environment (see the sketch below). We have a few more Gadgets in the works, including a minimap of the scanned environment.
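As an example of the Gadget logic, here is a minimal sketch of the Signal Finder’s compass behavior; targetPosition is a placeholder for the best (or worst) sample point that SeeSignal’s own signal model would supply.

```csharp
using UnityEngine;

// Signal Finder compass: the arrow, tethered to the Control, continuously
// turns toward the strongest (or weakest) sample point in the environment.
public class SignalCompassArrow : MonoBehaviour
{
    [SerializeField] Transform arrow;        // arrow mesh attached to the Control
    [SerializeField] float turnSpeed = 360f; // degrees per second

    public Vector3 targetPosition;           // best or worst signal location

    void Update()
    {
        Vector3 toTarget = targetPosition - arrow.position;
        if (toTarget.sqrMagnitude < 0.0001f) return;

        // Rotate smoothly toward the target so the arrow reads like a compass.
        Quaternion look = Quaternion.LookRotation(toTarget.normalized, Vector3.up);
        arrow.rotation = Quaternion.RotateTowards(arrow.rotation, look, turnSpeed * Time.deltaTime);
    }
}
```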

Timelines, Signals and Events

Spatial Computing is an incredible medium for storytelling. Luckily, Unity has been progressively making it easier to animate inside the Editor. Timeline, a relatively recent addition, is particularly helpful for constructing complex animations and tutorials. Similar to apps like Maya and Blender, you can now assemble multi-track animations by simply animating an object or dragging and dropping animation clips.

From a team management and organizational standpoint, we discovered that storyboarding the tutorial like a short film was a great way for the team to arrive at one cohesive vision.

Once we had a singular vision of what we wanted to create, our next challenge was determining how to work on the tutorial concurrently and efficiently while keeping the content easy to adjust. The solution was ultimately to use nested Timelines (i.e., Timelines within a master Timeline). That way, each member of the team could work independently on a tutorial section and then easily integrate it into the full narrative. This approach ended up working great for us.

Another important component we needed to consider was interaction. We wanted to take the user through the journey of learning their new “superpower” rather than leaving them a passive observer. The key to including interaction in our tutorial was Signals, a recently released feature in Unity Timeline. With Signals, we could fire an event at a specific point in the Timeline, pause playback, and wait for an interaction to occur before resuming. This was a great way to involve the user and reinforce spatial and muscle memory for the main experience. A minimal sketch of the pause-and-resume gate is below.
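Here is a minimal sketch of that gate. PauseForInteraction() would be hooked up to a Signal Receiver reaction in the Editor, and ResumeTutorial() would be called by whatever interaction completes the step; the class and method names are illustrative.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Pauses the tutorial Timeline when a Signal fires and resumes it once the
// user completes the required interaction.
public class TutorialGate : MonoBehaviour
{
    [SerializeField] PlayableDirector director;

    // Called by the Signal Receiver when playback reaches the Signal Emitter.
    public void PauseForInteraction()
    {
        director.Pause();
    }

    // Called by the interaction handler (e.g. the user grabs a signal point).
    public void ResumeTutorial()
    {
        director.Resume();
    }
}
```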

Persistence

In the real world, we take persistence for granted. If you leave your coffee cup on a table, you can be certain it will remain there when you return. Even in virtual reality, it’s relatively trivial to save a game object’s position and load it back in after a reboot. But in Spatial Computing, this is much more of a challenge. Working with persistence on Magic Leap used to be much more difficult, but with each SDK update it has become more developer-friendly.

Persistence was an important feature for SeeSignal. We wanted to maximize ease of use and reduce friction for repeat sessions. Therefore, we utilized Magic Leap’s persistent coordinate frame (PCF) API to save spatial network readings so they can automatically be reloaded on the next boot. Once a user enters our “Discover Mode” and collects sufficient signal data, they are able to re-enter the app at a later time without having to re-collect it. Of course, we’ll continue to automatically collect more recent data, but the baseline measurements are already there and ready to explore. A rough sketch of the save/load flow is below.
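The sketch below shows the general shape of that flow under two assumptions: readings are stored relative to an anchor transform that the PCF system restores on the next boot, and JSON in Application.persistentDataPath is used as the storage format. SeeSignal’s actual implementation may differ.

```csharp
using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Persists signal readings between sessions by storing them relative to an
// environment anchor and writing them to the app's persistent data path.
public class SignalPersistence : MonoBehaviour
{
    [System.Serializable]
    public class Reading { public Vector3 anchorLocalPosition; public float strengthDbm; }

    [System.Serializable]
    class ReadingList { public List<Reading> readings = new List<Reading>(); }

    [SerializeField] Transform anchor;   // driven/restored by the anchor (PCF) system
    ReadingList data = new ReadingList();

    string FilePath => Path.Combine(Application.persistentDataPath, "signal_readings.json");

    // Record one sample, converting from world space to anchor space.
    public void AddReading(Vector3 worldPosition, float strengthDbm)
    {
        data.readings.Add(new Reading {
            anchorLocalPosition = anchor.InverseTransformPoint(worldPosition),
            strengthDbm = strengthDbm
        });
    }

    public void Save() => File.WriteAllText(FilePath, JsonUtility.ToJson(data));

    // Reload readings and convert back to world space using the restored anchor.
    public List<Vector3> LoadWorldPositions()
    {
        if (File.Exists(FilePath))
            data = JsonUtility.FromJson<ReadingList>(File.ReadAllText(FilePath));

        var positions = new List<Vector3>();
        foreach (var r in data.readings)
            positions.Add(anchor.TransformPoint(r.anchorLocalPosition));
        return positions;
    }
}
```

Storing readings in anchor space rather than world space is what makes them survive a reboot: as long as the same anchor is recognized again, the world-space positions can be reconstructed even though the device’s origin has changed.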

Persistence is also the key to automated multi-user experiences. If we wanted to see and interact with the same digital content, we would need the same frame of reference. With PCF systems, we can sync our digital worlds together without having to manually define a marker in the room. This is an exciting next step for many Magic Leap apps.

Stay tuned for more from BadVR. Download SeeSignal (via Magic Leap World) if you haven’t already.

And if you’re interested in an enterprise demo, please reach out now!

Brian Wong
Senior Engineer at BadVR, Inc. | Founder of Immersion Neurotechnologies