DevUp: Force-Directed Graph in VR

I recently released the v2.0-alpha version of Hover UI Kit, which is a tool for building VR/AR user interfaces. Of course, as an “alpha” release, there are bound to be bugs and other issues. The best way to test out a tool is to actually use it… which leads to today’s development update.

The video below shows a glowing, interactive, force-directed graph in room-scale VR. Users can reach their hands directly into the scene to interact with both the graph and the menu interfaces.


Early last week, I decided to get started on this new demo. My only real requirement was to use Hover UI in some interesting and varied ways. And, I had to keep the scope under control, because I wanted to be done with it in two to three weeks.

After a bit of brainstorming, I decided on the “force-directed graph” concept. I had done some prior work on this, which gave me a nice base to start from. My original graph work was focused on efficiently simulating large (1,000+) node graphs. As a starting point, I threw out a lot of that complexity, and kept the simpler simulation (efficient enough for 100-node graphs).
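For anyone curious what a simulation at that scale involves, here is a minimal sketch of the standard force-directed technique: inverse-square repulsion between every pair of nodes, spring forces along edges, and damped Euler integration. This is an illustrative toy in Python, not the project’s actual (Unity/C#) code, and all names and parameters here are my own.

```python
import math

def simulate(positions, edges, steps=1000, dt=0.02,
             repulsion=0.1, spring=1.0, rest_length=1.0, damping=0.9):
    """Toy 3D force-directed layout: inverse-square repulsion between
    every node pair, Hooke springs along edges, damped Euler steps."""
    n = len(positions)
    velocities = [[0.0, 0.0, 0.0] for _ in range(n)]
    for _ in range(steps):
        forces = [[0.0, 0.0, 0.0] for _ in range(n)]
        # Repulsion: push every pair of nodes apart.
        # O(n^2) per step, which is fine for graphs of ~100 nodes.
        for i in range(n):
            for j in range(i + 1, n):
                d = [positions[i][k] - positions[j][k] for k in range(3)]
                dist = math.sqrt(sum(c * c for c in d)) + 1e-9
                f = repulsion / (dist * dist)
                for k in range(3):
                    forces[i][k] += f * d[k] / dist
                    forces[j][k] -= f * d[k] / dist
        # Springs: pull or push each edge toward its rest length.
        for i, j in edges:
            d = [positions[j][k] - positions[i][k] for k in range(3)]
            dist = math.sqrt(sum(c * c for c in d)) + 1e-9
            f = spring * (dist - rest_length)
            for k in range(3):
                forces[i][k] += f * d[k] / dist
                forces[j][k] -= f * d[k] / dist
        # Damped Euler integration keeps the layout from oscillating.
        for i in range(n):
            for k in range(3):
                velocities[i][k] = (velocities[i][k] + forces[i][k] * dt) * damping
                positions[i][k] += velocities[i][k] * dt
    return positions

# Example: a three-node triangle settles near its equilibrium spacing.
layout = simulate([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]],
                  [(0, 1), (1, 2), (0, 2)])
```

The O(n²) repulsion loop is exactly the complexity that limits this approach to small graphs; the 1,000+ node version mentioned above would need something smarter (e.g. spatial partitioning).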

So, my work on this demo started with a few major pieces in place:

  • Hover UI Kit for user interfaces
  • Prior work on 3D force-directed graphs
  • Leap Motion SDK for hand input
  • Unity 3D engine and its VR support


With this demo, I wanted to have a mixture of interaction types. Pushing and hitting the graph nodes is direct, familiar, and physical, and the springy reactions (combined with glowing effects) make for quite a satisfying experience. Hover UI interfaces are typically more indirect, abstract, and symbolic: menu options and icons can be rather disconnected from their resulting actions. The per-node interfaces sit somewhere in between, moving the menu interaction very near to its target.

Selected clips from today’s DevUp video, showing various interactions.

Interactions with Hover UI Kit items are all based on a simple technique. An item is selected when a “cursor” (which might be a Leap Motion fingertip, a Vive controller, etc.) moves near it and “hovers” there for a moment. This isn’t always the fastest interaction, but it is easy for users to learn, discover, and perform reliably. Items provide visual indicators to communicate cursor proximity, the time-based progress toward the “selection” event, and so on.
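The core of that dwell-based technique can be sketched in a few lines. This is a hypothetical illustration of the general idea, not the actual Hover UI Kit API; the class, radius, and timing values are all my own assumptions.

```python
class HoverItem:
    """Sketch of dwell-time selection (illustrative, not Hover UI Kit's
    real classes): a cursor within near_radius accumulates hover time;
    once it reaches dwell_time, the item fires its selection callback.
    The progress value (0..1) would drive the visual indicator."""

    def __init__(self, position, near_radius=0.08, dwell_time=0.6,
                 on_select=None):
        self.position = position
        self.near_radius = near_radius   # meters, a plausible hand-scale value
        self.dwell_time = dwell_time     # seconds of sustained hover
        self.on_select = on_select
        self.hover_elapsed = 0.0
        self.selected = False

    def update(self, cursor_position, dt):
        """Call once per frame with the cursor position and frame time."""
        dist = sum((a - b) ** 2
                   for a, b in zip(cursor_position, self.position)) ** 0.5
        if dist <= self.near_radius:
            self.hover_elapsed += dt
            if not self.selected and self.hover_elapsed >= self.dwell_time:
                self.selected = True
                if self.on_select:
                    self.on_select(self)   # fire the "selection" event once
        else:
            # Cursor left the item: reset progress and allow re-selection.
            self.hover_elapsed = 0.0
            self.selected = False

    @property
    def progress(self):
        """0..1 progress toward selection, for the visual indicator."""
        return min(self.hover_elapsed / self.dwell_time, 1.0)
```

Note the reset branch: requiring the cursor to stay near the item for the full dwell time is what makes the technique reliable, since a hand merely passing through an item never accumulates enough hover time to select it.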

I have previously written in more depth about VR/AR interaction: see Interaction Philosophy (from the Hover UI Kit wiki) and Power At Your Fingertips (guest post at the Leap Motion blog).

Despite my initial goal to keep it simple, my primary concern is that there is too much going on in this demo. You can push the graph nodes around, select individual nodes to change their color and size, use the main menu panel to make graph-level changes, and use the “Hovercast” hand menu to make scene-level changes. Phew.

Hover UI Findings

As expected, using an “alpha” version of Hover UI uncovered some issues. I was able to fix many of them in parallel with my work on the demo. Those fixes are already applied to the Hover UI master branch, and will be included in the next Hover UI release.

One quick example: the Hover UI “layout” system is quite flexible, but I missed an interesting use-case. For this demo’s interfaces, I wanted the sliders to be turned sideways. It’s not a problem to rotate them 90 degrees, but this also requires the width and height dimensions (provided by the parent layout) to be flipped. With a simple “Flip Layout Dimensions” checkbox, a new Hover UI feature (fix?) was born.
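The idea behind that checkbox is simple to illustrate. This is a hedged Python sketch of the concept only (the function and field names are hypothetical, not Hover UI Kit's actual layout API): the parent layout hands down a width/height slot, and a sideways item needs that slot's dimensions swapped to fill it correctly.

```python
def resolve_item_layout(parent_w, parent_h, rotation_deg=0.0,
                        flip_layout_dims=False):
    """Hypothetical sketch of a "Flip Layout Dimensions" option: when an
    item is rotated 90 degrees within its parent-provided slot, swapping
    the slot's width and height lets the rotated item fit as intended."""
    w, h = (parent_h, parent_w) if flip_layout_dims else (parent_w, parent_h)
    return {"width": w, "height": h, "rotation": rotation_deg}

# A wide, short slider slot, rotated sideways: without the flip it would
# receive a wide-and-short size while rendered tall-and-narrow.
slot = resolve_item_layout(0.24, 0.06, rotation_deg=90.0,
                           flip_layout_dims=True)
```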


There’s no way around it: interaction in VR is difficult. For future DevUps in this series, I’d like to discuss specific challenges that came up since the previous update, and how I approached the solutions.

For this DevUp, however, I’ll keep it simple: everything is a challenge. When you’re building a VR app, every little thing has an impact. It’s both tiring and exhilarating to make so many decisions (functional, visual, interactive, etc.) with so few “best practices” defined.

Just a taste: size and placement of menus, ideal item size, optimal color/contrast/transparency for the interface design, timing and style of interface animations, timing and distances involved with “hover” interactions, amount of lighting and glow, text size and legibility, handling scenarios where interfaces can become obscured, determining whether (and how) an interface should follow the user, and so on, down the rabbit hole. (These are all low-level concerns; as decisions here combine and compound, higher-level concerns arise.)

Here’s a look at the first two animations of the demo, which I posted to Twitter along the way. These show a bit of where the project started, and how it looked, compared to today’s DevUp video. Perhaps these can provide some insight into the decisions made during development:

The first node interface and physical graph interactions.
The main menu interface (and “stability” chart) is introduced.


I’m pretty happy with the way this demo is coming together. The menu positions and behavior sometimes feel a little clunky, but the interaction with the graph looks and feels quite good to me. I have plenty of possible changes and improvements to make, but hopefully they will all be relatively minor. Adding audio feedback will likely be a larger task.

I’d like to have a playable demo available by the end of the week. Once that happens, the real challenge begins — actual users trying (and quite possibly, failing) to use it for the first time. I have had a few successful user tests so far, including users aged four and six, so I’m cautiously optimistic.

Despite its humble start as a demo, I think this has the potential to become a fully-featured app someday. In general, I’m excited about the possibilities for data visualization in VR. This force-directed graph not only shows the data, but immerses the user within it. The data becomes a tangible thing to push around, to manipulate, to study from different angles… it becomes something to feel. Values can be represented by more than just size, position, color — they can be represented by motion, bounciness, reaction to touch, reaction to gaze, texture, animation, and so on. This could open up new worlds of insight about a data set, where its complexities are seen, felt, and experienced in VR.

This is my first post at Medium. Please let me know what you think, and if you’re interested in reading more about VR, AR, UI, UX, and other related acronyms.