Planetarium

The Design and Science Behind Our VR Widgets Showcase

LeapMotion
20 min read · Oct 5, 2015

This week on Medium, we’re travelling through time and space looking at the design behind our internal development projects — from our groundbreaking creative Sculpting app, to our introductory Playground, to our internal hackathon projects and the VR demos Planetarium and Cockpit.

Planetarium was designed to showcase our various UI widgets as you explore the stars, traverse the Earth, and control the flow of time with a flick of your finger. This post was originally a multi-part series written by the Planetarium team.

Part 1. Background and Science

By Daniel Plemmons, Designer and Developer

The Planetarium started as a weekend side project of mine. I’m an amateur astronomer, and there are a lot of limits to what you can see through inexpensive, ground-based telescopes. I got it in my head that there were some pretty cool things I could visualize and learn if I could look at stars through a VR headset. The most interesting of these was visualizing the actual distances of stars from Earth.

When you look up at the night sky, the stars all look to be painted on the sky, as if they’re all about the same distance away. In reality, some stars you can see on a clear night are nearby (astronomically speaking), while others are thousands of light-years away. Our “little” galaxy, the Milky Way, is about 100,000 light-years side to side.

To put that in a more human perspective, the fastest spacecraft humanity has ever made, Voyager 1, traveling at 62,136 km/h for 36 years, has only traveled about 0.002 light years. When you get to space, numbers start to get quite large.

I was really curious about seeing how the night sky would look if you could actually sense the difference in distances. We see depth in large part through the parallaxing of the images our two eyes receive. (The same effect, in fact, we use to detect depth with the Leap Motion Controller.) Stars are so far away that the difference in parallax is so small as to be meaningless to the human eye. But if our eyes were a few light-years apart, we’d have a really good sense for the varying distance of stars.

Discovering the Stars

I did some digging and found the HYG star database, a very cleanly formatted CSV database of the brightest stars in the sky (there are billions of stars; we’re rendering just over 100,000). There are a ton of catalogues of stars and ways of naming stars: the NGC catalogue, the Hipparcos catalogue, the Henry Draper catalogue, the Yale Bright Star catalogue, Bayer naming, Flamsteed naming, the Messier objects, and so on. The HYG database nicely combines and cross-references a bunch of these catalogues along with providing data like 3D positions, color index, common names, spectral classifications, and tons of other science-y fun.

The HYG database gave us two ways of understanding the positions of stars. One is the set of traditional polar astronomy coordinates, right ascension, declination, and distance. The other is traditional 3D coordinates with Earth as the origin. I decided to use the right ascension and declination coordinates. They’re a bit like longitude and latitude.

Some quick vocabulary:

  • Right ascension (RA) is measured in hours (with 24 hours being a full circle) and describes an angle measured eastward along the celestial equator.
  • Declination (DEC) is the angle, measured in degrees, north or south of the celestial equator.

These are fixed coordinates, so a star always has the same RA and DEC. However, the earth’s rotation means that if we look at a single point in the sky (relative to the earth) at two different times, the celestial coordinates of that point will have changed. To figure out where to place a star in the game engine, you convert the polar right ascension, declination, and distance coordinates to Unity3D space. With that knowledge, we can generate some procedural geometry to represent each of the stars. The application then rotates the earth and the viewer to have the proper view of the night sky. (More on how that works from Gabe later in this series.)
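As an illustration (not the actual Planetarium source), the conversion from right ascension, declination, and distance into a Unity position might look like the sketch below. The axis mapping and distance scale are assumptions made for the sketch:

```csharp
using UnityEngine;

public static class StarCoordinates
{
    // Convert right ascension (hours), declination (degrees), and distance
    // (in whatever unit you want one Unity unit to represent) into a Unity
    // position. The axis mapping is a convention choice: here the celestial
    // equator lies in Unity's X/Z plane and +Y points toward the north
    // celestial pole.
    public static Vector3 EquatorialToUnity(float raHours, float decDegrees, float distance)
    {
        float ra = raHours * 15f * Mathf.Deg2Rad;   // 24 hours == 360 degrees
        float dec = decDegrees * Mathf.Deg2Rad;

        float x = distance * Mathf.Cos(dec) * Mathf.Cos(ra);
        float z = distance * Mathf.Cos(dec) * Mathf.Sin(ra);
        float y = distance * Mathf.Sin(dec);        // toward the celestial pole

        return new Vector3(x, y, z);
    }
}
```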

The Expanded Universe: Color and Constellations

This is all pretty cool, but there’s a lot more to stars than just little white dots. They’re giant masses of incandescent gas (errr… incandescent plasma) just like our star, the Sun. Different stars are different temperatures, and hence emit different colors of light. Cooler stars are going to be redder, whereas hotter stars are literally white-hot, and the hottest stars burn blue. If you’ve ever had to redo the white balance of a camera when you move from shooting indoors to outdoors, you’ll be familiar with this concept, known as color temperature. The HYG database gives us the color index of the stars so we can easily add it as a data overlay to the application.
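As a rough sketch of how a color index could drive a star’s rendered color (this is not the shader Planetarium actually uses), you can convert B-V to an approximate temperature with Ballesteros’ formula and then map that temperature onto a simple gradient. The gradient endpoints below are purely illustrative:

```csharp
using UnityEngine;

public static class StarColor
{
    // Rough conversion from a B-V color index to a display color.
    // Temperature comes from Ballesteros' approximation; the temperature-to-
    // color step below is a simple artistic gradient, not a physically
    // accurate black-body mapping.
    public static Color FromColorIndex(float bv)
    {
        bv = Mathf.Clamp(bv, -0.4f, 2.0f);
        float kelvin = 4600f * (1f / (0.92f * bv + 1.7f) + 1f / (0.92f * bv + 0.62f));

        // Map roughly 2000 K (reddish) to 10000 K (blue-white) onto 0..1.
        float t = Mathf.InverseLerp(2000f, 10000f, kelvin);
        Color cool = new Color(1.0f, 0.5f, 0.3f);  // reddish-orange
        Color hot = new Color(0.7f, 0.8f, 1.0f);   // blue-white
        return Color.Lerp(cool, hot, t);
    }
}
```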

The other major set of data we wanted to show was the constellations that make up the night sky. This is where we run into some ambiguity. When most people hear the word constellation, they think of the illustrated lines connecting stars to form figures in the night sky, like Orion the Hunter, Cassiopeia the Queen, Sagittarius the Archer, and Scorpius the Scorpion. In the scientific community, a constellation is actually a region of the night sky: every star in that region of the sky is part of that constellation. The International Astronomical Union (IAU) officially recognizes 88 constellations covering the entire night sky. Their borders are well-defined and generally agreed upon.

What’s not agreed upon are the lines that define a constellation’s “asterism” — the figure made up by connecting some of the stars in the constellation. Asterisms also aren’t always associated with one of the 88 constellations. The Big Dipper is an asterism, but its stars are all within the constellation Ursa Major, which also has an associated asterism of the same name. The general shapes of the standard 88 figures are mostly consistent, but look at three different star charts and you’ll probably find three different sets of lines making up the asterisms.

Ultimately, this meant there wasn’t a nice clean database of asterism lines I could simply parse and import into the planetarium. In the end, I studied a number of different star charts and hand-entered all the asterisms into a spreadsheet, which I rigged up to export into a CSV format I could easily parse. (If you’re interested in the data, you’re welcome to dig into the spreadsheet. All the stars are referenced by their Henry Draper Catalogue ID.)
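Purely for illustration, a loader for that kind of file might look like the sketch below. The column layout assumed here (an asterism name followed by pairs of Henry Draper catalogue IDs, each pair describing one line segment) is a guess at a convenient format, not the actual spreadsheet export:

```csharp
using System.Collections.Generic;
using System.IO;

// One asterism line segment, referencing stars by Henry Draper catalogue ID.
public class AsterismSegment
{
    public string Name;
    public int FromHD;
    public int ToHD;
}

public static class AsterismParser
{
    // Assumed row format: "AsterismName,hdId1,hdId2,hdId3,hdId4,..."
    // where consecutive pairs of IDs describe one line segment.
    public static List<AsterismSegment> Parse(string csvPath)
    {
        var segments = new List<AsterismSegment>();
        foreach (string line in File.ReadAllLines(csvPath))
        {
            if (string.IsNullOrWhiteSpace(line)) continue;
            string[] cells = line.Split(',');
            for (int i = 1; i + 1 < cells.Length; i += 2)
            {
                segments.Add(new AsterismSegment
                {
                    Name = cells[0],
                    FromHD = int.Parse(cells[i]),
                    ToHD = int.Parse(cells[i + 1])
                });
            }
        }
        return segments;
    }
}
```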

With all this data, we had a lot to play with in terms of developing a control scheme. Our ultimate goal for the Planetarium project was twofold: to build a showcase demo for Widgets, and in turn to incubate and improve them in the context of a production application. Controlling the planetarium gave us a well-defined set of challenges that could stretch to fit our timeline and resources. Even now, we’ve really only scratched the surface of what we could visualize and make interactive in the infinite depths of the night sky.

Part 2. The Evolution of Arm HUD

By Barrett Fox, Interaction Engineer

As Daniel mentioned earlier, Planetarium was designed in part to put our Widgets through the rigors of an authentic production. We needed to validate that they could be placed inside a complex hierarchy, withstand being scaled, work in any orientation, and control a variety of events. While the Arm HUD is itself something of a new meta-widget, we also wanted it to spur the creation of new additions to our Widget library. And to be frank, we wanted to have just a touch of unabashed fun and actually create the kind of interfaces the movies have been teasing us with for so long.

The initial idea to build an Arm HUD started with our co-founder, David Holz, and was given legs by our designer Kyle Hay. Bringing it into focus was richly and fluidly collaborative. Our UX designer Jody Medich would wireframe proposals for functionality and layout, and Kyle would create PDFs of art direction and design details. As I built to these specs, we would rapidly prototype, test, and observe intently. We evolved our design significantly and repeatedly throughout the process.

Flexible Workflows & New Unity Powers

Rapid prototyping of the Arm HUD’s shape and motion was critical for us to be able to deftly explore broad visual ideas without painting ourselves into any corners. I used Maya to rough out a geometric form, animate it, and export to Unity. All of the Arm HUD’s graphics needed to be dynamic and data-driven, yet able to conform to any shape of surface. To do this, I created an orthographic camera, invisible to the user, that would render any graphic layouts I created and project them onto the Arm HUD’s 3D geometry.
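In Unity terms, that setup can be reduced to a hidden orthographic camera rendering into a RenderTexture that the Arm HUD mesh then displays. The sketch below is a minimal, assumed version of the idea rather than the production code (layer setup, sizes, and names are illustrative):

```csharp
using UnityEngine;

// Sketch: render a flat UI layout with a hidden orthographic camera into a
// RenderTexture, then feed that texture to the material on the Arm HUD's
// curved geometry.
public class ArmHudProjector : MonoBehaviour
{
    public Camera layoutCamera;   // orthographic, set to cull only the layout layer
    public Renderer hudSurface;   // the curved Arm HUD mesh

    void Start()
    {
        var layoutTexture = new RenderTexture(1024, 512, 0);
        layoutCamera.orthographic = true;
        layoutCamera.targetTexture = layoutTexture;        // never shown directly to the user
        hudSurface.material.mainTexture = layoutTexture;   // geometry displays the rendered layout
    }
}
```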

During the production of Planetarium, the long-awaited Unity 4.6 GUI became available and provided us with a huge new, highly relevant toolset. We use the new World Space UI canvas extensively, throughout not only the Arm HUD but the rest of Planetarium as well.

Additionally, the Arm HUD uses Unity’s Mecanim animation state machines in conjunction with the Arm HUD’s own C# state machine. Using Unity’s existing systems for GUI and animated events inside our Widgets makes it easier for developers to not only use our hand-enabling toolsets but to easily build upon them as well.
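One minimal way to pair a plain C# state machine with Mecanim is to let the C# side own the logical state and simply fire Animator triggers on transitions. The state and trigger names below are made up for the sketch, not the actual Arm HUD code:

```csharp
using UnityEngine;

// Sketch of a C# state machine driving a Mecanim animation controller.
public class ArmHudStateMachine : MonoBehaviour
{
    public enum HudState { Hidden, Summary, Expanded }

    public Animator animator;
    public HudState State { get; private set; }

    public void TransitionTo(HudState next)
    {
        if (next == State) return;
        State = next;
        // The logical state change is ours; Mecanim handles the visual transition.
        animator.SetTrigger(next.ToString());
    }
}
```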

Iterate, Observe, Rinse & Repeat

The unbroken thread of testing and feedback ran throughout the process of building the Arm HUD. As we iterated, often with Jody proposing new UX approaches, we experimented with several ways to lay out the various widgets and panels. We zeroed in on a layout that kept the hands somewhat separate, and we aligned the panels in a way that would prompt the user to tilt their arm up and in front for optimal tracking. We found that turning the wrist was an elegant and reliable way to use your body to switch contexts.

Witness the deep scrutiny and observational rigor of our process!

Part 3. A Brief History of Time Dial

Barrett Fox, Interaction Engineer

One of our new VR Widgets, the Time Dial, surprised (and indeed amused!) us at several special moments during our intense production push. The Time Dial Widget is our hand-enabled VR interpretation of a typical touch interface’s Date Picker. We built it with a combination of Wilbur Yu’s Widget interaction base, Daniel’s data-binding framework (more on those two later), and a graphic front-end that I coded and built — again using Unity’s new 3D GUI.

Immediately after adding the shiny new Time Dials to the Arm HUD’s flyout panel, the Arm HUD itself went essentially haywire. As we changed the Month, Day, or Hour, buttons on the Arm HUD would begin to trigger by themselves, making panels fly open or disappear. Yikes! We soon realized that while our Widget physics for springy buttons being pushed by virtual hands worked reliably in typical situations, we were spinning the earth at spectacular, epic velocities. This made Unity’s physics engine cry, triggering the Widget buttons without any virtual hands involved.

Between epic velocities and accidental gyroscopes, we’re pretty sure we broke a few digital laws of physics.

Additionally, we discovered that the Time Dials would continue spinning indefinitely after we let them go… sometimes. This was one of those intermittent, tricky-to-solve bugs we love so much. But through dogged observation, Gabriel discovered this only happened if we were pointed east or west. Weirder. In the end, he deduced that while we were clamping the Time Dial’s rotation to its X-axis, if the Earth and Time Dial were in just the right alignment, the Earth’s momentum would transfer to the Time Dial. We had accidentally created a gyroscope!
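One way to guard against that kind of momentum transfer is to project the dial’s angular velocity onto its allowed axis every physics step. The component below illustrates the idea as a sketch; it is not the actual fix used in Planetarium:

```csharp
using UnityEngine;

// Sketch: keep a dial rigidbody spinning only around its local X-axis, so
// angular momentum picked up from other motion (like a fast-spinning parent)
// cannot turn it into an accidental gyroscope.
[RequireComponent(typeof(Rigidbody))]
public class AxisConstrainedDial : MonoBehaviour
{
    Rigidbody body;

    void Awake() { body = GetComponent<Rigidbody>(); }

    void FixedUpdate()
    {
        Vector3 axis = transform.right; // local X-axis expressed in world space
        // Keep only the component of angular velocity along the allowed axis.
        float spin = Vector3.Dot(body.angularVelocity, axis);
        body.angularVelocity = axis * spin;
    }
}
```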

Interestingly, these bugs emerged after quite prodigious amounts of rigorous user testing. Literally dozens of user tests have been run at our office, all carefully recorded and scrutinized. But running our Widgets through the crucible of Planetarium’s forces, ranging from the fingertip scale to the astronomical, gives us valuable insight on how to make the Widgets even more robust for developers.

Part 4. Traveling Around the Globe (and Under the Sky)

Gabriel Hare, Physics & Algorithms

One of the major features of Planetarium is the ability to travel around the globe using motion controls. While this approach is still rough and experimental, we learned a lot from its development that we’d like to share. Later on in the post, we’ll even take a look under the hood at the code involved with the movement and spinning physics that tie everything together.

Lessons in UX Prototyping

Our initial idea for navigation controls was a simple Joystick — in other words, a system relating hand position to player velocity. When the Joystick design began, we had already created a reliable palm-outward grabbing gesture that would pull constellations into the user’s hand. We decided to use the same grabbing gesture to initiate and end navigation, but would distinguish navigation from star grabbing by having the palm face the user.

We also added visual feedback in the form of a “Joyball,” an object that people could appear to hold. Turning the hand over would immediately end navigation, and giving people something to hold made them less likely to turn their hand over unintentionally. Along with linear movements, the ball could also be twisted to rotate the player.

Early sketch for Joystick UI

Based on our user testing, however, this design failed due to a simple matter of ergonomics. We discovered that when users moved their hand around to move the globe, the hand naturally tilted downwards as they moved their arm to the left across their body; and because they needed to keep the palm facing upward, it was difficult to move the arm to the right, away from the body. There were also issues with motion sickness when the tracked hand would vanish without explicitly halting navigation, leaving the player drifting around the globe.

After going back to the drawing board, we switched the gesture to activate when the palm faces downwards. Now, just like grabbing a constellation, the line of sight through the user’s palm must intersect the earth. We added a gain curve to the linear and angular velocities, so that small movements yield precise control, while large movements cannot exceed a maximum speed. To avoid motion sickness, we made absolutely sure that the player would stop moving whenever the controls were not in use.
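A gain curve of that general shape might be sketched like this, with a dead zone for stability, a power-law response for fine control, and a hard cap on speed. The same idea applies to angular velocity; the specific constants here are illustrative assumptions:

```csharp
using UnityEngine;

public static class NavigationGain
{
    // Map a raw hand offset (meters) onto a velocity: offsets inside the dead
    // zone do nothing, small offsets give fine power-law control, and large
    // offsets saturate at maxSpeed. All constants are illustrative.
    public static Vector3 Apply(Vector3 handOffset, float deadZone = 0.01f,
                                float saturationOffset = 0.2f,
                                float exponent = 2f, float maxSpeed = 5f)
    {
        float magnitude = handOffset.magnitude;
        if (magnitude < deadZone) return Vector3.zero;

        float speed = maxSpeed * Mathf.Pow(magnitude / saturationOffset, exponent);
        speed = Mathf.Min(speed, maxSpeed);
        return handOffset.normalized * speed;
    }
}
```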

One very positive result from the user testing was that the palm-down navigation interaction was in many ways a significant improvement over a traditional Joystick. Simply pretending to fly with your hand allows for displacement and rotation that feel natural. All users were able to discover and reliably use the navigation controls, even though only one was able to articulate how the controls worked. With subsequent versions of the Joyball, we’ve implemented activation cues by highlighting the earth and presenting visual guides on the back of the hand. The representation of the Joyball is now a ship’s compass which always points north.

Here are some of the important UX design lessons that you can take from this experiment:

  • The user must always have control over their motion.
  • Good ergonomics is essential. Always be willing to modify other aspects of your interaction design to ensure user comfort across the board.
  • Line of sight is a very useful indicator for determining what the user wants to interact with.
  • Use visual feedback to make your application more intuitive.

How to Avoid the Hand Lag Feedback Loop

Now that we’ve looked at the navigation mechanics of Planetarium, there’s an important problem we need to resolve. With this navigation system, the player moves according to the position of their hands relative to their body in virtual space. By default, the HandController script in Unity creates unparented hands in the scene, but this can create a feedback loop when the HandController moves with the player.

Why is this a problem? First, imagine that the HandController updates the position of the hands, and then the navigation system uses this position to move the player. In this case, the player’s movement will be smooth and correctly controlled.

Now, suppose instead that the order of operations is reversed. First the navigation is computed, then the hands are repositioned. This would result in a feedback loop in the navigation, since each movement of the player effectively displaces the hands. Unfortunately, the hand lag is completely invisible — by the time rendering begins, both hands and player position will have been updated.

The problem is that, in Unity, either order of operations is possible! Fortunately, the solution to this problem is simple — make the player’s body the parent of the hands. This ensures that updates to the body position immediately apply to the hands, thereby preventing hand lag.
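In practice the fix can be as small as re-parenting the hand rig under the transform that the navigation system moves. The component below is a hedged sketch of that idea; the field names are placeholders:

```csharp
using UnityEngine;

// Sketch: parent the HandController (and the hands it spawns) under the
// player's body so that moving the player moves the hands in the same
// update, preventing the one-frame hand lag described above.
public class ParentHandsToPlayer : MonoBehaviour
{
    public Transform playerBody;      // the transform the navigation system moves
    public Transform handController;  // the Leap Motion HandController object

    void Start()
    {
        handController.SetParent(playerBody, worldPositionStays: true);
    }
}
```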

Let’s Get Mathematical! Orbital Calculations in Unity

The Joyball displacement can immediately be used for navigation in space, or on a flat map. However, adapting the controls to navigation on the surface of a globe requires some additional calculation.

Along with the Joyball, Planetarium also showcases a TouchMap navigation model. The TouchMap uses a latitude and longitude coordinate system, with the azimuth fixed at zero. The problem with this coordinate system is that if forward/backward motions of the Joyball are tied to latitude, while left/right motions are tied to longitude, a small motion left or right near the pole will rapidly spin the player around. This is because the poles are coordinate singularities and are numerically unstable.

Fortunately, we have a straightforward solution — move along geodesics! Even more fortunately, Unity provides an implementation of the required math. When a player moves the Joyball, they are in effect saying “I want the reference point to move towards the Joyball point.” Since the player is rotating around a planet, this means that they want to move along a rotation around the earth center that will transform the reference point to the Joyball point. For more insight and full code samples, be sure to check out the original blog post.
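As a rough illustration of the idea (the original post has the full code), the geodesic step can be expressed with Unity’s built-in quaternion helpers: build the rotation about the earth’s center that carries the reference point onto the Joyball point, clamp it to a maximum step, and apply it to the player. The function and parameter names below are assumptions for the sketch:

```csharp
using UnityEngine;

public static class GeodesicNavigation
{
    // Rotate the player's position around the earth's center along the great
    // circle (geodesic) from the reference point toward the Joyball point,
    // by at most maxDegrees this step.
    public static Vector3 StepAlongGeodesic(Vector3 playerPosition, Vector3 earthCenter,
                                            Vector3 referencePoint, Vector3 joyballPoint,
                                            float maxDegrees)
    {
        Vector3 from = referencePoint - earthCenter;
        Vector3 to = joyballPoint - earthCenter;

        // Full rotation that would carry the reference point onto the Joyball point...
        Quaternion full = Quaternion.FromToRotation(from, to);
        // ...limited to a small step so the speed stays bounded.
        Quaternion step = Quaternion.RotateTowards(Quaternion.identity, full, maxDegrees);

        return earthCenter + step * (playerPosition - earthCenter);
    }
}
```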

(If you made it this far, here’s an Easter Egg: if you type ‘w’ in the main scene of the Planetarium, it will make wickets appear around the equator of the planet. Take the navigation system for a test drive!)

After all the necessary orbital location calculations are complete, there’s one little wrinkle left to let us show the proper orientation of the night sky. Time.

Part 5. Calculating the Night Sky in Planetarium

Daniel Plemmons

Time for another dash of astronomy! As you probably know, the earth’s rotation means that the stars appear to move through the night sky.

Given the orientation of the Earth’s axis, if you’re in the northern hemisphere, it will appear as though the stars rotate around the star Polaris (or North Star). Since the earth rotates once every 24 hours, the stars move across the sky once every 24 hours… almost! The 24-hour day is close to being accurate, but between it being slightly off (which is where we get the leap second) and the revolution of the earth around the sun, a 24-hour celestial day is not quite the same as a 24-hour terrestrial day. (It turns out programming accurate time calculation is hard, which is why all our times in Planetarium are simply in GMT and we decided not to work out time zones.)

To understand why, imagine that you’re looking up at the midnight sky on June 1st. At that moment, you’re on the opposite side of the world from the sun. But if you look at the sky at midnight on New Year’s Eve, you and the Earth have since travelled halfway around the sun! This means that the stars will appear in different places than they did in June. In fact, the stars you can see throughout the year will fall a few minutes behind every night, and this tiny difference each day adds up over time.

Astronomers solve this by measuring days in “sidereal time,” which tracks the rotation of the earth relative to the stars rather than the sun. The stars above you at sidereal midnight on January 1st will be the same as the stars above you at sidereal midnight on June 1st, even though that may be 2 pm according to a terrestrial clock. The calculation to compute (relatively accurate) sidereal time is a bit verbose, but generally pretty simple.

Greenwich Sidereal Time = 6.5988098 + 0.0657098244 × (day number of the current year) + 1.00273791 × (Time of day in Universal Time)
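Transcribed directly into code, a minimal sketch (simply evaluating the formula above and wrapping the result into a 0 to 24 hour range) might look like this:

```csharp
using System;

public static class SiderealTime
{
    // Greenwich Sidereal Time in hours, using the approximation above:
    // GST = 6.5988098 + 0.0657098244 * dayOfYear + 1.00273791 * utHours,
    // wrapped into the 0..24 range. Good enough for pointing a virtual sky,
    // not for serious astronomy.
    public static double FromUtc(DateTime utc)
    {
        double dayOfYear = utc.DayOfYear;
        double utHours = utc.TimeOfDay.TotalHours;

        double gst = 6.5988098 + 0.0657098244 * dayOfYear + 1.00273791 * utHours;
        gst %= 24.0;
        if (gst < 0) gst += 24.0;
        return gst;
    }
}
```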

(Obviously, the formula is a bit obtuse-looking and has a few nasty magic numbers. If you’re interested, you can look into how this time is derived here.) Once we know the proper sidereal time, we can rotate the earth and the viewer by the right offset to finally display the correct night sky. Next, we’ll dig into how we integrated the UI Widgets into the data model for Planetarium, so that these two systems play nicely together. Trust me, accidental gyroscopes were just the beginning.

Part 6. Designing the Widgets Event and Data-Binding Model

Daniel Plemmons

This time around, I’ll talk a bit about how we handled integrating the UI Widgets into the data model for Planetarium, and what this means for you.

The first iteration of Widgets we released to developers was cut almost directly from a set of internal interaction design experiments. They’re useful for quickly setting up a virtual reality interface, but they’re missing some pieces needed to make them usable in a robust production application. When we sat down to build Planetarium, the need for an explicit event messaging and data-binding layer became obvious.

We made a lot of use of editor fields to make customizing and connecting widgets easier.

There are two ways a developer might want to interact with a UI Widget. The first is detecting interaction events with the widget — like “Pressed,” “Released,” and “Changed.” Events are quick and dirty and they’re great for simple interactions, like kicking off sound effects, or buttons used to open doors in a game level. The other is connecting the Widget directly to a data model, having it always display the current state of the data model, and having any user input to the Widget be reflected in that model. This is the pattern we use when we’re controlling data like asterism opacities and false-color saturation.

Wilbur did a great job of building obvious interaction end-point functions into the original Widgets. There are clearly named, short functions like OnButtonPressed. In the original release, these functions are where developers would add their code detailing what the Widgets controlled. Making life even easier for us, C# has some simple patterns for generating and subscribing to events. I defined a few interfaces that we agreed every Widget would have to implement — ones that required definitions for Start, End, and Change events — and added implementations to the existing widgets. There’s a nice inheritance structure to the Widgets that meant we could implement the events once in classes like ButtonBase and SliderBase, and have them work in our more specialized versions of the Widgets. The events carry a payload of a WidgetEventArg object that wraps the relevant data about the Widget’s new state after the interaction.
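In simplified form, that kind of event contract might look like the sketch below. The type and member names are illustrative rather than the exact shipped Widgets API:

```csharp
using System;

// Illustrative sketch of a Widget event contract: Start / End / Change events
// carrying a payload that describes the Widget's new state.
public class WidgetEventArg<T> : EventArgs
{
    public T CurrentValue { get; private set; }
    public WidgetEventArg(T value) { CurrentValue = value; }
}

public interface IWidgetEventHandler<T>
{
    event EventHandler<WidgetEventArg<T>> StartHandler;  // e.g. button pressed
    event EventHandler<WidgetEventArg<T>> EndHandler;    // e.g. button released
    event EventHandler<WidgetEventArg<T>> ChangeHandler; // e.g. slider value changed
}

// Example subscription (illustrative):
//   slider.ChangeHandler += (sender, args) => asterismOpacity = args.CurrentValue;
```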

However, when using events while trying to stay in sync with a data model that’s changed by multiple sources or requires data validation, problems tend to crop up where your UI and your data fall out of sync. To solve this, we developed a relatively lightweight data-binding layer to connect generic Widgets directly to the application’s specific data. This involved creating an abstract class for a DataBinder to be implemented by the end-user-developer. (Why abstract classes rather than interfaces? To allow for easy integration with the Unity editor, which can’t serialize interfaces or generic types into accessible fields.)

With this setup, developers need only implement a getter and setter for the piece of data being controlled by the Widget. Widgets have open, optional fields in the Unity editor where developers can drag in a data-binder; from then on, the Widget will automatically update its view if the data changes, and update the data if the user modifies the Widget. It handles all the pushing and pulling of state behind the scenes using the accessors that you define.
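Reduced to its essentials, the pattern might look something like the sketch below. The class names here, including the AsterismRenderer component, are illustrative inventions rather than the actual Widgets code:

```csharp
using UnityEngine;

// Hypothetical scene component holding the piece of data we want to bind.
public class AsterismRenderer : MonoBehaviour
{
    [Range(0f, 1f)] public float LineOpacity = 1f;
}

// Simplified sketch of the data-binding idea: an abstract MonoBehaviour (so a
// concrete subclass can be dragged into a Widget's editor field) where the
// end-user-developer implements only a getter and a setter.
public abstract class FloatDataBinder : MonoBehaviour
{
    public abstract float GetCurrentData();           // Widget pulls this to refresh its view
    public abstract void SetCurrentData(float value); // Widget pushes this on user interaction
}

// Illustrative binder connecting a slider-style Widget to asterism line opacity.
public class AsterismOpacityBinder : FloatDataBinder
{
    public AsterismRenderer asterisms;

    public override float GetCurrentData() { return asterisms.LineOpacity; }
    public override void SetCurrentData(float value) { asterisms.LineOpacity = value; }
}
```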

Having these easy-to-hook-up data-binders connected with the Planetarium data meant that I could work on building new features for the planetarium while Barrett worked on a new feature in the Arm HUD. We had a well-defined set of expectations about when and how data would flow through the system. When our code met in the middle, we rarely had to do more than drag-and-drop a few items in the editor, which let us move a lot more quickly than if all our systems were tightly bound to each other’s architectures.

Part 7. Exploring the Structure of UI Widgets

Wilbur Yu, Unity Engineering Lead

In this section, we’ll look at how we structured Widgets to be as accessible and comprehensive as possible. For the purposes of the blog, we’ll look at the Button Widget in particular, but all of the Widgets follow a similar pattern. To see the full post with code samples, go to the original post on our blog.

Before we get started, here are the key points to keep in mind:

  • Readability is essential. Write understandable, simple functions.
  • Create base classes that contain physics and important abstract functions. Only one class or gameObject should be responsible for physics.
  • Minimize the number of colliders required for physics interaction.

Prefab Structure

All Widgets are created using Unity’s Prefab feature. Here’s the prefab structure for buttons as an example:

Base Component: ButtonDemoToggle. The Widget parent contains no scripts and no components. This gameObject’s responsibility is to determine the world position and scale of the Widget.

Physics Component: Button. This gameObject contains a trigger collider used to determine if a hand is interacting with it and respond accordingly. When a hand is no longer interacting with it, the physics component will take over.

The physics component is designed such that all the important properties can be changed in the inspector. The script is only responsible for responding to the physical changes because it inherits from a base physics script that handles all the physical movements.

Graphics Component: OnGraphics, OffGraphics, MidGraphics, and BotGraphics. These components are optional. In this example, they only contain scripts that make graphical changes, and they are linked to by the Physics Component, which signals them when to change their states based on the physics state.

Physics Structure

This is an inspector view of the Button physics structure. From here, you can specify the spring constant, trigger distance, and cushion thickness (used for hysteresis). The script references the graphics components because it will call their functions based on the state in the physics component.

During each physics update (FixedUpdate), the spring force is applied first, then a constraint restricts the button’s movement along the x-axis and y-axis. I chose to use our own formula for spring physics because Unity’s spring hinge doesn’t work well when the displacement of the object is less than 1 (one Unity unit equals one meter). Since we’re always working within a meter of the camera, this became a problem, so we had to implement our own spring physics.
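A stripped-down version of that kind of hand-rolled spring might look like the following sketch. The constants and axis conventions are illustrative, and the button is assumed not to be rotated relative to its parent:

```csharp
using UnityEngine;

// Sketch of a per-FixedUpdate button spring: drift off the local Z-axis is
// cancelled, then a Hooke's-law force with damping pulls the button back
// toward its rest position. Sub-meter travel is the regime described above
// where Unity's built-in spring hinge misbehaves.
[RequireComponent(typeof(Rigidbody))]
public class ButtonSpring : MonoBehaviour
{
    public float springConstant = 100f; // illustrative values
    public float damping = 5f;

    Rigidbody body;
    Vector3 restLocalPosition;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
        body.useGravity = false;
        restLocalPosition = transform.localPosition;
    }

    void FixedUpdate()
    {
        // Constrain travel to the local Z-axis.
        Vector3 local = transform.localPosition;
        local.x = restLocalPosition.x;
        local.y = restLocalPosition.y;
        transform.localPosition = local;

        // Spring plus damping along the remaining axis.
        float displacement = local.z - restLocalPosition.z;
        float zVelocity = transform.InverseTransformDirection(body.velocity).z;
        float force = -springConstant * displacement - damping * zVelocity;
        body.AddForce(transform.forward * force);
    }
}
```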

The ButtonBase calls two abstract functions — ButtonPressed and ButtonReleased — when the button passes or retracts from a certain point. ButtonToggleBase overrides the previous two abstract functions, and whenever the button is pressed it also calls two other abstract functions: ButtonTurnsOn and ButtonTurnsOff. Finally, ButtonDemoToggle overrides the previous two abstract functions, and handles the graphics components during these events. As mentioned earlier, other Widgets follow a similar pattern.
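Sketched out, that inheritance chain reduces to something like the simplified hierarchy below (details omitted and bodies condensed; this is not the actual Widgets source):

```csharp
using UnityEngine;

// Called by the physics layer when the button crosses or retracts past the
// trigger distance.
public abstract class ButtonBase : MonoBehaviour
{
    public abstract void ButtonPressed();
    public abstract void ButtonReleased();
}

public abstract class ButtonToggleBase : ButtonBase
{
    bool isOn;

    public override void ButtonPressed()
    {
        // Each press flips the toggle and forwards to the on/off hooks.
        isOn = !isOn;
        if (isOn) ButtonTurnsOn(); else ButtonTurnsOff();
    }

    public override void ButtonReleased() { }

    public abstract void ButtonTurnsOn();
    public abstract void ButtonTurnsOff();
}

public class ButtonDemoToggle : ButtonToggleBase
{
    public GameObject onGraphics;
    public GameObject offGraphics;

    public override void ButtonTurnsOn()  { onGraphics.SetActive(true);  offGraphics.SetActive(false); }
    public override void ButtonTurnsOff() { onGraphics.SetActive(false); offGraphics.SetActive(true); }
}
```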

Solving the Rigidbody Problem

The biggest problem we came across while using Widgets in Planetarium is that, when the Widgets are approaching astronomical speeds (e.g. flying around the Earth), the rigidbody inertia causes unexpected collisions. In turn, this causes unintentional event triggers. Ultimately, we decided that a major physics refactor with our own non-rigidbody implementation was necessary.

Thanks for following us on our journey through the stars with Planetarium! This demo release is just a taste of what we have in store for the future. Whether it’s gazing at the stars or soaring through space, we’d love to know what inspires you about VR. What kind of experience would you like to see (or build!) with Widgets?

Originally published at blog.leapmotion.com on February 2, 2015.
