Georgi Atanasov and I took our data visualization framework for AR VR, built with Unity3D, and set out to create some actual apps. Line-of-business applications will incorporate a lot of the UI that is standard for 2D applications into 3D.
Speech-to-text is king in AR VR, as long as you don’t have to yell usernames and passwords in public.
Line-of-Business app authentication in AR and VR will be quite an issue.
Going for a VR keyboard? Good luck: each headset we have at the office comes with its own vision of what a VR keyboard API is and how the keyboard should be displayed.
There are high-quality virtual keyboards in the Unity Asset Store. But when you are building something more intricate, you need a complex system of menus, windows and buttons, and those store assets don’t quite understand each other. Visually, at least.
At Progress Telerik we don’t compromise on UI. And how hard could it be to build a virtual keyboard anyway?
So the first thing we did was to ping Svetlin Nikolaev, a Senior Manager, User Experience, for his view on the design of one of the apps. It is a VR Twitter social graph visualization, so it needs to display a search screen with some search options, present some aggregated data and overlay information on a 3D graph.
“What can we use in VR? And what can’t we?” he asked.
“Design it to be perfect. We’ll worry about implementation details later.” Those were my famous last words:
And just like that we got acrylic overlays, beautiful rounded corners and a good-looking VR keyboard. Well, here are the implementation details.
We don’t suffer from “Not invented here” syndrome, so here is one of the Asset Store essentials we recommend every AR VR developer use.
TextMesh Pro is so good that Unity actually acquired it and the TextMesh Pro text rendering library is now available through Unity’s Package Manager.
Why is TextMesh Pro a must have for AR VR?
Because TextMesh Pro builds an atlas of signed distance fields for the text characters and renders them with shaders in a way that gives them vector-like properties.
- Letters are antialiased
- Text viewed up close does not get pixelated but rather appears sharp
- Signed distance fields provide the means for effects such as glow, shadow, etc.
So will TextMesh Pro cover the input field? Not completely. TextMesh Pro comes with some drawbacks.
- Grayscale atlas — no sharp corners
- Image backgrounds
The grayscale atlas stores all characters “blurry,” encoding in each pixel the distance from that pixel to the nearest edge of the shape. Outer pixels have negative values (black), inner pixels have positive values (white).
The shapes are rather small, but when they have to scale up, the SDF is interpolated. The shader drawing with the SDF will still render a sharp image, but the contour will be rounded near the corners. This Valve paper explains SDFs pretty well.
As for TextMesh Pro, well, they didn’t go for a multi-channel signed distance field atlas. Here you can see the difference between a single-channel and a multi-channel SDF. Please note that a multi-channel SDF cannot draw arbitrary vector shapes either, so we are somewhat fine with the “Comic Sans”-like appearance when viewed up close.
TextMesh Pro renders backgrounds using an Image by default. A library that uses SDFs to render text could instead use a procedural SDF to render these rounded-rectangle backgrounds. The good thing is that the Image component of the TextMesh Pro InputField can be removed and replaced with our own implementation of a Unity Graphic component.
Procedural SDF 2D RoundedRect
The SDFs of most primitive shapes can be described with a very simple formula. You can take a look at some cool examples here. While these examples demonstrate 3D shapes, the formulas are pretty much the same for 2D shapes.
Using the following round box in a Graphic component with a single quad yields the SDF for a RoundedRect component:
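As a sketch of that formula, here is a 2D rounded-box SDF (after the well-known distance-function primitives) written in Python. The real implementation lives in the pixel shader; this version uses the inside-positive sign convention described above, and the function and parameter names are illustrative:

```python
import math

def rounded_rect_sdf(px, py, half_w, half_h, radius):
    """Signed distance from point (px, py) to a rounded rectangle centered
    at the origin with half-extents (half_w, half_h) and corner radius.
    Positive inside, negative outside, zero on the contour (the usual
    'negative inside' formula, flipped to match the atlas convention)."""
    # fold the point into the first quadrant and shift by the corner radius
    qx = abs(px) - half_w + radius
    qy = abs(py) - half_h + radius
    outside = math.hypot(max(qx, 0.0), max(qy, 0.0))
    inside = min(max(qx, qy), 0.0)
    return -(outside + inside - radius)
```

A Graphic component only needs to emit a single quad; the shader evaluates this function per pixel.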
A pixel shader then uses the SDF to compute a color.
- Using the handy HLSL clip(sdf) function — pixels with negative SDF are discarded.
- Pixels with SDF in a certain range (0–0.1, based on border width) are painted with the border color.
- The rest (0.1 and greater) are painted with the fill color.
- The fwidth(sdf) is used to calculate blending for antialiased edges.
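The four steps above can be sketched in Python. Here `aa` stands in for HLSL’s `fwidth(sdf)`, and the colors, band widths and function names are made up for illustration; blending toward the background color plays the role of `clip()` while keeping the edge antialiased:

```python
def clamp01(x):
    return min(max(x, 0.0), 1.0)

def mix(a, b, t):
    # linear interpolation between two RGB tuples, like HLSL's lerp()
    return tuple(ca * (1 - t) + cb * t for ca, cb in zip(a, b))

def shade(sdf, fill, border, background, border_width=0.1, aa=0.02):
    """Compute a pixel color from its SDF value (inside-positive).
    aa approximates fwidth(sdf): how much the SDF changes across one pixel."""
    edge = clamp01(sdf / aa + 0.5)                    # background -> border blend
    inner = clamp01((sdf - border_width) / aa + 0.5)  # border -> fill blend
    return mix(mix(background, border, edge), fill, inner)
```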
Whether skeuomorphism will be back in AR VR I don’t know, but real-life objects do use rounded corners, a lot. It only makes sense that AR VR objects will have a bit of roundness.
As a final touch, images in UI often have borders, especially when they are loaded asynchronously. So we allow our RoundedRect to use a Texture2D for the background instead of just a plain color.
Why is UI Antialiasing so Important for AR VR
Antialiasing is important because your head constantly moves. You have a pulse, and the sensors are imperfect. Head tracking registers subpixel movements even when you try to stay still.
Consider the figure that displays a nearly vertical aliased edge. Every frame you move about 10% of a pixel in a random direction. Overall the edge remains in the same position, but every frame is rendered differently from the previous one. When the drawing is aliased, it switches several pixels in a connected line on or off. In the example, a horizontal movement of 10% of a pixel removes 3 pixels in a vertical line, and that is perceived as vertical motion near the edge.
Aliased drawing renders the pixels binary: either on or off. Antialiased drawing generates a gradient from fully covered to fully uncovered. So if the shape covers 40% of the pixel, the color of the pixel will be a mix of 40% fill and 60% background. Given the same motion as before, each pixel on the border line will change its color by about 10%. The perceived motion will be a slight horizontal drift instead of a huge vertical jump.
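The coverage math from the paragraph above can be sketched in Python, using made-up black-on-white values:

```python
def pixel_color(coverage, fill, background):
    # antialiased pixel: blend fill and background by fractional coverage
    return tuple(f * coverage + b * (1 - coverage) for f, b in zip(fill, background))

black, white = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
before = pixel_color(0.40, black, white)  # edge covers 40% of the pixel
after = pixel_color(0.50, black, white)   # same edge after a 10%-of-a-pixel shift
# every channel moves by just 0.1; an aliased pixel would snap from 1.0 to 0.0
```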
Such extreme contrast as black on white will rarely be experienced in a textured 3D model. But when designing the UI components of an app, maximum contrast is desirable for readability.
Handling the antialiasing in shaders for the UI part of the application benefits low-power, self-contained mobile devices. The likes of HoloLens 1, Oculus Go, etc. can render 3D models without antialiasing, while the antialiasing for the UI is handled in TextMesh Pro’s and RoundedRect’s shaders.
If you wonder whether this was inspired by Microsoft’s Fluent Design System, you will not be mistaken. When to use acrylic, according to Microsoft:
We recommend that you place supporting UI, such as in-app navigation or commanding elements, on an acrylic surface. This material is also helpful for transient UI elements, such as dialogs and flyouts, because it helps maintain a visual relationship with the content that triggered the transient UI.
And this description fits our application design perfectly. What we have is a kind of solid 3D object in VR (that would be our 3D chart, graph, CAD model, etc.), which is the main content. Our Unity acrylic implementation will be used as the background for a Canvas containing commanding elements, flyouts, etc. that appear over that object, in front of the viewer.
The Acrylic Recipe
This is the original acrylic recipe, though ours differs a little:
- background blur
- saturation boost
- exclusion blend
- color/tint overlay
In 2D you have ways to play with blend modes, but our Unity application had to provide an efficient way to implement the stack above.
For the background we considered using one of the Unity examples on CommandBuffers. The technique draws all the opaque objects in the scene. Then, using a CommandBuffer on the camera, it captures the rendered image of those opaque objects, applies a blur and registers the blurred version of the “background” in a global texture. This allows us to use a huge number of acrylic surfaces in a scene.
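For intuition, the blur in that pass can be sketched as a separable box blur in Python. The real version runs in a shader, typically as one pass per axis at reduced resolution; the radius and sample data here are illustrative:

```python
def box_blur_1d(row, radius):
    """One pass of a separable box blur: average each sample with its
    neighbors within `radius`. Run once per axis to blur a 2D image."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out
```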
Each of our acrylic surfaces will use the blurred global texture to fill its background and apply saturation boost, color tint and noise. What we do not have is the exclusion blend.
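Per pixel, the saturation boost and tint overlay amount to a little color math. Here is a Python sketch with made-up boost and opacity values (noise omitted, and no exclusion blend, as noted above); the real work happens in the acrylic surface’s shader:

```python
def clamp01(x):
    return min(max(x, 0.0), 1.0)

def saturation_boost(rgb, amount=1.25):
    # push each channel away from the pixel's luma to boost saturation
    luma = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
    return tuple(clamp01(luma + (c - luma) * amount) for c in rgb)

def tint_overlay(rgb, tint, opacity):
    # blend the tint color over the (blurred, boosted) background sample
    return tuple(clamp01(c * (1 - opacity) + t * opacity) for c, t in zip(rgb, tint))

def acrylic_pixel(blurred, tint=(1.0, 1.0, 1.0), opacity=0.6):
    """One sample of the blurred global texture -> final acrylic color."""
    return tint_overlay(saturation_boost(blurred), tint, opacity)
```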
So when designing a toolbar or a navigation panel, we create a Canvas with a RoundedRect drawing the Canvas background. The RoundedRect uses the global acrylic texture to pick up its fill color. In the Canvas, we use components drawn with RoundedRect to represent buttons, progress bars and checkboxes, and TextMesh Pro for the icons and text.
All these components have been developed to also support the built-in alpha and to work well when controlled via a CanvasGroup, which can be driven by Unity’s Mecanim animations. Now we can quickly build overlay interfaces.
Laser pointers. Laser pointers are the closest thing to using a mouse on a desktop. They allow the user to exercise her innate dexterity with very low power consumption: raising a hand to actually touch a button takes more muscle power than moving a wrist and clicking a finger.
So we’ve implemented a laser pointer. That laser pointer uses Unity’s built-in events such as IPointerClickHandler and family. This is a pretty solid contract: widgets subscribe to these events on one side, and on the other our laser pointer can be attached to any of the HTC Vive or Oculus Go controllers. The interfaces are designed so that multiple pointing devices can be distinguished. In the future, since anyone can raise the events, we see an easy way to represent each of your fingers as a pointing device and raise these events as an implementation of direct interaction, or to implement thumb typing on our virtual keyboard, similar to the HTC Vive’s built-in keyboard.
I must mention Unity’s new prefab workflow. When you build this kind of UI widget, you put a text in a rounded rect and group it into a Button prefab. Then you put several buttons on an acrylic canvas and group them into a Keyboard.
That kind of workflow used to break all the buttons back into borders and texts when the keyboard was converted to a prefab. With Unity 2018.3 (available as a beta at the moment), the prefab workflow has been extended to allow nested prefabs.
Just in time!
The Twitter Social Graph App
Putting it all together in the Twitter social graph app, here is an early preview.
If you are happy with the results, stay up to date with our work by following us on Twitter.