Develop a Simple VR Golf Application

Marios Koutroumpas · Published in The Startup · Dec 31, 2019 · 10 min read

A short guide to get started creating VR apps for Android in Unity.

Photo by Ludwig S on Unsplash

Introduction

VR gives players and enthusiasts an excellent opportunity to immerse themselves in a sporting environment that stimulates their senses far more effectively than traditional image and sound reproduction devices, such as large high-definition screens. That effectiveness depends not only on graphical detail and the physicality of motion, but also on how closely the virtual game matches the flow and progression of the real-world game, how accurately the actual setting is reproduced within the virtual scenery, and how visual, auditory and even haptic feedback contribute to the player's excitement. In the case of golf, the very nature of the sport means that a significant part of the excitement comes from the accurate reproduction of famous golf courses inside the virtual scenery, because each course has its own distinctive features (e.g. hazards, grass quality, lighting conditions) that directly affect player performance.

Keeping all that in mind, the goal of this article is to provide a compact guide to making a simple VR golf game for the Android platform that runs smoothly even on low-grade phones (EUR 100–150 price range). For this purpose, we will walk through the creation of a very simple demo app that is primarily focused on delivering good operational performance and a smooth VR experience at all times, rather than on texture and gameplay complexity. We will use Unity 2019 and the Google VR SDK, and cover three-dimensional modelling, scene creation, spatial audio, ball trajectory integration and performance optimization.

A link to the source code of this project can be found in the References section.

Setup

We recreate the driving-range area of the golf course in our scene, keeping it simple and clean; to that end, as few trees as possible are used. The player is positioned at a fixed point within the bay area of the driving range, and shots are taken towards the open area of the range. We also create a virtual switch that toggles Debug mode, which displays live frame rate measurements at various critical points around the scene. Finally, we create the player, a "suited man" holding a golf club. The player's head is excluded from rendering so that the Main Camera can be placed in its position; the user can then rotate their field of view without being obstructed by the player's head. The following screenshot provides an overview of the assembled scene.

Image 1: Scene setup

As in the previous article, the built-in Mobile Diffuse shader is used with a few simple modifications that allow the texture color to be changed; we rename this modified shader "Mobile Diffuse X". A GvrReticlePointer component is also attached to the Main Camera object to dynamically indicate that the view pointer is currently aimed at an "active" GameObject, such as a golf ball or the Debug switch. By looking at that object, a predefined change to its state is triggered. This can be used to launch golf balls, toggle Debug mode, move a GameObject by a specific distance, alter its color, and so on. A short demonstration is shown in the video below.

Video 1: Triggering an active GameObject, such as a golf ball

This technique eliminates the need for any physical controls connected to our game (for example, activation buttons on the headset or handheld controllers), all of which would interfere with gameplay and significantly distract the user.
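The trigger code itself is not reproduced in this article, but a minimal sketch of such a gaze-dwell trigger might look like the following. It assumes that the scene uses the Google VR SDK's GvrPointerInputModule, which routes gaze events through Unity's standard EventSystem interfaces; the component name GazeDwellTrigger and its serialized fields are illustrative.

```csharp
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.EventSystems;

// Hypothetical gaze-dwell trigger: fires an event after the reticle
// has rested on this GameObject for a fixed amount of time.
// Requires an EventSystem-based pointer such as GvrPointerInputModule.
public class GazeDwellTrigger : MonoBehaviour, IPointerEnterHandler, IPointerExitHandler
{
    [SerializeField] private float dwellSeconds = 3f;     // dwell time before firing
    [SerializeField] private UnityEvent onDwellComplete;  // e.g. launch a ball, toggle Debug

    private float gazeTimer;
    private bool gazedAt;

    public void OnPointerEnter(PointerEventData eventData) { gazedAt = true; gazeTimer = 0f; }
    public void OnPointerExit(PointerEventData eventData)  { gazedAt = false; }

    private void Update()
    {
        if (!gazedAt) return;
        gazeTimer += Time.deltaTime;
        if (gazeTimer >= dwellSeconds)
        {
            gazedAt = false;           // fire once per dwell
            onDwellComplete.Invoke();  // reaction wired up in the Inspector
        }
    }
}
```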

Create the models

The primary goal is to design our golf game with maximum performance in mind, so we use Blender to create simple, low-polygon meshes for trees, shacks and bays. We also create a single texture atlas that accommodates all required textures. This can be done with GIMP or any other image editing software; it allows us to maintain only one material for all GameObjects in the project, which offers significant performance advantages. The texture atlas is then imported into Blender and the UVs of each mesh are mapped to specific atlas areas. This is known as "UV unwrapping" and is performed by selecting each face (or group of faces, where convenient) and moving it to the desired position on the texture atlas image. A screenshot of the final stage of this procedure is shown below:

Image 2: Mesh creation and UV unwrapping in Blender

Notice how the texture atlas contains all the color and texture combinations that could potentially be used by any mesh we create and place in our scene. Finally, we create a single material that uses both this atlas and our Mobile Diffuse X shader. By assigning that single material to multiple GameObjects in the scene, the Unity engine is instructed to batch all those GameObjects statically and render them in a single draw call.
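Static batching is typically enabled by ticking the Static flag on each object in the Inspector, but as an illustrative sketch it can also be performed at runtime with StaticBatchingUtility.Combine. The sharedAtlasMaterial field and environmentRoot parent object below are assumptions, not names from the project:

```csharp
using UnityEngine;

// Hypothetical bootstrap: assigns the single atlas material to every
// environment renderer, then combines them into one static batch.
public class EnvironmentBatcher : MonoBehaviour
{
    [SerializeField] private Material sharedAtlasMaterial; // atlas + Mobile Diffuse X
    [SerializeField] private GameObject environmentRoot;   // parent of trees, shacks, bays

    private void Start()
    {
        // One shared material across all renderers -> one draw call after batching.
        foreach (var r in environmentRoot.GetComponentsInChildren<MeshRenderer>())
            r.sharedMaterial = sharedAtlasMaterial;

        // Batch all non-moving child meshes under the root.
        StaticBatchingUtility.Combine(environmentRoot);
    }
}
```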

The player model

This is too complicated to build from scratch, since it requires rigging a humanoid model, defining its muscle configuration and skinning it. A quicker solution is to import a third-party humanoid model and attach a golf club mesh (also third-party) to its hands at a position and orientation that look as natural as possible. The head of the humanoid is hidden by unchecking its "HeadF" component in the Unity Inspector panel, and the Main Camera is placed in the head's position so that the player obtains an unobstructed 360-degree field of view. Using Blender, we also created a custom animation for this humanoid, which was later imported into Unity; it is activated by invoking a script method every time a golf shot is taken, and a club hit sound is played back at that moment. A view of the humanoid model setup is shown in the following image:

Image 3: Humanoid representation of the player
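The actual OnStartPlayerAnimation method is linked in the References rather than reproduced here; a minimal sketch of what such a method might contain is shown below. The Animator trigger name "Swing" and the serialized AudioSource field are assumptions.

```csharp
using UnityEngine;

// Hypothetical player-swing controller: plays the imported Blender
// animation and the club hit sound whenever a shot is taken.
public class PlayerSwing : MonoBehaviour
{
    [SerializeField] private Animator animator;        // the humanoid's Animator
    [SerializeField] private AudioSource clubHitSound; // spatialized club hit sound

    // Invoked by the shot logic each time a golf shot is taken.
    public void OnStartPlayerAnimation()
    {
        animator.SetTrigger("Swing"); // assumed trigger name in the controller
        clubHitSound.Play();
    }
}
```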

Our scene

The scene is built as shown in the screenshot below, attempting to recreate the various features of the golf club as accurately as possible while keeping only the elements that are absolutely necessary.

Image 4: Top view of the Golf club scene in Unity

The dimensions of the scene and the exact positioning of the GameObjects have been calibrated to be close to the actual ones, so that the user gets a similar sense of perspective, depth and object sizing as on the actual driving range. Several frame rate counters have also been scattered around the scene, and the Debug switch GameObject (which toggles the visibility of the counters) is placed close to the player's position. Some parts of the scene perform well at all times, while others show a temporary performance drop as the user rotates the device; the positions of the counters were therefore chosen to display live debug information precisely at those "troublesome" parts of the scene. A detailed explanation of debugging and performance optimization follows in the upcoming paragraphs.
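The counter code is not shown in the article; a minimal sketch of such an in-scene frame rate counter, assuming each counter is a world-space TextMesh, might look like this:

```csharp
using UnityEngine;

// Hypothetical in-scene FPS counter: smooths the unscaled frame time
// and prints the result to a world-space TextMesh.
[RequireComponent(typeof(TextMesh))]
public class FrameRateCounter : MonoBehaviour
{
    private TextMesh label;
    private float smoothedDelta = 1f / 60f; // sane starting value

    private void Start() { label = GetComponent<TextMesh>(); }

    private void Update()
    {
        // Exponential smoothing keeps the readout stable.
        smoothedDelta += (Time.unscaledDeltaTime - smoothedDelta) * 0.1f;
        label.text = Mathf.RoundToInt(1f / smoothedDelta) + " fps";
    }
}
```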

Shots, trajectories and distance data

In this demonstration app, the club does not actually hit the ball during the player's swing animation. A shot is triggered by looking at the golf ball in front of the user and keeping the GvrReticlePointer focused on it for three seconds; the launch force vector of the shot is assigned randomly by code just before launch. Once a launched ball has landed and stopped, a new ball is automatically generated, also by code (see the OnPostLaunch() method in Snippet 1 below), and placed close to the starting position of the previous ball.

Snippet 1: The Launch functionality attached to every ball instance.
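The snippet itself is an embed in the original post (and linked in the References); a minimal sketch of such launch functionality, assuming a Rigidbody on each ball, a ballPrefab field for respawning and illustrative force values, might look like this:

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical per-ball launch behaviour: applies a random force vector,
// waits for the ball to come to rest, then spawns the next ball.
[RequireComponent(typeof(Rigidbody))]
public class BallLauncher : MonoBehaviour
{
    [SerializeField] private GameObject ballPrefab; // next ball to spawn
    [SerializeField] private Vector3 spawnPosition; // tee position

    private Rigidbody body;

    private void Awake() { body = GetComponent<Rigidbody>(); }

    // Called when the reticle has dwelt on the ball for 3 seconds.
    public void Launch()
    {
        // Random force: forward and up, with some lateral spread (values illustrative).
        Vector3 force = new Vector3(Random.Range(-2f, 2f),
                                    Random.Range(8f, 12f),
                                    Random.Range(18f, 25f));
        body.AddForce(force, ForceMode.Impulse);
        StartCoroutine(OnPostLaunch());
    }

    // Waits until the ball has landed and stopped, then spawns a new one.
    private IEnumerator OnPostLaunch()
    {
        yield return new WaitForSeconds(1f); // let the ball leave the tee first
        while (body.velocity.sqrMagnitude > 0.01f)
            yield return null;
        Instantiate(ballPrefab, spawnPosition, Quaternion.identity);
    }
}
```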

By assigning friction parameters to the Physic Material of each ball's Sphere Collider, the ball comes easily to a stop a few seconds after it lands. The following image shows a compact view of each ball's configuration in the Unity editor.

Image 5: Ball configuration
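As an illustrative aside, the same kind of Physic Material could also be constructed in code; the friction values below are assumptions, not the ones visible in the screenshot:

```csharp
using UnityEngine;

// Hypothetical runtime setup of a ball's Physic Material.
// High friction plus a Maximum friction-combine mode helps the ball
// stop shortly after landing instead of rolling on.
public class BallPhysicsSetup : MonoBehaviour
{
    private void Awake()
    {
        var mat = new PhysicMaterial("BallFriction")
        {
            dynamicFriction = 0.9f,                        // illustrative value
            staticFriction  = 0.9f,                        // illustrative value
            frictionCombine = PhysicMaterialCombine.Maximum
        };
        GetComponent<SphereCollider>().material = mat;
    }
}
```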

To display the live trajectory of a ball's flight and landing bounce, we use the method described in detail in the previous article. Additionally, the total and carry distance data are displayed in the top right corner of the user's field of view in a visually cumulative manner (see Image 6 below): every time a new ball lands and stops, its distance data are appended to the HUD.

Image 6: Distance data HUD
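A minimal sketch of such a cumulative HUD, assuming the distance readout is a UI Text element anchored to the top right of the camera (all names below are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical cumulative distance HUD: appends one line per shot.
public class DistanceHud : MonoBehaviour
{
    [SerializeField] private Text hudText; // UI Text in the top-right corner
    private int shotNumber;

    // Called once a ball has landed and come to rest.
    public void AppendShot(float carryMeters, float totalMeters)
    {
        shotNumber++;
        hudText.text += string.Format("\n#{0}  carry {1:F1} m  total {2:F1} m",
                                      shotNumber, carryMeters, totalMeters);
    }
}
```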

Now that the setup of our scene and its components has been fully explained, we can explore how our demo app performs on a relatively low-grade Android phone, such as the Nokia 5.

Performance optimization

To achieve maximum app performance on such a phone, low-poly models are used (as discussed in "Create the models"), dynamic shadows, dynamic lighting and reflection probes are deactivated, and the project's quality settings are set to Low. Optimizing this aggressively with low-poly models might seem a little "backwards" in 2019, but it keeps the level of detail low, does not involve lots of code, and provides a helpful foundation for building much bigger and immensely more demanding environments in the future. The image below shows how our app performs before any optimization.

Image 7: This definitely needs to be optimized!

The first action is to disable VSync in order to save CPU processing time. Also, as explained in "Create the models", we use a single material with a single texture atlas to minimize the number of draw calls. With the shared material, almost all of the non-movable objects in the scene are rendered in a single draw call, yielding a considerable performance gain.
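In Unity, VSync can be disabled either in the Quality settings or from code at startup; a minimal sketch, where the 60 fps target is an assumption:

```csharp
using UnityEngine;

// Hypothetical startup tweak: turn VSync off and cap the frame rate
// ourselves, freeing CPU time otherwise spent waiting on the display.
public class PerformanceBootstrap : MonoBehaviour
{
    private void Awake()
    {
        QualitySettings.vSyncCount = 0;   // disable VSync
        Application.targetFrameRate = 60; // assumed target for mobile VR
    }
}
```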

Image 8: Rendering effects of Static batching by using a shared material

The above image shows a capture of the Unity Frame Debugger while the app was running on our Nokia 5 phone, and confirms that 75 draw calls have been batched into a single static batch for the specific frame. Movable objects, such as the golf balls and the arms and torso of the player model, cannot be batched with the static ones; however, this does not significantly affect app performance, since these moving elements are small and move only infrequently.

Image 9: Much better, but with periods of decline

While app performance is now much improved, we still experience frequent intervals of low frame rate, depending on the current position of the GvrReticlePointer (i.e. exactly which object the user is currently looking at). This turned out to happen in parts of the scene where a significant number of trees overlap in the user's line of sight, as explained in the following illustration.

Image 10: Definition of “overlapping trees”

To address this issue, we rearrange the trees in the scene so that no more than two of them overlap at a time within the player's line of sight. This also applies to the "forest" area, which is limited to a maximum of two rows of trees. Consequently, the mesh rendering complexity for any given viewing direction is significantly reduced, and the app can now maintain a stable frame rate of almost 60 fps, as shown below, even when the user moves their head relatively quickly while looking around the scene.

Image 11: Optimization results

In summary, we achieved good performance even on a low-grade Android phone by using the following practices:

  1. Disable VSync.
  2. Disable dynamic lighting elements.
  3. Use low-polygon models.
  4. Perform static batching via a single material.
  5. Position and align GameObjects in the scene carefully.
  6. Perform thorough frame debugging in “difficult” areas of the scene.

Final thoughts and next steps

The app itself was fun to make and a good opportunity to exercise a useful collection of fundamental Unity and VR skills required to create and assemble its components. The next step will be to create a much more complex VR game that makes use of advanced features of the Unity platform, such as its new Data-Oriented Technology Stack (DOTS), which provides a high level of out-of-the-box performance. The goal will be to push the app's level of detail on Android phones (even low-grade ones) as far as possible without compromising user comfort.

References

  1. Smooth Trajectories for Mobile VR in Unity
  2. Unity documentation on Sprite Atlas
  3. Unity documentation on Draw Call Batching
  4. Blender documentation: UVs
  5. Texture Atlasing: An Inside Look At Optimizing 3D Worlds
  6. UV map basics
  7. Source code of this project on GitHub
  8. Mobile Diffuse X shader source code
  9. OnStartPlayerAnimation source code
