Augmented Reality Prototyping Tools for Head-Mounted Displays

Dylan Hong
9 min read · Sep 30, 2019


If you recently typed “how to build prototypes for augmented reality” into the search bar, you were like me two months ago. If you were disappointed by the lack of tools, resources, or tangible example projects, you continued to be like me. I had gotten so used to having an abundance of resources from vibrant communities across the internet that could help me with anything tech-related. Want to build a website? Bam! Hundreds of webinars, YouTube videos, articles, open source repositories, and more clamoring over one another to teach you the ins and outs of designing and developing a website. The same can be said for mobile applications and video games. The stark contrast in resources for augmented reality (AR) and HoloLens was quite shocking, but also very exciting. How frequently do you get to be among the first to build out functionality for an entirely new way to interact with and consume information?

The team I work on has gone through the process of designing, rapid prototyping, and testing for an AR head-mounted display (HMD) project. We thought it would be a good idea to build out some of the documentation that would have helped us along the way. This article will primarily focus on the tools we used to create our designs and prototypes.

Traditional design tools

Moment from our usability testing

After doing some storyboarding for the different levels, we started thinking about how we should go about building out prototypes to test with. To create most of the traditional UI elements, we used tools like Photoshop, Sketch, and Cinema 4D (I explain how we used each in the following paragraphs).

While we didn’t use this method for our project, I found a great video from Apple’s WWDC archives that shows how they do low-fidelity prototyping. To simulate how some information might look in 3D space, they drew out icons and information that they then taped onto clear rulers. They then opened their camera apps to see how it would feel if this design were built out. This exercise led them to discover multiple shortcomings of their initial design, such as text legibility, scaling, and information hierarchies. Even though it might feel silly, this type of paper prototyping approach is a great way to catch glaring design issues early and save time overall.


Sketch

Sketch is a classic design tool that is great for generating UI designs and layouts. Additionally, its export tool was great for making use of smaller components of your design, like a panel with text on it or parts of a heads-up display design. This allowed us to export all of the assets that made up a scene, then use a 3D layout application to create our scenes for consumption in AR (or VR).

A good amount of our storyboarded designs centered around floating billboards that presented information to a user, and Sketch was our tool of choice for designing those billboards. To give these UI elements the sense that they are meant to be displayed in AR, we laid out multiple assets over a blurred background image. This simple layout structure helped us get a better sense of how our designs would look when they all shared the same space. Additionally, adding some tilts and warps to give the images a sense of depth and perspective also helps make these 2D screens more true to our design intentions. You can read more about our AR designs here. Using some of the tools later in this article, we will explore how to take these UI elements from Sketch and place them in a 3D environment relatively quickly.

Screenshot of our Sketch file

Cinema 4D

Cinema 4D was used to design 3D animations that might be viewed in augmented reality. By creating 3D animations, our team was able to showcase signifiers that we would be looking to build into parts of the application. These 3D assets are also great to have on hand for future use in programs like Sketchbox and Unity.


Sketchbox

Sketchbox is a VR application that allows you to take 2D or 3D assets and lay them out in 3D space quickly. You do this while in a VR headset with touch controllers, so it all feels as intuitive as placing your designs in the space around you.

Our team found that this was a great way to figure out how some of our more abstract design ideas might work in practice. The Google Poly integration provides a library of 3D models for you to use out of the box. This is really useful because you don’t need to spend time creating assets; a model close enough to get the idea across more than likely already exists.

If you have more than one VR headset on the team, there is a collaboration mode that loads multiple people into the same design space to work together. This led to a super cool collaborative VR meeting between myself and a remote member of my team as we laid out interaction flows in the virtual room around us, talking, pointing, and drawing even though we were thousands of miles apart. Unfortunately, the “export to Unity” feature had some issues with packaging textures, so we were not able to use it in our prototyping workflow. Overall, Sketchbox 3D served as a quick way to experience and validate design concepts in 3D.

The process of creating designs in Sketch then loading them into Sketchbox
Bringing Sketch designs into Sketchbox for VR viewing


Photoshop

Photoshop was great for taking the AR designs from Sketchbox 3D and contextualizing them for sharing. We found that if we took an in-app screenshot at head level, then overlaid it onto a photo taken at a similar angle, we could generate visual mockups extremely quickly (in less than 30 minutes). To help with keying out the background and overlaying the AR content on the picture, I created a giant green (or blue) screen box. This helped us explain and explore the different features and functionality we had been discussing, both to share with external stakeholders and to get a better sense of how implementable a concept might be.

Using Sketchbox to create augmented reality mockups

Unity and MRTK

Unity is the most complex tool that we used to create prototypes. Depending on your team’s skill set, this might be a part of the process that you skip. However, if you’re building a HoloLens application, this is the only way to see your design in action on the HoloLens.

Unity/MRTK to HoloLens

Fortunately, Microsoft has built out a large library of development resources for beginners to HoloLens development to take advantage of. I would recommend spending some time setting up the Mixed Reality Toolkit (MRTK) using resources such as this, and then looking at the prefabs in MRTK to see if anything is already built out. For example, if you are trying to test how people interact with the HoloLens when they use it for the first time, you could just run them through the default interaction tutorials. If you’re trying to lay out buttons of different shapes and colors to see which ones people tend to like more, you will save a lot of time by using MRTK’s example buttons as a starting point and making modifications from there.

If you’re looking to bring in UI elements or 3D objects that you’ve designed for your application, you can drag them into a scene and adjust their positions and rotations to build your AR layout. From there, you can use the game viewer to navigate your scene like a video game. If you have a HoloLens on hand, you can build your project to the device by following a tutorial like this one.
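If you prefer to position things in code rather than dragging them in the editor, a small script can do the same job. This is a minimal sketch, not part of MRTK; the `PanelPlacer` name and `panel` field are our own hypothetical examples:

```csharp
using UnityEngine;

// Hypothetical sketch: place an imported UI panel one meter in front
// of the user at startup, oriented to face the camera.
public class PanelPlacer : MonoBehaviour
{
    public Transform panel;   // assign your imported UI element or 3D object in the Inspector

    void Start()
    {
        Transform cam = Camera.main.transform;

        // One meter ahead of the camera, at eye height.
        panel.position = cam.position + cam.forward * 1.0f;

        // Standard billboard orientation for world-space UI:
        // point the panel's forward axis away from the user so it reads correctly.
        panel.rotation = Quaternion.LookRotation(panel.position - cam.position);
    }
}
```

Attach this to any object in the scene and drag your panel onto the `panel` slot; the same transform math applies whether you set it in code or by hand in the editor.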

Taking this one step further, if you are equipped to do some C# programming, you can lay out different scenes and group them together under empty game objects. You can then store these different layouts in an array and use prefab buttons to cycle through them. If you wanted to make the prototypes even more realistic, you could create invisible buttons and overlay them on top of where the navigation buttons exist in your designs. These buttons can call functions to activate the appropriate scenes. This would effectively allow you to build out the majority of the navigation of your AR designs for prototyping, in a way that is reminiscent of the style of prototyping that InVision or Sketch affords you.
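A minimal sketch of that array-of-layouts approach might look like the following. The `SceneCycler` class and its field names are illustrative assumptions, not something shipped with MRTK:

```csharp
using UnityEngine;

// Hypothetical sketch: cycle through design "scenes" stored as parent
// game objects, activating exactly one at a time.
public class SceneCycler : MonoBehaviour
{
    public GameObject[] layouts;   // each element is an empty parent grouping one design layout
    private int current = 0;

    void Start()
    {
        ShowOnly(current);
    }

    // Wire this to a prefab button's OnClick event, or to an invisible
    // button overlaid on a navigation element in your design.
    public void Next()
    {
        current = (current + 1) % layouts.Length;
        ShowOnly(current);
    }

    private void ShowOnly(int index)
    {
        // Toggle visibility by activating only the selected parent object.
        for (int i = 0; i < layouts.Length; i++)
            layouts[i].SetActive(i == index);
    }
}
```

Because each layout lives under one parent, `SetActive` toggles the whole design at once, which is what makes the parent-game-object grouping in step 2 below so convenient.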

If you have the time and resources on hand to use Unity as a prototyping tool, it will give you the closest look at what a functional product will feel like. Just make sure you are weighing the costs and benefits so that it is an appropriate tool for your team, timeline, and project.

Three core steps for Unity/MRTK are:

  1. Follow guides to download and install MRTK, and then set up your first HoloLens development environment.
  2. Populate Unity scenes with the 3D or 2D elements that you built out. Group assets from similar scenes under parent game objects for easy visibility toggling.
  3. Build out your scenes using Visual Studio for deployment on the HoloLens. Now, you can view your designs in AR on the HoloLens!


PowerPoint

When it came down to the prototyping software we actually used in our user testing, we chose PowerPoint. While this might seem like an odd choice at first, PowerPoint had the features that best matched what we were trying to test. One of the major features of our design was the UI layout screen. We wanted to see if we could use prerecorded videos or GIFs to explain the various bits of functionality in our UI. Neither Sketch nor InVision supports embedding videos in designs, but PowerPoint does.

Using PowerPoint’s hyperlinking features, we were able to build a fully functional 2D representation of our design in just a few hours. To get around the fact that we were not delivering the information in an AR head-mounted display, we also had some users wearing the HoloLens and told them to pretend that they were seeing the screen in AR. We have an entire article about our usability testing methodology here, but in short, the 2D prototypes greatly increased the confidence of users in the HoloLens.

Fully functional prototype using PowerPoint

Moving Forward

Developing augmented reality interfaces and prototypes is an extremely multifaceted project. As a team, it has been interesting to see how differently we have to think about AR design when we shift focus from handheld devices to head-mounted displays. Additionally, interaction styles, tracking abilities, audio quality, and even user fatigue vary greatly between the few head-mounted displays that are out right now. If you want to learn more about our project as a whole, check out this article that goes over our process and frameworks.

The future hardware direction of AR is not extremely apparent at the moment, but we hope that our broad approach to prototyping and design will allow us to easily adapt to what comes next. Regardless, the best way to prepare for the future of AR design is to start designing for AR now.


