Transitioning to a Career in AR/VR Design

Jake Blakeley
May 21, 2019

A couple of years ago, I made a silly prototype that let people shoot virtual foam darts at their friends’ faces in augmented reality. Although it was a small and fun project, it was the start of my transition from designing 2D UI products for advertisers to being one of the first handful of product designers helping shape what is now the Spark AR platform. It was exciting to see such a simple experience spark joy in people when they used it. Working at Facebook, I can bring these types of experiences to scale on a platform that enables creators to build and share similar augmented reality experiences with their friends and followers. Two years later, I’m still designing for augmented reality and virtual reality — AR/VR — at Facebook, but now I’m working on Oculus products and learning how to design for all of the ways our brains perceive the world.

This transition wasn’t unique to me, and I see it as an industry trend. Based on the number of people reaching out to me recently, it seems more designers than ever are entering the AR/VR space as they realize how transformational this technology is becoming. Let’s take a peek at some key concepts, the general process AR/VR designers at Facebook use and how you can apply it to your own work, how to choose the right tools and platforms to build for, and how to mind the skill gap so you avoid frustration when taking on this new challenge.

Key Concepts to Start Your Journey

The Basics of 3D

A 3D object starts with a mesh: a set of vertices connected into polygons that define its shape. On top of that, we construct the rest of our object by adding textures, materials and shaders. This is one of the key differences many designers struggle with when learning a 3D design tool. Unlike with 2D design tools, we’re not applying an image against a flat screen anymore. It’s a texture, applied to a material, tied to a UV map, rendered by a shader. That sentence probably didn’t make much sense, so let’s break it down with imagery.

Say we want to model the “angry reaction” in 3D. We start with a simple sphere model, then unwrap the sphere mesh to create a UV map. Notice how every edge of the mesh lines up with a region of the UV map, so the texture can be aligned back onto the model later:

Next, we take our 2D image of an angry reaction and apply it as a texture to a material, which is rendered by a shader. We then apply that material to the sphere mesh. As you can see, the texture wraps around the sphere nicely.
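To make the mapping concrete, here is a minimal sketch in plain Python of how a point on a sphere gets its (u, v) texture coordinates. It uses a standard spherical-unwrap formula rather than any particular tool’s implementation, so treat it as an illustration of the idea, not as production code:

```python
import math

def spherical_uv(x, y, z):
    """Map a point on a unit sphere to (u, v) texture coordinates.

    u wraps around the equator (longitude), v runs from pole to pole
    (latitude). A UV unwrap encodes the same idea: every vertex gets a
    2D address into the flat texture image.
    """
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

# A vertex on the front of the sphere samples near the middle of the
# texture, which is where the angry face would be painted.
print(spherical_uv(0.0, 0.0, -1.0))  # about (0.25, 0.5) with this convention
```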

When it comes to 3D, shaders are probably the hardest component to wrap your head around, but they’re one of the most fundamental. Shaders are the instructions given to your device that tell it how to render an image, based on all the inputs we mentioned earlier: materials, mesh, vertices, color and light, among others. This happens every frame to create animation.

The easiest way to think about this is to think about your favorite 3D video games. You’ve probably seen a game styled more like a cartoon, such as The Legend of Zelda: The Wind Waker, and one styled more realistically, such as The Elder Scrolls V: Skyrim. These styles were determined by the shaders used.
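As a rough mental model rather than real engine shader code, this small Python sketch shows what a shader conceptually decides for a single pixel: given the surface normal, the light direction and a base color, return a final color. Swapping the smooth falloff for a stepped one is the kind of change that separates a Skyrim-style look from a Wind Waker-style one:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_shader(normal, light_dir, base_color):
    """Smooth, 'realistic' shading: brightness falls off continuously
    with the angle between the surface and the light."""
    intensity = max(0.0, dot(normal, light_dir))
    return tuple(c * intensity for c in base_color)

def toon_shader(normal, light_dir, base_color, bands=3):
    """Cartoon-style shading: quantize the same intensity into a few
    flat bands, the trick behind cel-shaded looks."""
    intensity = max(0.0, dot(normal, light_dir))
    stepped = round(intensity * (bands - 1)) / (bands - 1)
    return tuple(c * stepped for c in base_color)

# Same surface, same light, same orange material; two very different styles.
normal, light, orange = (0.0, 0.6, 0.8), (0.0, 1.0, 0.0), (1.0, 0.6, 0.1)
print(lambert_shader(normal, light, orange))
print(toon_shader(normal, light, orange))
```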

Here is the “angry reaction” with three different shaders and the material we applied.

Just like in the real world, lighting defines the brightness, shadows and other properties of objects and their surfaces. Lighting is very important in AR/VR because it grounds objects in the scene, makes them believable and helps guide users.

There’s a lot more to 3D, such as rigging, animating and the use of different material types, but this should be enough to help you grasp the basics before diving into a 3D tool.

The Tale of Two Spaces

Let’s look at typography as an example. A 12-pixel font in screen space is generally 12 pixels all the time, but if we wanted to put text in world space, it changes size and readability drastically, based on how close the user is to it.

What Is AR/VR, Actually?

AR is about recognizing and understanding the world as seen by the device’s camera. It superimposes media onto the user’s view, combining the real world and a computer-generated one.

Because the system only understands the pixels seen by the camera, it doesn’t interpret the world like people do. Occlusion is an example of an AR constraint: the device doesn’t automatically understand the depth of the world, so a virtual object can show through real things that should hide parts of it.

In this example, the system first has to understand a face. Then we track a mesh to it to occlude — or mask — the crown so the back side doesn’t show through the head.
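Conceptually, that masking is a per-pixel depth comparison: only draw the virtual object where nothing the system knows about sits closer to the camera. Here is a simplified Python sketch of that test; the depth values are hypothetical stand-ins for what face tracking would provide:

```python
def composite_pixel(camera_pixel, crown_pixel, crown_depth, face_depth):
    """Decide what one pixel of the final frame shows.

    Depths are distances from the camera. If the tracked face mesh is
    closer than the crown at this pixel, the crown is occluded and the
    camera feed shows through instead.
    """
    if crown_pixel is None:
        return camera_pixel          # nothing virtual at this pixel
    if face_depth is not None and face_depth < crown_depth:
        return camera_pixel          # the head hides the back of the crown
    return crown_pixel               # the crown is drawn over the camera feed

# Back of the crown (farther away than the face) gets masked out;
# front of the crown (closer than the face) gets drawn.
print(composite_pixel("camera", "gold", crown_depth=1.2, face_depth=1.0))  # camera
print(composite_pixel("camera", "gold", crown_depth=0.8, face_depth=1.0))  # gold
```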

While AR superimposes a new world onto ours, VR transports us into a digital one. It does this through a stereoscopic display and headset tracking to make your head into a virtual camera for a digitally rendered world.

The biggest constraint in VR comes from the fact that we’re tricking our eyes and brain into thinking we’re in a virtual world. We need the rules of this world to match our concept of reality.

To simplify: when there’s a disconnect between what our body feels and what we see, user comfort suffers. For example, if you make someone fall in VR while their body knows it’s standing still, the mismatch can cause discomfort. Here are some examples of how to allow movement while maintaining user comfort.

From left to right: Teleporting by pointing and pressing a button in Robo Recall. Pushing yourself through space in Echo VR. Using your hands at a distance to pull yourself in To The Top.
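Teleportation, for instance, sidesteps the mismatch by skipping continuous motion entirely: rather than sliding the camera, you jump it to the aimed-at spot, often behind a quick fade. A minimal Python sketch of that idea, with a simple ray-to-floor intersection standing in for a real engine’s raycast:

```python
def teleport_target(aim_origin, aim_direction, max_distance=10.0):
    """Find where the aim ray hits the floor plane (y = 0), if anywhere.

    Jumping the player straight to this point, rather than sliding them
    there, avoids showing the eyes motion the inner ear never feels.
    """
    ox, oy, oz = aim_origin
    dx, dy, dz = aim_direction
    if dy >= 0:
        return None                          # aiming level or upward: no floor hit
    t = -oy / dy                             # distance along the ray to y = 0
    if t > max_distance:
        return None                          # too far: reject the teleport
    return (ox + dx * t, 0.0, oz + dz * t)

# Pointing a controller held at 1.6 m slightly downward and forward:
print(teleport_target(aim_origin=(0.0, 1.6, 0.0), aim_direction=(0.0, -0.5, 0.87)))
```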

A design consideration you’ve probably thought about for mobile but that’s exaggerated in VR is designing for the human body. Spatial interfaces use your head and hands to allow you to interact with the world, which is a magical experience and intuitive if done right. However, our bodies have limitations. Looking down, turning around, keeping our arms up — these become tiring over time.

There are numerous domain-specific AR/VR languages and concepts that are best learned while experimenting with the many tools on the market. For example, if you want to tackle mobile AR, Spark AR covers many capabilities and best practices; Oculus outlines concepts specific to VR; and whatever video tool you use will likely explain how to composite objects into your real-world footage.

While the language of AR/VR is evolving, this outlines the basics. Now, let’s dive into what it takes to do the work.

Our Team’s AR/VR Design Process

If you’re a designer, ideation is probably familiar. It’s a quick and iterative way to generate lots of ideas to address a problem and learn rapidly. We use collaborative brainstorming, storyboarding to tell a narrative and — unique to AR/VR — bodystorming. For storyboarding, our team is fond of Procreate for creating digital sketches in 2D and Quill for sketching in 3D. For bodystorming, we use real-world props and activities to act out interactions and narratives. This is especially effective in AR/VR, because you get a spatial feel for objects and scale while iterating much faster than in digital prototyping.

Vision work is the second phase and occurs early in our process. It involves gathering our ideation and combining it in a tighter package, usually a video, to share more broadly within the team or cross-functionally. However, we can share a vision in other ways, such as style-boarding to agree on a visual language, or high-fidelity storyboards to discuss steps in great detail. Vision work helps our multidisciplinary team align around a north star, so we can work fast and sometimes semi-autonomously toward the same solution. The vision may evolve as we learn more through prototyping and research, but it allows us to work in parallel instead of blocking other team functions.

For vision work, we generally use 3D modeling and animation apps, such as Cinema 4D, Blender or Maya, to render videos on top of recorded footage.

The third phase, prototyping, is the highest fidelity of the three phases and is usually reserved for smaller, more high-touch interactions or project details. Prototypes are also usually the best artifacts to bring into user research, since they allow participants to test our work and give tangible, direct feedback. AR/VR prototyping contains a couple of key differences compared to other disciplines. First, interactions take longer to build, as best practices have yet to be defined completely, and second, there are significantly more variables to consider when designing in 3D than 2D.

In this phase, our team usually uses a 3D modeling app — the same ones mentioned above — to create low-poly assets for our real-time engines. We generally do interaction prototyping in the same tool we use for the end product so we can test, learn and iterate fast. This usually means using Spark AR Studio for mobile AR, adding interactivity through either visual programming or scripting with code, and using Unity or Unreal Engine for HMD-based AR/VR on products like the Oculus Rift. Whether you select Unity or Unreal as your tool of choice is a hotly debated topic, so I’ll leave it up to you to decide.

This may seem like a broad skill set, but luckily I didn’t have to become an expert in every phase. Each of my team members has strong domain expertise that helps raise up the rest of the team. I have a team member who is amazing at motion graphics and visualizing ideas, a coworker and friend who knows shaders and real-time engines inside and out, a teammate who is a master of design processes and practices, and, of course, there’s me. I’m more of a generalist and know these skills broadly but not as deeply in any one category. A multidisciplinary team like ours shows how broad and open the skill sets are for an AR/VR designer. The real magic happens when we apply our different areas of expertise to the challenge and collaborate to find a solution.

Now that I’ve shared one approach to designing for AR/VR, let’s dig into some unique learning methods.

The Skill Gap and How to Learn Effectively

A great framework for understanding the learning process is the four stages of competence, which describes how we learn and the struggles that come with the journey. My friend and coworker Emilia explored this in depth in her article “How to Feel All the Feelings and Still Kick Ass.” The role of conscious incompetence in learning particularly resonates with me. This is the learning stage where you understand enough to grasp how much you don’t actually know. It’s like feeling accomplished when you learn to play “Chopsticks” on the piano, then suddenly realizing how much more you need to learn before you can perform “Für Elise.” This is the stage where most people give up.

The biggest favor I did myself was treating learning as play — taking the pressure off by doing small, fun projects. This meant taking grand ideas, such as creating a fully immersive AR shopping experience, and breaking them down into smaller projects. I started with questions like “How do I signal to users that they can place their objects into the world?” or “How do we allow users to manipulate an object?” or even “How do I get a 3D model into the engine?” There’s a ton to learn from small projects like these, especially in an early industry like AR/VR, where patterns aren’t fully cemented. These small projects also helped me realize what excited me the most about AR/VR, what I excelled at and where I had skill gaps.

What’s great about this time in our industry is that we’re all learning together, and people are eager to help and mentor. Especially at a place like Facebook, we tap into each other’s unique skills to help ourselves grow. If you’re looking for a helping hand, I’d be more than happy to pass the baton and help you get started. Reach out!

Summing It Up

Is there anything else you feel that designers starting in this field should know? Or is there anything you wished you knew early in your AR/VR career?

· · ·

Thank you to everyone who helped compile this and supported me in my design career transition: Matt S., Matt M., Hayden S., Emilia D., James T.!
