Impact of VR and AR on Design

Matthäus Niedoba
10 min read · Mar 29, 2018

Looking at the history of Adobe Illustrator, you can see how computers changed the discipline of design. There was a time when computers were used only by scientists, engineers, and others with deep technical knowledge. Human–computer interaction was limited to typing commands, and it took a milestone like the graphical user interface to make the computer accessible to other groups, such as designers.

In 1982, John Warnock and Charles Geschke founded Adobe. Five years later, in 1987, they shipped the first version of Adobe Illustrator, a commercial version of their in-house font development software. Even though it had a limited feature set, there was one thing it did well: drawing Bézier curves, something that was easier and more flexible on a computer than with traditional graphic design tools.

Over the years, the computer became faster and more accessible to a wider range of people, and design tools became more feature-rich. Now the computer is the standard tool for designers.

Design tools back in the day and now

What made designers switch from traditional tools and workflows to computers? And what would move designers from desktop computers to Virtual Reality (VR) or Augmented Reality (AR) for designing in 3D? Would this be the next evolution in technology? First, we should understand why the previous step, moving from traditional tools to computers, was successfully adopted.

Humans behave intuitively. Computers behave rationally and are extremely precise. A human could never draw a perfectly straight line as precisely as a computer can. Before computers, designers had to use rulers, stencils and pens to draw shapes. It was hard, for example, to create a perfect square with rounded corners using stencils; if the right shape was not available as a stencil, you had to draw it by hand. On a computer, you can create these shapes in seconds.

Not only is computer graphics software precise, it is flexible too. You can arrange objects on a canvas, duplicate them, or change the colors of shapes. And you have “Undo”! It removes the fear that trying something out will destroy your artwork, which unleashes creativity. You can work by trial and error.

“I think what we've been able to do is just release the creativity of people.”
- John Warnock, co-founder of Adobe Systems Inc.

Precision and flexibility are in the nature of computers. The challenge of hardware and software design is therefore to create intuitive user experiences. User interfaces, for example, model real-world scenarios: there is a reason why the desktop is called “desktop,” and why text editors have page layouts.

Looking at current Virtual Reality (VR) creation apps, you can see that they push this intuitive interaction to the next level. Tilt Brush lets you paint strokes in the air in real-world space, Oculus Quill lets you create 3D stop-motion animations intuitively, and in Masterpiece VR you can collaborate with friends on the same artwork. These apps point to how VR and AR will change the discipline of 3D design. We focus here on 3D design, which applies to areas like product design, architecture, games and, of course, computer graphics. VR and AR belong to spatial computing, which takes place in real-world space.

What is spatial computing?

Spatial computing (an umbrella term covering VR and AR) does not use a screen; it uses the space around us to display information and graphics. In other words, spatial computing happens in the real world, in 3D space. Interaction also works differently than in desktop or mobile computing. On a desktop you use a keyboard and a mouse as input devices; on mobile you use touch screens with fingers and pens; but for spatial computing there is no standard input device yet. There are, however, many experiments with controllers and hand or body gestures. Ideally, a standard device that works across all systems will emerge in the near future; it would make further hardware and software development easier.

Back to the question: how will VR and AR impact design? Here are a few answers:


Building artwork in 3D is close to your imagination. You don't need to translate your ideas to a flat canvas, which requires skills in shading and perspective drawing. In VR and AR you can paint strokes in the air and draw anything from your imagination. In a traditional 3D program, by contrast, you need a technical understanding of how objects are modeled: you have to deal with points and polygons, because graphics cards need them to render images. This has no analogy in the real world and is therefore unintuitive. In VR and AR you can work without any of that knowledge.

Moving objects, the most common task in 3D, is handled faster in real space with a tracked controller than with a mouse. A single movement uses six degrees of freedom. A computer mouse, by contrast, restricts you to two degrees of freedom (moving horizontally and vertically). In the real world, we move objects along three axes (height, width and depth) and rotate them around those three axes as well. A controller can track all of these movements, so a transform action (moving and positioning an object from A to B) can be done in one movement.
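To make the degrees-of-freedom difference concrete, here is a minimal sketch in Python (the names are hypothetical, not any real VR API): a tracked controller delivers a full six-value pose in one gesture, while a mouse delivers only two values per gesture, so the same transform takes several sequential drags.

```python
from dataclasses import dataclass, replace

@dataclass
class Pose:
    """Six degrees of freedom: position (x, y, z) and rotation (rx, ry, rz)."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0

def controller_move(target: Pose) -> Pose:
    # A tracked controller reports all six values at once,
    # so moving an object from A to B is a single update.
    return replace(target)

def mouse_drag(obj: Pose, axes, delta) -> Pose:
    # A mouse gesture carries only two values, so each drag can
    # change at most two of the six degrees of freedom.
    a, b = axes
    return replace(obj, **{a: getattr(obj, a) + delta[0],
                           b: getattr(obj, b) + delta[1]})

# One controller gesture...
target = Pose(x=1.0, y=2.0, z=3.0, ry=0.5)
via_controller = controller_move(target)

# ...versus three sequential mouse drags for the same transform:
p = Pose()
p = mouse_drag(p, ("x", "y"), (1.0, 2.0))    # drag in the screen plane
p = mouse_drag(p, ("x", "z"), (0.0, 3.0))    # rotate the view, drag again
p = mouse_drag(p, ("ry", "rz"), (0.5, 0.0))  # switch to a rotation mode
assert via_controller == p == target
```

The mouse user needs at least three mode or view switches to reach the same pose the controller user reaches in one motion.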

Repositioning a chair in an architectural visualization can take many steps in a traditional 3D program with a mouse; in VR and AR you just pick it up and place it. Think about how fast you could design a virtual room and put the furniture in place.

New creative freedom

VR and AR enable a new illustrative style: drawing strokes in 3D space. Painted-looking 3D images provide a new aesthetic. Creating them in a traditional 3D program would take a lot of time, but in VR and AR it can be done very quickly.

You also do not have a camera that has to be controlled explicitly, as in a 3D program. Because you are creating an immersive scene and not a frame, it is natural to move around your artwork to judge it, much like stepping back from a big painting on an easel to see the whole picture. Most VR and AR programs are vector-based, so you can create tiny details or huge landscapes in one scene. It is a natural way to play with scale, as Goro Fujita does in his Worlds in Worlds illustration.

Furthermore, VR and AR can change the way a piece of art tells a story. A storyboard (or any sequence of images) that tells the story of a movie can essentially be done in one scene. An example is the Philips Carousel advertisement: in this two-minute video, the camera moves through a scene frozen in time, showing police fighting a gang dressed as clowns. It is a perfect example of how storytelling works within a single scene. Similarly, in VR and AR you would draw characters and buildings first, then move a camera around the scene to communicate the message.

Painting with Oculus Medium by Goro Fujita

Enhanced perception

IKEA's Augmented Reality app “Place” is a good example of how VR and AR help us evaluate objects: you can see directly whether a sofa fits into an apartment without measuring. Using smart glasses, a product designer can get better feedback on a design than by staring at a screen. Evaluating how a design affects the real world leads to better decision-making, requires fewer personal interactions (less back and forth between designer and client) and saves time. If you are an architect or interior designer, you can evaluate a building or a room at real scale. A good example is the visualization of a Radisson Red Hotel interior made by Soluis Group and Graven.

Visualization of a hotel interior design. VR was used in this production.

But what about technical workflows?

VR and AR are powerful in areas where we practice a craft, like sculpting or drawing, to create faster and better designs. These crafts are essentially freehand workflows. However, there are design workflows that require technical and logical thinking.

Think about scripting: writing code requires mathematical thinking. Solving mathematical equations, for example, is something you usually do on a piece of paper. Aside from simple primary-school math (five apples minus two apples), VR and AR provide no benefit here, because there is nothing to touch. You have to think in processes and abstractions, which can be hard to visualize. There are mind maps, lists and spreadsheets to model them, but these live on a 2D surface, where a classical screen is the best medium.

Graphical user interfaces represent real-world scenarios in abstract ways. Think about file management, where it is fast to drag files from folder to folder in a file explorer. You would never rebuild this scenario in VR with 3D models of virtual folders. And again, when it comes to scripting, you would never replace a physical keyboard with typing in the air.

Visual programming in NoFlo. Even visual programming (connecting blocks) works better on a 2D surface, because the flow from block to block can be displayed more simply.

Why is it so hard to be creative in VR?

It has been said that people have their best ideas, or discover the best solutions to problems, when they are least looking for them. This often happens while cooking, walking or showering, because these are moments when you do not need to concentrate hard and your brain can relax. Think about the last time you composed a text or a drawing, or had a great idea. Were you looking at the screen when the idea came together, or gazing into space?

When your brain relaxes by doing nothing, or while you are doing boring repetitive tasks, it draws connections between the dots where your knowledge is stored. By connecting this information, you can arrive at insights or ideas you would never reach by concentrating on a task alone.

There are essentially two attentional networks in the brain: a task-positive and a task-negative one. Only one is active at a time. When you focus on something or want to get a task done, your brain is in task-positive mode. This happens when you sit in front of the computer to accomplish a task; your brain retrieves memories and uses the connections already available in your head. When you are not focused on a demanding task, your brain switches into task-negative mode and lets your mind wander. During this process it creates new connections between pieces of information, so that mind-pops occur. Mind-pops are fragments of knowledge, such as words, images or melodies, that come suddenly and unexpectedly into consciousness. These mind-pops can be the solution to a problem or a creative idea.

The problem, however, is that when wearing a VR headset you are constantly staring at a screen, cut off from your natural environment. It is hard to get innovative ideas in this situation, which is why many designers stick to pencil and paper when visualizing initial ideas.

Comfort is the key

Designers want to work comfortably. They spend hours in front of a computer making many freehand movements, which is strenuous for the hand, especially when precision is required. The problem is that professional design software is complicated because it has a vast number of features. To make them accessible, they have to be displayed in an interface with many buttons, sliders and input fields. Many features mean many interface elements, which have to be small to fit on a computer screen. According to Fitts' law, small interface elements are harder to hit than big ones, and the same applies when elements are far apart, forcing long mouse movements. Consequently, it is ergonomically strenuous to control complex design software.
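Fitts' law can be stated quantitatively. The sketch below, in Python, uses the common Shannon formulation MT = a + b · log2(D/W + 1), where D is the distance to the target and W its width; the constants a and b are illustrative here (in practice they are measured per device and user). It shows why a small, distant button is slower to hit than a big, nearby one.

```python
import math

def fitts_movement_time(distance_px: float, width_px: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time in seconds for a pointing task.

    Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    The constants a (reaction overhead) and b (device speed) are
    illustrative placeholders, not measured values.
    """
    index_of_difficulty = math.log2(distance_px / width_px + 1)
    return a + b * index_of_difficulty

# A small, distant toolbar button vs. a big button right next to the cursor:
small_far = fitts_movement_time(distance_px=800, width_px=16)
big_near = fitts_movement_time(distance_px=200, width_px=64)
assert small_far > big_near
```

Shrinking a target or pushing it further away raises the index of difficulty logarithmically, which is why dense professional interfaces full of tiny controls are tiring to operate.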

In a production setting you would not stand all day to paint; you would sit. And nobody would design a 3D model of a motorbike the way the Microsoft HoloLens commercial shows it; designers need precise tools to draw curves and surfaces and to put objects in place. Oculus has done a good job with its Touch controllers because they are light, but there is room for improvement. AR is probably the better solution: 2D interfaces could be projected onto a desk while the 3D model floats as a hologram. A pen, which is probably the best tool for a designer since it is simple and precise, would draw in the air to create the 3D models. The pen would also let you tap the many buttons projected as a 2D interface on your desk. You could handle complex interfaces and evaluate 3D designs in real 3D.

Could this be the desktop of the future? Hardware is displayed in yellow and virtual objects (holograms) in green.

Let us hope that hardware technology, especially in AR, continues evolving and becomes available to consumers. VR is a great playground, but in the long term AR is the superior technology for design. Ergonomics and comfort are the keys to mass adoption of this technology among designers.

Many thanks to Sascha Eichler and Simone Niedoba for giving feedback to this article.



Matthäus Niedoba

I help people to build complex software products. Currently, as a Product Designer at, working on decision automation platforms.