Making as a way of understanding how narrative for VR works

Dylan Yamada-Rice
May 16, 2018 · 5 min read


In what are still “early days” for Virtual Reality (VR), experiences are predominantly being developed in the following areas:

· Health and medicine

· Entertainment and Gaming

· Education

· Art/cultural sector

Narrative design is important to all these areas, resulting in wide discussion of how narrative should be developed for VR’s unique attributes. Increasingly, companies with a tradition of immersive storytelling in non-digital forms are leading innovation in VR storytelling. The Royal Opera House, for example, created Audience Labs to combine historic stories with cutting-edge technologies.

I’ve been exploring what makes good narrative in VR since beginning as lead researcher on Dubit’s Children and Virtual Reality (CVR) project. I’ve also led a paper prototyping workshop on the subject with a group of MA Information Experience Design (IED) students at the Royal College of Art.

We began by looking at a slide by Sarah Ullman on the differences between 360° video and true VR content:

Differences between 360° video and true VR content by Ullman

Next, the IED students were asked to build a VR prototype version of a particular story. Eric by Shaun Tan is the story of a foreign-exchange visitor who turns out not to be what the host family expected. The students were told to consider telling the story in a non-linear way, and not to include images of the main character, Eric, as the user would take on this role.

How, then, should they create a narrative context that enables the audience to understand Eric? In other words, how should they embed the narrative into the set design? Eric (the story) is especially interesting to explore in relation to VR, because Eric (the character) is very small and not human: he experiences the everyday items in the host family’s lives at a different scale. Compared with less immersive media, VR is a particularly good medium for designing perspective and scale in ways the audience can feel.

Shaun Tan’s Eric

Drawing on findings from Dubit’s CVR research as well as the students’ models, what follows is an analysis of how Sarah Ullman’s key areas of difference between VR and 360° video can be used to better understand narrative for this immersive medium.


Ullman’s diagram suggests that live action is best suited to 360° video, whereas VR works best with a digitally created environment. The CVR research found that children were most interested in low-modality images, where the VR content looked less realistic, as in the game Job Simulator. Applying the work of comics theorist Scott McCloud, less realistic images allow the audience to incorporate ideas and experiences from their own lives into the designed narrative; photography, by contrast, can only depict a particular person, setting or event.

From this, it seems likely that one of VR’s key powers, at least for children, lies in creating immersive fantastical narratives. For example, a VR piece accompanying one of the National Theatre’s performances allowed users to travel down a toilet to party with the Cheshire Cat, an experience that could not be replicated in any other medium and therefore becomes magical.


IED student Ashley Zhang explored how VR could combine fictional materials with everyday objects, such as the world of glowing plants created by Eric at the end of Tan’s story.

Dubit’s global Trends data confirms that children are connoisseurs of the fit between content and platform. At present, many media can provide factual information to children, but very few allow for complete visual, physical and aural immersion in fictional narratives.


360° video is limited to the photographer’s filming perspective; VR allows the viewer to move within the immersive space. Because Eric is a very small character interacting with everyday items, the content creator can distort design and perspective so that the audience begins to understand something about him. The CVR report describes how children used their full bodies to interact with VR content, at times with fast and unexpected movements.


In VR, the narrative can progress through a linear timeline, or can be explored as an existing world. One group of IED students told Eric’s story using a single set, while another made a series of sets related to key events from the story. We could now test whether one way or the other makes for a better experience, or which children prefer.


As Ullman’s diagram shows, in VR the narrative creator cannot command the user’s visual focus as they can on a flat screen. As a result, they must capture attention using sound or other cues. During the CVR study, children used the VR app The Blu, in which the child-user was positioned on a shipwreck with their attention drawn to sea creatures through sound. As a whale approached the shipwreck, the sinister sound caused most children to display some fear. Children used different strategies to deal with the experience, such as looking away from the whale or trying to talk to it.

There is a lot of research (commercial and academic) into how children respond to image design in digital play, but very little into sound. Sound is an even more integral part of VR narrative than it is of other digital platforms (e.g., mobile apps), so audio research is sorely needed.

The Blu

While the IED students only made prototypes, we could crudely assess how well they worked as VR by using a 360° camera app to photograph their models from the centre, then watching the results in a VR headset.

Carrying forward this experience, I intend to explore what can be learned by employing the model-making method directly with children.



Dylan Yamada-Rice

Associate Professor, Immersive Storytelling.