Weeping Angels in VR: where will you run?

Anushikha Sharma, Anmol Singh, Khai Nguyen, Sierra Magnotta

The theme of this HCI Sprint was ‘Design for Another World’: our team was tasked with using Virtual Reality to create a scene of our choice. Our group’s mission was to induce feelings of fear and uncertainty in the user. To do this, we used A-Frame and Google Cardboard to transport the user into a world where Weeping Angels are out to get them and there is nowhere to run.

Figure 1: This is a screenshot of our scene with a Weeping Angel in the frame

The scene is set up with two Weeping Angels hidden among other museum artifacts, and the user looks around the museum, unaware of the monsters that are trying to catch them. The angels are placed at opposite ends of the corridor, giving the user a 180-degree experience. The demo video below further demonstrates how to interact with our prototype for this sprint.

Video 1: Demo Video for our ‘Design for Another World’ prototype

Brainstorming

When we began brainstorming, our first goal was to narrow down the objective of our design. Looking at the many ways VR is utilized in Steven M. LaValle’s Virtual Reality, we could design a world with the purpose of:

  • Inducing emotions (“Haunted House”, “Bucknell’s Graveyard”)
  • Education (“View of the solar system”, “Global Warming: Explore the Melting Arctic”)
  • Fun (“Find a treasure in the shipwreck”, “Explore the dungeons of Hogwarts”)
  • Exploration (“How deep is the ocean?”, “How high are clouds?”)
Figure 2: Sample image we found for Education (View of the Solar System)
Figure 3: Sample scene we found for fun (Gryffindor Common Room)
Figure 4: Sample scene we found for exploration (Underwater)

After this we decided to spend some more time exploring the technology and the online resources. We spent some time trying to upload random backgrounds, models and text onto A-Frame and realized that some of our ideas were too complicated based on the limitations of time and technology. Having watched the new season of Stranger Things recently, the idea of inducing emotions seemed more feasible and exciting! Thus, we decided to narrow our objective to creating VR scenes that could provide a sense of fear or awe and explore more just within this realm.

One idea was to use a familiar space like our HCI classroom or the Bucknell Graveyard to lure the user into a false sense of familiarity before springing something scary on them. Another idea was to design a haunted house with famous ghosts and fantastical characters from U.S. pop culture. From this came the concept of using Weeping Angels from Doctor Who. They are known as “Lonely Assassins”: when observed, they freeze like stone, but in the blink of an eye they can move vast distances, and the touch of a Weeping Angel hurls its victim back in time. Thus, we decided to place multiple angels in a scary setting so that each time the user turned their head inside the Google Cardboard, the angels out of their view would come closer.

Figure 5: A Weeping Angel from Doctor Who

The “Human Interface Guidelines page for Apple Developers” mentions using audio to make a scene more immersive. Also, in his article “Design Practices in Virtual Reality”, Jonathan Ravasz talks about ‘introducing the user to the environment via soundscapes’ to set the tone. This inspired us to add scary audio to our scene to give the user an uncomfortable, eerie feeling. We also decided to include a scream when the angel comes up behind the user, to help them understand that they have been caught.

As we searched for models and objects for our chosen scene, we came across the museum sample on the A-Frame site. The museum was the perfect setting because it had an aura of calm normalcy, and we could easily blend the Weeping Angel models into the artifacts displayed within it. Jonathan Ravasz also talks about the ‘role of the ground’ in helping orient the user in the VR environment. The museum setting had clear paths laid out for the user to explore, and we just had to ensure that all the displays could be viewed at eye level.

Figure 6: The museum sample that we used as our VR environment

Our Final Conception:

The scene: A few angels present blended in within the scene of the museum

The user experience: As the user looks at one angel and then looks away, the angel comes up behind them. This is followed by a screaming sound, and the screen immediately goes black to indicate that the user has been captured by the angel.
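The “looks at / looks away” trigger boils down to a cone test between the camera’s gaze direction and the direction toward the angel. Here is a minimal sketch of that check as a pure function — the names and the 45-degree cone are our own illustrative choices, not A-Frame API:

```javascript
// Vectors are [x, y, z] triples in scene coordinates.
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

// cameraDir: where the user is looking; toAngel: vector from camera to angel.
// The angel counts as "observed" while it sits inside a ~45 degree cone
// around the gaze, i.e. while the dot product stays above cos(45 degrees).
function angelInView(cameraDir, toAngel, coneCos = Math.cos(Math.PI / 4)) {
  const a = normalize(cameraDir);
  const b = normalize(toAngel);
  const dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  return dot >= coneCos;
}
```

In the actual scene, A-Frame supplies the camera’s direction each frame, and the angel is only allowed to move while a check like this returns false.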

Explorative development

Once we were done with brainstorming, we moved on to development. Before actually building the scene, we had to resolve two problems.

The first was working with A-Frame, a platform none of us had used before. We needed to figure out its details, such as assets, entities, positions, shadows, and lighting. We played around with example code and read the API documentation. Furthermore, since our idea used the museum demo as its base, we looked at that demo’s code to see how its creator had built the scene. Due to time constraints, our goal was limited to understanding A-Frame well enough to successfully develop our vision.
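To give a feel for those pieces, here is a stripped-down A-Frame scene of the kind we experimented with — assets, an entity with a position, lighting, and sound. The file paths and values are placeholders, not our actual project files:

```html
<a-scene>
  <!-- assets are declared once, then referenced by id -->
  <a-assets>
    <a-asset-item id="angel-obj" src="models/angel.obj"></a-asset-item>
    <audio id="ambient" src="audio/haunting-piano.mp3"></audio>
  </a-assets>

  <!-- an entity gets its look and placement from components -->
  <a-entity obj-model="obj: #angel-obj"
            position="0 0 -5" rotation="0 180 0"
            shadow="cast: true"></a-entity>

  <a-entity light="type: ambient; intensity: 0.3"></a-entity>
  <a-entity sound="src: #ambient; autoplay: true; loop: true"></a-entity>
  <a-entity camera look-controls position="0 1.6 0"></a-entity>
</a-scene>
```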

Figure 7: Getting started with A-Frame

The second challenge was that none of us had experience with 3D modelling, which severely limited our ability to realize our vision. We had to find models of Weeping Angels made by other people and learn to incorporate them into our scene. It was difficult to find quality models that were also free, and the rarity of good models meant our vision was constrained by the demo museum and what was available online.

Figure 8: The model of the weeping angel that we used in our VR environment

Once we found a model we were happy with, we began to learn how to place it in a scene. Our first attempt was to put an angel model onto a 360-degree image of London. After several trials, we managed to make it work!

Figure 9: Using experimentation to learn how to build scenes

First iteration

With more confidence in our understanding of A-Frame, we started to develop the museum scene. Putting the model in was much smoother after our experimentation with the London demo, but we still needed to make some tweaks. First, the statue models used in the museum were already scaled to the room, while ours was not, so we had to reposition and rescale our model. Second, our model had no material (.mtl) file. Its original format was not one A-Frame can read, so we converted it to a format A-Frame understands, namely .obj. However, the conversion didn’t produce a proper .mtl file, which is what supplies the material. Therefore, we decided to use A-Frame’s built-in material component for our model.
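Concretely, loading an .obj without an .mtl and coating it with A-Frame’s material component looks roughly like this — the color and surface values here are illustrative, not the exact ones we shipped:

```html
<!-- no mtl is referenced, so the material component supplies a stone-like surface -->
<a-entity obj-model="obj: #angel-obj"
          material="color: #9a9a9a; roughness: 1; metalness: 0.1"
          position="2 0 -6" scale="0.5 0.5 0.5"></a-entity>
```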

Figure 10: A chart of parameters for different types of materials that we found online and used to coat our model

Once we got the model properly loaded, positioned, and coated, the rest wasn’t too difficult to code. We had experience with JavaScript, so it didn’t take long to make the angel move and react to the scene’s camera. With the code for the angel’s actions done, we had an alpha version of our project!

User-Testing and Redefining Our Design

After our initial development, using the concepts from the brainstorming phase, we tested our design with a few users. We received positive feedback on the setting and the concept of the Weeping Angels; familiarity with these Doctor Who villains was a cause of excitement. However, users also said that the experience ended too quickly, leaving them barely any time to build up emotions or register what was happening. Though the blackout was a clear indicator that the scene had ended, the experience between start and finish was a blur.

Figure 11: Aleks testing out the first iteration of our prototype

Taking this feedback into account, we refined our design. To extend the experience, we slowed down how quickly the Weeping Angels close in on the user. The setting would span a 180-degree view, with a Weeping Angel at each end of the corridor. The first time a user looks at a Weeping Angel and turns away, the angel moves halfway to the user’s virtual location; the second time, it creeps right up on the user to scare them. We also found haunting piano music that is not too scary but gives the user goosebumps.
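The two-stage rule can be sketched as a small helper: on the first look-away the angel jumps to the midpoint between itself and the user, and on the second it lands right on top of them. This is our own illustrative function, not code lifted from the project:

```javascript
// Positions are [x, y, z] triples in scene coordinates.
// stage 0: first look-away, move halfway toward the user.
// stage 1: second look-away, the angel reaches the user (capture).
function nextAngelPosition(angelPos, userPos, stage) {
  if (stage === 0) {
    return angelPos.map((c, i) => (c + userPos[i]) / 2);
  }
  return userPos.slice();
}
```

When the returned position coincides with the user’s, the scene triggers the scream and fades to black.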

Figure 12: Sketching out our ideas after user-testing

Final Iteration

After the changes based on user feedback, our product was fairly complete. It had all the functionality we wanted and was ready for the demo. However, we realized that the museum scene was too heavy for mobile: it required too many assets. Therefore, we selectively removed the objects that wouldn’t affect the user experience much. After removing those assets, our demo’s loading time and responsiveness improved significantly on mobile, and we had officially completed our project.

Figure 13: Sierra having some fun testing out our project

Results, Feedback and Future Improvements

Despite initial difficulties on Demo Day, we finished this sprint with a fully functional creepy museum VR scene. As shown in the demo video, the user can look around and explore the museum exhibit while music plays in the background, and the two Weeping Angel statues move closer every time the user turns away from them. Once the angels have gotten too close, the screen fades to black to symbolize that the user has been captured.

Overall, our group accomplished the main goal of this scene: to confuse and scare the user. However, our design does have several weaknesses that we would like to address in the future. Many of our classmates said that they were not able to walk through the museum scene. We chose not to include this feature to simplify the coding of the Weeping Angels; with the user stationary in one spot, it is easier to make the angels move toward them. Letting the user move through the scene would be a great addition for a future iteration.

Additionally, several classmates were confused by the concept and unsure of what was happening, especially when the screen went black. They asked why there was no way to ‘win’ against the angels and what they could do to prevent the black screen. Our original designs for the scene included vague messages to alert the user to the angels, such as “Don’t turn away” and “Watch out!”, but in actually building the scene this was not a high priority and it was forgotten. Looking back, it would have been worthwhile to include a small introduction to the museum and the Weeping Angels, and it is something we would definitely consider if we redid this project.

Arguably, our biggest challenge in this project occurred during Demo Day, when we were unable to run our program for the first half of class.

Figure 14: Our inner turmoil during Demo Day

We had a working demo before class and wanted to make a small edit to move the user’s viewpoint. In doing this, Murphy’s Law went into full effect, and we unintentionally rendered our project unusable for about 45 minutes. Though we were eventually able to revert some of the commits in Git and run our demo, it was definitely not an ideal experience for our classmates, and some of them didn’t get to see our demo at all. The lesson in this mistake: never, EVER make coding changes right before a presentation! What we thought would be a minor fix took the better part of an hour, greatly affecting our demo and the feedback we were able to get from our classmates. If we were to do this project again, our main priority would be to allocate more time outside of class to coding the scene, to avoid mistakes like this.

Conclusion

While designing for Another World, our ‘Another World’ was a museum filled with statues, including Weeping Angels. We created a scenario in Virtual Reality where users experienced fear and uncertainty. Given the complexity of the technology and the limitations that came with it, we were satisfied with our design and the final product. However, with more time, we could have added an instruction page telling the user which characters to expect, more scary characters, and more visual effects. Having used VR in this project, we now have a better understanding of the technology and would like to work with it more in the near future.

To explore the scene, click here.

To access the code base, click here.
