Having had the terrifying experience of being interviewed on TV, we knew that practicing for a media interview can be difficult:
- How do you simulate actually being inside a studio before the interview?
- How do you practice answering interview questions in a realistic way?
- What will you say when you are asked really tough questions by the presenter?
Virtual reality provides a great way to practice media interviews in a realistic setting. When you put on the VR headset, you’ll be immersed in a TV studio with a presenter asking you questions.
In this article, we discuss how we built a BBC-styled studio so that people can practice being interviewed and familiarise themselves with a typical studio layout.
Design inspiration for our VR studio came from a behind-the-scenes tour with Huw Edwards presenting the BBC News at Five. We used several reference images from this guided tour as a starting point for the virtual environment and for planning how the scene would be laid out.
Designing and building the BBC Studio
Objects in the scene
Several of the objects in the scene, including the table, TV and seats, were available to purchase online and only required minor alterations. However, some objects needed to be created from scratch. The largest of these were the studio cameras, which had to be modelled specifically for this scene.
Polygon and triangle count
Our VR application currently runs on mobile devices, so we paid close attention to the number of polygons in the scene. We aimed for fewer than 150,000 polygons for the whole scene, including the presenter and other avatars, which left about 100,000 polygons for the studio itself.
We performed polygon optimisation on many of the objects in the scene to achieve this number.
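The budget-keeping described above can be sketched as a simple per-object tally. The object names and polygon counts below are purely illustrative (they are not our real asset data); only the 150,000/100,000 targets come from the article:

```python
# Hypothetical polygon-budget check using the targets from the article:
# ~150k polygons for the whole scene, ~100k of which is the studio itself.
TOTAL_BUDGET = 150_000
AVATAR_BUDGET = 50_000  # presenter and other avatars

# Illustrative per-object counts, not real asset data
scene_objects = {
    "table": 3_200,
    "studio_camera": 18_500,
    "walls_and_floor": 6_000,
    "seats": 4_800,
}

studio_budget = TOTAL_BUDGET - AVATAR_BUDGET
studio_total = sum(scene_objects.values())
assert studio_total <= studio_budget, f"Studio over budget: {studio_total}"
print(f"Studio polygons used: {studio_total} / {studio_budget}")
```

A check like this makes it obvious which assets to target for optimisation when the total creeps over budget.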
Adding lighting and smaller details
Small details, such as lighting and shadows, can make the scene look much more realistic. We spent several hours adding this lighting and other objects into the scene so that it matched the reference video more accurately.
Getting the required performance
Usually, when creating an interior, you try to stick to the original reference images as closely as possible to achieve a realistic look. However, as our application had to run on mobile-based VR, we had to compromise between performance, the look, and the amount of detail we could add to the studio.
One technique we used was to look at each object in the scene and understand how scene reflections affected it. A key consideration was whether the object moved in the scene. Objects that didn't move, including the floor, walls and table, could have reflections baked into their textures to save on performance. Other objects, such as the presenter and cameras, needed real-time reflections so that when they moved, the reflections and shadows changed accordingly.
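The static-versus-moving decision above boils down to a simple rule per object. This is a minimal sketch of that logic (the object names are illustrative; in a real Unity project this choice maps onto marking objects as static and configuring baked versus real-time lighting):

```python
# Static objects get reflections baked into the texture (cheap);
# moving objects need real-time reflections (expensive).
def reflection_mode(is_static: bool) -> str:
    return "baked" if is_static else "realtime"

# Illustrative scene inventory: (object, is_static)
scene = [("floor", True), ("walls", True), ("table", True),
         ("presenter", False), ("studio_camera", False)]

for name, is_static in scene:
    print(f"{name}: {reflection_mode(is_static)} reflections")
```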
Another performance consideration is how much detail is on screen at any one time and how far objects are from the user. Some studio details can be low-poly meshes with everything baked in, like the spotlight frames in the distant corners of the room. Objects that are closer require a higher poly count so that their surfaces look smoother and more realistic.
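Distance-based detail selection can be sketched as a threshold function. The thresholds here are assumptions for illustration (a real Unity project would typically use LODGroup components rather than hand-rolled logic):

```python
# Pick a mesh detail level based on distance from the user.
# Thresholds are illustrative, not tuned values from the real app.
def pick_lod(distance_m: float) -> str:
    if distance_m < 2.0:
        return "high_poly"      # close objects need smooth surfaces
    elif distance_m < 8.0:
        return "medium_poly"
    return "low_poly_baked"     # e.g. spotlight frames in distant corners

print(pick_lod(1.0))    # high_poly (the presenter's table)
print(pick_lod(12.0))   # low_poly_baked (back of the studio)
```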
Making you uncomfortable
This scene was designed to make you feel uncomfortable, as you (probably) would be in an actual media interview. We implemented several techniques to achieve this:
- Bright spotlights so that you are partially blinded at points
- Specific presenter behaviour, such as strong eye contact and aggressive body language
- Surround sound: at points, we simulate distracting background noise to make it more difficult for you to answer questions
Screenshots of scene creation progress
Here are several screenshots of the BBC studio in development taken from two angles:
- The position of the user and presenter (looking towards the back of the studio)
- Behind the main cameras looking towards where the user and presenter would be seated
Back of the BBC studio
Front of the BBC studio
The main task in designing this scene is getting the overall look correct. Our eyes are very good at spotting inconsistencies and anything that doesn't look realistic. However, as we are using mobile phones to power the VR experience, it's a constant trade-off between performance and look. We were constantly asking ourselves questions like "do we really need this glossy material on that object, or can we imitate it somehow?".
It’s a constant challenge of spotting off colours, shading which is too dark in certain areas, incorrect placement and proportions of objects, and so on. It’s similar to the “Uncanny Valley” effect, but less pronounced.
Adding the environment to Unity
BBC presenter as a virtual avatar
For this, we created an avatar model using Adobe Fuse. The models from Fuse are fairly realistic but come with a high polygon count (20–60k). This was a compromise we accepted because the user sits very close to the presenter, so quality was very important.
The presenter (avatar) is one of the most important parts of the scene. She needs to have realistic movements, speech and behaviour, in line with what you would expect from a real presenter.
We've played plenty of video games where bugs and poor game design completely break the immersion and make you realise that you're just in a game. We were very keen not to fall into this category and worked hard to maintain immersion for the user (the person practicing the interview).
The virtual presenter needs a database of potential questions to ask the user. We manually recorded a list of questions and utilised lip sync technology with spatial audio to give the impression the presenter was asking these questions.
Due to the current limitations of speech analysis and conceptual understanding, the avatar could not respond to the user's answers in a meaningful way. We therefore compromised by writing open-ended questions that can be asked one after another in any order. When the user has finished answering a question, they press a 'Next' button and are asked another question.
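The question flow described above can be sketched as a small session object: a list of open-ended questions, advanced by the 'Next' button. The question text below is hypothetical, not from the app's real question database:

```python
# Minimal sketch of the interview question flow: questions are asked in
# sequence, and pressing 'Next' advances to the following one.
class InterviewSession:
    def __init__(self, questions):
        self.questions = list(questions)
        self.index = 0

    def current_question(self):
        """Return the current question, or None when the interview is over."""
        if self.index < len(self.questions):
            return self.questions[self.index]
        return None

    def press_next(self):
        """Simulate the user pressing the 'Next' button."""
        self.index += 1
        return self.current_question()

# Hypothetical questions, for illustration only
session = InterviewSession([
    "Can you summarise your position for our viewers?",
    "What do you say to critics of your approach?",
])
print(session.current_question())
print(session.press_next())
print(session.press_next())  # None: interview over
```

Because the questions are open-ended and order-independent, the same simple sequencer works regardless of what the user actually says in their answers.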
Voice recording for analysis
We added a feature where users can record their own voice. Users can then save their answers and listen back to them at any time to review how they’ve done.
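A recording feature like this needs little more than a mapping from questions to saved audio. This sketch shows one plausible shape for it; the file paths, field names and storage scheme are assumptions, not the app's real implementation:

```python
import time

# Hypothetical in-memory index of saved answers; a real app would persist
# the audio to device storage.
recordings = []

def save_answer(question_index: int, audio_bytes: bytes) -> str:
    """Save one recorded answer and return its (hypothetical) file path."""
    entry = {
        "question": question_index,
        "timestamp": time.time(),
        "path": f"answers/q{question_index}.wav",  # assumed naming scheme
        "size_bytes": len(audio_bytes),
    }
    recordings.append(entry)
    return entry["path"]

path = save_answer(0, b"\x00" * 1024)
print(f"Saved answer to {path}")
```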
Viewing the environment in virtual reality
We ran our VR app on a Google Pixel 2 (and also tested on a Nexus 5 and an iPhone 6) and tried out the questions with speech recording enabled. The scene ran smoothly on all these devices. Check here for minimum mobile specs.
You can practice in this VR studio with our immersive Media Training Course.
Originally published at virtualspeech.com.