Dave Meeker, Isobar U.S. Vice President, discusses the world’s first VR measurement platform
With the launch of the world’s first virtual reality emotional measurement and analytics platform last week, we spoke to Isobar U.S. Vice President and project lead, Dave Meeker, to find out more.
Isobar has launched the world’s first VR emotional measurement tool. Why hasn’t this been done before?
VR is still in its infancy and creating a platform like this isn’t trivial. Fortunately, we design digital products and services every day for clients and, therefore, not only had the skills to bring this to life, but also the insights necessary to understand the need in the marketplace and related requirements.
There are other elements of analytics in the marketplace. One example is Unity Analytics, which is part of Unity’s broader platform. It is focused mostly on understanding where users go in 3D space inside games or VR experiences and lets you easily visualize that data. We use this ourselves.
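Spatial analytics of this kind typically reduces to logging a user’s position each frame and binning those samples into a 3D heatmap. A minimal sketch of the idea in Python (this is illustrative only, not Unity Analytics’ or Isobar’s actual API; the function names and cell size are assumptions):

```python
from collections import Counter

def bin_position(pos, cell_size=0.5):
    """Snap a 3D position (meters) to a grid cell for heatmap aggregation."""
    x, y, z = pos
    return (round(x / cell_size), round(y / cell_size), round(z / cell_size))

def build_heatmap(samples, cell_size=0.5):
    """Count how many frames the user spent in each grid cell."""
    heatmap = Counter()
    for pos in samples:
        heatmap[bin_position(pos, cell_size)] += 1
    return heatmap

# A user lingering near the origin for two frames, then stepping away.
samples = [(0.1, 1.6, 0.2), (0.12, 1.6, 0.21), (2.0, 1.6, 0.0)]
hot = build_heatmap(samples)
```

Cells with high counts mark where users dwell, which is what spatial visualizations of this kind render as hot spots.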
There are also third-party offerings that are more lightweight and intended to capture mostly basic interactions. At Isobar, we believe strongly in a design process that is informed by data — designing the user experience of interactive applications in VR is very different from flat/2D projects. When we determined what we thought we needed to be able to quantify these types of experiences, we just didn’t see any solution in the marketplace that checked all the boxes, so we decided to build our own.
Why is it important for brands to be able to measure people’s emotions during a VR experience in real-time?
Our industry relies on data to understand how content and experiences resonate with consumers. When we do almost anything “digital” for clients, there is an expectation that we can gather data about its effectiveness and compare that data to other forms of media. I am a big believer in the value of user experience and have seen the incredible impact that VR can have, so I set out on a mission to try to solve this problem. In fact, we’ve solved a couple of problems with this. The first is coming up with a new, faster and all-around better way to uncover usability/experience issues inside of VR applications.
This became clear to me when I was on a panel at a VR conference in Los Angeles at the end of 2016. This event was heavily attended, and many of the attendees were from brands, creative agencies and media companies. One of the inescapable themes woven through the daily conversation was that, as a new media channel, VR was interesting but still had no reliable way to be measured.
How does the new platform support brand challenges?
Our starting point was recognizing that VR goes “deep” for users. They put a headset on and can become immersed for quite some time. It’s also new, and many users experience confusion or anxiousness when participating in room-scale VR. It’s no good to have those feelings cycling through a user’s head when you are trying to get them to enjoy the experience.
So, first, this platform enables us to uncover issues with the user experience during the design and development phase of a project. We want things to be intuitive and easy, and through this platform we can quickly uncover areas where users are having trouble, feeling confused, agitated or frustrated with the basic mechanics of how the experience works. This alone is hugely valuable, and we’ve created the platform in a way that allows us to easily share these insights with the design team so that they really understand what the user is experiencing and how the interactions we create should be adjusted.
Second, it provides a way for us to prove the value of VR as a medium over 2D content. That is not to say that all VR experiences are better because of the medium, but we now have a way to understand if they are, and to compare different variations of VR experiences against one another. This provides a deep understanding of how the experience impacts the user. While this isn’t revolutionary as a general concept — we do this all the time for web and mobile apps and other types of content — until now there was no way to include VR or AR content in that comparative dataset. Now there is.
How did you, and the NowLab team, approach the project?
It was all very new and very bleeding edge. When we started initial research collaborations on various aspects of VR with our friends in the Fluid Interfaces Group at the MIT Media Lab (where we are members), we had a conversation about what the future could look like. We wanted to understand what our options down the road might be, so that we could take that into consideration and base future research activities on the outcomes we saw as we started using the analytics platform and collecting data.
This led to a few aspirations. First, we wanted to collect enough data over time to see whether we could crunch it and begin to infer information about users based on what they did inside VR and how they reacted to VR experiences. Could we tell where you are from, how old you are, what your gender identity might be, or other things, based on how you actively and passively respond to your virtual reality session? We knew this is something our primary collaborator at the Media Lab, Scott Greenwald, PhD, would find useful to leverage.
Second, given that much of what Isobar does globally is help brands innovate and find new mechanisms to reach consumers, specifically under the lens of brand commerce, we wanted to see whether we could use the data we capture, and the active and passive responses users have, to deliver VR experiences that satisfy them — to push and pull their emotional state into just the right place, one where they are most open to brand messages and transactions of one form or another. It sounds quite futuristic, but all of the elements are now in place to do this. Now that we’ve got an MVP of the analytics platform together, it’s all about capturing data, refining algorithms and testing to see how accurate we can be. It’s such an interesting blend of marketing, technology, design and academic thinking, and it’s something only an agency that invests in innovation can pull off.
Were there any stand out moments during the project?
One that truly stands out is when Jeremy Pincus, a member of our team, showed us the first results of the biometric data captured while a user was in one of the experiences we tested. It was magical. Everything we had intended the experience to mean to a user was dead on. As they moved through the VR app, we could see them become anxious, then we’d see their arousal start to spike, then a little confusion, then excitement and finally a sense of accomplishment. Prior to this, the only real way to measure the effectiveness of the VR experiences created by Isobar and others across the Dentsu Aegis Network, such as Firstborn, was to ask users when they took the headset off. That can be riddled with inaccuracies due to bias and other factors, and biometrics are hard to track by hand, especially when comparing all of them across the board.
What’s the goal of the platform?
The goal of the platform isn’t necessarily to be able to say that brands should just dive headfirst into investing in virtual reality content. Instead, what we’ve done is provide a necessary component that allows us and our clients to make a scientifically backed determination on whether or not the VR content they are creating is optimized to get the results they are after, and also to compare the effectiveness of VR content against other forms of media. Having seen the results of the work we’ve done, there is no question in our minds that VR can make a huge difference as brands continue to fight through the digital noise and maintain a relationship or create a positive impression with consumers.
Why is building life-like environments key for brands, and do you predict AI to be more prevalent in VR experiences in the near future?
As content creation tools and processes evolve, the ability to create photorealistic / life-like experiences will become easier and, therefore, more affordable. We still see challenges across the marketplace as it relates to creating really good VR content.
At Isobar, we’ve been recognized for our approach to content creation and have developed our own custom workflow and techniques that aid in the creation of photorealistic content. We know from our initial user experience testing and the feedback from users of apps we’ve created that photorealistic experiences resonate with them.
Unity is investing heavily in AI, which means we can expect the future of gaming to change dramatically. VR applications obviously benefit as well, but AI and VR aren’t directly tied together in any way. What is in our crosshairs is feeding the data we capture through the platform into algorithms that can then influence the narrative and content, as well as aspects of interactivity.
Finally, with the advancements in VR data capture and machine intelligence — how close are we to seeing truly personal, branded VR experiences?
This is where things start to get really neat. We first ran this project out of the Isobar NowLab, our global innovation accelerator program. In doing so, we were able to break from some of the traditional limitations that a tightly controlled product roadmap or software development process might create. While we had a target for a minimum viable product, we also had a philosophy that change is inevitable and that we needed to be absolutely open to new ideas and enhancements along the way. Why is that important? Because during the course of this work, there was a lot of movement in our AI and machine learning R&D, as well as across the industry as a whole.
The second factor here is the advancement of AI in gaming and game engines, including Unity, the first platform on which we’ve enabled our experience analytics platform. Danny Lange (the head of AI at Unity) and Tony Parisi (Unity’s head of VR) are pushing the limits of how the game engine can and should be able to deliver content based on lots of different data — mostly around the dynamic creation of content based on things like location, weather, gameplay, your opponent’s actions, etc. Because Unity (and other game engines) aren’t solely for game development anymore, and they’ve become the primary development platform for virtual, augmented and mixed reality experiences, the use of AI/machine learning expands into all sorts of interesting use cases. As the use of these technologies expands further into B2C and B2B experiences, including commerce-enabled apps, so does the potential to use artificial intelligence to power them.
Anyone who knows a lot about machine intelligence will tell you that you aren’t going to achieve much unless you have really good data to feed into the underlying algorithms. As we started this project, the first thing we needed to do was put together a baseline on what and how much data we’d collect. Remember, VR runs at very fast frame rates. We are capturing data throughout the whole experience — not just where the user is positioned and what they are doing, but what they are looking at, along with all of the data from the biometric monitoring: heart rate, respiration, galvanic skin response, and all of the brain activity. We are now working to take all of that captured data, along with what we know about the user (i.e. basic demographic information and survey answers), and use it as input to an AI framework. Our goal here is to be able to drive experiences that are tuned for a specific user based on how they act, or respond emotionally, or where they live, or whatever else we ultimately end up knowing about them.
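One practical wrinkle in capture like this is that pose and gaze arrive at render rate (90 fps) while biometric sensors report far more slowly, so the streams have to be joined by timestamp. A minimal sketch of that join in Python — every name and field here is an illustrative assumption, not the platform’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class FrameSample:
    """One captured frame: pose/gaze at render rate, biometrics held from the last reading."""
    t: float              # seconds since session start
    head_pos: tuple       # (x, y, z) in meters
    gaze_target: str      # id of the object being looked at, or ""
    heart_rate: float = 0.0    # bpm, from the most recent sensor reading
    gsr: float = 0.0           # galvanic skin response, microsiemens
    respiration: float = 0.0   # breaths per minute

def latest(readings, t):
    """Sample-and-hold: the most recent (timestamp, value) reading at or before t."""
    value = 0.0
    for ts, v in readings:
        if ts <= t:
            value = v
        else:
            break
    return value

# Three 90 fps frames merged with ~1 Hz heart-rate readings.
hr = [(0.0, 72.0), (1.0, 74.0)]
frames = [FrameSample(t=i / 90, head_pos=(0, 1.6, 0), gaze_target="logo",
                      heart_rate=latest(hr, i / 90)) for i in range(3)]
```

Records like these, one per frame, are the kind of dense, aligned dataset that downstream models need before any per-user tuning can happen.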