Getting wearable content just right.
This is the final post on learnings from the wearable prototype, and it seems fitting to go back to the very beginning and look at how we are measuring success.
In my first post about a wearable future for the Barnes, I described an installation environment that seemed highly immersive (lots of objects on the walls with no information) combined with the social behavior of our guests: lots of talking among groups, and people taking audio tour headphones off in order to have conversations. We backed up those observations with timing and tracking statistics and found that visitors using the audio guide were spending, on average, 88 minutes in our collection galleries; visitors without the guide were staying, on average, 107 minutes.
Based on the above, it seemed as if the tool we had given users, the audio guide, had actually failed them. Not only was it difficult to be social with headphones on, but the content itself was dense on top of an already dense installation environment, likely causing more fatigue and a faster exit.
As a result, we began this project to see if wearable technology and short-form content could be a better answer.
The question had to be asked — what’s the average duration among those using the wearable?
I will tell you, I dreaded this exercise. We knew the content was social and that would likely increase duration, but this is short-form content, after all. If we were lucky, I figured we'd see duration above the audio tour. What I didn't expect was that we'd get a higher duration than both the audio tour (88 minutes) and those with nothing (107 minutes), or how considerable that difference would end up being.
Here's what we found with the wearable. All of these averages were measured while we were serving three pieces of content per room:

- When 100% of the content ran 30–50 words, the average duration was 116 minutes.
- When 30% of the content was lengthened to 60–100 words, the average duration increased to 119 minutes.
- When 50% of the content was lengthened to 60–100 words, the average duration dropped to 96 minutes.

We also carefully checked whether these averages changed between weekend and weekday visitation; they didn't.
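For readers curious about the mechanics behind numbers like these, here's a minimal sketch of that kind of cohort comparison. Everything in it is hypothetical: the record structure, field names, and sample values are illustrative, not our actual tracking data. It simply shows averaging gallery duration per content-mix cohort and checking the weekday/weekend split.

```python
from statistics import mean

# Hypothetical visit records: a cohort label for the content mix being
# tested, minutes spent in the galleries, and whether the visit fell on
# a weekend. Sample values are illustrative only.
visits = [
    {"cohort": "100% short (30-50 words)", "minutes": 118, "weekend": False},
    {"cohort": "100% short (30-50 words)", "minutes": 114, "weekend": True},
    {"cohort": "30% long (60-100 words)",  "minutes": 121, "weekend": False},
    {"cohort": "30% long (60-100 words)",  "minutes": 117, "weekend": True},
    {"cohort": "50% long (60-100 words)",  "minutes": 97,  "weekend": False},
    {"cohort": "50% long (60-100 words)",  "minutes": 95,  "weekend": True},
]

def average_duration(records, cohort, weekend=None):
    """Mean gallery time for a cohort, optionally filtered to weekend or weekday."""
    durations = [
        r["minutes"]
        for r in records
        if r["cohort"] == cohort and (weekend is None or r["weekend"] == weekend)
    ]
    return mean(durations)

# Report each cohort's overall average plus the weekday/weekend split,
# preserving the order cohorts first appear in the data.
for cohort in dict.fromkeys(r["cohort"] for r in visits):
    overall = average_duration(visits, cohort)
    wd_avg = average_duration(visits, cohort, weekend=False)
    we_avg = average_duration(visits, cohort, weekend=True)
    print(f"{cohort}: {overall:.0f} min overall "
          f"({wd_avg:.0f} weekday / {we_avg:.0f} weekend)")
```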
So, we found our sweet spot. Push the content too far and we hit the fatigue wall; it doesn't take much for duration to plummet. I wasn't sure we'd get here using the prototype, but we now have a formula for content that increases duration. This gives us one metric among the many I've been blogging about that, together, have helped us evaluate the prototype's performance.
On that note of multiple evaluation criteria, these duration answers surprised me greatly, but it's what happened next that surprised me even more. During the time we were testing content with increased length, the front-line staff were reporting (anecdotally) that people seemed even more positive about their experience; a noticeable shift was taking place. At the same time, we were also seeing a 10% jump in the percentage of users saying, "I like the idea of the content changing every time. That would definitely be an incentive for me to come back, even if the paintings themselves are the same, I'd still have a very different experience going through the collection." This is key for us because increased repeat visitation is one of our project goals.
As we started getting the content closer to "just right," it started to feel as if we'd turned a deeper corner in the user experience. At the end of the day, a production project is where we will continue to tweak the content and really get it right, but the multiple metrics and demonstrated trends in testing have given us a solid foundation moving forward.
Final Reflections
I'm grateful to our visitor services staff who worked tirelessly on this project to get the data we needed. Cassie, Lisa, Stephanie, Lindsay, Claire: you were all central to our understanding of audience reaction and to surfacing technical issues that needed solving. We simply could not have done this without your sharp eyes, your interest in our visitors, and your curiosity about the project. The consistency, too, of having a dedicated team of individuals monitoring the process throughout made all the difference in our data gathering.
Ditto to our technical team here, Steve and Deepthi, who worked just as tirelessly to get these wearables on the floor, dealt with beacons, and ended up in Michaels stores trying to find the right way to hack together a solution for storing, charging, and distributing the devices.
I end every post crediting the Barra Foundation for funding this effort. The opportunity to do this testing directly with our audiences was a privilege, and the working prototype has given us the answers we need for a production project. We are grateful for the funding that made this happen; it's just not a chance you get every day.
The Barnes Foundation wearable digital prototype is funded by the Barra Foundation as part of their Catalyst Fund.
Want more info? Read more about the Barnes Wearable on Medium and follow the Barnes Foundation publication, where we’ve got multiple authors writing about our projects.