HV Weekly Journal 3

Spherical video controls, volumetric design, & user testing

Andrew R McHugh
Humane Virtuality
5 min read · Jul 12, 2016


Each week, I’ll post something akin to a personal journal entry that gives an overview of what I did that week. These posts provide less-polished insights, keep me focused on producing material, and allow for earlier feedback. Let’s jump in.

This journal covers weeks four and five, though week five was mostly uneventful with America’s birthday and family time.

For this sprint I started by working on a media player control for spherical video.

When we have a TV, it’s easy to put controls on a remote and give minimal visual feedback on-screen for each action. With a laptop, the controls move onto the bottom of the screen, out of the content’s center of focus. But what should we design when the content wraps all around you?

My work was inspired by Óscar Marín Miró, who made a video controls component for A-Frame. Building on Óscar’s designs, I refined them a bit and pulled from existing design patterns.

Left: Óscar’s controls with spherical video. Right: a version of my curved, updated design.

More on the design process and insights in my upcoming case study. Until then, my work is always available on my prototyping site: armthethinker.github.io/webVR-experiments.
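
To make this concrete, here’s a minimal A-Frame sketch of the idea: a spherical video with a curved control bar sitting below the horizon, so every part of the bar stays the same distance from the viewer. The asset names, dimensions, and controls texture are placeholders, not the actual prototype code.

```html
<!-- Sketch only: spherical video plus a curved control bar (placeholder assets). -->
<a-scene>
  <a-assets>
    <video id="sphereVideo" src="video.mp4" autoplay loop></video>
    <img id="controlsTexture" src="controls.png">
  </a-assets>

  <!-- 360° video mapped to the inside of a sphere around the viewer -->
  <a-videosphere src="#sphereVideo"></a-videosphere>

  <!-- Curved control bar: every point sits ~2 m from the viewer, dropped
       below eye level; tweak rotation and theta-length to center it in view -->
  <a-curvedimage src="#controlsTexture"
                 radius="2" height="0.5" theta-length="40"
                 rotation="0 110 0" position="0 0.8 0"></a-curvedimage>

  <a-camera position="0 1.6 0"></a-camera>
</a-scene>
```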

One of my design intuitions is to use volumes instead of flat layers in VR work. Additionally, curved mockups are easier on the user because each point is the same distance from their viewpoint (in contrast with flat mockups, where the center is fine but the edges are further from the user). I learned that working with curved, volumetric mockups is really difficult in A-Frame — instead of working with planes, you’re working with cylindrical or spherical volumes, arc lengths, and tedious reminders of my undergraduate math education. In my last journal entry, I created a function that helps build layered, curved mockups from a set of flat mockups — but, volumes damnit, volumes…
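
To give a flavor of that arc-length bookkeeping, here’s a rough sketch of the kind of helper I mean. The function name and parameters are hypothetical, not the actual helper from the last entry: it takes flat mockup slices and spreads them around a cylinder so each slice sits the same distance from the viewer.

```js
// Hypothetical helper: distribute flat mockup slices around a cylinder
// centered on the viewer, so every slice is equidistant from them.
function placeOnCylinder(selector, radius, startAngleDeg, spacingDeg) {
  var slices = document.querySelectorAll(selector);
  for (var i = 0; i < slices.length; i++) {
    // Angle around the viewer for this slice (0° = straight ahead, -Z in A-Frame).
    var angleDeg = startAngleDeg + i * spacingDeg;
    var angleRad = angleDeg * Math.PI / 180;

    // Point on the cylinder at that angle, with the viewer at the origin.
    var x = radius * Math.sin(angleRad);
    var z = -radius * Math.cos(angleRad);

    slices[i].setAttribute('position', {x: x, y: 1.6, z: z});
    // Turn the slice so it faces back toward the viewer.
    slices[i].setAttribute('rotation', {x: 0, y: -angleDeg, z: 0});
  }
}

// Example: spread .mockup-layer planes every 25°, starting 25° left of center.
placeOnCylinder('.mockup-layer', 2, -25, 25);
```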

For my product selection prototype and video controls mockups, I worked to maximize my output. Reminder: a main goal of my internship is to learn to prototype more quickly, always at an appropriate fidelity to answer whichever question I have. To do this, I need to explore how much time it takes to create flat mockups, curved mockups, and volumetric mockups. Thus far, that’s the order of ease (flat, curved, then volumetric). I can make a flat mockup in a few hours, a curved one in a bit more time than that (with my helper JS functions), and volumetric varies wildly after that.

As (my) expertise increases and better design tools are created, we should expect the time and effort barriers to volumetric design to decrease. The barriers are frustrating because we should expect different results in user testing based on different designs. At this stage in the game, we designers, developers, and researchers still need to learn how user testing changes with different fidelity designs in virtual reality.

Speaking of user testing, I’m not exactly sure how to do think-alouds or usability tests in VR. First, it’s harder when you can’t always see what your user is seeing. (I’m working with a phone- and web-based VR setup, remember?) Second, I’m not always sure what to ask users when I’m running these kinds of small tests (in contrast to an app with more depth). Regardless, I started user testing on my head-tracked transformations, product selection, and spherical video controls prototypes.

Thus far, it seems like think-alouds are the way to go. (By the way, think-alouds are where the user describes what they are looking at and thinking.) I’ve also opted to do my tests in small groups of people who know each other. Traditionally, if you’re doing a think-aloud or usability test, you want to do it one-on-one, researcher to user. In my work, I think the interfaces are novel enough to provoke conversation, which happens in a safer space and gets better when friends or family are present. My think-alouds have become more of an exploration into the medium than a strict look at interface-element placement.

I also figured out a setup that is mostly working for me. When the website with the VR experience loads on the phone, it automatically begins rendering in stereo; the phone’s screen is broadcast to my laptop, which is recorded; I record audio the whole time, also with my laptop; and I take notes with good ol’ fashioned pen and paper.

I’ve begun playing with cursors, in part for user testing. Maybe one of my experiments will just be on cursors. Anyway, it seems like having a cursor is more useful than not, just to give the user feedback and recognition that the system understands where they are looking. The battle here is to have a cursor small enough to stay out of the way, but large enough to be easily viewed. I don’t yet know where these limits are (and surely they are different for differently abled users).
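
For context, a gaze cursor in A-Frame is basically a small ring parented to the camera; its apparent size depends on both the ring’s radius and how far in front of the camera it sits. The radii below are illustrative guesses, not tested recommendations.

```html
<!-- Sketch of a gaze cursor: a small flat-shaded ring fixed 1 m in front of the camera.
     The ring radii are illustrative guesses, not tested values. -->
<a-camera position="0 1.6 0">
  <a-cursor position="0 0 -1"
            geometry="primitive: ring; radiusInner: 0.008; radiusOuter: 0.015"
            material="color: #FAFAFA; shader: flat"></a-cursor>
</a-camera>
```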

In my limited experimentation, I found that the low-for-VR pixel density of phones really mangles small cursors. With the existing rendering pipeline, a small square or circle cursor gets fuzzy around the edges and starts jumping around in the few pixels it gets assigned to. And, to further complicate the matter, because of how our brains and eyes fixate on objects — looking for similar patterns to match in order to fuse a three-dimensional image — this incongruity reduces the likelihood of a user perceiving a singular cursor and increases the likelihood of experiencing double vision.

With a higher pixel density screen, this should go away. But it might be a couple of years before the general populace has those.

And, some one-liners:

  • I finished up my code comments in last sprint’s work.
  • I didn’t read that much this sprint.
  • Anyone know of any A-Frame classes? Is there interest in one?

Until next sprint,
Andrew
