HV Weekly Journal 6

SIGGRAPH (and the patriarchy), finishing transitions, and a slight redirection

Andrew R McHugh
Humane Virtuality
Aug 3, 2016 · 6 min read

--

My testing environment. Looking at an orb triggers a transition to it. Environment design comes from MagicaVoxel.

Each week, I’ll post something akin to a personal journal entry that gives an overview of what I did for that week. These posts will provide less-polished insights, keep me focused on producing material, and will allow for earlier feedback. Let’s jump in.

This journal is for week eight.

Transitions Prototype

http://armthethinker.github.io/webVR-experiments/#11-transitions

This was a busy week, starting with me staying up overnight in the St. Louis airport, continuing to the chaos that is SIGGRAPH, and ending again in an airport.

It was silly of me to think I’d get a bunch of prototyping work done while at a conference. That said, I did get further with my transitions prototype: I added a complex environment to move around in, orbs that can be placed anywhere and trigger a given transition, and a settings object that can be updated while a user is testing. I also partially completed a rotation transition.
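
To give a sense of how the orbs work: each orb is just an entity that, when gazed at, hands the camera rig to a transition function. Here’s a minimal sketch, assuming A-Frame (which these experiments are built with); the component name, selectors, and attributes are my own illustrations, not the actual source.

    // Hypothetical sketch of a gaze-triggered transition orb in A-Frame.
    // The component name, selectors, and settings are illustrative.
    AFRAME.registerComponent('transition-orb', {
      schema: {
        target: {type: 'selector', default: '#camera-rig'}, // rig to move
        type: {type: 'string', default: 'jump'}             // transition to run
      },
      init: function () {
        var el = this.el;
        var data = this.data;
        // A gaze cursor with fuse enabled emits 'click' once the user
        // has looked at the orb long enough.
        el.addEventListener('click', function () {
          var dest = el.object3D.position;
          if (data.type === 'jump') {
            // Simplest case: teleport the rig to the orb's position.
            data.target.setAttribute('position', {x: dest.x, y: 0, z: dest.z});
          }
          // 'fade jump', 'micro-movements', and 'animation' would branch here.
        });
      }
    });

In markup, that would look something like <a-sphere transition-orb="type: jump" position="2 1.5 -3"></a-sphere>, with a fuse cursor attached to the camera.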

Using the skills I picked up while working on Humane Virtuality, I was able to work more quickly than I could have months ago. Both my programming expertise and the JavaScript functions I’ve built up let me work quickly and efficiently, coupled with my newfound how-to-prototype-in-VR knowledge.

Example of micro-movements. Think of it as taking a discrete number of jump cuts, or steps, between your current location and your desired location.

All in all, I now have four completed transitions (jump, fade jump, micro-movements, and animation), each available at different speeds. Four completed (and one partial) out of an original list of 20 feels adequate to me. There are ways to prototype the unimplemented transitions, but I won’t be doing them. It’s time for the next prototype.
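
To make the micro-movements idea concrete, here’s a rough sketch of its core loop; I’m assuming the three.js vector math that A-Frame exposes, and the function and parameter names are hypothetical.

    // Hypothetical sketch of the micro-movements transition: instead of one
    // jump cut, take `steps` discrete hops along the line from start to
    // destination, pausing `stepDelayMs` between each.
    function microMove(rigEl, destination, steps, stepDelayMs) {
      var start = rigEl.object3D.position.clone();
      var i = 0;
      var timer = setInterval(function () {
        i++;
        // Interpolate start -> destination in equal increments.
        var p = start.clone().lerp(destination, i / steps);
        rigEl.setAttribute('position', {x: p.x, y: p.y, z: p.z});
        if (i >= steps) { clearInterval(timer); }
      }, stepDelayMs);
    }

    // e.g. five visible hops, 150 ms apart:
    // microMove(document.querySelector('#camera-rig'),
    //           new THREE.Vector3(2, 0, -3), 5, 150);

Tuning the step count and the delay between hops is one way the different speeds could fall out.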

Rotation Transition

I mentioned a partially implemented transition: rotation. I had this idea of a user rotating in place and, as they turn, arriving in a new location. It was, in part, inspired by the upcoming four-dimensional puzzle game, Miegakure.

To prototype it in a way I’m capable of, I needed spherical images of the environment the user would move through. Additionally, I needed a way to blend the spherical image of where the user is now with the image of where the user will be, so as to create the rotate-into effect.

Here, let me show you:

One of the spherical images I created at a local restaurant my girlfriend, Abigail Katherine Stokes, manages. Since you’re viewing this on a flat screen, the image has been stretched to a rectangle.
The blended, “middle” sphere.
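
How might the blended sphere work in code? One way, and this is only a sketch rather than my actual implementation, is to crossfade the two equirectangular textures in a fragment shader, assuming three.js (which A-Frame wraps); the texture paths and uniform names are placeholders.

    // Hypothetical sketch: blend two equirectangular sphere images by
    // mixing their textures on a sphere viewed from the inside.
    var loader = new THREE.TextureLoader();
    var material = new THREE.ShaderMaterial({
      uniforms: {
        texHere:  {value: loader.load('sphere-here.jpg')},  // current spot
        texThere: {value: loader.load('sphere-there.jpg')}, // destination
        blend:    {value: 0.5}  // 0 = fully here, 1 = fully there
      },
      side: THREE.BackSide, // render the inside of the sphere
      vertexShader: [
        'varying vec2 vUv;',
        'void main() {',
        '  vUv = uv;',
        '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
      ].join('\n'),
      fragmentShader: [
        'uniform sampler2D texHere;',
        'uniform sampler2D texThere;',
        'uniform float blend;',
        'varying vec2 vUv;',
        'void main() {',
        '  gl_FragColor = mix(texture2D(texHere, vUv),',
        '                     texture2D(texThere, vUv), blend);',
        '}'
      ].join('\n')
    });
    var sky = new THREE.Mesh(new THREE.SphereGeometry(100, 64, 64), material);

Driving the blend uniform from 0 to 1 as the user turns would produce the rotate-into effect.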

Development was taking longer than I expected, and in user testing, simpler methods of traversing an environment performed better than complex ones (e.g., jump cuts beat micro-movements). Given the time and the need to move on, I’m putting this last method on indefinite hold.

SIGGRAPH 2016

I spent most of the week in Anaheim, CA at SIGGRAPH, one of the top computer graphics conferences. In no particular order, here’s a smattering of what I learned, followed by some of my photos and videos from the event.

  • Realism is still difficult, especially in game engines (in contrast to what is used for animation in Hollywood). Stylize instead.
  • I heard both skepticism about and faith in spherical video and guided narratives in VR from different parties. The faithful’s talks were more interesting because they continue to explore the medium. There was skepticism in the Lumière brothers’ time, too.
  • I heard someone suggest that we should talk about VR narratives not as “a movie about [blank]” but as “a first-person experience of [blank]”, reminding us of the immersive quality of VR.
  • The Asian markets, especially China and South Korea, are becoming huge investors in VR companies out in Silicon Valley.
  • I say this with caution, but as I learned from various cinematic VR experiences, moving the user in VR isn’t as bad as I had thought. Some motion, especially linear, constant-velocity motion, can be fine for users. (I added an animation transition to my own transitions experiment, sketched below this list, and it’s kinda fun to zip around the environment. Even my girlfriend, who is sensitive to motion sickness, enjoyed it.) This point is another reminder to continually test your assumptions while working in VR.
  • Consumer haptic devices are still lacking. But, I was able to play with a kit called “Stereo Haptics” in a Disney Research workshop (media below).
  • VR design teams and spherical video production teams all seem pretty diverse and experienced. One team said that everyone on the team knows how to run the video equipment.
  • I’m sure the numbers are still skewed toward men, but there were a good number of women working on VR projects. It was good to see that the gender imbalance isn’t more pronounced.
  • One of the most troubling things I saw was a Fove demo. The company puts an eye tracker in its VR headset. In one part of the demo, you change the focal point of the scene; in another, you shoot down aliens by looking at them. It’s the last part that bothered me. To show off character interaction, they place you (virtually) in front of a female anime character. When you look at her face, she smiles. The man running the demo then told me to look elsewhere and see how she gets frustrated. I looked left, into the cityscape behind her. I don’t think she changed. “Maybe look down, at her … shirt.” I was asked to look at a virtual woman’s chest to show that she can get uncomfortable. What? There are so many ways to show character interaction based on gaze … just what? Really? Is that how we’re ushering in the new medium? And I can’t imagine how I’d feel if I were a woman sitting down for a tech demo, only to be asked to stare at a character’s breasts.
This one is a bit hard to see, but there are electrostatic and mechanical pressure devices on the thumb, index, and middle fingers. When you go over an edge in the virtual world, you feel it.
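
About the animation transition mentioned in the motion bullet above: the trick is to derive duration from distance so velocity stays constant. A minimal sketch, assuming A-Frame’s animation component; the function name and parameters are hypothetical.

    // Hypothetical sketch of a constant-velocity animation transition,
    // assuming A-Frame's animation component. Duration is computed from
    // distance so speed stays the same no matter how far the orb is.
    function animateTo(rigEl, destination, metersPerSecond) {
      var from = rigEl.object3D.position;
      var seconds = from.distanceTo(destination) / metersPerSecond;
      rigEl.setAttribute('animation', {
        property: 'position',
        to: destination.x + ' ' + destination.y + ' ' + destination.z,
        dur: seconds * 1000,   // component expects milliseconds
        easing: 'linear'       // no ease-in/out: constant velocity
      });
    }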

What Does It Mean To Design Humanely?

At SIGGRAPH I saw a bunch of virtual reality work. Some of it tested small interactions (e.g. changing between live video streams with gaze, or using low-frequency sound waves to create haptic plates), some of it was artistic (e.g. an experience where you watch a barrier reef age and die from pollution), and some showcased cinematic experiences (e.g. Into The Wilderness, Injustice, and Pearl).

I can tell you it was all very “neat” and “interesting”. But what I’m still struggling with, in my reflections on SIGGRAPH and my own work here, is the question: where is the deeper meaning? Do these experiences and interaction techniques help us be more human or less? Are we expanding our capabilities or restricting them?

Experiences like Pearl, where you watch a daughter and father age together from the vantage point of their beaten-up car, help us be more human in the same way Pixar films do: you build empathy with others through shared human (or human-esque) narratives.

Interaction techniques are — in Pearl’s case — secondary yet helpful features to the experience. Pearl could have been told in many ways. But, some interactions make us feel more human, others reduce us to people who “push images under glass” as Bret Victor puts it.

I’ve been struggling with some of these thoughts and what it means to be a great UX designer, beyond fancy titles. I want to spend my last few weeks working on more robust applications. I’ve prototyped interaction techniques, learned more about how to prototype in VR, and created a couple of simple applications, but I haven’t done a deeper exploration of robust interactions in the medium.

So let’s explore.

Until next sprint,
Andrew
