HV Weekly Journal 1

Head-tracked transformations, defined the internship, WSJ

Andrew R McHugh
Humane Virtuality
5 min read · Jun 10, 2016


Each week, I’ll post something akin to a personal journal entry that gives an overview of what I did that week. These posts will provide less-polished insights, keep me focused on producing material, and allow for earlier feedback. Let’s jump in.

This week I published the first article in the new collection Humane Virtuality.

An article about research I did last semester with other Carnegie Mellon students was published in the Wall Street Journal. It was a bit surreal seeing my name mentioned in such a large publication.

Before I started my internship, I intentionally spent two weeks letting my attention wander. During that time, I read the first six chapters of Eloquent JavaScript. I’ve written JS before, but still need some educating on how to write it efficiently. Since I’m creating functioning VR prototypes, it’s important for me to know enough to work quickly (and not look up every other line of code).

During this time, I also completed one of the Unity tutorials: Space Shooter. Right now, I think I’ll need to stick with A-Frame and the web, but Unity will show up in my work before the end of the summer.

How do you look behind an object if you can’t track the position of your head?

My first question and prototype-answer of the internship is about head-tracked transformations. If you look at a Google Cardboard system, your phone can track head rotations but not position changes. That’s pretty limiting.

That’s a Google Cardboard on my face. Photo credit: Rachel Ng.

When I used an Oculus DK2 for the first time, the most powerful experience was moving my head to peer into a small stack of cards on a desk. It sounds cheesy, but that first head movement you make in VR can bump immersion up to 11.

Oculus DK2 demo scene.

So again, we’re limited with Cardboard. I started wondering what we could do for position-esque movement. Since head rotation data is available to us, I wanted to tie head rotation to the transformation of an object in a scene. Transformation here could be a change in rotation, translation, or scale.

This week and last, I built out a working prototype. There’s a lot more to go into, but I’ll do that in my full write-up for the experiment (which is waiting for user testing).

Head movement controls the transformation of the cubes. The transformation is inversely proportional to the head movement.
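
To give a sense of how this works, here’s a minimal sketch of the idea as an A-Frame component. This isn’t the prototype’s actual code; the component name and its factor property are mine, just for illustration. Every frame, it reads the camera’s rotation and applies a scaled (here, inverted) copy to its own entity.

    // A minimal sketch of the idea, not the prototype's actual code.
    // The component name and `factor` property are illustrative only.
    AFRAME.registerComponent('head-transform', {
      schema: {
        factor: { default: -1 } // negative = move inversely to the head
      },

      init: function () {
        // Find the camera entity so we can read head rotation each frame.
        this.cameraEl = this.el.sceneEl.querySelector('[camera]');
      },

      tick: function () {
        if (!this.cameraEl) { return; }
        var head = this.cameraEl.object3D.rotation; // radians
        var f = this.data.factor;
        // Apply a scaled copy of the head rotation to this entity.
        this.el.object3D.rotation.set(head.x * f, head.y * f, head.z * f);
      }
    });

Attach it to an entity, e.g. <a-box head-transform position="0 1.5 -3"></a-box>, and the box counter-rotates as you look around. Driving position or scale instead of rotation gives the other two transformations.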

Speaking of user testing, I’ve been trying to set up proxy-controls so I can step users through different tests, but I can’t get it to work. If anyone has experience with the proxy-controls and keyboard-controls components in A-Frame, I’d appreciate an assist here.
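
As a crude fallback, a plain keydown listener can step through test conditions without proxy-controls. A rough sketch (the test names and applyTest are placeholders):

    // Stopgap: cycle through test conditions with the right arrow key.
    var tests = ['rotate', 'translate', 'scale'];
    var current = 0;

    function applyTest (name) {
      // Placeholder: swap the active transformation mode here.
      console.log('Now testing:', name);
    }

    window.addEventListener('keydown', function (event) {
      if (event.key === 'ArrowRight') {
        current = (current + 1) % tests.length;
        applyTest(tests[current]);
      }
    });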

In the meantime, you can mess around with it yourself on my GitHub project site. (It’s number 6.)

I’ve found that being connected to the Internet all the time like this is really, really distracting. There are treasure troves of reading materials and interesting videos. Moving from a more structured work environment to the freeform nature of the internship is an adjustment.

Along the same line of thought, both coding and design are rabbit holes. I found myself asking multiple times whether what I was working on was of value to the final product or just extraneous polish. I need to make sure I keep focus.

Unsurprisingly, it’s helpful to have solid design direction before writing a line of code. I’ve found that when working, I’m in either design mode or programming mode, never both simultaneously. The projects are at risk of feature creep if I design while in programming mode or program while in design mode.

Some great things I read:

Future of Storytelling

I appreciate the breadth of Krystal South’s article, which reminds us of the value of every storytelling medium.

Storyteller’s Guide to the Virtual Reality Audience

The duo of Katy Newton and Karin Soukup brilliantly explore the experience of being in VR through a form of experience prototyping. They enact scenes from a VR story with real-life participants and actors, constraining participants’ vision similarly to how it would be constrained in VR.

Adventures in Narrated Reality

I read Ross Goodwin’s 23-minute piece in one sitting when I was supposed to be doing something else. In short, he uses machine learning algorithms to generate text based on a training set. For instance, to generate Poe-based poetry, he would feed the algorithm all of Poe’s works. It made me start thinking about generated spaces based on a seed space. I’m doubtful we have enough accessible three-dimensional data (real or virtual) to train algorithms like Goodwin’s, but it’s inspiring nonetheless. Maybe we could take text descriptions and generate spaces from those?

Bonus: Goodwin wrote a generator that wrote a screenplay that turned into the short film Sunspring.

House of Leaves

I finally finished the core story of Mark Z. Danielewski’s House of Leaves. What a trip. Danielewski is an expert at translating the content of the story into graphic form using typography alone. User experience design goes well beyond screens.

Until next week,
Andrew

Side story: I was going to call these types of posts cliffnotes. Then I learned that (1) that isn’t a word and (2) CliffsNotes originated as Cliff’s Notes.
