Yoshiki
May 31, 2016 · 6 min read

Since 1973, when the Xerox Alto pioneered the graphical user interface, our lives have increasingly revolved around computers and their peripherals. Though the nature of our relationship with computers is constantly changing, the human experience is becoming more and more entangled with our digital tools. We are now glued to our devices, and in some cases we interact with them more than we do with other people.

Before that, though, the people who had strong relationships with a single tool were usually skilled “laborers” such as musicians, artists, and craftsmen: individuals who depended on their tools for a living and were intimately familiar with them. Musicians especially have a fiery, passionate relationship with their instruments. To explore the relationship between people and their tools, I’d like to focus on the example of the violin and the violinist.


The design of the violin, like that of all other instruments, is the product of an evolution shaped by the demands of its players. The exact proportions, lengths, and various details were iteratively tweaked over the years until we arrived at its modern form. The particular manner in which one plays it maximizes the player’s expressive power: the precise, small movements of the fingers are concentrated on manipulating pitch, while the strong, broad movements of the shoulders, arms, and torso control the tone of the piece, ranging from the delicately soft to the powerful. The player’s auditory, spatial, tactile-kinesthetic, and visual centers are all put to work to control it.

It’s obvious to say that the violin is the perfect interface for the kind of music you can produce with it — the two concepts are inseparable. The sound that emanates from it is part of the interface. When a musician plays a violin, it becomes a part of them. There is an expressive link of super-dense information flowing between the player and the instrument.

So why am I writing so much about violins in a post about VR? Well, I argue that the expressive bandwidth between our minds and our computers is currently a tiny trickle of information compared to the superhighway of feedback a musician exchanges with their instrument. That isn’t to say the potential isn’t there. The comparison isn’t entirely fair, either, since the violin has a much longer history and is a much more focused experience than the broad space of computable things.

And we’ve made some great strides in linguistic, auditory, and 2D visual interfaces since the early days of punch cards and terminals. But in terms of harnessing the power of our kinetic, tactile, locomotive, and 3D spatial reasoning, it has mostly been rehashes of the same concepts for a long time. The violin finished its search a while ago; we have only just begun the quest of finding the “violin” of human-computer interaction.


Finding our Violin

I believe VR is the next step in increasing the density of information exchange between humans and computers.

VR has gotten a lot of hype from the gaming world, and I am very excited for the entertainment and artistic experiences we’ll get to have in it. But a lot has already been written about that. In fact, many people seem to have the impression that VR is only for gaming. But this is far from the truth. The biggest impact of VR is going to come from the tools we create in it for work and for art.

With VR you get the flexibility to create anything that can be described in code, within the huge space of an interactive 3D world. What careful combination of visual, acoustic, linguistic, spatial, kinetic, and tactile interfaces is ideal for, say, working with statistical data, or 3D modeling, or even well-established fields like art and music?

It’s an exciting project, one I am sure will consume us for the next several decades. But with motion-tracked headsets and controllers, we can start taking our first steps in this space: beyond what-you-see-is-what-you-get, into a world where what you can walk in, what you can feel, what you can hear and see and touch — is what you get.

In fact, projects such as Tilt Brush (3D painting) and The Wave VR (VR DJing) have already begun exploring what some of those tools might look like.

These are tools that have never existed before, because without VR they would be impossible to create. This is what I’m excited about.


Currently, I believe the best platform for creating such tools is the HTC Vive (both Tilt Brush and The Wave are Vive exclusives). A key feature of the Vive is its support for “room-scale” VR: you can actually get up and walk around in a predefined space, with motion controllers acting as virtual representations of your hands. The headset and controllers know exactly where you are, so the computer can render the scene in perfect sync with your movements. The immersive effect is pure magic.

Room-scale VR with the HTC Vive

After getting a Vive myself earlier this year, I’ve demoed it to over 20 people, and the response has always been phenomenal. Even people who aren’t particularly interested in technology came away with dozens of ideas for things they wanted to do in VR.


Experimentation

For the past couple of months I have been tinkering with a variety of prototypes to gain a better understanding of VR. I’ve written a tiny mesh editing tool:

Users can directly move and create meshes. https://www.youtube.com/watch?v=pWM-U2Wy3_k

A 3D keyboard, where instead of interacting with a flat surface of keys, you pass both of your controllers through layers of them to get to the key you want:

When you transition between keys you receive haptic feedback from the controller. Clicking the trigger types the letter you’re currently “in”.
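To make the idea concrete, here is a minimal sketch of how that selection logic could work. Everything here is a hypothetical illustration — the Key and LayeredKeyboard names, and the pulse() haptic callback, are invented for this post, not code from the actual prototype or from any real VR SDK.

```python
from dataclasses import dataclass


@dataclass
class Key:
    """One key in the floating keyboard: an axis-aligned box in room space."""
    label: str
    center: tuple  # (x, y, z) position in meters
    size: tuple    # half-extents of the box along each axis

    def contains(self, p):
        # True if the controller position p is inside this key's volume.
        return all(abs(p[i] - self.center[i]) <= self.size[i] for i in range(3))


class LayeredKeyboard:
    def __init__(self, layers):
        self.layers = layers  # list of layers, each a list of Key, stacked in depth
        self.current = None   # the key the controller is currently "in"

    def update(self, controller_pos, trigger_down, pulse):
        """Called once per frame with the tracked controller state.

        pulse is a callback that fires a short haptic tick; it is called
        whenever the controller passes into a different key.
        Returns the typed letter, or None.
        """
        hit = None
        for layer in self.layers:
            for key in layer:
                if key.contains(controller_pos):
                    hit = key
                    break
        if hit is not self.current:
            self.current = hit
            if hit is not None:
                pulse()  # haptic tick on entering a new key
        if trigger_down and self.current is not None:
            return self.current.label  # type the letter you're "in"
        return None
```

In a real engine the per-frame controller pose and haptics would come from the VR runtime; the point is just that depth-stacked key volumes plus a transition-triggered pulse are enough to make the layers feel tangible.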

I’ve even been designing a pictographic programming language that would work better in VR than direct text manipulation:

Large, well-defined blocks will work well with point and grab based interaction. Imagine writing code by touching and moving it around!
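As a rough sketch of what I mean, assuming nothing about the actual prototype: each code element could be a block that carries a position in the room, that you grab to move and snap onto other blocks, building a tree the runtime evaluates. The Block class, its grab_to/snap methods, and the tiny arithmetic evaluator below are all invented for illustration.

```python
class Block:
    """A graspable program element floating in the room."""

    def __init__(self, op, position=(0.0, 0.0, 0.0)):
        self.op = op              # e.g. "add", "mul", or a literal number
        self.position = position  # where the block floats in room space
        self.children = []        # blocks snapped underneath this one

    def grab_to(self, new_position):
        # "Point and grab": moving a block just updates where it floats.
        self.position = new_position
        return self

    def snap(self, child):
        # Physically docking one block under another builds the program tree.
        self.children.append(child)
        return self

    def evaluate(self):
        # Literals evaluate to themselves; operator blocks fold their children.
        if isinstance(self.op, (int, float)):
            return self.op
        vals = [c.evaluate() for c in self.children]
        if self.op == "add":
            return sum(vals)
        if self.op == "mul":
            out = 1
            for v in vals:
                out *= v
            return out
        raise ValueError(f"unknown op {self.op!r}")
```

Snapping a literal 2 and a “mul” block holding 3 and 4 under an “add” block gives a program that evaluates to 14 — and the program is literally an arrangement of objects in space that you can walk around and rearrange by hand.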

The prototypes and ideas I’ve listed above are very rough, and I am sure people out there are already developing similar concepts with a much higher degree of polish. If you are a developer, I highly recommend giving VR development a try. Walking into an environment created by code you wrote is nothing like building a website or a smartphone app. It is so much cooler. And if you’re not a developer, you might be surprised at how badly you want to make something for VR after trying out a quality VR experience.


I don’t usually do much writing, but I hope I’ve been able to convey at least an iota of my passion and excitement for this new medium.

There are people who say the hype for VR is not justified, and that there are no compelling use cases that would make it a household item. And they’re right: we don’t really have any killer apps yet (Tilt Brush is a killer app for me, but I digress). Whether or not this first wave of VR headsets catches on depends on the ability of developers and designers to aggressively abandon their current understanding of UX/UI design and start from scratch on a new foundation.

In the meantime, don’t buy into the hype. Go out there and try the Vive for yourself. There are lots of retailers and meetups doing regular demos for the public. Once you’ve spent a couple of minutes in the VR world, your mind will be racing so fast to comprehend the possibilities of this technology that the hype won’t even matter.

Hyper Room

VR Collaboration
