Moving away from the typewriter one step further, into virtual space

What are the options for text entry in virtual reality?

Keez Duyves
Scenario and VR Research Trajectory
6 min read · Oct 30, 2018


One of our goals is to design a scenario development environment in VR.

One of the major challenges of making a scenario tool within a VR environment is text entry. Outside VR, we rely on well-calibrated haptic feedback to deliver text: we need to feel our keyboards and glance at them from time to time, and some of us can even type 'blind' without looking at the keys. We are well trained in Qwerty.

We normally sit while typing, whereas in virtual environments we mostly stand or walk. The act of placing a text at a specific spot in space contradicts typing on a keyboard, where we spatialize the text on a small 2D area. And at typical typing distance, VR headsets do not yet match the resolution of a computer screen.

Once we develop a suitable way of writing in VR, the options are endless. We can sketch an environment with words and sentences, form decision trees, use all the dimensions for organising multiple storylines in place and time.

For the study ‘Scenario Writing in VR’, we researched and prototyped various ways of inputting text in VR.

Phase 1. Testing an existing virtual keyboard.

We tested the built-in VR keyboard on Steam, the digital distribution platform for video games. Here, one points at keys on a big keyboard floating in space using a controller with some kind of laser beam. A regular keyboard requires small finger movements that run almost parallel to each other; the fingers can already reach for the next letter while typing the current one. This VR keyboard can be touched with only one virtual pointing finger, which makes it slow and turns typing into a physical workout. Another version of such a keyboard uses both controllers, which speeds up the process a little.

I only recently realized there is an alternative way of controlling this keyboard: you can use the touchpads on the controllers. I haven't had much time to practice, but I wonder if I will ever be fast on the touchpads, as they don't come across as very precise. You have to follow the cursors with your eyes to see if they land on the right spot. I doubt this process will ever become automatic.

Steam Keyboard text entry with touchpads

Phase 2. Virtual Letterballs.

We designed a special virtual input device in Unity, which uses two VR controllers according to the metaphor of a letterball, like the letterballs retro printers used to have. The layout of these 'balls' resembles a normal Qwerty keyboard. One rotates each controller around two axes to select a character. This allows the user to walk around in space, rotating the controllers parallel to the body while doing so. In other words, the user simply keeps the controllers in his/her hands and types by rolling the controllers sideways and tilting them. This gives the user freedom of movement and more control. Another advantage: when consecutive characters sit on different controllers, input can be faster.

PIPS:labs TextballInput experiment

This device can be operated with subtle movements, but it needs a lot of training. Roll and tilt are not part of our regular typing movements, so I am not yet sure whether training them is worthwhile. We suspect that vibration feedback would help.
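The two-axis selection can be pictured as quantizing the controller's roll and tilt angles into a character grid. A minimal sketch, assuming one Qwerty half-layout per controller; the layout split, angle ranges, and function names here are illustrative, not the actual PIPS:lab implementation:

```python
# Illustrative letterball selection: tilt picks the row, roll picks the column.
LEFT_LAYOUT = [
    list("qwert"),
    list("asdfg"),
    list("zxcvb"),
]

def select_char(layout, roll_deg, tilt_deg,
                roll_range=(-60.0, 60.0), tilt_range=(-45.0, 45.0)):
    """Quantize roll (column) and tilt (row) angles into a grid cell."""
    def to_index(angle, lo, hi, n):
        # Clamp the angle into [lo, hi], then scale into an index 0..n-1.
        t = min(max((angle - lo) / (hi - lo), 0.0), 1.0)
        return min(int(t * n), n - 1)
    row = to_index(tilt_deg, *tilt_range, len(layout))
    col = to_index(roll_deg, *roll_range, len(layout[0]))
    return layout[row][col]
```

With both controllers held level, the selection would rest on the middle of the layout; rolling or tilting past the range limits simply clamps to the outer rows and columns.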

For working with students, this solution, given the amount of training it requires, would never be able to compete with traditional keyboard input.

Phase 3. An ordinary keyboard with a virtual representation using Leap Motion.

A Leap Motion is a small infrared device that can be mounted on a VR headset, producing a real-time visualisation of the user's hands in VR. It gives the user visual feedback on the keyboard and the position of his/her hands, plus the haptic feedback of the real keyboard, which sits in the same place as the perceived VR keyboard. One could think of a 'candy tray' containing the keyboard (a 'tracker' could keep the virtual and real keyboards aligned).

This option works pretty well. The visualization of the hands became erratic close to the keyboard, as the Leap Motion system had difficulty differentiating the hands from the keyboard. Since the visualisation is only visual feedback and the typing is done on the real keyboard, these errors should not interfere with the typing process too much. As the Leap Motion is not an open system, we had no way to improve the detection.

Logitech has developed a similar system which works with a camera instead of the Leap Motion sensor. This could be promising.

Narrated by Rajamanickam Antonimuthu

Phase 4. Hybrid system: Collaboration.

Another option would be to have one user typing on a traditional keyboard while another user, immersed in VR, places the typed texts in space. One user can lead and the other follow: the person at the keyboard acts as a human transcriber of what the person in VR dictates out loud, or the person in VR sculpts the words supplied from the keyboard in 3D. While this is the most social of the options, it didn't meet our requirement of an input method for a single person.

Phase 5. Hybrid system: Paperol Vaperol

We developed a system which uses the 2D screen of an ordinary laptop or desktop computer as the base/blueprint for a 3D text.

We guessed that a common scenario would be a class of students with access to a limited number of VR systems. Paperol handles text input, and the resulting texts can be freely placed in two dimensions. A user can arrange the items on the 2D canvas with a conversion method in mind. The Paperol file can then be imported into Vaperol, where all the texts can be placed in space.
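One conceivable conversion method is wrapping the 2D canvas onto a cylindrical arc around the viewer, so horizontal position becomes angle and vertical position becomes height. A sketch under that assumption; the mapping, dimensions, and names are hypothetical, not Paperol/Vaperol's actual scheme:

```python
import math

def paperol_to_vaperol(x, y, canvas_w=1920, canvas_h=1080,
                       radius=3.0, arc_deg=120.0, height=2.0):
    """Map a 2D canvas position to a point on a cylindrical arc
    around the viewer."""
    # Horizontal canvas position -> angle across the arc, centered at 0.
    angle = math.radians((x / canvas_w - 0.5) * arc_deg)
    return (radius * math.sin(angle),       # left/right
            height * (1.0 - y / canvas_h),  # up/down (screen y grows downward)
            radius * math.cos(angle))       # depth, in front of the viewer
```

A text placed at the center of the canvas would then appear straight ahead at eye-ish height, while texts near the canvas edges curve around the user.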

Text entered in Paperol
Paperol file imported in Vaperol
Texts after repositioning in Vaperol

Phase 6. Dictation.

One can use a microphone and record sentences, which, after an automated transcription process, appear in the 3D space. We used the Google Speech-to-Text API. Voice input is very intuitive and fast. One has to trust the transcription, as it appears with a small delay. When transcription mistakes occur, you still have to fix them, and doing so can kill the workflow. To use voice as input, one needs a separate room: multiple voices would interfere with each other, and others can hear what you are writing. In the uncertain process of scriptwriting, this is not a plus. Researchers at MIT developed a system which could deal with this issue:

http://news.mit.edu/2018/computer-system-transcribes-words-users-speak-silently-0404
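The dictation flow itself is simple: record, transcribe, place the result in space. A sketch with the speech-to-text call stubbed out (in our setup that call went to the Google Speech-to-Text API; the stub, the placement scheme, and all names here are illustrative):

```python
def transcribe(audio_chunk):
    # Placeholder for a real speech-to-text request; here the "recording"
    # already carries the text we pretend the service would return.
    return audio_chunk["expected_text"]

def place_sentences(audio_chunks, spacing=0.5):
    """Place each transcribed sentence at a new depth, so dictated
    text lines up along the z-axis in the order it was spoken."""
    placed = []
    for i, chunk in enumerate(audio_chunks):
        placed.append({
            "text": transcribe(chunk),
            "position": (0.0, 1.5, 2.0 + i * spacing),
        })
    return placed
```

In practice the transcription delay means these placements trail the speaker by a beat, which is exactly why trust in the transcription matters.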

Conclusion:

In the near future, dictation is the only entry method that could potentially replace keyboard input, as its input speed is sufficient. Other systems, like virtual keyboards, need a lot of practice and lack the semi-parallel way our fingers touch a keyboard. The hybrid system of combining 2D text input with placement in 3D works well, but it is a bit of a cheat, as it isn't really text input in 3D.

This blog forms part of the Scenario and VR research trajectory, a collaboration between the Netherlands Film Academy in Amsterdam (AHK), Amsterdam University of Applied Sciences (HvA), and PIPS:lab, an Amsterdam-based collective creating multimedia installations, performances, and inventions. The research sprouts from the 360° VR movie Anyways (PIPS:lab, 2017) and includes audience research, design and development of two interactive scenario writing tools Dialogus and Paperol, two use cases regarding Paperol, and three workshops with scenario students of the Film Academy to test Dialogus. The blog series documents this research trajectory. The research is supported by RAAK.
