A Sneak Peek into My Brain: Can We Push the Boundaries of Communication in VR Space Using a Brain-Computer Interface?

by Dmitry Ratushny at Unsplash
On the second day of Facebook’s annual developer conference F8, held April 18 and 19, Regina Dugan, Facebook’s vice president of engineering, took the stage as the final speaker and asked a thought-provoking question: ‘So what if you could type directly from your brain?’

Regina Dugan was the first woman to serve as director of the Defense Advanced Research Projects Agency (DARPA), the R&D arm of the United States Department of Defense, and later moved to Google as vice president of its Advanced Technology and Projects (ATAP) group. Last year, Facebook announced that Dugan had joined the company to lead ‘Building 8’, its secretive R&D team kicked off in April 2016. She is responsible for turning Facebook’s vision of the future into reality: developing the brain-computer interface (BCI) as a communication tool that keeps people connected to one another in the coming VR/AR era.

To achieve that vision, Building 8’s team of 60 neuroscientists, machine learning specialists, and system integration engineers has been developing a brain-powered computer interface that would let users type 100 words per minute by decoding their neural activity with optical imaging. In addition to this ambitious plan to text by thinking, Dugan unveiled Facebook’s so-called ‘silent speech interface’, which could enable people to ‘feel’ sound through their skin and understand it in their brain, pushing the boundaries of communication beyond the world’s languages. How can we make these futuristic BCI ideas, transforming our thoughts into text messages and hearing sound through our skin, happen?

Facebook’s brain-to-text initiative at CNN Money

Prelude to BCI: Delivering Information Directly to the Human Brain

The human brain contains 86 billion neurons, each capable of firing about a thousand times per second (1 kHz). Since these neurons are never all active at once, divide that figure by 100: the brain can still produce roughly one terabit of data per second, enough to stream about 40 HD movies every second. However, when we pull that data out of the brain by converting it into speech, the rate collapses to something closer to a 1980s dial-up modem. This is what makes speech such an inefficient communication channel.

According to Regina Dugan, that is why Facebook has been exploring futuristic ideas in which people text a friend by having their brainwaves interpreted directly, or hear through their skin, as seamless interfaces for the VR/AR environment. Such BCI technology would dramatically raise the speed at which we retain and transmit information. To illustrate, direct brain-to-brain communication could let a Chinese speaker think in Mandarin while a Spanish speaker instantly feels the thought in Spanish, the information extracted from the first person’s brainwaves and translated without any speech at all. At F8, Dugan introduced a demo video clip showing an early outcome of Building 8’s research. In the video, Facebook engineers demonstrated hearing through the skin: special actuators deliver specific frequencies to a person’s skin, which mimics the cochlea of the ear, so the brain can translate the vibrations back into actual sound without the person hearing anything directly.
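As a quick back-of-envelope check on those numbers (a rough sketch: the neuron count, firing rate, and 1% activity figure come from the paragraph above, while the ~3 GB size of an HD movie file and the 2,400 bit/s modem speed are illustrative assumptions):

```python
# Back-of-envelope version of the bandwidth comparison quoted above.
NEURONS = 86e9             # neurons in the human brain
FIRING_RATE_HZ = 1_000     # each neuron can fire up to ~1,000 times/s
ACTIVE_FRACTION = 1 / 100  # only ~1% of neurons fire at any instant

brain_bps = NEURONS * FIRING_RATE_HZ * ACTIVE_FRACTION
print(f"Brain output: ~{brain_bps / 1e12:.2f} terabits per second")

HD_MOVIE_BITS = 3 * 8e9    # assume an HD movie file is roughly 3 GB
print(f"Equivalent to ~{brain_bps / HD_MOVIE_BITS:.0f} HD movies per second")

MODEM_BPS = 2_400          # a typical 1980s dial-up modem
print(f"A dial-up modem is ~{brain_bps / MODEM_BPS:.1e}x slower than that")
```

Running this lands near the figures Dugan quoted: just under a terabit per second, or a few dozen full HD movies every second, against a speech-like channel hundreds of millions of times slower.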

Brain-Computer Interface at Pixabay

Transforming the World by Thinking and Creating Rapport between Humans and Computers

The Matrix, a sci-fi film released in 1999, is built on the worldview that reality is actually an extremely complicated VR program. In the movie, humans trapped inside pods have their brains connected to computers running a virtual world known as the Matrix. Whereas the film depicted invasive BCI technology to plug the human brain into a virtual world, recent research trends in BCI have leaned toward non-invasive approaches. For example, MIT recently built a robot that reads human thoughts non-invasively through an EEG helmet and then performs advanced tasks such as picking up and sorting objects. In addition, a team of researchers from the Technische Universität München developed ‘Brainflight’ technology that lets pilots fly planes by thought alone, using a cap fitted with EEG electrodes. The BCI lab at Graz University of Technology in Austria has also explored controlling and growing a character in the popular video game World of Warcraft using EEG alone. As wearable devices that read users’ brainwaves, such as Emotiv and NeuroSky, continue to emerge, BCI’s potential as a controller is drawing more attention. In particular, people who are disabled or have difficulty communicating want to use these wearables to control other devices or services with their thoughts. Furthermore, this sort of BCI technology can be applied across industries by quantifying states such as stress, emotion, mood, and concentration, as the sketch below illustrates.
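As a minimal sketch of how such quantification often works (assuming a 256 Hz sampling rate and the conventional alpha/beta frequency bands; the beta-to-alpha power ratio as a rough ‘concentration’ index is a common heuristic, not any specific headset’s API, and the signal below is synthetic):

```python
import numpy as np
from scipy.signal import welch

# Sketch: estimate a crude "concentration" index from one EEG channel by
# comparing beta-band power (13-30 Hz) against alpha-band power (8-13 Hz).
# The signal here is synthetic noise standing in for a raw EEG trace.

FS = 256  # sampling rate in Hz, typical for consumer EEG headsets

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over the [lo, hi] Hz band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def concentration_index(eeg, fs=FS):
    """Beta/alpha power ratio: higher values are often read as more focus."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return band_power(freqs, psd, 13, 30) / band_power(freqs, psd, 8, 13)

rng = np.random.default_rng(0)
fake_eeg = rng.normal(size=FS * 10)  # ten seconds of stand-in "EEG"
print(f"concentration index: {concentration_index(fake_eeg):.2f}")
```

A real pipeline would add artifact rejection and per-user calibration, but this band-power step is the common core behind most consumer ‘focus’ and ‘stress’ scores.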

Matrix at Pixabay

Looxid Labs Provides Brand-New Interaction in VR Space Using an Eye and Brain Interface

Looxid Labs is seamlessly integrating non-invasive BCI into VR using biological signals that include EEG, eye movement, and pupil size. Since VR is a space users enter by stepping from the real world into a digital one, simply placing an extra-large display in front of users’ eyes does little to distinguish it from the conventional PC or mobile experience. Thus, to make users more immersed and enhance their feeling of presence in VR, adaptive interaction must be implemented alongside the comfortable experience delivered by advanced hardware and compelling VR content. In other words, ‘immersion’ and ‘feeling of presence’ are the most important factors in realizing VR, and the user should be both the main character of the VR content and a participant in it.

Just as the shift from PC to mobile moved user interfaces from keyboard and mouse to touch, VR requires a brand-new interface that reflects users’ real-world experience. To provide adaptive interactivity spanning both the real and virtual worlds, we regard the emotional connection between content and user as the seamless interface for VR. That is why we are developing an emotion recognition system for VR, using an eye and brain interface that interprets users’ emotions directly the moment they put on a VR headset. Our goal is to implement the ultimate BCI, in which users’ eye and brain information is seamlessly transformed into emotional data that enables them to engage emotionally with VR content.
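Conceptually, such a system fuses features from each signal stream into one representation and maps it onto an emotional state. The toy sketch below is purely illustrative and is not Looxid Labs’ actual model: the valence/arousal quadrants are a standard framing from affective computing, but every feature, threshold, and rule here is an invented stand-in for a trained classifier:

```python
from dataclasses import dataclass

# Toy fusion of EEG, pupil, and gaze features into a coarse emotion label
# on the valence/arousal plane. Every threshold below is invented for
# illustration; a real system would learn this mapping from data.

@dataclass
class SignalWindow:
    eeg_alpha: float          # alpha-band power (often linked to relaxation)
    eeg_beta: float           # beta-band power (often linked to arousal)
    pupil_diameter_mm: float  # pupil dilation tends to track arousal
    gaze_dispersion: float    # spread of gaze points; low = focused viewing

def emotion_quadrant(w: SignalWindow) -> str:
    """Map one window of fused signals to a valence/arousal quadrant."""
    arousal = w.eeg_beta / w.eeg_alpha + (w.pupil_diameter_mm - 3.0)
    valence = 1.0 - w.gaze_dispersion  # steady gaze read as engagement
    if arousal > 1.0:
        return "excited" if valence > 0 else "stressed"
    return "calm" if valence > 0 else "bored"

# Example: strong beta activity, dilated pupils, steady gaze -> "excited"
print(emotion_quadrant(SignalWindow(1.0, 1.8, 4.2, 0.3)))
```

One design question such a fusion has to answer is timing: pupil and gaze react within fractions of a second, while EEG band power is estimated over windows of seconds, so aligning the streams per window is central to producing a coherent emotional read.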

Interactivity at Pixabay