How We Can Connect the Virtual World to the Real World

The key message at the Google I/O conference, held May 17th to 19th in Mountain View, California, was Artificial Intelligence (AI). At the event, Google CEO Sundar Pichai unveiled brand-new services, including ‘Google Lens’, an AI camera app built on computer vision technology, and ‘Google Home’, a speaker powered by the AI assistant ‘Google Assistant’, and declared the transition to an ‘AI First’ era. The essence of the ‘AI First’ vision is to develop products and services that interact with people more seamlessly in the real world. ‘Google Assistant’, a service that embodies this vision, is a representative NUI (Natural User Interface) service: it lets users search for information and launch applications by voice, commanding computers without a mouse or fingers.

NUI, the Next-Generation Interface for the ‘AI First’ Era

An NUI is a user interface technology that controls digital devices through the natural modalities people already use to interact with the world: sensory, behavioral, and cognitive abilities. Typical NUIs include gesture interfaces that recognize human motions, multi-touch interfaces that recognize various kinds of touch, and sensory interfaces that infer human intention. As VR experiences become increasingly lifelike and the VR market grows fast, NUI is becoming ever more important as the interactive element of VR content that delivers a valuable experience to users. Since an HMD (Head Mounted Display) is the main device for implementing VR and AR, NUI serves as the interface that gives users a high degree of freedom and optimal usability.

NUI Technologies Available as VR Controllers

Users of the Oculus Rift and HTC Vive, the most widely used VR headsets, have traditionally relied on hand-shaped touch controllers, remote controllers, and motion controllers such as gloves. More recently, gesture-based interfaces that recognize a user’s hand movements have emerged as a typical NUI for VR control. The camera-based Microsoft Kinect and Leap Motion are the most widely used gesture-interface products, while Myo, developed by Thalmic Labs, is a gesture-control armband. In particular, Leap Motion’s technology offers users fast recognition speed and accurate motion tracking. Voice interfaces have not yet been widely adopted as VR controllers, but AI assistant services such as Apple’s ‘Siri’, ‘Google Assistant’, Amazon’s ‘Alexa’, and Samsung’s ‘Bixby’ all rely on them. Voice interfaces are growing beyond simple keyword recognition into artificial intelligence services encompassing speech recognition, semantic understanding, and contextual reasoning, as well as IoT services that let people interact with a wide variety of devices.
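To make the gesture-interface idea concrete, here is a minimal sketch of how raw hand-tracking data can be turned into a discrete VR command. The data format and threshold are hypothetical, not any vendor’s actual API; a camera-based tracker such as Leap Motion reports far richer hand models, but the principle of deriving a gesture from fingertip geometry is the same.

```python
import math

# Hypothetical input: fingertip positions in millimeters, as a camera-based
# hand tracker might report them for each frame.
def detect_pinch(thumb_tip, index_tip, threshold_mm=25.0):
    """Return True when the thumb and index fingertips are close enough
    to count as a 'pinch' gesture (a common VR 'select' action)."""
    return math.dist(thumb_tip, index_tip) < threshold_mm

# Fingertips nearly touching: interpreted as a pinch/select.
print(detect_pinch((0, 0, 0), (10, 5, 0)))   # True
# Fingertips far apart: no gesture fires.
print(detect_pinch((0, 0, 0), (80, 0, 0)))   # False
```

A real system would smooth positions over several frames and require the gesture to persist briefly, so that sensor noise does not trigger spurious selections.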

World-Leading Companies Are Working on Brain Interfaces

Recently, global tech companies such as Tesla and Facebook announced plans to develop sensory interfaces that let people interact with computers just by thinking. In March, Tesla CEO Elon Musk opened the door first by revealing the launch of Neuralink, a company dedicated to connecting human brains with computers, and introducing its ‘neural lace’ plan: a technology that would let computers understand human thoughts, and even upload or download them, by connecting the brain to computers invasively, with a microchip implanted in the brain. Then, at Facebook’s annual developer conference F8, held April 18th to 19th, Regina Dugan, head of Facebook’s R&D division ‘Building 8’, revealed that Facebook is building a brain-computer interface for ‘brain typing’, which would let users type by having their brainwaves scanned. She added that Building 8 has also developed bone-conduction hardware and software that enables users to ‘hear’ through their skin. For Facebook, which regards VR and AR as the next-generation platform to replace the smartphone, pioneering brain interface technologies that can directly control VR and AR environments and mediate interaction through the human brain will be a major challenge.
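Brainwave-based interfaces like those described above typically begin by measuring the power of EEG signals in standard frequency bands, such as alpha (8–13 Hz) and beta (13–30 Hz), which correlate with mental states like relaxation and concentration. The sketch below uses a synthetic signal and a naive DFT, not any company’s actual pipeline, to show how such a band-power feature can be computed.

```python
import math

def band_power(samples, fs, lo_hz, hi_hz):
    """Estimate signal power in the [lo_hz, hi_hz] band via a naive DFT.
    samples: list of floats sampled at fs Hz."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo_hz <= freq <= hi_hz:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / (n * n)
    return power

fs = 128  # a common consumer-EEG sampling rate, in Hz
t = [i / fs for i in range(fs)]  # one second of samples
# Synthetic 10 Hz sine standing in for a strong alpha rhythm.
alpha_wave = [math.sin(2 * math.pi * 10 * x) for x in t]

# Alpha-band power dominates beta-band power for this signal.
print(band_power(alpha_wave, fs, 8, 13) > band_power(alpha_wave, fs, 13, 30))  # True
```

A real EEG pipeline would use an FFT with windowing and artifact rejection, then feed band-power ratios across electrodes into a trained classifier rather than comparing two bands directly.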

Looxid Labs Creates an Interconnected Channel between Virtual World and Real World

Looxid Labs is seamlessly integrating a user emotion recognition system into VR environments using eye interfaces as well as brain interfaces, the very field now drawing attention from global tech companies including Tesla and Facebook. Our emotion recognition system for VR users consists of a miniaturized, embeddable sensor module that detects eye and brain activity; an emotion recognition API that delivers robust eye and brain signals in real time; and an exceptional machine learning algorithm that accurately detects and classifies users’ emotional states into business indexes. We plan to use this machine learning technology to feed users’ eye and brain data into VR content, enabling VR users to interact with that content across various fields. Our goal is to be the world-leading company that creates an interconnected reality between the virtual world and the real world through emotional interaction.
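To illustrate what the final classification step of such a pipeline might look like, here is a toy sketch. Every name, feature, and threshold below is a hypothetical illustration, not Looxid Labs’ actual API or algorithm; a production system would use learned models over many sensor channels rather than hand-set rules.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One hypothetical fused sensor reading from a VR headset."""
    pupil_diameter_mm: float   # from the eye-tracking sensor
    eeg_arousal: float         # 0..1, e.g. a beta/alpha band-power ratio
    eeg_valence: float         # -1..1, e.g. frontal alpha asymmetry

def classify_emotion(frame):
    """Map eye and brain features onto a coarse valence/arousal label,
    the kind of emotion index VR content could react to in real time."""
    # Hypothetical rules: dilated pupils or high beta activity => aroused.
    aroused = frame.eeg_arousal > 0.5 or frame.pupil_diameter_mm > 5.0
    positive = frame.eeg_valence >= 0
    if aroused and positive:
        return "excited"
    if aroused:
        return "stressed"
    return "calm" if positive else "bored"

print(classify_emotion(Frame(5.8, 0.7, 0.4)))    # excited
print(classify_emotion(Frame(3.0, 0.2, -0.3)))   # bored
```

The valence/arousal framing used here is a standard way to discretize emotional state; the interesting engineering lies in learning the mapping from raw signals to those two axes, which is where the machine learning algorithm comes in.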