Natural Computing - The Convergence of Augmented and Virtual Reality

Peter Wilkins
4 min read · May 3, 2017


Ready Player One Fan Art

With Facebook’s recent announcements around augmented and virtual reality, AR/VR has truly begun to go mainstream. The industry is still so early, however, that we haven’t fully figured out what to call it. Mixed Reality? eXtended Reality? Do we just keep the “/” in AR/VR? What are the concepts these technologies have in common, how are they different, and what will they become in the future?

Virtual Reality enables full immersion inside digital worlds and simulations. With VR we can transcend the physical limits of time and space. We can relive moments from our past, or experience worlds from our favorite stories. VR makes it possible to be anywhere, anytime, and to experience anything. At first this means more immersive gaming, 3D movies, and media, but ultimately it is about removing the physical constraints on our senses of sight, hearing, and touch. Today those senses are bound to the immediate physical environment around us; the VR industry is working to remove that limitation, allowing our perception to exist inside a computer.

Augmented Reality, on the other hand, brings the digital world into the real world in a natural and intuitive way. We are 3-dimensional beings navigating through space and time, interacting with each other and the world in a manner that has evolved over millions of years. The way we perceive and interact with our environment is hard-wired into our biology. Rather than adapting ourselves to fit the limited interactions supported by our existing technology, our interactions with computers should be natural, intuitive, and well-suited to our biology. This paradigm shift is inevitable, and its beginnings are already apparent in VR displays like the HTC Vive and AR headsets like the Microsoft HoloLens.

While there have been many increasingly complex terms for this new paradigm, one in particular makes a lot of sense: natural computing, where human-computer interaction no longer has to adapt to fit the limited form factors of the computer (mouse/keyboard, touchscreen). Instead, our computers will adapt to fit the human form factor, allowing us to interact with the digital world the same way we interact with the physical world. The benefits of this mode of computing include, but are not limited to:

  • Reduced health problems from unnatural and repetitive motions
  • Less distraction when accessing the data we need
  • More timely and relevant information presented to us in an immediately intuitive, natural way
  • Increased bandwidth to and from the digital world

Within 5 years, AR/VR devices will combine into the same form factor. We can hope that by then we will have moved past near-eye pixel displays, which are notorious for causing eye strain and discomfort, and on to more natural projection-based display technologies such as retinal projection. Along with these advances in ergonomics and display, input and interaction will become much more natural: gestures, voice, gaze, and physical context will be the primary interaction methods, rather than keyboards, touchscreens, and text.

In 10 years, these AR/VR wearables will begin to approach the level of consumer adoption and usage that smartphones enjoy today, thanks to the convergence of three factors: comfortable and stylish ergonomics; seamless integration with our daily lives and the world around us, driven by more natural and intuitive human-computer interaction; and dramatic improvements in artificial intelligence, powering hyper-aware applications that understand our environment and ourselves well enough to provide timely, contextually relevant information and guidance.

As we move into the inevitable future of natural computing, we’ll slowly but surely leave behind our vestigial appendages of keyboards, mice, and monitors. Why spend thousands on an ever-larger television or monitor when I can place my AR movie app on a blank wall in my house, giving me a screen of effectively unlimited size, anywhere and anytime I need it, entirely in software? Why click through photos on a hotel’s website with a mouse when I can ask my wearable to “show me the interior of the Ladera Resort in St. Lucia” and immediately be transported to a 3D interior I can explore? It’s time to interact with our data, stories, and information the same way we interact with the natural world around us.
