Google’s Grand Vision of Immersive Computing

Jan Daniel Semrau
5 min read · May 29, 2017


Source: https://www.youtube.com/watch?v=L6-KF0HPbS8

The first time I heard the term ‘immersion’ was when I was working on Lord of the Rings — Return of the King for Electronic Arts back in 2003. Those were fun-filled days. I was working in a large room full of friends my own age, making video games better, having joint lunches at the local canteen, and occasionally enjoying an after-work party at the 10:15 or another club in San Francisco, courtesy of our common DJ friend.

When I left the video game industry for good after this short stint, I went for my Master’s degree in Finance, and it wasn’t until this year’s Google I/O conference that I heard a company trying to popularize the term again.

“Immersive computing.”

Back in the day when I was working on video games, I might not have fully understood the gravity of the term, but I interpreted it as the state of being deeply involved in a game rather than the perception of being physically present in a non-physical world. In video games, immersion is measured by a game’s ability to create a sense of flow. Immersion is the highest honor a game can achieve, but it is wrong to assume that a perfect replication of our real world is the key to it.

Certainly, Google mentioned the term ‘immersion’ frequently during their various presentations. As far as I could observe, they interpret it as the convergence of VR/AR, the Internet of Things, and the real world we live in. Or in other words, the great convergence between the real and the digital realm. By the way, in our tenqyu nomenclature we have called this the ‘stroma’ since we launched ‘In Shadows’ in 2013.

Visual experiences can be aligned on a spectrum. On one end of the spectrum are real experiences. Kissing your wife goodbye before you go to the office, the bright sunshine on your skin, the vapor of a warm cup of tea on a cold winter day.

Right in the middle of the spectrum is Augmented Reality (AR), which leverages technologies like Bluetooth LE (iBeacons), IoT sensors, and GPS to extend the perception of reality.

© TechCrunch; source: https://tctechcrunch2011.files.wordpress.com/2016/07/3-circles.gif?w=600

Pokémon Go is the killer app here. On the hardware side, we have seen Microsoft’s HoloLens, Google Glass, and Snap Spectacles. In this part of the spectrum, the virtual and real worlds blur but remain noticeably distinct, and the virtual layer supports context in the real world.

At the far end of the spectrum, opposite real experiences, is virtual reality (VR), where every experience is a pixel. Google, Sony, Facebook’s Oculus, and several others are hyping virtual reality with various headsets. But amid this massive hype, virtual reality has been slow to take off, and in my mind for good reason.

Fast forward to Google I/O 2017. Google is now trying desperately to get designers, developers, and the rest of the world excited about its new VR products, and hopes to make us understand that VR and AR really sit at the same place on the spectrum. But I believe this view is very technology-driven.

Pokémon Go, the augmented reality game by Niantic Labs, a Google spin-off, was a success because it

  1. was exciting to play while walking around in physical spaces,
  2. did not obscure the field of view,
  3. overlaid the real world with the well-established world of Pokémon, and
  4. in the end created a real experience, in the touch-and-feel sense, for a large group of people.
The Vaporeon stampede in New York’s Central Park

Of course, there is a physical product Google wants us to get excited about. After initially dipping its toes into the technology with the cheap, hand-assembled viewer called Cardboard, Google is now working on a VR headset that does not require a phone or any other accessory. They claim you could use it anywhere and theoretically walk around with it, thanks to their new ‘Visual Positioning Service’ technology, which relies only on accelerometers and a camera. I guess this is probably a by-product of their autonomous-vehicle research.

Google hopes this technology could bring about the convergence of VR and AR and thus be the long-awaited iPhone moment for VR. It could be a way to create the feeling of immersion and flow outside of real-world experiences like playing football in the park, and outside of traditional video games.

The fallacy is this: there is a gap between reality and every representation of reality. But this gap is there for a reason, and it is important. It is where the representation of reality draws its power from. It is how we understand the world. If Grand Theft Auto had photorealistic artwork, going nuts in that environment would still be fun, but only for a short while. It’s the goals, the story line, the content, and most importantly the context that ultimately define ‘immersive’. Immersive is chasing your friends while playing In Shadows, and it was fun even without any technology. Technology should augment human experiences, not replace them with an artificially created version of the same thing. We should work to make reality better, not the artificial replica of it.

Neo is waking up from the Matrix © Warner Bros

Thank you for reading this far. I hope it was of interest to you.

This article was brought to you by tenqyu, a startup making urban living more fun, healthy, inclusive, and thriving using big data, machine learning, and LOTs of creativity.
