The big question in augmented reality

Jatin Arora
Published in Bullshit.IST
5 min read · Nov 17, 2016

This is the age of digital data. We are all dependent on digital data and can’t even imagine our lives without it.

Why?

Simply because it lets us do more, makes our lives easier, entertains us, and what not!

But are smartphones the best way to access digital data? Does your neck have to pay the price of reading this article, or can we do something better? Can we actually transcend the notion of “carrying a device in order to access digital data”?

Well, this illustration has a lot to say. For now, at least, we can try removing the painful text neck from the equation.

Going along the lines of physical, ubiquitous computing, the most natural way of accessing digital data (going digital) would be when all this information actually comes to the space where our bodies physically belong and we hardly have to do anything to access it. This is exactly the notion of augmented reality. Well, just now my iPhone flagged a spelling mistake in “augmented”, and again. Obviously it doesn’t like this new technology. Poor iPhone. Sigh.

Oh shit. I entered the wrong spelling. Corrected. Poor me.

Anyway, moving forward, the big question in augmented reality is: “Where (at what level) do you intercept reality and superimpose relevant digital data on it?” Different answers to this question lead to different approaches to implementing augmented reality. Let’s look at some of them and evaluate them on the following three parameters in order to determine the “optimum” one. At least for now.

  1. Completeness of the experience, as per the definition of augmented reality.
  2. Technological and business feasibility.
  3. Social acceptability (how intrusive it is).

To start with, let’s look at Pranav Mistry’s SixthSense augmented reality system. Pranav took the concept of superimposing digital data quite literally. His system uses a projector to project relevant digital imagery and video onto any physical object one can find.

Notice the projector on his head.

You can draw a circle on your wrist to see a virtual clock appear that shows you the exact time. You can use your palm as a keypad to dial a number and make a call.

Imagine: when you meet someone new at a party, highlights from his social networks are projected on his t-shirt. So you can know whether he likes Game of Thrones or how many Pokémon he has. Mind = blown!

One obvious problem with this approach is that all of us “share” a common physical world, and sometimes we do require our own “personal” (digital) space. For example, I just cannot use a wall at a (busy) metro station as a screen to watch an adult video stream projected from my SixthSense (though I might like doing it). Also, a little girl standing beside me might want to use the same wall (space) to make her Barbie look prettier.

And I don’t want to mess with her. Never!

Tech star Google was the first to tackle this problem by intercepting reality at a different, much more personal level, introducing its augmented reality system, Google Glass, in 2012.

However, the current implementation of Glass (the Explorer Edition) is far from providing a “complete” augmented reality experience. A small display is basically “glued” to the top-right of your field of view and presents you with relevant, context-aware digital information. But the digital world never really enters the physical world our bodies belong to, thus violating the founding principle of augmented reality.

Surprisingly, Microsoft did a better job here with the HoloLens. This augmented reality headset intercepts reality at the same level Glass does, and that is where the similarities end.

HoloLens uses holographic imaging technology to project three-dimensional images (called holograms) onto the transparent lenses through which the user sees the world around him. As the user moves around, the holograms are updated in real time to accommodate the changing perspective (something VR headsets have been doing for quite a long time now). Unlike Google Glass, the digital world is actually superimposed on our physical surroundings, justifying the “completeness” of the augmented reality experience.
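For the curious, here is a rough sketch of what “updating holograms in real time” boils down to. This is my own toy example (not Microsoft’s actual pipeline, and every function name here is made up for illustration): a hologram is anchored at a fixed point in the room, and every frame the headset uses the latest head pose to re-project that anchor into the wearer’s view, so the hologram appears to stay put while your head moves.

```python
# Toy sketch: re-projecting a world-anchored hologram as the head pose changes.
# Assumes a simple pinhole camera with +Z pointing forward from the head.
import numpy as np

def head_to_world_rotation(yaw):
    """Rotation of the head about the vertical (Y) axis, head-to-world."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[ c, 0.0,  s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0,  c]])

def project(point_world, head_position, yaw, focal_length=1.0):
    """Project a world-space hologram anchor into normalized image coordinates."""
    R = head_to_world_rotation(yaw)
    p_head = R.T @ (point_world - head_position)   # world point seen from the head
    if p_head[2] <= 0.0:                           # behind the user: not visible
        return None
    return (focal_length * p_head[0] / p_head[2],  # simple pinhole projection
            focal_length * p_head[1] / p_head[2])

# A hologram pinned one metre above the floor, two metres in front of the origin.
anchor = np.array([0.0, 1.0, 2.0])

# As the user steps sideways and turns slightly, the on-screen position shifts
# each frame so the hologram appears fixed in the room.
for frame, (pos, yaw) in enumerate([(np.array([0.0, 1.6, 0.0]),  0.0),
                                    (np.array([0.5, 1.6, 0.0]), -0.2)]):
    print(f"frame {frame}: hologram projected at {project(anchor, pos, yaw)}")
```

The real device obviously does far more (depth sensing, spatial mapping, stereo rendering for each eye), but the core trick is this per-frame re-projection against a tracked head pose.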

So, is HoloLens the perfect augmented reality solution? Of course not.

The problem with HoloLens is that it makes you look like a RoboCop-faced nerd. I think this is a problem with all the augmented/virtual reality headsets currently available (Oculus Rift, the star VR kid, is essentially a “box on your face”). They are so intrusive and totally unacceptable in a social space. I really don’t envision a future where humans walk around with an AR/VR helmet enveloping the entirety of their face.

What are you looking at? It’s the future!

Oh wait! We do have one solution. Just make my helmet disappear when you look at me “through” your helmet (just kidding).

The point is that the augmented reality ecosystem is just not mature enough to provide always-available “wearable” computing. Thus, at least for now, AR headsets can be used only for specific tasks. Microsoft, accepting this, is targeting industrial and enterprise computing markets (and not personal computing) with the HoloLens. A good move, MS!

So, do we have a solution? As of now, we don’t. But at the pace things are changing in the technology space, you never know; soon we might have a “real” augmented reality device that will “totally change how we live”.

In April 2016, tech giant Samsung was granted a patent for what it calls a smart contact lens. Basically, it has a tiny display that can project images straight into the user’s eye. This might well be the future of augmented reality, and also the first-ever contact lenses that can be used as “mini-bombs” #Note 7.

Anyway, how did Samsung do this? They intercepted reality at the “next” level.

Going by the trend, the next big thing in AR would be a device that sits inside your head and feeds the mixed-reality signals straight onto the optic nerve (the level after that), and we will turn into true “mixed humans”.
