Image from Magic Leap

Reality, Upgraded

How upcoming technology from companies like Magic Leap and Microsoft could augment our lives

Jordan Palmer

--

If software is “eating the world”, then augmented reality (AR) is making how we experience it a whole lot more ‘edible’.

By placing digital graphics over reality, it’s reinventing gaming, marketing, education, and many other industries — but the true explosion of AR isn’t waiting for a killer app: It’s waiting for a killer experience.

For a summary of AR’s history, aspirations, and concerns, see this article.

The tech of today

Most present-day AR happens through smartphones, but they can be tedious and restrictive: One often needs to download and navigate to an app, then carefully point the device at an object to view AR content. The screen is only a fraction of our visual field, limiting user engagement.

Image from IKEA’s AR furniture preview app on YouTube

Google Glass, on the other hand, is ‘always on’ (assuming the wearer doesn’t feel too goofy or awkward). This hands-free interface offers many useful consumer and enterprise apps, involving object recognition, navigation, voice control, a point-of-view camera, etc. But, being only a small screen at the corner of one eye’s view, it’s very limited in making the digital seem real.

Image from a navigation demo of Google Glass on YouTube

Things may never take off with these technologies alone, but they are crucial steps toward a thriving AR ecosystem. The industry is now in its necessary ‘beta’: the clunky stage in which makers can wonder, experiment, and standardize, all while growing a library of content.

Most of the hardware designed for AR comes in the form of glasses (e.g., Meta, ODG, Atheer, Epson, Sony). Other wearable forms include smart helmets for work or motorcycling, and even ski goggles. Much of the technology targets enterprise applications in industries like healthcare and manufacturing, but falling costs are beginning to bring that value to consumers. Though most glasses are still bulky, cover a limited field of view, and have trouble simulating realistic objects and movement, they are starting to break into stereoscopic video and audio.

Image from Augmented Reality Company

Google Cardboard, which piggybacks on smartphones and is primarily used for virtual reality, is beginning to see AR apps like this game demo. Since the upgrade cost is almost nothing for a smartphone owner, this AR approach may grow, especially if phones gain dual wide-angle cameras for 3D peripheral views. But it still requires having a box on your face.

The next wave of tech

Devices must become cheaper, sleeker, and more engaging for AR adoption to surge. Two big companies seem to be working on this: Microsoft, with its recently announced ‘HoloLens’, and Magic Leap, which has raised almost $600 million in funding in a round led by Google.

From the previews, it looks like these two firms might be the first to trick the brain into believing the AR it’s seeing (and hearing) is real.

Their devices use 3D sensors to see the wearer’s physical surroundings, as well as eyeball and head tracking to adjust the focus and perspective of virtual objects.

Microsoft’s HoloLens projects light onto two holographically printed lenses (the inner lenses in the picture below), and into users’ eyes, to achieve depth. Magic Leap’s tech also uses light projection in the lenses, while each lens has a built-in occlusion mask to selectively block real-world light where virtual objects appear. This may allow opaque graphics, hopefully without hiding the wearer’s eyes from the people around them. It’s unclear whether this is how HoloLens’ outer visor works, but simulations do appear semi-opaque.
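
A loose way to picture the occlusion-mask idea is simple per-pixel blending: the mask attenuates incoming real-world light wherever virtual content should look solid, and the projected graphics add on top. The sketch below is purely conceptual and assumes idealized, aligned images; the real optics and calibration are far more involved.

    import numpy as np

    def composite(real, virtual, mask):
        # real:    light from the physical scene, H x W x 3, values 0..1
        # virtual: light projected for the virtual content, same shape
        # mask:    per-pixel occlusion, 0 = fully see-through, 1 = fully blocked
        see_through = real * (1.0 - mask[..., None])   # real light the mask lets in
        return np.clip(see_through + virtual, 0.0, 1.0)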

Microsoft’s HoloLens has been dubbed a ‘nerd helmet’ for its large wrap-around shape, and may be primarily worn indoors at first — for work and entertainment.

Microsoft’s HoloLens

The prototype at Microsoft’s private demo event on Jan 21st wasn’t quite the wireless headset pictured above: It involved a small but heavy computer with sensors hanging around the neck.

While glasses are the form factor Magic Leap’s patent suggests, diagrams also show a waist pack attached by wire, likely for a battery and computer — though the product may completely fit into wireless glasses, once released.

Image from Magic Leap’s patent sketch

Shrinking hardware may one day allow smaller forms like contact lenses. For now, let’s hope Magic Leap goes for a foldable design to encourage mobile use.

3D audio is another key piece for believable AR. Microsoft’s HoloLens uses ‘spatial sound’ in the headband, letting users hear virtual surround sound while staying aware of audio in their physical environment. It will be interesting to see if Magic Leap and others take an outside-the-ear approach. Active noise cancellation, involving in-ear or ear-isolating headphones, would allow fully immersive audio. Combined with 3D microphones and smart listening algorithms, it may even let apps selectively remove or alter real-world sounds. Once AR headsets shrink in size and are worn for much more than just work and entertainment, the audio component may look and act like even smaller versions of these wireless, waterproof, health/activity tracking earbuds.

After sight and sound, touch may be next in line for AR. Haptic feedback could be integrated through gloves, vests, air vortex rings, or, eventually, directly at the nerve level. But touch isn’t the only haptic sensation: Pressure, heat, cold, and pain are also up for grabs. Simulating a virtual object’s ‘weight’, however, could still be years off. Meanwhile, technologies in smell (e.g., Vapor Communications, AromaJoin) and taste replication might bring these two senses into AR sooner than expected.

The integration of various sensors will allow a much wider range of applications. Some may initially be borrowed wirelessly from smartphones and watches, like GPS or heart rate tracking. But many will gradually be built in as AR headsets become stand-alone mobile computers.

Software-wise, artificial intelligence (AI) such as machine learning can greatly enhance the experience, drawing on things like object and face recognition, user interests, preferences, and moods, and eventually our thoughts.

Even just with sight and sound, many hardware and software challenges exist in creating engaging and believable AR for the user:

  • Getting device size and power use down so the wearer forgets about it.
  • Knowing the origin, strength, and colour temperature of environmental light, to create realistic virtual objects and shadows.
  • Simulating light reflections in darkness, from bright virtual objects.
  • Recognizing reflective surfaces and displaying graphics accordingly.
  • Creating a sense of shared view amid highly personalized experiences.
  • Understanding a room’s physical space so virtual objects know where they can move or rest — even in spots invisible from the glasses’ angle. HoloLens is said to use Kinect’s sensors, and Magic Leap is using long-range 3D depth sensors — perhaps integrating Google’s Project Tango (which can quickly map environments, objects, and faces in 3D). Other tech may help, like active acoustic location, special radar, and AI.
  • Predicting and simulating object characteristics: If a virtual hammer is dropped on a real couch, it should make the right sound and bounce (a rough sketch of this idea follows the list).
  • Adapting to dynamic surroundings: If someone plays HoloLens ‘Minecraft’ in their living room and builds a village on a coffee table, and later replaces or removes that coffee table, then gameplay should adjust.
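
As a toy illustration of that hammer-on-couch simulation point, here is a minimal one-dimensional sketch of a virtual object dropped onto a mapped surface. The surface height and ‘bounciness’ stand in for values a real system would pull from its room scan and material estimates.

    GRAVITY = -9.81      # m/s^2
    DT = 1.0 / 60.0      # one 60 Hz frame

    def simulate_drop(start_height, surface_height=0.45, restitution=0.3, frames=240):
        # surface_height: top of the scanned couch (metres); restitution: how much
        # energy the estimated material returns on impact. Both are assumed values.
        y, vy = start_height, 0.0
        for _ in range(frames):
            vy += GRAVITY * DT
            y += vy * DT
            if y <= surface_height and vy < 0:
                y = surface_height
                vy = -vy * restitution   # bounce; a real app would also play a thud here
        return y

    print(round(simulate_drop(1.2), 2))   # settles near the couch surface (~0.45 m)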

Merging augmented and virtual reality

…not that there’s anything wrong with reality.

While AR adds layers onto the real world, virtual reality (VR) immerses the user in a fully simulated, digital space. But the line between them is starting to blur.

Over time, VR headsets like the Oculus Rift will shrink in size, and some already have external cameras or sensors to map the 3D space around the user — bridging the physical world into the virtual, like this game does.

AR headsets may also gradually cover full peripheral vision with opaque graphics, making VR possible anywhere, anytime, like switching to a tropical beach while riding the bus. The visual frame of the HoloLens covers approximately an ‘iPad at half an arm’s length’, while Magic Leap isn’t disclosing how much of a wearer’s peripheral vision its prototype will fill.
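
For rough intuition, that ‘iPad at half an arm’s length’ description can be turned into an angle with basic trigonometry. The iPad width and viewing distance below are my own guesses rather than official numbers, so treat the result as a ballpark only.

    import math

    ipad_width_m = 0.20          # roughly a full-size iPad held in landscape
    viewing_distance_m = 0.30    # roughly half an arm's length

    fov = 2 * math.atan(ipad_width_m / (2 * viewing_distance_m))
    print(f"~{math.degrees(fov):.0f} degrees horizontal")   # ~37 degrees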

Many big players in either industry are bound to explore a combination of AR and VR, and a handful of terms aim to merge the two. Mixed reality (AKA ‘hybrid reality’) might best define the AR-VR spectrum, while related terms include alternate reality, artificial reality, infinite reality, and interreality.

Some companies in the AR space have adopted both new and existing names for their offerings:

  • Magic Leap has coined and trademarked ‘Cinematic Reality’, which may inspire those developing content for it.
  • Microsoft is popularizing ‘holographic computing’, describing it as your digital world ‘blended with your real world’.
  • HP uses ‘blended reality’ in an AR context, for enterprise apps like object imaging and 3D printing.

‘Multi world’ might also marry the concepts of augmented and virtual worlds, along with all the ways, personal or shared, to experience either.

Curiously, when the ‘A’ from Augmented and ‘V’ from Virtual are layered together, they form an M and W on top of each other:

Creating a logo for ‘Multi world’ (e.g., multi world reality, multi world headsets, multi world gaming). Not to be confused with ‘mirror world’, which represents the real world in digital form.

I digress… but is there a good verb for using ‘mixed reality’? A few might help with the spectrum from AR to VR (e.g., augmenting > mixing/blending > immersing). For example, “Why is Fred so excited about that wall?” … “Oh, he’s probably blending.”

Life and apps

Unlike VR, it may be hard to guess when someone’s even using AR (except for the hardware, at first). But sometimes it may be painfully obvious…

The whole ‘dancing with headphones in public’ thing is about to blow up…

Images from various sources

…and get infinitely weirder.

The applications are never-ending, ranging from silly to life-changing. New tech will enhance existing AR, while opening new worlds of possibilities. Below is a small sample of what we might experience.

Tools and data

Everyday consumer AR applications will feel like superpowers:

  • Text applications might include spell-check (or spell-fix), translation, Comic Sans font replacement, and my favourite: keyword find, i.e., ‘Find on Page’ for everyday life, to highlight or alert a user to specific text (a rough sketch follows this list).
  • Measuring and comparing objects and distances will be quick and easy.
  • Education will see endless benefits. For example, chemistry students could visualize molecules or atoms at 100 million times their size, on top of textbooks or materials/reactions.
  • Apps could change our mental states with graphics to help achieve flow during tasks, visual biofeedback for brain-sensing wearables like Muse, or serenity-inducing sounds and visuals.
  • While driving a car, get heads-up info on speed, directions, road conditions/patterns, etc. …So much for becoming a rally car co-pilot!
  • Many machines and devices may no longer require dashboards, if the data can be communicated through AR headsets.
  • Imagine live object/place/concept recognition, like an instant version of Amazon’s Firefly.
  • Food labelling may become obsolete. Instead of the simplistic and binary ‘No GMO’ or ‘Organic’ labels, just look at a product’s scan code or packaging to learn more: how each ingredient is made, environmental or health effects, whether it fits into diet goals, etc.
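
To make the ‘Find on Page for everyday life’ idea from the list above concrete, here is a minimal sketch. It assumes the headset’s text recognition already supplies regions of recognized text with bounding boxes in the wearer’s view; the TextRegion type and the sample data are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class TextRegion:
        text: str
        box: tuple   # (x, y, width, height) in the wearer's field of view

    def find_in_view(regions, keyword):
        # Return the boxes to highlight for any recognized text containing the keyword.
        keyword = keyword.lower()
        return [r.box for r in regions if keyword in r.text.lower()]

    # e.g. highlight every label mentioning "peanut" while scanning a grocery shelf
    regions = [TextRegion("Contains peanuts and soy", (120, 80, 200, 24))]
    print(find_in_view(regions, "peanut"))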

Statistics can also be gathered from the world:

  • Nutritional tracking apps could seamlessly record consumption based on food type and quantity recognition, without tedious input. Notifications may include ‘You might be low in Vitamin A. Have some carrots.’
  • How long was your longest pee? Was Sunday morning the biggest volume on record? How hydrated are you? Finally, urination gets gamification.

Social

AR will provide new and enhanced ways for humans to share and interact:

  • With face recognition, you could look over at someone in a coffee shop and see bits of their public info pop up over their head, like your mutual contacts or interests, or their most recent tweet.
  • Emotion recognition (which computers are getting quite good at) could help anyone, especially people on the autism spectrum, with social cues from the people around them.
  • Instead of simply sending a text or pic to a friend, why not leave a private, virtual note hanging in the air, just for them, at the coffee shop you both frequent? Location-based communication apps like Yik Yak can also become so much more visual.
  • Imagine seeing a stranger immersed in some kind of entertainment. Your glasses recognize it as an open AR experience for download, so you load it up and comment on it with that person.
  • Social media sharing will gain in depth, quantity, and bandwidth. What if someone could easily create and post a point-of-view GIF/video of a fiancé’s proposal, with a caption and their visual heart rate throughout?
  • Surprise experiences, from pranks like “Come try my new (fake) chair” to going on a hike with a buddy and letting them surprise you with AR content they stumbled upon the last time they were there, which is still a mystery to you. Apps may also aim to surprise users with unpredictable content, without a friend’s involvement.

With the subtlety of AR glasses, and technology like face and emotion recognition, privacy and ethical issues will only grow in importance — which this satirical piece on using AR to get laid alludes to.

Marketing

This area will continue to see big changes from the evolution of AR:

  • Truly relevant and contextual ads could become a hallmark of AR — tailored to individuals and their locations, activities, reactions, etc.
  • It may be possible to replace, blur, or even hide existing ads (analog or digital) — based on learned preferences.
  • Brands will have a lot more space to work with when convincing in-store shoppers to buy, using things like book review pop-ups, coupons from cereal box mascots, games, and store transformations.
  • IKEA has used smartphones to let shoppers preview furniture in their home. Upcoming AR could make this type of thing even easier. Soon we may only need a mirror, or a connected camera, to ‘try on’ fashion.
  • If buildings can be previewed on location, at scale, using AR — or elsewhere, using VR — then selling people on architectural models will no longer involve these situations:
From Zoolander clip

Phobia therapy

For a fear of heights, a VR approach might make sense. Other fears such as public speaking or social anxiety could be handled with both VR and AR. Arachnophobia is already being tackled using touchscreen AR:

From AugmentedStories.com

If this is effective, imagine what head-mounted AR could bring to the table. Combining the sense of touch through physical objects (e.g., rubber spiders) or future haptic tech could reduce fears even more quickly.

Storytelling and gaming

Movie-like experiences will get smarter and bring an assortment of new viewing options:

  • Watch movies or TV shows anywhere — on virtual screens that look like drive-in theatres, or just a few yards in front of you while out for a jog.
  • See location-specific films where you must keep up with the action.
  • Try mini-experiences for certain location types, like oceanside: Picture lying on a beach when a giant sea monster in the distance starts thrashing, creating waves, and getting closer.

“Who hasn’t wondered what a monster apocalypse might be like in person?”

  • Purely audio experiences might also be fun, like funny/scary passenger announcements from the ‘captain’ while on a ferry, or strange animal sounds while sitting around a campfire or walking through the woods.
  • Choose-your-own-adventures could be more personal and complex.
  • 360 video — which YouTube plans to support — will let us view in any direction. It’s up to filmmakers to guide us where to look (and move).

Games are becoming a huge storytelling medium, and AR is already providing new and innovative experiences. But imagine what developers will come up with when they aren’t limited by traditional screens:

  • Environments can be created for users to play real-world laser tag and rid the streets of zombies. Glance at virtual objects like wristwatches or guns for info on vitals, ammunition, etc.
  • Video and board games may gradually merge into Jumanji-like experiences, with help from things like object recognition. Minecraft is one example of the many possibilities for AR gaming in the living room:
Image from Microsoft’s Minecraft-style game demo

Digitization tech can make toy clean-up time instant, and allow several users to build play setups in the same room without necessarily getting in each other’s way. It can also cut out manufacturing and shipping costs, both monetary and environmental.

  • AR can also increase our skills by adding gamification to virtually any activity; mundane tasks such as chopping vegetables, or staying in the lanes while driving, can be given a little ‘edge’.

This futuristic short film includes a few of AR’s potential games and activity gamification (in addition to some of its unethical uses):

Art

AR can change how artists and appreciators create and consume:

  • Preview art on your walls without holding up a phone or tablet.
  • Have a subscription for the art displayed in your environment, based on favourite artists/styles/works, time of day, setting, audience, mood, etc. Or secretly ‘replace’ the art in a friend’s house if you don’t like theirs.
  • View public art without time limitations: Imagine flipping through past art installations with your hand. Graffiti could be created digitally — letting many artists use the same wall without spray paint or clean-up costs — and appreciated, based on popularity, recency, preference, etc.
  • Intuitively create massive digital sculptures in small spaces by zooming in to add details. Or, create smaller sculptures to 3D print for the home.
  • Paint or sculpt without material costs — using gestures, brushes, or anything. Eventually, using thought alone, the vision in your head could be created and shared in seconds, like a fluid rendering of the imagination into physical space — which this video touches on:

Sports and fitness

Athletes and sports fans will be able to do so much more with AR:

  • Have a pick-up basketball game without the ball, hoops, or court. With only flat ground, AR can make this possible using things like simulated physics and haptic feedback.
  • Watch your form in real time (or through instant replays) from different camera angles. Fix a yoga posture’s alignment without bending your neck toward a mirror. Snowboard with a drone following and recording you, streaming live video into your view like a classic third-person video game perspective of yourself. How quickly will users adapt to this, or take advantage of it?
  • Learn from the best. Using another snowboarding example, imagine riding alongside a ‘ghost’ of your own fastest time down a groomed run, or behind a pro rider’s past run of jumps and tricks in the terrain park.
  • Make a bicycle spin class look and sound like a ride on a mountain road.
  • Go see a football game in the stadium and see replays right on the field instead of on the big screen or your smartphone. Rewind, pause, and use slow-mo at your own discretion, then share your clips with friends.
  • Watch a soccer game in another stadium, as if it were happening there.
  • Put a mini 3D ice rink on the living room coffee table or floor, and see the entire hockey game at all times. Normal camera angles, including new GoPro helmet footage, could display on the entire wall, or in VR-like immersion. Include surround sound from an arena seat too.

Fantasy

Playful, weird, and awe-inspiring customizations are just around the corner:

  • Visualize all of our solar system’s planets between Earth and the moon — which apparently is just enough space for all of them, combined, to fit.

“Oh look — Jupiter’s having quite the storm today.”

Image from Ron Miller

What about a different sun — or moons and spaceships from sci-fi flicks? Real constellations could also be highlighted, similar to the SkyView app. While you’re at it, try out a (spoiler alert) Melancholia simulation.

  • Be a polar bear for the day, make everybody else one, or see how others dress up their ‘digital selves’. Think Second Life, for real life.
  • Enjoy a white Christmas… in Hawaii.
  • Make your house (or entire world) one giant fish tank.
  • Have a virtual pet dog.
Image from Microsoft

Microsoft shows us how far virtual pets will have come since the days of the Tamagotchi and the Giga Pet. What about pet monsters, or an army of minions to follow you around?

  • Add Instagram-style filters to your vision. I’ve always wanted black and white sunglasses, and future AR glasses should make it possible.
  • Take digital hallucinogens that interact with objects in your gaze.
  • Play a soundtrack for life that automatically fits your mood, activities, and surroundings.
  • Got a bully? ‘Crush their head’ from afar, in all-too-real simulations.
  • Look or jump in a ‘vortex’ and experience a new or altered environment.

Making the invisible visible

AR will drastically increase our visual reach:

  • Parts of walls could be see-through, with external, connected cameras — upgrading front door peepholes (not the smart ones), and home security. Windows and skylights could be digitally added. Don’t like your view? Subscribe to one from the Mediterranean instead. In a car, let sunshine through the ceiling, or see past doors and seats during a shoulder check.
  • Like smartphones, AR headsets bring a range of new ways to assist the visually impaired. Google Glass has applications taking advantage of things like voice commands and object/text recognition, as well as crowdsourced location-based commentaries. Hearing-impaired users will also benefit from apps like real-time closed captioning.
  • Eyes on our fingers may one day be a thing. Google has patented a system involving gloves with cameras on the index and middle fingertips, to see 3D, even zoomable, video — which could stream to AR glasses. Think about this next time you’re fishing under a couch for that thing you dropped, which should’ve stopped rolling/bouncing near your feet.
  • Eyes on the back of our heads (e.g., using cameras on the back of the glasses, or the fingertip cameras above) could let us switch to ‘rear-view mirror’ or ‘picture-in-picture’ mode anytime. Taking it a step further: why not see if we can adapt to full panoramic (or even spherical) vision? Experiments using goggles have shown that our brains can get used to upside-down views, and even correct severe distortions.
  • Zoom capabilities — perhaps using a ‘squint’ gesture — could be features of future AR glasses or peripheral snap-on cameras.

Telepresence

One of the more exciting telepresence apps from Microsoft’s HoloLens demo was exploring Mars using data from Curiosity — dramatized here:

Image from Microsoft

Microsoft also anticipates new Skype capabilities, like hands-free lessons:

Images from Microsoft

Imagine going on a date with a drone that looks like a real person, thanks to AR glasses. That person is real, only they’re 500 miles away, doing the same thing as you. Drone ‘avatars’ could have two cameras spaced an eye-width apart for capturing and sending 3D video, taking Skype to a whole new level. The couple may decide whose city to explore, while the other goes to a wide-open field so they can ‘walk around in’ their date’s environment safely.

In the workplace, design or brainstorm sessions could happen without everyone being in the same room, not to mention with virtual whiteboards, models, sticky notes, etc., and no tedious computer entry afterwards.

Magic Leap envisions guided yoga sessions with virtual people — all in their respective homes, potentially:

Image from Magic Leap’s US Patent Application 20150016777

Aside from traditional telepresence, services like Eterni.me, which seek to create digital copies of people for use after their death, could become much more engaging with the help of modern AR — not unlike Superman getting advice from a hologram of his dead father in Man of Steel. These self-digitizations could also be used well before death. Imagine your friends borrowing your ‘digital’ self for a party you can’t attend. Later, you get a recap of what ‘you’ said and did. This could still just be a temporary step until we can actually interact in multiple places at once, using things like avatars and nanobot brain-computer interfaces.

Computing

This one’s a doozie.

Described by its founder as “…computing for the next 30–40 years,” Magic Leap has big ambitions for what its tech can bring. Microsoft is also highlighting a suite of apps the ‘era of holographic computing’ will allow.

Minority Report starts to look primitive…

If everything’s virtual, your workspace can follow you around. In the morning, stay in bed for your peak creative time with an office between you and the ceiling. Later, take a walk on your ‘treadmill desk’ (the beach).

Maybe you’ll want to sit down and consider the options:

Image from Magic Leap’s US Patent Application 20150016777

Even the TV may be replaced by this highly personalized experience. There’s no more need to fight over channels to watch, or seats with the best viewing angles.

Working from a couch, coffee shop, bus, or desk will come with far more freedom and privacy: multiple ‘screens’, each as big as you want, all of which are invisible to others.

The ‘desktop’ metaphor can come full circle, with 3D virtual objects on a real desk.

Physical keyboards may be the last surviving peripheral here (supported mainly by current BlackBerry users) until good alternatives come along, like a virtual keyboard ‘tied’ to the movement of the hands, haptic feedback, (good) dictation, lip reading, mind reading, etc.

Once head-mounted AR becomes common, it may be the default device to control and interact with the Internet of Things. It will allow controls to be more intuitive and less fiddly, compared to smartphones and tablets. For instance, you could crank up the thermostat with a finger twist gesture at the wall until you see ‘76 degrees’, or get a pop-up video from inside your oven, with the message, “I think the yams are done. Shut off heat? Y/N”.
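
As a sketch of how that kind of control might be wired up, the snippet below maps a recognized gesture, plus the device being gazed at, to a smart-home action. The gesture names, the toy device registry, and the return values are all hypothetical; real IoT APIs and gesture recognizers will differ.

    THERMOSTAT_STEP = 1   # degrees per detected 'finger twist'

    def on_gesture(gesture, gazed_device, home):
        # gesture and gazed_device are assumed outputs of the headset's gesture
        # recognizer and gaze tracker; 'home' is a toy device registry.
        if gesture == "finger_twist_clockwise" and gazed_device == "thermostat":
            home["thermostat"] += THERMOSTAT_STEP    # overlay would now read e.g. '76 degrees'
            return "show_thermostat_overlay"
        if gesture == "air_tap" and gazed_device == "oven":
            return "show_oven_camera_feed"           # pop-up video from inside the oven
        return None

    home_state = {"thermostat": 75}
    on_gesture("finger_twist_clockwise", "thermostat", home_state)
    print(home_state["thermostat"])   # 76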

Computer interaction will be a whole new playground, introducing a vast number of design challenges, ranging from gestures to icon placement. For example, certain situations may call for user interfaces on the wrist, fingertips, or in the space around you, at different sizes and depths. Magic Leap’s patent sketches show us a bit of their thinking here:

Images from Magic Leap’s US Patent Application 20150016777

What about getting input from hands when they’re out of view of the headset’s sensors? Magic Leap has a patent for a ‘tactile glove’ to control things like robots (e.g., avatars, drones, machinery) with simple finger-rub gestures. Third-party devices like Myo’s armband may also work well for this, and Microsoft’s Kinect will likely integrate for use in the living room.

Others are experimenting with different ways for users to interact in real space. For example, SEER’s AR headset uses a mouse cursor in centre view along with a ‘jaw clench’ gesture to select objects.

Companies like Emotiv are creating brain-sensing EEG headwear, capable of discerning basic thought commands like push, pull, levitate, rotate, and disappear — or even facial expressions like blink, wink, surprise, and smile. This kind of tech can make AR controls astonishingly smooth, in addition to bringing emotion/reaction-based intelligence to gaming, advertising, etc. It may even allow camera-less video chats, with the help of 3D face mapping.
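
As a sketch of how basic thought commands might drive an AR scene, the snippet below maps detected command labels to actions on whatever object the wearer is focused on. The detection feed, the confidence scores, and the object’s methods are assumptions for illustration, not Emotiv’s actual SDK.

    class FocusedObject:
        # Stand-in for whatever virtual object the wearer is currently focused on.
        def move(self, dx=0.0, dy=0.0, dz=0.0): print("move", dx, dy, dz)
        def spin(self, degrees): print("spin", degrees)
        def hide(self): print("hide")

    ACTIONS = {
        "push":      lambda obj: obj.move(dz=+0.1),
        "pull":      lambda obj: obj.move(dz=-0.1),
        "levitate":  lambda obj: obj.move(dy=+0.1),
        "rotate":    lambda obj: obj.spin(degrees=15),
        "disappear": lambda obj: obj.hide(),
    }

    def handle_mental_command(command, confidence, focused, threshold=0.8):
        # Only act on confident detections to keep the interface from feeling twitchy.
        action = ACTIONS.get(command)
        if action and confidence >= threshold and focused is not None:
            action(focused)

    handle_mental_command("rotate", 0.92, FocusedObject())   # prints: spin 15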

‘Totems’, another concept illustrated in Magic Leap’s patent, allow digital interactions through tangible objects:

Images from Magic Leap’s US Patent Application 20150016777

Most totems may be fully analog, but some may need digital functions and wireless connections, like a joystick that needs to work even when it can’t be seen by the headset’s 3D sensors/cameras.

If you find yourself procrastinating on social media, create a ‘distraction cube’ (see FIG. 43b above) and place it in a spot that’s hard to get to. Upon finishing a big task, reward yourself with a cube session. Eyeball-tracking could let you explore social media without any clicks, taps, or gestures.
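
One simple way to get click-free browsing from eyeball tracking is dwell time: if the gaze rests on the same item long enough, treat it as a selection. The sketch below assumes a gaze feed that reports, once per frame, which item is currently being looked at.

    import time

    DWELL_SECONDS = 0.8   # how long a gaze must rest on an item to 'select' it

    class DwellSelector:
        def __init__(self):
            self.current = None
            self.since = 0.0

        def update(self, gazed_item, now=None):
            # gazed_item comes from the (assumed) eye tracker, once per frame.
            now = time.monotonic() if now is None else now
            if gazed_item != self.current:
                self.current, self.since = gazed_item, now
                return None
            if gazed_item is not None and now - self.since >= DWELL_SECONDS:
                self.since = now          # reset so it doesn't re-fire every frame
                return gazed_item         # selected with no click, tap, or gesture
            return None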

These computing interfaces may become so intuitive that even dogs could use them. It sounds silly, but imagine some of the things that special AR glasses for pet dogs might one day allow:

  • A way back home when they’re lost
  • The ability to guess their thoughts/needs and predict behaviour
  • New tricks and more advanced communication through visual cues
  • Augmented or hybrid (e.g., with a real ball) games for dogs to choose from
  • More environmental info, like a direct visual line to a ball in a field (when a human points at it), or pictures of what they might be smelling
  • Safety measures, like virtual fences that could prevent them from running out in front of a car

Animals aside, it will be interesting to see how humans and computers evolve with this new medium. If Theodore in the movie Her had AR glasses, his computer’s operating system, ‘Samantha’, could have appeared like a real person to him. Maybe she’d have requested a drone to see the world from her own perspective, even though AI in the movie did seem content interacting on the internet at lightning speeds. With advanced AR, humans too will have a quicker, more seamless connection to the internet, giving us more information, control, and mental capacity than ever before.

If head-mounted AR has the potential to replace smartphones, computers, and TVs, where is Apple in this space? Presently, its iPhones and iPads host much of today’s AR with various iOS apps and developers. The Apple Watch will include features both complementary to AR (motion and heart rate tracking, vibration feedback, tap input) and overlapping with it (hands-free directions, bite-size info/notifications). It’s a great device for Apple to make users comfortable with wearable computers/sensors, get developers thinking (e.g., minimalist interfaces), and boost the functionality of any AR headset it may one day release.

But how will Apple answer the anticipated products coming from Google (Magic Leap’s primary investor) and Microsoft? Though incredibly secretive, and never early to market in a new product category, Apple clearly sees potential in this area, having acquired PrimeSense, along with its host of patents in head-mounted AR and 3D imaging, in 2013 for $345 million. Add a massive customer base and $178 billion in cash to place bets: Apple shouldn’t be discounted here.

Public adoption

Apart from the quality and appeal of upcoming AR tech, many other factors can influence how society embraces it:

  • The practice of etiquette standards, such as consideration of the ideas in Google Glass’ Do’s and Don’ts list
  • Social acceptance of both cyborg-y wear, and the trend toward a new equilibrium of decreased privacy
  • Usage of friendly language — for example, Explorer instead of Glasshole, and other phrases such as Magic Leap’s Flutterpod (a name for virtual flying objects) and The Magic Shop (an experience/app store)
  • Collaboration on, and adherence to, standards among companies building AR hardware or software, allowing more device-agnostic experiences and a much larger network effect

Onward

We will soon have magical ways to personalize and interact with the world around us, not to mention get things done. By driving the need for increased engagement and connectedness, next-generation AR will be a catalyst for new wearables, drones, cameras, sensors, and apps.

This new tech doesn’t have to be synonymous with information overload and desensitization; it depends on how we choose to use it, and whether elegant interfaces can be designed. Some see AR as a chance to remove clutter from our realities, especially once it begins to learn and adapt to what individuals may consider useful vs. noise.

Will AR make people closed off to the world and others in it? For some, at times, sure — but it has potential to increase public interaction, by removing the ‘shields’ of smartphones, tablets, newspapers, etc., and giving strangers interesting icebreakers to share and discuss.

Security and privacy issues need to be considered early on, taking into account gaze-tracking, the subtlety of always-on cameras, automated object/face recognition, multiple inputs/outputs/apps running or sharing data simultaneously, as well as the potential for visual/audio manipulation, hacks into users’ physical surroundings, police surveillance, etc. That being said, the sensors, cameras, and apps in AR systems also have great potential to improve things like identity authentication and safety of the wearer or people around them.

As with other disruptive tech, there can be downsides, but additional thought now can help ensure the good outweighs the bad, mitigating harm and encouraging magic leaps of faith that realize AR’s full potential sooner.

Let’s go manifest this new reality!
