Like many of us, I first got swept up in the hype around AR (I’ll just use AR as a blanket term for the other variants, MR and XR) about 3 years ago, with the news that Microsoft and Magic Leap were going to release revolutionary devices with “holographic” displays capable of putting virtual experiences into the world around us. Many of the initial concepts and demos can be roughly divided into 3 main categories:
- AR as medium for games and interactive stories
- AR to put 2D displays into our 3D world
- AR for spatial collaboration
In category #1, AR is thought of as a spatial medium for entertainment content. The form of that content was derived from older non-spatial mediums such as films and video games — examples include being immersed in “holographic” stories that take place in the user’s immediate surroundings, or “holographic” games that are rendered and composited in real time over the real world around the user. It soon became evident that experiencing content in one’s living room may seem cool at first, but the novelty is quickly offset by the lack of artistic cohesion — the virtual story or gameplay loses impact when drawn against the backdrop of the user’s very normal and very messy living room. Both film and game directors employ Production Designers to create a coherent visual world as a backdrop to their experiences. By putting the experience into the “real” world, the cognitive mismatch makes it hard for the audience to remain immersed in the fiction.
Category #2 saw AR as a way to put 2D screens into the 3D world around us, replacing monitors and advertising billboards with virtual ones. While the virtualization of digital screens for work seems inevitable and beneficial to the environment, there is no doubt that someone will also attempt to build an AR platform for advertising as it is the logical extension of the ad-based economy of the internet. As demonstrated by every Sci-Fi movie’s depiction of the near future, surrounding humanity with virtual advertising screens is possibly the most annoying (and potentially dangerous) use of AR ever conceived, and will no doubt lead to AR Spam Blockers being invented immediately afterwards.
Category #3 saw AR as a way to bring people and things together, allowing them to participate in activities irrespective of distance. AR for spatial collaboration will be incredibly useful, and things like “holoportation” hold a lot of promise — many different types of human interaction will be more efficient with this kind of technology. Additionally, spatial collaboration and holoportation may one day make a positive impact on carbon reduction, making it slightly less necessary to physically commute to work every day.
This article is really about a 4th path and how my team and I are thinking about AR. Conceptually, the context for any AR application cannot be internal to the experience — it is not like running a spreadsheet app within a window within a desktop within a monitor. The context for any AR application is the user’s external world, which is likely to be very chaotic, very dynamic and very real. So instead of asking what kinds of content we want to create for the user, we started out by asking what the user will want to do in their world. In other words, we see ourselves as tool makers and our AR platform as a way for people to use both virtual versions of real-world tools and AR tools that have not yet been imagined. To be clear, the kinds of AR tools we’re imagining are not like software tools … they’re more like physical tools, each with its own shape and form, each designed to achieve a specific result. However, instead of being made of atoms, these AR tools will be made of pixels, polygons and code.
Humans have been crafting tools since the dawn of time — tools to help us hunt more efficiently in the times before history; tools to help us grow and tend crops more efficiently after the Agricultural Revolution; tools to help us make things faster and cheaper during the Industrial Revolution; and intangible software tools to help us communicate and innovate faster in our current Information Age. Can you imagine how powerful AR tools can be? Unshackled from the constraints of real-world atoms, these AR tools will enable us to do things that no mere physical tool can — they will be able to harness the full power of the net, take advantage of the accelerating power of A.I. and Machine Learning, as well as a million other capabilities not yet imagined. However, unlike pure software, these AR tools will have shape and form designed to be intuitively useful to almost anyone in the 3D world in which we all live — as intuitive as picking up a hammer. 3D, in the context of how we think about AR tools, is not simply an optical property — it also refers to the way users will interact with them. Just as the computer mouse wheel taught us how to scroll through web pages with our fingers, humans will have to learn new ways to interact with these virtual tools with their bodies. With advances in computer vision, hardware sensors and complicated trackers are already becoming obsolete, giving way to purely optical body tracking for interacting with virtual objects. There is no doubt that the software and hardware for true AR is just around the corner, so the real question becomes: what kind of AR tools do we want to make for the world?
Back in the prehistory of my own life, I had a chance to spend the last semester of high-school at the Ontario Science Centre Science School (OSCSS) in Toronto, Canada. One never truly appreciates anything except in hindsight, and it was at the OSCSS where I first glimpsed how learning can be so much more fun than just absorbing and regurgitating information. The Science Centre embodied what we now call “Active Learning” — every exhibit under its roof demanded to be played with and touched. Learning wasn’t simply about ingesting information — learning was the result of self-discovery via experimentation and observation.
Learning was the result of playing.
Fast forward a few decades and we are now at the dawn of AR. The virtual experiences that people like me spent our careers imagining for video games and films can now “exist” in the real world, unleashed from 2D screens. We have been quietly building and experimenting for the last 12 months, focusing on architecting our AR platform and developing demos that show what our vision for AR is — but we’ve barely scratched the surface. Eventually, when the AR devices of our dreams become as ubiquitous as smartphones, we want to help unleash into the world an amazing assortment of AR tools that inspire and educate, to help spark the imaginations and intellects of the next generation. Perhaps, instead of just aiming to produce an Internet of Things (IoT) to be consumed, we should be aiming to create an Internet of Virtuous Tools (IoVT) that will contribute to elevating human capacity.
Won’t you join us?