The future of Augmented Reality and Virtual Reality: Alternative Reality

A vision over the next 10 years from a perspective of the usages

Greg Madison
Interfaces & Interactions

--

This interview was originally published in French at augmented-reality.com (Part 1, Part 2).

Can you briefly introduce yourself and tell us about your background?

I am a designer of innovative HMIs, a graphic designer, a set designer, and a stage director. I am in charge of everything related, directly or indirectly, to perception: materializing user interfaces, or creating new usage concepts and staging them to make them easier to understand. My career is quite unusual, yet in hindsight it was all mapped out so that I could do what I do today.

My father owned an electronics company, so I grew up surrounded by sophisticated devices and computers. But what really shaped me was receiving a magic set for my sixth birthday. From then on, I ardently wanted to become a professional illusionist.

To me, merging technology and magic has always seemed obvious and it gave me the urge to conceptualize devices capable of genuinely overlaying the digital world onto the real world.

In 2009, I presented my project “The 7th Sense” in the Human Machine Design category at the Microsoft Imagine Cup.

Among other things, this device offered the possibility of visualizing and interacting with alternative realities. In today’s terms, think of Google Glass crossed with the Kinect.

This concept took me to the world finals, where I was noticed and then hired by Gregory Renard, co-founder and CEO of Wygwam, Xbrainsoft and Xbrainlab.

I worked in the R&D department, especially on topics such as ambient intelligence and NUI, as well as on an artificial intelligence assistant along the lines of Siri and Google Now.

Since when have you been interested in augmented reality and how did you discover AR?

My real encounter with the term AR came from WAP’s lack of ergonomics! In 2003, I bought my first mobile phone with a digital camera. At that time, screens were not touch-sensitive; one had to be an adventurer, or completely desperate, to try to browse the internet with only a numeric keypad… I then came up with an idea: to use the phone itself as a mouse that could be moved through space to navigate content. Technologically speaking, it was not that simple, since phones had no accelerometer back then. I wanted to use the apparent motion of the pixels captured by the camera sensor to control scrolling on the screen, which would give the illusion of a magnifying glass being moved over a map.
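This pixel-scroll idea corresponds to what is now routinely done with optical flow or phase correlation. As a rough illustration of the principle (my own sketch, not the 2003 prototype, which was never implemented this way), here is how a global translation between two camera frames can be recovered in Python with NumPy:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the global (dy, dx) translation that takes `prev` to
    `curr`, via phase correlation on two grayscale frames."""
    # Normalized cross-power spectrum keeps only the phase difference,
    # whose inverse FFT is a sharp peak at the displacement.
    cross = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    # Unwrap shifts past the midpoint into negative displacements.
    return (int(dy) - h * (dy > h // 2), int(dx) - w * (dx > w // 2))

# Synthetic demo: shift a random "scene" by (3, -5) pixels and recover it.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
moved = np.roll(scene, shift=(3, -5), axis=(0, 1))
print(estimate_shift(scene, moved))  # (3, -5)
```

Feeding the recovered (dy, dx) into the on-screen cursor is all it takes to turn the phone’s camera into a motion controller.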

In the week that followed, pushing the idea a bit further, I imagined “BUFFET”: a concept where each GPS coordinate is a URL pointing to geolocated content, and where one has to physically move one’s mobile phone to access the information. Once in the area, the user would see the virtual buffet menu through the screen of his smartphone, and by dipping his device into “the dishes” he would access or retrieve the information.

For better understanding, here is an example of use. I am at a bus shelter and I launch BUFFET on my phone. Since the information is contextual to the GPS position and the real world, the BUFFET data around me belongs to the RATP (the Paris public transport operator). On the menu: the next bus schedules, fares, games, entertaining media to pass the waiting time, etc. The purpose was to provide the user with a service or contextual information without endless manual input, and above all to anticipate their needs.
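At its heart, the concept is a mapping from GPS coordinates to content, resolved by proximity. As a toy sketch of that lookup (the registry entries, provider names and 100 m radius are my own illustrative assumptions, not part of the original concept):

```python
import math

# Hypothetical registry: (latitude, longitude) -> geolocated "menu" payload.
REGISTRY = {
    (48.8566, 2.3522): {"provider": "RATP", "menu": ["next buses", "fares", "games"]},
    (48.8606, 2.3376): {"provider": "Louvre", "menu": ["opening hours", "tickets"]},
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def buffet_lookup(lat, lon, radius_m=100):
    """Return every payload registered within radius_m of the user."""
    return [payload for (plat, plon), payload in REGISTRY.items()
            if haversine_m(lat, lon, plat, plon) <= radius_m]

print(buffet_lookup(48.8566, 2.3522))
```

Standing at the bus shelter, the query returns only the RATP menu; the same query from another neighbourhood returns that neighbourhood’s content, with no typing at all.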

While researching on the internet to find out whether a solution similar to BUFFET already existed, I came across ARToolKit and the definition of augmented reality. That keyword was what I had been missing, and that day my life reached a turning point, because I could finally access documentation on the subject. I finally had words and examples to communicate my ideas.

Can you give us your own definition of this concept?

It may sound surprising to some of you, but I must admit that the term “Augmented Reality” annoys me greatly. If you listen to the marketing people, everything is augmented reality now… I could give a long list of examples, but it would only show that it has become a gadget that generates buzz, used to sell cereal or to drive clicks on technology blogs with catchy titles. It is, by the way, thanks to them that the general public associates Google Glass with an AR device. So obviously it sounds to me like a trick that does not keep its promises.

A reality that we cannot touch, can it still be called reality?

If one wants to be more precise, the term we should use is “Augmented Perception”, since through a tablet, a smartphone, etc., it is our perception of the world that is augmented, not its reality. I think this distinction is very important; it helps us consider a new paradigm closer to what augmented reality is in the collective imagination, and this is what I personally call “Alternative Reality”.

So if I had to define the term “Augmented Perception” I would say:

An artificial augmentation of the senses that enables access to different degrees of visualization of one’s environment.

On your LinkedIn profile, you are identified as an innovative HMI designer. Can you explain this to us?

My past experience as an illusionist gives me a unique capacity for observing and understanding how people interact with the real and virtual worlds. Because, to a certain extent, magic is already augmented reality or virtual reality! I have learned to decipher what unconsciously governs human behavior, and this is now what I use to nourish my creations. While computing shrinks to the point of no longer being visible, and the insatiable need for immediacy leads us to dematerialize physical media, there comes a time when humans must act and interact with this invisible, pervasive computing, this digital dust. Creating the vectors that recompose this information, or creating an anchor as a tangible link between the real world and its computer model, is precisely my job. And this is why I am an innovative HMI designer: my creative process is not based on the technologies on the market, but on a personal intuition of the future.

For example, when I created the Pandora’s Box, I based myself on the inner certainty that tomorrow our brain, helped by nanotechnology, will be the graphics card that generates the images of the interfaces surrounding us. It already fulfills this function perfectly in our dreams while we sleep! So, whether by this means or another, the screen is doomed to disappear! Yet it is important to have unifying items that can be identified and manipulated. And it is not because we lack this “mental projection capacity” today that we cannot materialize its use. For now, the Pandora’s Box works with cleverly hidden video projectors for the video mapping, Kinects to calibrate the projections and trigger the users’ scenarios, NFC to make inert objects communicate with various devices, and so on.

Pandora’s Box — New forms of interaction with pervasive computing

For the user it is almost transparent; he could almost feel he has superpowers. But tomorrow, when everything is in place and all visible and invisible sensors are synchronized, everyone will benefit from these augmented capacities as if they were innate abilities. I call this ecosystem:

The Egosystem

I know that interfaces and pervasive computing are a subject that fascinates you. How do you think we will interact with our environment in the future?

For AR to take off, we clearly need suitable devices that let us manipulate these alternative realities the way we handle the mother reality. But above all, one thing is obvious beyond the devices: in order to materialize all future uses, there needs to be a common ground, a common grid… a matrix!

This missing link will be the conductor that attunes the digital and the real. By constantly maintaining a digital copy of space, time, objects, and the living, we will naturally have a reference grid where object-oriented programming is intertwined with real and augmented attributes. By unifying all the data generated by sensors, cameras, satellites, drones, smartphones, connected devices, and everything else, we will bring about a singularity of the Earth, providing Gaia with an omniscient digital consciousness.

It is logical and inevitable. Whether for driverless cars to move, for drones to fly, for robots evolving among us, or even to create an internet of things out of inanimate objects: rather than recalculating the environment for each entity, mutualisation will be more efficient and less resource-demanding, while providing an accurate model for everyone.

There already exist initiatives in this direction, such as Kinect@Home or the European project RoboEarth, which aims at pooling the robots’ various knowledge in the cloud. Google Maps, Bing Maps, Nokia Maps and the French UBICK are digitizing our environment. We already have plenty of sensors that could transcribe any of our actions, not to mention the amazing number of technologies developed by universities and research labs that the general public does not even suspect: 3D tracking of people and objects with simple surveillance cameras, automatic 3D reconstruction by photogrammetry from movies or pictures, prediction algorithms that indicate where future crimes are likely to take place and even where one will be in 24 hours, laser vision by reverberation that can film the inside of a room in 3D without being in it, and so on and so forth. Not to mention the countless microphones around us, able to capture and interpret what we say.

Today, it is humanly impossible to store all this data, let alone process it. Great futurists such as Ray Kurzweil, relying on an exponential curve of technological progress, predict that in a few years storage will no longer be a problem. As for the data processing, one can reasonably count on an artificial intelligence that will have exceeded, by far, the cognitive abilities of Man and will be able to put everything in order… in short: the Ordinator! To finish with this A.I.: it will be able to reconstruct our past environment by anastylosis and, by learning from our behavioral models, will be able to fill the gaps in our past behaviors and simulate our likely immediate future.

So, to simplify: this awakening world that I call the Egosystem will be a real-time 3D copy of our world. It will be the perfect mesh of our environment, and if we consider this 3D mesh like that of a video game, we then understand the incredible reach of the tool. It enables an unlimited range of possibilities, since everything becomes programmable and configurable!

For example, at breakfast, the simple cup in front of me will have its own programmable digital layer. I could change its color depending on the temperature of its contents. While taking my morning coffee, I could display information around the cup’s periphery, such as the day’s weather forecast… always useful to know before going to work. One can also imagine rich, cross-cutting interactions: turning the cup on the table like a potentiometer, for example, to raise or lower the music volume in the room… And it is of course through augmented reality devices that I would see and interact with these different states of the object. In short, with the Egosystem, seeing through walls, having the gift of ubiquity, looking at a picture of one’s relatives to instantly communicate with them, and so on, would boil down to a few lines of code!
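To make the cup example concrete, one can picture each object’s digital layer as a small set of programmable bindings between sensed state and augmented attributes. This is purely a toy sketch of the idea; the class, its API, and the threshold values are my own illustrative inventions:

```python
# A toy sketch (hypothetical API) of the "digital layer" idea: each real
# object gets a programmable twin whose properties are bound to rules.

class DigitalLayer:
    def __init__(self, name):
        self.name = name
        self.state = {}   # sensed attributes of the real object
        self.rules = []   # (attribute, function) bindings

    def bind(self, attribute, rule):
        """Bind a derived attribute to a rule over the sensed state."""
        self.rules.append((attribute, rule))

    def sense(self, **readings):
        """Update the sensed state and recompute every bound attribute."""
        self.state.update(readings)
        return {attr: rule(self.state) for attr, rule in self.rules}

cup = DigitalLayer("coffee cup")
# The cup's color follows the temperature of its contents.
cup.bind("color", lambda s: "red" if s.get("temp_c", 20) > 60 else "blue")
# Rotating the cup on the table acts as a volume potentiometer (0-100).
cup.bind("volume", lambda s: max(0, min(100, s.get("rotation_deg", 0) * 100 // 360)))

print(cup.sense(temp_c=75, rotation_deg=90))  # {'color': 'red', 'volume': 25}
```

Every new behavior is one more `bind` call — which is exactly the “few lines of code” promise above.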

How do you see yourself evolving in your career in the near future?

My main goal today is to be hired by one of the most coveted companies in the world: Google X. They already possess all the technologies necessary to set up the Egosystem, and they put every resource at the disposal of the inventors of the future. In addition, all my other concepts seem to be in perfect harmony with what will be, in my opinion, their next technological innovations. On the other hand, I must admit that Palmer Luckey’s vision of the evolution of virtual reality beyond video games also makes me really eager to work for OculusVR.

What changes in usage have you noticed in recent years?

Strangely, it is neither the technology nor even the usage that has evolved tremendously; it is rather the expectations of the geek public. Ordinary people find what is on offer “entertaining”, but paradoxically they do not really understand it and do not see the point of this “gadget” in their everyday lives… On the other hand, when you listen to digital natives, they formulate relevant requirements. They would like to see useful things for medicine, tourism, shopping… Even if it is simplistic to think of the device only as a contextual information display, they already have a vision of, and a desire for, what it could bring them. I also think, by the way, that it is video games that allow them to project themselves so easily into the usage.

What would be the AR experience of your dreams ?

My dreams are all the thoughts I have put forward throughout this interview. Yet the idea I have of how AR usage will evolve, informed by my technology watch, indicates that we already hold the technologies for these to be more than dreams. Maybe this will become reality, maybe not. The key is to outline new perspectives that will naturally impose new questions and new thinking, and that will lead to disruptive innovation.

In your opinion, where will we find augmented reality in our world in 2033?

We will find it in the history books! We will talk about Augmented Reality as the spark that set the stage for the falsification of reality for everyone. But as I am an optimist, here is what I believe will happen well before 2033.

In the weeks and months to come, the first devices such as Google Glass will make first-person HowTo/DIY content explode, along with e-learning. By giving everyone the ability to simply capture what they do best while commenting on their actions, users will flood platforms such as YouTube with knowledge of all kinds. From how grandma makes her secret apple tart to how to change a superconducting electromagnet in the particle accelerator at CERN, everything will be covered. Dedicated media websites will bring order to this flood of knowledge, and in parallel MOOCs will be rethought around these new prostheses, to add the layer of interactivity that Augmented Reality enables. For example, learning the piano with a pre-recorded teacher who illuminates the keys of your real piano to show you the proper positioning of your fingers…

Like Ashton Kutcher on Twitter in 2009, the most connected stars will offer others the chance to live their lives vicariously. This movement will be followed by ordinary people, who will likewise share their slices of life and their travels. The effect will be to democratize the wearing of these devices, which until then had been used with restraint due to privacy issues.

TV shows will integrate connected-glasses technology into their programs, although more for the video aspect than for the augmented reality aspect. Presenters will use them, for example, to share asides, to interview somebody, to show behind the scenes, to convey sensations during a test, or simply to film everything they do with their hands. We will also likely have programs based on citizen journalism, capitalizing on the fact that we will all be potential drones closest to the event. Finally, it would not be surprising to see, for example, reality TV games appear in the United States where viewers follow in real time, and help, teams of bounty hunters whose mission is to intercept a real criminal within an allotted time. The audience will become actors too: they will be able to zap between the different participants, choosing the best view of the action. Vicarious adrenaline, which will generally not cost much to produce.

On the same principle, during historical events, you will be able to choose through which SmartGlass user present on location you want to see the scene. But beyond capture and live broadcasts, it is when rewinding temporal archives that the prospects become exciting. Augmented Reality will enable “time travel”. The indexing and processing of geo-timecoded video sources will make the reconstruction of 3D scenes possible and allow us to physically navigate through past events. Take Obama’s inauguration speech in Washington in 2009: we would be able to go there to live and relive this historic moment facing him on the spot, or standing with him on the platform. Later, we will be able to model the missing parts of our past and add simulations of the protagonists of the time. Students will follow their teacher onto the beaches of Normandy on June 6th, 1944 to witness the landings, and you will visit Notre-Dame de Paris on December 2nd, 1804 to attend the coronation of Napoleon.

Brands will understand the real value of AR and will think of their consumer products in augmented versions. We will no longer buy a product just for what it is, but for what it proposes to do. A bottle of water of a certain brand may track the amount of fluid you have drunk during the day so that you stay adequately hydrated, while another brand in the canned-food sector “projects” onto the container a chef who offers to prepare the contents of the can with what you have left in the fridge.

Users of AR devices will hear foreigners equipped with the same devices translated into their own language, in real time.

Microsoft Imagine CUP 2009 — Greg Madison

Thanks to the META augmented reality glasses, to the version of the Oculus Rift equipped with sensors that capture the real environment, and to all the headsets and glasses arriving on the market, coupled with a smartphone, we will possess autonomous solutions for moving and interacting in a mixed augmented reality. All business applications will be redesigned for virtual and alternative realities, and we will see an explosion of geogaming.

Like Ingress today, the world will become the reference grid for video games. For example, you will be able to challenge athletes around the world to a 100 m race on your usual running track, stroll through the streets of 18th-century Paris to solve mysteries, or why not escape a zombie invasion in a park.

As for MMORPGs, we will superimpose the virtual world on our physical environment in order to evolve within it. In this world, we will have to face real challenges. Imagine yourself trying to find the spear of Odin while physically confronting the far North, or actually braving the tumult of a stormy sea to reach the island of Vulcano in Italy to gain Ifrit’s ultimate fire incantation. Depending on the selected game, the architecture of your city will be replaced in your view by, for example, steampunk buildings or elven shops. Players will put on their avatars like costumes, and only subscribers will be able to see them.

As artificial intelligence makes rapid progress, we will see the birth of holobots. Indeed, even if robots eventually become part of our daily lives, for obvious reasons of cost and utility it is comfort holobots that will populate our world first. Through our augmented reality devices, we will be able to converse in natural language with these digital agents, which will be our personal valets. They will be the natural extension of Siri and Google Now, fully integrated into our real environment. These holographic humanoid confidants, generated in the cloud, will have the mission of helping us in our everyday tasks, advising us, and memorizing and then recalling all the information we might need.

Because of the need to capture the slightest movements in order to finely control interactions with Augmented Reality, and with ubiquitous computing in general, a new general-public device will be launched: “The Orb”. Strategically positioned in a room, this device will act partly as a Kinect on steroids, with 360° volumetric 3D vision, and partly as a “T-ray scanner”, allowing it to capture information through certain materials.

Durable-goods manufacturers will create objects with pre-programmed behaviors: a fruit basket that tells you which seasonal fruits to buy, or that checks that you have indeed eaten your five recommended daily portions of fruit and alerts you if not; a car that gives you the impression of being transparent once you are inside. You will be able to speak directly to objects’ “souls”: ask your fridge not to open its door to your child between meals, or ask your washing machine for advice on its programs.

To make “teleportation” possible, we will create merge spaces: areas built identically, where the objects and the people present in them overlap as layers. This will enable brainstorming sessions in firms, presentations in a meeting room with participants located all around the globe, or someone having his desk in an open space while staying at home.

Merged gardens, parks and public places will enable a British person to stroll with a French person, or a Portuguese and a Brazilian to chat on a bench… while each is physically in his own country. At school, students will share their classroom and work with other nationalities. At home, a family living apart will be able to gather continually in a dedicated area of the living room or around the dining table, and children will play in their rooms with their best friends as if side by side.

In more than 5 years

The birth of the Egosystem will be imminent. Following the same pattern as the Internet and mobile networks, the infrastructure providing access to it will roll out gradually, covering dense areas first. The era of Augmented Reality will come to an end, and we will then talk about Alternative Realities. They will be based on:

The mother reality, which is the original matrix.

The common reality, which is the first overlay of Alternative Reality, shared by everyone and managed by an organization such as ICANN.

The personal reality, which is the sphere of private projection and which takes precedence over all the others.

This layered system will be the means by which everyone can experience their own definition of reality while staying connected to the rest of the world.
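The precedence rule among the three layers can be pictured as a simple chained lookup: the personal layer shadows the common overlay, which in turn shadows the mother (physical) reality. A toy sketch in Python, where the object ids and renderings are invented purely for illustration:

```python
from collections import ChainMap

# Each reality layer maps an object's id to how it should be rendered.
mother   = {"facade_12": "stone wall", "bench_3": "wooden bench"}
common   = {"facade_12": "info panel"}   # shared overlay, ICANN-style managed
personal = {"bench_3": "elven throne"}   # private projection, highest precedence

# ChainMap looks layers up left to right, encoding the precedence rule:
# personal overrides common, which overrides mother.
view = ChainMap(personal, common, mother)

print(view["facade_12"])  # info panel   (common overrides mother)
print(view["bench_3"])    # elven throne (personal overrides all)
```

Switching games or “TV channels” of reality then amounts to swapping the personal mapping at the front of the chain.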

Like the non-player characters (NPCs) in video games, service holobots will be part of our everyday environment. They will be tour guides, receptionists, sports coaches… available to everyone. They will have their own personalities, giving us the impression of conversing with beings endowed with reason. Service holobots may also serve as the bodily envelope of certain robots, like a kind of holographic skin.

Nanotechnology will allow us to consider viewing Augmented Reality without wearing glasses. The Egosystem will then be able to read your mind, statistically. By capturing every single one of your actions, by analyzing what attracts your attention and what your eyes focus on, the computer will guess and determine what you are thinking about, what your interests, beliefs, sexual orientation and political views are… It will even know in advance where you will go and what you will be thinking about, which will make time travel to the future possible (statistically, as well).

There will be a wide range of new jobs linked to perception. Alternative Reality designers will shape overlays on our environment, with the freedom to create new physical laws, new features, and new interactions with the real world. These realities, accessible via application stores, will let you buy packs of new materials and textures for architecture, soundscapes and special effects, as well as new capabilities. In the city, it will be up to you to walk on sparkling sand or on a flowerbed that blossoms as you walk, to be guided by a cloud of fireflies, to watch snow slowly “fall” in reverse, or to be surrounded by a musical symphony that adapts to where you are. We will change our personal reality the way we change TV channels.

In more than 10 years

A third type of holobot will be developed: residual holobots. These avatars are the sum of all the digital traces left by beings, living and dead. We will begin by developing the residual holobots of the living. They will be your Doppelgänger, capable of simulating your behavior, reactions, actions and words in every respect. They will give you the power of ubiquity in space and in time: it is this ersatz of you that we will address when we travel through time, and it will have the level of knowledge you had at time T.

Then will logically follow the transformation of residual holobots of the living into residual holobots of the deceased, which will allow us to keep conversing with the dearly departed. Finally, we will digitally resurrect personalities of other ages, such as Einstein, Victor Hugo, etc.

Technological innovations in the Egosystem’s sensors will virtually stimulate your senses (kinesthetic sensation, taste, smell…) and will allow you to connect to it by thought.

In the short to medium term, all of this will prompt great philosophical and legal reflection. Indeed, in a world where absolute illusion holds sway over the existing, could we still consider simulation as anything other than reality?

Should we ban from app stores augmentations able to mark with a distinctive sign people of a specific ethnicity, or with particular religious or political beliefs, even if all of this stays within the realm of personal reality? What about digital drugs, which will simulate all the real effects of drugs on your senses but without any chemical molecules: will they be illegal in that case? And won’t regulation generate a black market, and so on?

Anything else?

I do not think people realize how profound an impact augmented and alternative realities will have on society. I would like to draw attention to the dark side, which will cause considerable collateral damage and can be summarized in one sentence:

There will be no more room for lies and mediocrity.

This will force industrialists to be extremely rigorous, or very good communicators! Through collaborative programs, negative consumer feedback will affect the representation of our world. Death’s-head moths could be superimposed on every cereal box containing GMOs or hydrogenated fats; clothing made by children could be lit by a blood-red halo…

Of course, such systems already exist in the form of ShopWise, TripAdvisor, etc., but today we do not always think of taking out our smartphones when buying something or choosing whether to go to a restaurant or a shop… Tomorrow, equipped with our glasses, the comments of our fellow citizens will be virtually written across shop facades, and on a human scale, each individual will carry a kind of visible banner displaying for all to see the history of his life, his youthful errors and his resume.

That is why the exercise of putting the future into perspective is so important. If we do not anticipate these usages today, many people will suffer from the memory of the web, and many companies will have to ask themselves: die? Adapt? Or be forced into the endless, exhausting race of “hyperformance” to remain attractive?

I would like to thank Olivier Schimpf and Grégory Maubon for their questions, Charles Lescaut for the English translation, and Yann Mihoubi and Eglantine Lahay for the proofreading. I hope there are no major mistranslations of the technical terms; thank you for letting me know if there are.

I will be in Silicon Valley in May and June to improve my English and meet people passionate about these topics. Please feel free to contact me: madyvision{àt}gmail{d0t}com

--

Greg Madison
Interfaces & Interactions

Designer of innovative HCI / interaction designer, illusionist magician. Works at Unity. www.gregmadison.co @GregMadison http://fr.linkedin.com/in/gregmadison