A simple and effective Design Thinking visual tool for designers to evaluate and envision immersive experiences of all kinds.
The Covid-19 pandemic has changed the way people interact and enjoy spaces. This is true for almost all dimensions of daily life (homes, stores, doctors’ offices), cultural life (schools, museums, shows), and, above all, the work environment.
Every day, designers face new scenarios that require in-depth exploration. Unique needs have arisen that are not yet fully addressed.
Companies, for example, need better and deeper integration between people who work at home and those who share the office space. I call this scenario Hybrid Inclusivity.
Hybrid inclusivity is the way to prevent remote workers from being discriminated against because of the difficulty of collaborating and interacting with co-workers. It is only possible if remote workers are credibly represented in the office space. Not as a cartoon but as a person. Human-sized, like any other participant. No audio lag. No glitches.
I believe that hybrid inclusivity is potentially a true revolution, one that would democratize immersive technologies and digitally enriched experiential spaces: a desirable future for many.
But what does it take to realize it?
Imagination for good
We do not need to invent new sci-fi technologies. Existing technologies allow us to create solutions that can get us very close to hybrid inclusivity and other scenarios to improve people’s lives.
The real disruptive tool is imagination, and what we need is a set of instruments and procedures that allows us to use it to the fullest.
The phrase tech for good emphasizes using technology to make life better. Instead, my motto is Imagination for good: imagining desirable futures, then designing and implementing them.
In my daily work as Director of Exponential Technologies at NTT Disruption, I am part of a team developing a product that will revolutionize the integration of physical and digital spaces. There was a strong need to identify possible scenarios where new technologies or new implementations of existing or emerging technologies will help improve people’s lives.
For this reason, I created the 7 Axis Experience Mapping Tool. It is a tool we use daily to navigate the universe of immersive experiences of all kinds.
The 7AEMT is part of the design thinking process. It is an activator of conversations, a generator of visions that can help us imagine and design desirable futures. This is why it makes sense to me to distribute it under a creative commons license. I hope it can be as helpful to you as it was to me.
Exploring the 7Axis Experience Mapping Tool
It’s a radar chart. A radar allows us to see what is usually invisible to the naked eye. It uses 7 parameters, each represented by an axis. Each axis has three levels, with infinitely many intermediate values possible between one level and the next.
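To make that structure concrete, here is a minimal sketch of how a 7AEMT profile could be represented in code. The axis names come from the sections that follow; the `make_profile` helper, its range checks, and the example scores are my own illustrative assumptions, not part of the tool itself.

```python
# A minimal sketch of a 7AEMT profile: seven axes, each scored from
# 0 (center) to 3 (outer level), with intermediate values allowed.
# Axis names follow the article; the helper and checks are illustrative.

AXES = [
    "Immersivity",    # I. Visual, II. Walkable, III. Multisensorial
    "Nature",         # I. Digital-only, II. Digital Interactive, III. Phygital
    "Duration",       # I. Minutes, II. Hours, III. Days
    "Spatiality",     # I. Room, II. Building/City, III. Planet/Universe
    "Co-presence",    # I. Solitary, II. Small Groups, III. Massive Multiplayer
    "Investment",     # reversed axis: III is the *cheapest* band
    "Replicability",  # I. Hard, II. Complex, III. Easy to edit or replicate
]

def make_profile(scores: dict) -> dict:
    """Validate that every axis is present and scored within [0, 3]."""
    missing = set(AXES) - set(scores)
    if missing:
        raise ValueError(f"missing axes: {missing}")
    for axis, value in scores.items():
        if not 0 <= value <= 3:
            raise ValueError(f"{axis} must be in [0, 3], got {value}")
    # Return the axes in canonical order so profiles plot consistently.
    return {axis: scores[axis] for axis in AXES}

# Example: a rough mapping of a museum exhibition (values are my guesses).
museum = make_profile({
    "Immersivity": 3, "Nature": 1, "Duration": 2, "Spatiality": 2,
    "Co-presence": 2, "Investment": 1, "Replicability": 1.5,
})
```

A profile like this can then be handed to any radar-chart plotting routine; the point of the sketch is only that an experience's "shape" is just seven bounded numbers.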
Let’s go through them one by one.
I. Visual, II. Walkable, III. Multisensorial
The first axis indicates how involved the user is. Starting from the center, the intensity increases gradually.
The simplest level of engagement is Visual: playing a video game or watching a movie mainly involves the user’s sight. All information comes through that channel.
The Walkable level places the experience within a space: think of an exhibition in a museum, for example. Augmented and Virtual Reality experiences can involve movement too.
The most intense level of involvement is, of course, the Multisensory level, in which the user, immersed in a space, is involved in the experience through multiple senses. Think of the increasingly popular experiences in which people are invited to interact by gesture or touch.
I. Digital-only, II. Digital Interactive, III. Phygital
This axis is about the nature of the experience we are examining. There are no degrees of intensity; it is about how the experience is consumed.
Digital Only is an experience enjoyed passively through digital media. A video game, by contrast, must be classified as Digital Interactive. Experiences that mix the physical and digital worlds fall under the Phygital label.
I. Minutes, II. Hours, III. Days
How long does the type of experience we are considering last (or how long could it theoretically last)? We can categorize experiences based on their duration. Business meetings should not last for days. A coffee break should not last for hours. Visitors’ expectations for a movie, an exhibition, or a dinner range over a few hours, and so on.
The apparent straightforwardness of these values should not make us think that they are any less critical. In many contexts, duration is paramount. If we’re considering a business scenario for meetings, for example, we may find that a technology that allows for experiences lasting only a few minutes may not be suitable for our purpose, so we need to consider an alternative.
I. Room, II. Building/City, III. Planet/Universe
Video games based on Open World game mechanics can have a spatial extent as large as the Universe (or multiple universes). A business application for immersive video conferencing is typically no larger than the size of a Room.
Disney’s Star Wars: Galactic Starcruiser theme park is an immersive experience inside a giant Building.
I. Solitary, II. Small Groups, III. Massive Multiplayer
The co-presence axis evaluates the number of participants expected for a given experience. The range is from solo experiences to multiplayer games that are limited only by the world population.
Of course, I’m not just talking about participants who enjoy the experience by sharing the same physical space. This parameter also concerns the potential for hybrid co-presence.
I. > 1MM to serve the first customer, II. 99K–1MM to serve the first customer, III. < 99K to serve the first customer
How much money does it take for the experience we are analyzing, or imagining, to be enjoyed by the first customer or user?
Cost, here, does not represent a “Cheaper is better” kind of metric or vice versa but serves to frame the economic impact that creating a certain type of experience can have.
* Note: the Investment axis is reversed to preserve the radar’s visual integrity and readability.
I. Hard to edit or replicate, II. Complex to edit or replicate, III. Easy to edit or replicate
Some experiences are challenging to replicate.
Building another Star Wars: Galactic Starcruiser in another part of the world involves the construction of a new building, deploying technological equipment, training of new personnel, etc. Not exactly a copy and paste!
In other cases, replication is easier but still nontrivial. Think of an immersive art exhibition like the one dedicated to Van Gogh [https://vangoghexpo.com]. This exhibition is perpetually on tour around the world, but even here, replicating it is not always child’s play (even if it is infinitely easier than replicating Disney’s Galactic Starcruiser): there are problems of transportation, adaptation to different spaces, localization, and so on.
Finally, there are immersive experiences that are much easier to replicate. So easy that you just need to download something from the cloud (Spoiler alert: I’m working on just such a kind of technology).
Disclaimer: 7AEMT does not claim to quantitatively measure and compare experiences, nor does it claim to be sufficient to automatically materialize any desirable future. 7AEMT is a radar: it is used to identify directions. And 7AEMT is a canvas: it supports design thinking and, in some lucky situations, stimulates the imagination. For example, the Replicability axis might look very qualitative because “hard” or “easy” is relative, and that’s precisely the point. Hard or easy should be defined while practicing 7AEMT, according to the specific qualities, skill sets, and resources of the actor involved.
Let’s play with the 7 Axis Experience Mapping Tool
This tool allows us to identify the “shape” of an experience, set by specific limits established by the technologies we use, the enablers, and the context.
And it is this “shape” the experience can take that will allow us to decide whether it is good for our purpose or not. Let’s now use our radar to intercept some typical scenarios.
Pictures of an exhibition: from a Museum to Oculus
An exhibit can be many different things. A visit to a museum, for example, is by definition a multi-layered experience.
First, there is the Museum, which is often a majestic building, an architectural masterpiece.
Then there’s the curatorship, the content, and, of course, the visitors, including the one standing next to you.
As you walk through the exhibit, you can feel the physical presence of the masterpieces on display along with that of the people looking at them. You can converse with another visitor, exchange views, discover that your soul mate is in front of you.
All the senses are involved, which is part of the staging in some cases.
As for the duration, this kind of experience should never be too fast: you want to stay for hours, not minutes. You probably wouldn’t like to stay inside an exhibit for days, either.
If we decide to digitize this experience, we will be able to expand it in many ways by adding more objects or more content. For example, visitors to the digital version could watch hundreds of additional videos, by curators, artists, commentators, previous visitors, etc. There are no limits.
The digital space that houses the exhibition could be a wonderful castle floating on clouds or any other fantastic building. However, you will not feel the place, the exhibit, your presence, your walking through the spaces, the smells, as you would in the physical world.
Wearing the most immersive consumer gadget available, the Oculus Quest 2, you will see things as if you were there, inside the castle above the clouds. At the same time, you’ll see others as avatars made of pixels.
Walking the virtual hallways with your special one while wearing a head-mounted display may not be the same as doing so in the physical world.
A digital exhibit may be available to you 24/7/365. No wait time, no commuting, no standing in line. However, the duration of any Oculus visit can hardly be hours; more likely, we are in the 30-minute range. Head-mounted displays are typically tolerated only for short-duration sessions.
These are some key differences between an online visit to a virtual exhibit and a walking visit to a physical exhibit.
Let’s now try to synthesize what we’ve said about these experiences and visualize their shapes through our 7AEMT radar.
This is a generic exhibition mapping.
Example: Immersive Van Gogh exhibition. San Francisco, CA
Oculus Virtual Exhibit
This is a generic Oculus game mapping.
Example: Star Wars: Tales from the Galaxy’s Edge
Leveraging 7AEMT to imagine desirable futures
Now we will use our radar in reverse, that is, not to analyze an existing experience, but to imagine the ideal experience for a business context.
Here I can refer to real cases we have been working on and take them as a credible example. According to these preliminary applications, the ideal experience for a business context should:
- Be multi-sensory (high immersivity).
- Integrate physical space with digital content.
- Be able to last hours (not minutes, but not days either).
- Fit a room, a conference room, or an innovation center; it usually doesn’t need to be any larger than that.
- Be absolutely capable of accommodating hybrid groups, supporting co-presence and collaboration.
- Cost as little as possible.
- Be easily editable and repeatable.
Plotting these requirements on our radar, we get the following shape:
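As a rough sketch, the requirements above can be encoded as axis scores and compared against another experience’s shape to see where it falls short. All the numbers below are my own illustrative reading of the bullet list, not the author’s values; note that because the Investment axis is reversed, “cost as little as possible” maps to the highest score.

```python
# The business requirements expressed as 7AEMT scores (0-3 per axis).
# All values are illustrative assumptions.
business = {
    "Immersivity": 3,    # multi-sensory
    "Nature": 3,         # phygital: physical space + digital content
    "Duration": 2,       # hours, not minutes or days
    "Spatiality": 1,     # a room is enough
    "Co-presence": 2,    # hybrid small groups
    "Investment": 3,     # reversed axis: cheapest band scores highest
    "Replicability": 3,  # easily editable and repeatable
}

# A rough, assumed mapping of an Oculus virtual exhibit for comparison.
oculus_exhibit = {
    "Immersivity": 2, "Nature": 2, "Duration": 1, "Spatiality": 2,
    "Co-presence": 2, "Investment": 3, "Replicability": 3,
}

def uncovered(target: dict, candidate: dict) -> list:
    """Axes where the candidate's shape falls short of the target's."""
    return [a for a in target if candidate.get(a, 0) < target[a]]

print(uncovered(business, oculus_exhibit))
# -> ['Immersivity', 'Nature', 'Duration']
```

Overlaying the two shapes on the radar makes the same gap visible at a glance: the candidate experience simply does not reach the target on those axes.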
As you can see by comparing this chart to the forms obtained by analyzing the museum exhibit and the virtual exhibit with Oculus, our business experience covers areas that other experiences do not.
Below, two more charts depict a video game and an amusement park. I’ve overlaid them on top of our business experience so you can see what different requirements are at play.
Videogame / Living Room TV
This is a generic video game mapping. Example: No Man’s Sky for PlayStation
This is a generic amusement park mapping. Example: Disney’s Star Wars: Galaxy’s Edge.
This allows us to make our needs visible and trigger conversations that, iteration after iteration, will allow the project to become more and more concrete: from a potential future to reality.
This example illustrates very well the usefulness of 7AEMT in the design process. That’s what the radar is for: it’s a Design Thinking visual tool to intercept desirable futures.
In 1991, in a memorable book (Computers as Theatre), Brenda Laurel presented a new theory of human-computer interaction. Building on Aristotle’s analysis of the form and structure of drama, Laurel showed how similar principles can help us understand what people experience when interfacing with computers.
Brenda Laurel’s theatrical metaphors have greatly inspired our work.
With the design team, we conceived a digital device that surrounds and defines a physical space: just like a theater, this space surrounds people and allows interaction between them, sometimes magical: imagine a speaker talking to an audience surrounded by the presentation.
As the presentation progresses from one slide to another, the previous slides don’t disappear but move to occupy a space on the room’s walls, which are actually digital displays or projections. The presentation’s ideas are always available and, if someone wants to review one of the previous slides, all they have to do is walk along the walls and find the one they’re looking for.
When completed, the presentation can remain available to the public for some time. Attendees can review, share, and comment on it, almost as in a museum exhibition.
Unlike a traditional theater, this digitally enhanced space allows hybrid co-presence, i.e., people present in the physical office space can collaborate profitably with people who at that moment only have a virtual presence.
This makes training much more effective, pushes company culture in a digital direction, strengthens identity and increases sales.
Conversations prevent dystopias
Dystopia: A very bad or unfair society in which there is a lot of suffering, especially an imaginary society in the future, after something terrible has happened; a description of such a society. (Cambridge Dictionary)
The concept of dystopia has always been linked to technological progress in the collective imagination. Most science fiction novels and movies are based on this paradigm. As a technology enthusiast (and designer), I would like to end this article with a different and optimistic vision.
My two cents on the subject are quick to express:
1. I don’t think there is an inevitable dystopia out there. We are the ones who are designing and embracing emerging technologies. If you are among those who are actively creating the next generation of technologies, you know that no future is already written. We are creating it. Let your imagination inspire you for good.
2. Conversations are very important, and 7AEMT is a tool you can easily use to ignite conversations at any level. Use it to explore and explain your specific vision of a desirable future, and exclude things that aren’t part of it.
Even if you are not a technologist or a designer, you are making your own contribution to technological evolution. When you choose or purchase one technology over another, you are deciding its evolutionary path and its impact on human society. At the mall or browsing Amazon, we are all building the future.
Commerce is the most accurate Darwinism.
An experience’s quality depends on the content much more than on the technology enablers or the context in which users enjoy the experience.
However, the 7AEMT radar does not intercept the quality of the content. Not because the content flies too high or too low, but because it is simply transparent to this tool; its analysis requires other tools.
Internal ontology matters
The terminology used in the radar axes is necessarily generic with respect to individual experiences we analyze.
For example, the term “Multisensorial” found in axis 1 (Immersivity), could have different interpretations. Should an audio and video experience be labeled as Multisensorial since it involves two senses?
After applying the tool to many different scenarios, I noticed that almost all of the experiences mapped were natively audiovisual. One could therefore argue that the label “Multisensory” should apply only to experiences that include touch and smell.
I preferred, however, to leave the label “Multisensory” in place to avoid further interpretation complications.
Therefore, a shared ontology among those who use the radar in the same context is essential. This allows us to attribute a common value to the parameters we use.
The walkability issue
The “Walkable” parameter on the “Immersivity” axis refers to the human who is enjoying the experience. So, walkability is zero if the user is playing a platforming video game on the living-room TV. If the user is playing on an Oculus, they can set up a safe area almost the size of a room, so there is an initial level of walkability.
I have to start by thanking the entire NTT Disruption Cokoon team, in particular: Marc Alba Otero, Diego Munoz Galan, Gian Pablo Villamil, Victor Moran Rodriguez, Manuel Dominguez Varcarcel, Jorge Garcia Dominguez, and Adrian Fernandex Martin. Without the experiences and support of my peers and the entire Cokoon team, 7AEMT would not exist. A very special thanks to Gioacchino Di Fazio for his editorial support and daily advice. You are much better than Toninho Cerezo. A hug to the early readers/commenters: Antonio Grillo (Tangity), Dirk Knemeyer (SciStories), Fabio Sisinni (Amazon), Antonio Rizzo (UNISI), and Alberto Corti (Generali).
7AEMT is an original approach conceived by Leandro Agro in 2020 while working on the book “IoT Designer II: from Nest to exponential Artifacts”. This canvas is available under the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license. https://creativecommons.org/licenses/by-nc-sa/4.0/
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" href="http://purl.org/dc/dcmitype/StillImage" property="dct:title" rel="dct:type">7Axis Experience Mapping Tool</span> by <a xmlns:cc="http://creativecommons.org/ns#" href="leeander.com" property="cc:attributionName" rel="cc:attributionURL">Leandro Agro</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="leeander.com/7AEMP" rel="dct:source">leeander.com/7aemt</a>