Lessons for the future

Micol Galeotti
Coinmonks
Jun 29, 2018

Window to the Future, a project ideated and prototyped by Jing Yu, Micol Galeotti and Sami Désir during the Immersive Experiences workshop at the Copenhagen Institute of Interaction Design (CIID), facilitated by James Tichenor and Joshua Walton.

Window to the Future

Summary

Over a three-week intensive dive into AR/VR, we designed an AR window to the future that reads the future of everyday objects and hints at the infinite possibilities embedded in a moment of time and a place in space.

The interface aims to provide a framework for the future being visualized and to translate the affordance of navigating the space around the object, which leads to the materialization of other futures. Window to the Future was our first foray into the medium of Mixed Reality; this is our process and some of the lessons we learned.

Introduction

As it turns out, the future is a pretty intriguing thing. There are about as many theories on the progression of time as there are possible futures right now.

Theories of the future

Throughout our project we thought a lot about the different ways people think about time and the frameworks they employ to think about the future. One idea in particular, which draws loosely from the theory of special relativity, caught our imagination: the present is a point from which an infinite expanse of possibilities must be constrained. We used a tree diagram to speculate on all the kinds of futures possible at one moment in time. Something as mundane as how close a cup sat to the edge of a desk could alter its potential future and, strangely enough, how we thought and acted around the cup in the present.

In the act of speculating about the future, we change the future. Be it utopian or dystopian, imagined versions of society's future are common devices for translating our fears and aspirations about contemporary reality.

Futurecasting is a well-known strategy businesses use to design for problems and identify opportunities that are years away by extrapolating the trends and patterns of the present. In the words of Sean Rhodes (Executive Creative Director at Frog), it allows us “…to think about the future in a little bit more of an open way, which helps us to overcome our inherent built-in biases that really limit the creative thinking of where we are going.”

Statistics and probability are probably the closest thing we have to a practical model for predicting the future, and they were the first model we applied to make sense of our tree of futures. Probability helped us navigate and classify the alternatives from probable to plausible, possible, and impossible.
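
To make that framework concrete, here is a small illustrative sketch, written in C# like the rest of our Unity work, of a tree of futures whose branches carry likelihoods and are classified along the probable/plausible/possible/impossible scale. The class names and thresholds are invented for illustration; nothing like this shipped in our prototype.

```csharp
// Illustrative only: a tree of futures whose branches carry likelihoods.
// Names and thresholds are hypothetical, not code from our prototype.
using System;
using System.Collections.Generic;

public enum FutureClass { Probable, Plausible, Possible, Impossible }

public class FutureNode
{
    public string Description;
    public double Likelihood;   // 0..1, relative to the parent moment
    public List<FutureNode> Branches = new List<FutureNode>();

    // Classify a branch the way we did on paper: by how likely it seems.
    public FutureClass Classify()
    {
        if (Likelihood <= 0.0) return FutureClass.Impossible;
        if (Likelihood < 0.05) return FutureClass.Possible;
        if (Likelihood < 0.5)  return FutureClass.Plausible;
        return FutureClass.Probable;
    }
}

public static class FutureTreeDemo
{
    public static void Main()
    {
        var cup = new FutureNode { Description = "cup near the edge of the desk", Likelihood = 1.0 };
        cup.Branches.Add(new FutureNode { Description = "someone nudges it off", Likelihood = 0.3 });
        cup.Branches.Add(new FutureNode { Description = "it stays put all day",  Likelihood = 0.6 });
        cup.Branches.Add(new FutureNode { Description = "it floats away",        Likelihood = 0.0 });

        foreach (var branch in cup.Branches)
            Console.WriteLine($"{branch.Description}: {branch.Classify()}");
    }
}
```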

Other, more biased frameworks, such as the four futures model or the classic optimistic/pessimistic split, served as provocations for thinking about perspectives on the future.

Each theory of the future brought new elements and perspectives that we carried into our exploration of AR, a subject we landed on while trying to answer a basic question: why would we want or need AR in our lives?

“For me, the idea of building AR as an immersive experience came with a challenge to reality — after all, what is more immersive than reality? And I like reality, I think it’s plenty”

The Brief

Create a project which displays a more preferable future using Mixed Reality. It should be a prototype which lets someone feel a part of that future themselves.

The process

Toward the beginning of our project we were very focused on designing for practical, “useful” applications of AR/VR. One of the most obvious values of AR is the promise of information in context. Our initial exploration used AR and future forecasting to help explain bad, overly minimalistic UIs.

It felt like a strong concept: replacing user manuals that no one really reads and making their content immediately available on demand. However, as interesting as it was, once we started to develop it we ran into technical constraints and unraveled implications that made it more about demonstrating UI in place than about seeing into the future.

So we went back to ideation and to discussing what exactly we found so fascinating about futures. The video Charles Eames showed during his lecture on The New Covetables at Harvard in 1970–71 led us to talk about the multiplicity of potentials in every single thing, at every scale, in every single moment.

The Norton Lecture, “Good” — Charles Eames (1971)

As our discussion unfolded we started experimenting: we took a cup from our work table and thought about what it would mean to be able to see these multiple possibilities projected around it.

Two approaches to navigating the infinite possibilities emerged: the control freak and the chaotic.

As a control freak you could choose what type of future you wanted to see, be it optimistic, pessimistic or realistic. Using that heuristic you could move along the tree of futures, seeing how one specific possibility could evolve and open up a new multiplicity of futures.

To the control freaks on the team this idea was obviously superior; however, making it understandable meant navigating through the tree of possibilities and, ideally, visualizing it. Since we did not want to rely on screen-based interaction paradigms, explaining the tree in every interaction proved extremely clunky to manage.

On the other hand, the chaotic approach focused on the unforeseeability of which of the potential possibilities would actually happen. The core, therefore, was to visualize one of these alternatives at random. As a trigger for showing the different possibilities, we decided on a physical manifestation of an easy-to-understand metaphor: the magic eight ball. Shaking this artifact would materialize a different future.
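
As a rough illustration of the chaotic mode, here is how the eight-ball trigger could look if the shake were detected on the device itself. This is a hedged sketch with invented names and thresholds, not our actual implementation (our eight ball was a separate physical artifact):

```csharp
// Minimal sketch: detect a shake with the device accelerometer and
// materialize one future prefab at random. Threshold is hypothetical.
using UnityEngine;

public class EightBallFutures : MonoBehaviour
{
    public GameObject[] futurePrefabs;   // alternative futures for the object
    public float shakeThreshold = 2.5f;  // tune by hand on the device
    GameObject current;

    void Update()
    {
        // A strong acceleration spike reads as a shake.
        if (Input.acceleration.sqrMagnitude > shakeThreshold * shakeThreshold)
        {
            if (current != null) Destroy(current);
            var pick = futurePrefabs[Random.Range(0, futurePrefabs.Length)];
            current = Instantiate(pick, transform.position, transform.rotation);
        }
    }
}
```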

Because we found the unpredictability of the eight ball closer to the experience of reality, we decided to develop the chaotic futurecasting and went through a round of user testing. However, the class found our storytelling unclear, and our metaphor did not come through. In both aesthetics and execution, the eight ball was associated more with a snow globe. Furthermore, having the ball as a completely separate object further degraded the link between the shaking and the visualization of different alternatives. One consistent piece of feedback was that the testers kept searching for a logic; they were trying to find directions for their agency that connected to what was happening.

Our chaotic representation of this multiplicity seemed too chaotic. So, once again, we took a step back, looked at our concept through a critical lens, and stripped it of all superfluous elements to isolate the core interaction: willingly and consciously looking at things to see one of the multiple potential futures embedded in them.

The concept

Using an iPad, which represents a window to the future, we choose to look at objects in the environment that surrounds us. The program recognizes the object in view and elaborates the multiplicity of possibilities embedded in it at this moment in time, from the unique perspective in which the seer is standing.

Upon detection, an interface appears and shows the category the visualized future belongs to, while other elements appear in the distance to prompt further interaction. The interface aims to provide a framework for the future being visualized and to translate the affordance of moving through the space around the scanned object, which leads to the materialization of other futures.

AR as a medium

In the very early stages, during ideation, we thought a lot about the technical limitations AR bears: even as engineers rapidly bridge the gaps, achieving full immersion requires the involvement of all the senses. Whether due to technological limitations or to the medium's heavy reliance on sight, some of our senses, specifically touch and smell, have been disregarded.

For this reason, in our process we aimed to stay away from the common screen-based paradigm of 2D UI and strove toward human intervention as the main interaction. Acting on the object whose future was shown caused the multiplicity to which that representation belonged to cease to exist, and created a completely new set of possibilities.

Destruction and creation of the multiplicity of potential futures

However, as of today AR still lives on a screen, and this was a problem for us.

As Alysha Naples, former Senior Director of User Interactions at Magic Leap, said at the IDSA Conference in Seattle in 2015, “Augmented reality is a technology that superimposes a computer-generated image onto users’ view of the real world.” Today that superimposition still has to pass through the filter of a screen: in practice, it happens between the real world and the user’s device.

When we talk about Mixed Reality, we imagine the digital and virtual worlds integrated seamlessly into our physical world, allowing the virtual to interact with the user and the environment. However, the digital world is still imprisoned in screens and devices, which fractures the experience into a continuous jump between two different realities.

AR requires a filter, be it the screen of an iPad or a bulky, awkward headset, and both of these solutions forced us to compromise part of our concept. Wearing a headset would have left the person's hands free to intervene on the object and its potentials. However, it would also make it complicated to convey the intentionality of asking to visualize the future. Taken to an extreme, would everything you look at have virtual copies of its futures popping up constantly? Otherwise, we would have to invent some interaction to express the intention to see them, which would make the experience more controllable but still isolated to the person wearing the headset.

For this reason we opted for the iPad, which, while remaining a filter, granted a more shared experience and allowed us to use people's relation to the environment as the main interaction. Its presence is framed by the metaphor of a window to the future, and we strove to link it to the person's agency: as they take in different views and perspectives of the scanned object, different possible futures appear.
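
One simple way to bind perspective to futures, sketched below, is to bucket the viewer's angle around the scanned object into sectors, each revealing a different variant. The component name, sector scheme, and field names are our own illustrative choices, not the exact code of the prototype:

```csharp
// Illustrative sketch: map the viewer's position around the tracked object
// to one of several future variants, so walking around the object reveals
// different futures.
using UnityEngine;

public class PerspectiveFutures : MonoBehaviour
{
    public Transform arCamera;          // the iPad's camera
    public GameObject[] futureVariants; // one variant per angular sector

    void Update()
    {
        // Direction from the object to the camera, flattened to the ground plane.
        Vector3 toCamera = arCamera.position - transform.position;
        toCamera.y = 0f;

        // Signed angle around the object, folded into [0, 360).
        float angle = Vector3.SignedAngle(transform.forward, toCamera, Vector3.up);
        if (angle < 0f) angle += 360f;

        int sector = Mathf.FloorToInt(angle / (360f / futureVariants.Length))
                     % futureVariants.Length;
        for (int i = 0; i < futureVariants.Length; i++)
            futureVariants[i].SetActive(i == sector);
    }
}
```

Walking around the object then literally walks you through alternative futures, which keeps the interaction in the body rather than on the screen.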

Joshua Walton said, “sometimes the best AR interaction is just looking.” Aiming toward an interface built around looking, we tried to turn the technical constraints around and integrate them into the design.

AR is a fairly new area of design in which it is still possible to create and develop things without trying them; however, the value you gain by experiencing and testing what you are designing is enormous, especially when you are trying to achieve immersion.

It is hard to explain what we felt when we showed our window to a classmate and their response was “This looks like nothing to me.” In fact, that screen was showing more than physical objects through a camera. So we started to think about which elements allowed that moment to happen. We found three:

  • The link with the environment: having the projections of possible futures appear at the same scale and side by side with the object that future belongs to, obeying the same laws of perspective, made it easier for us to understand and accept each projection as an alternative version of the same object.
  • The physics: to achieve a complete fusion between virtual and real, and to fully integrate the digital content into the physical world, the experience needs to be so seamless that people struggle, or have to pay attention, to distinguish digital projections from physical elements. Having the digital content obey the same physical laws that govern the real world made it stronger and more powerful (see the sketch after this list).
  • Seamless transition: when a future projection appears, the camera glitches. This effect started as an animation bug in one of our first iterations, but in user testing it turned out to reinforce the representation of the future, which is uncertain and unstable. The effect also gives subtle feedback on the movement of the window and establishes a connection between the person's agency and the appearance of the future projection.
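
The physics point above can be made concrete in a few lines of Unity C#. This is a minimal sketch, not our actual prototype code: give the projected future a Rigidbody and a collider, and Unity's physics engine makes it fall, roll, and collide like a real object would. All names here are illustrative.

```csharp
// Minimal sketch of the "same physics as reality" idea: attach a Rigidbody
// and a collider so the projection obeys gravity, collisions and momentum.
using UnityEngine;

public class PhysicalFuture : MonoBehaviour
{
    public GameObject futureProjection; // the virtual alternative of the object

    void Start()
    {
        // Once these components exist, Unity's physics engine governs the
        // projection with no further code.
        var body = futureProjection.AddComponent<Rigidbody>();
        body.mass = 0.3f; // roughly a cup, in kilograms

        if (futureProjection.GetComponent<Collider>() == null)
            futureProjection.AddComponent<BoxCollider>();
    }
}
```

In the Unity editor the same result is a couple of checkboxes on the GameObject, which is part of what made the engine so friendly to us as non-programmers.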

The tools

“The voice of people who speculate in AR/VR, who have actually worked in AR/VR comes at a completely different angle from the voice of those who are looking from the outside” — Joshua Walton

One of the great values we found in Unity was its embedded physics engine, which we used to develop most of the animations of the futures. This simplified our work because it let us create behaviors that are believable and faithful to reality quickly and simply, without requiring a line of code, which means a lot to people without training in computer science.

Our prototype was developed in Unity using Vuforia. Writing code in Unity is like opening a box of chocolates: while you're pretty sure you'll get chocolate, you're not sure what kind, and sometimes the box is empty. Schrödinger's chocolate. Jokes aside, it is usually a straightforward process as long as you keep one fundamental thing in mind: all scripts are linked.

Whatever you code, wherever you save it, it is going to influence a small part of a greater system of code that controls your Unity project. Coming from the world of rudimentary Processing and Arduino, where our scripts lived either in isolation or in very clean, controlled interactions, this hierarchical and nested way of coding was a great struggle, especially when debugging.

Vuforia can also be leveraged with rudimentary coding knowledge. We used Vuforia's native scripts and managed to achieve complex behavior by sending events on found and lost image targets to initialize a scene manager.
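
Concretely, the glue looked roughly like the sketch below: a handler implementing Vuforia's ITrackableEventHandler forwards found and lost events to a scene manager. The Vuforia types match the API as it stood in 2018; FutureSceneManager and its two methods are stand-ins for our own manager, not part of Vuforia.

```csharp
// Rough sketch of our glue: forward Vuforia's found/lost target events
// to a scene manager of our own.
using UnityEngine;
using Vuforia;

public class FutureSceneTrigger : MonoBehaviour, ITrackableEventHandler
{
    public FutureSceneManager sceneManager; // our stand-in, not a Vuforia class
    TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        // Note: EXTENDED_TRACKED counts as "found", which is exactly the
        // stickiness described below.
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        if (found) sceneManager.OnTargetFound(trackable.TrackableName);
        else       sceneManager.OnTargetLost(trackable.TrackableName);
    }
}

public class FutureSceneManager : MonoBehaviour
{
    public void OnTargetFound(string name) { Debug.Log("found " + name); /* start the scene */ }
    public void OnTargetLost(string name)  { Debug.Log("lost "  + name); /* reset the scene */ }
}
```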

Using Vuforia as a tool for scene management is a good way to control time-sensitive animations; however, as with any sort of repurposing, it has its own drawbacks. Its extended tracking proved both a blessing and a curse: it grants very robust fiducial tracking, sometimes too robust, as no tracking-lost event fires when the user looks away. That causes the scene to stick to the viewing window, which breaks the immersion. While we are sure there may be clever ways to code around those limitations, or even to write our own fiducial tracking system, given the time and knowledge constraints of the course that did not seem possible for us.
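
For completeness, one workaround we considered but never built: extended tracking reports its own status, so a script could treat a prolonged EXTENDED_TRACKED state as "lost" and hide the scene after a grace period. An untested sketch, with an invented grace period:

```csharp
// Untested sketch of a possible workaround: when Vuforia keeps the target
// alive only through extended tracking, hide the futures after a grace
// period so the scene stops sticking to the window.
using UnityEngine;
using Vuforia;

public class ExtendedTrackingGuard : MonoBehaviour, ITrackableEventHandler
{
    public GameObject futuresRoot;   // parent of all projected futures
    public float gracePeriod = 1.0f; // hypothetical, in seconds

    void Start()
    {
        GetComponent<TrackableBehaviour>().RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
        {
            Invoke("Hide", gracePeriod);   // marker out of view: start countdown
        }
        else if (newStatus == TrackableBehaviour.Status.TRACKED)
        {
            CancelInvoke("Hide");          // marker back: cancel and show again
            futuresRoot.SetActive(true);
        }
    }

    void Hide() { futuresRoot.SetActive(false); }
}
```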

The future of futurecasting

As mentioned before, introducing a UI into our prototype was our answer to requests for clearer understanding to aid the user's agency, and it consisted of labeling the different future alternatives. Both when considering the control-freak version and when discussing the categories to show in the UI, we hit several moments of conflict over the tone of voice we wanted to use when talking about the future. We were split between a very clinical, unbiased voice and a purposeful injection of our own biases.

This is when we started wondering: when is it appropriate to design with personality and bias? For AR the answer depends on the product and its purpose; for future projection we are still undecided and quite split. We opted for a set of categories with clear biases embedded in them, fully aware that people might not agree with our framing. That is exactly what we aim for: to use our prototype to provoke conversation about the future.

Hopefully, some of these discussions will revolve not only around the possibilities but also around the potential held by each element of the environment we live in. It would be great if this could become a sort of creative tool, providing inspiration for other designers and innovators.

The potential is infinite, especially because part of how it will be used depends on the people who use it.

We think one of the reasons we were drawn to futurecasting during ideation was our struggle with the potential value of AR: we strove to find the appeal this medium could have beyond the information-in-context model. “It was hard to imagine a world with AR that was preferable to the world without AR,” and that's probably because we are just not there yet.

For us, the technology is so new that we cannot fully grasp its implications. Rather than staying trapped in a mindset of pushing AR to try to make valuable things, we strove to experiment with the medium to provoke conversation and discover its possibilities.
