Creating “Rewilding,” An Interactive Medium Between Holograms & Education

Michael Carter
Through the Looking Glass
10 min read · Jan 16, 2019

This article details the concept and process behind Rewilding, an educational application I made for the Looking Glass. The application uses 3D printed artifacts with embedded RFID tags to spawn models in the Looking Glass when they’re placed on an RFID reader.

This application was undertaken as a thesis project for my Master’s degree in Digital Media, but it was also created as an experiment in how this technology could be used in the future of education. I hope that my work will not only lead to discussions on how this technology could be used in education, but also inspire other creators to use the Looking Glass to conceptualize new ideas that push the boundaries of what is possible!

Holograms in popular culture, as seen here in Tom Cruise’s Edge of Tomorrow.

The Concept

When I first discovered the Looking Glass — a holographic display designed by the Looking Glass Factory team in New York City — I was instantly compelled to get my hands on one. It is one of the first devices to take a vast leap toward the holograms we see in sci-fi, something I’ve always wanted to see in reality. The hologram is a dream many people have fantasized about for decades, yet only the illusion of one has ever been achieved. Thus far, the “holograms” we’ve seen have been mostly smoke and mirrors, leaving us to hope that the dream will someday become a reality.

In an alternate universe — where holograms are here and real — we would ask: do we really need them? What purpose do they serve? Could they actually contribute to the greater good? We have seen how today’s technology can be a detrimental distraction to its users, so if holograms were real and created an even stronger connection between users and technology, would that really be something the world needs?

If there were an answer today, it would be that holograms could benefit content engagement, creating a level of engagement with technology that is not possible today. If holograms were real, they would have the ability to create an intersection between the digital and physical worlds.

“Help me, Obi-Wan Kenobi. You’re my only hope.”

This then leads to questions about how holograms could be used in education, specifically to enhance educational content. Since the inception of smartphones, increasing student engagement in the classroom has never been more challenging. In most cases, content is still presented in ways students do not find engaging, and students must now try to absorb class material while ignoring the notifications on their phones. This has created a divide in the classroom, one in which technology and content sit at opposite ends of the spectrum.

This is not to say that technology has never been used to increase student engagement. One of the most recent examples of mixed reality in education is the zSpace laptop, which uses VR and AR to create what appear to be visuals floating above the laptop’s screen. The one downside, common to many technologies that utilize AR and VR, is that the zSpace laptop requires special headgear to create the experience. Although this is not a huge drawback, the Looking Glass can create 3D stereoscopic visuals of 3D models without any extra peripherals or headgear.

So the question today becomes — how could the Looking Glass be used in education?

My answer: Rewilding!

Rewilding is an application comprising four unique scenes, each representing a model that users can learn about. Each scene was created in Unity and designed to represent a fragment of the environment its model belongs to, accompanied by background audio that provides ambience, descriptions, and narration for each model. It was designed as an all-encompassing learning application that specifically enhances educational content about plants and animals.

The Process

If you are already familiar with Unity, creating holograms for the Looking Glass takes just one extra step. Setting up each scene to render on the Looking Glass is extremely straightforward: you simply place your 3D assets within a box provided in the HoloPlay SDK, which is as easy as adding another GameObject to your existing scene. This box then replaces your camera and allows the Looking Glass to render your scene as a lenticular stereoscopic image, which outputs as seen in the images below.

(LEFT) Image that appears on a regular computer screen. (RIGHT) Rendered image displayed in the Looking Glass.
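To give a rough idea of how content ends up inside that box at runtime, the sketch below spawns a model as a child of the render box. It is a minimal sketch, not the HoloPlay SDK’s own API: holoplayVolume and modelPrefab are assumed Inspector references, and only standard Unity calls are used.

```csharp
using UnityEngine;

// Minimal sketch: spawn a model inside the HoloPlay render box at runtime.
// "holoplayVolume" is an assumed Inspector reference to the SDK's box object;
// only standard Unity calls are used here, nothing HoloPlay-specific.
public class ModelSpawner : MonoBehaviour
{
    [SerializeField] private Transform holoplayVolume; // the SDK's render box
    [SerializeField] private GameObject modelPrefab;   // e.g. one of the four models

    public GameObject Spawn()
    {
        // Parenting to the volume keeps the model inside the rendered region.
        GameObject model = Instantiate(modelPrefab, holoplayVolume);
        model.transform.localPosition = Vector3.zero;
        return model;
    }
}
```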

Rendering text was slightly more challenging, because the Looking Glass box in Unity uses its own camera to render what is placed inside it. The Looking Glass renders 3D objects clearly by slicing them into a lenticular image, which displays the models as a stereoscopic image with 45 viewing angles. As a result, any 2D object placed inside the Looking Glass box in Unity will always appear blurry. Since my text was a 2D sprite in Unity, it always appeared illegible, which posed a huge challenge to overcome.

Example of text displayed in the Looking Glass.

I was able to work around this by using a regular camera in Unity and switching perspectives between the Looking Glass camera and the regular camera. This method never produces perfectly clear text, but it does at least make the text legible. Theoretically, 3D text might render clearly on the Looking Glass, but its size would be a major consideration if you went that route.
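A minimal sketch of that camera switch might look like the following; the two camera references are placeholders for the HoloPlay camera rig and an ordinary Unity camera, assigned in the Inspector, rather than my exact implementation.

```csharp
using UnityEngine;

// Sketch of the text workaround: toggle between the Looking Glass camera rig
// (for 3D models) and a regular Unity camera (for legible 2D text).
// Both references are placeholders assigned in the Inspector.
public class CameraSwitcher : MonoBehaviour
{
    [SerializeField] private GameObject holoplayCameraRig; // the HoloPlay capture object
    [SerializeField] private Camera regularCamera;         // a standard Unity camera

    public void ShowText(bool showText)
    {
        // Enable exactly one of the two views at a time.
        holoplayCameraRig.SetActive(!showText);
        regularCamera.gameObject.SetActive(showText);
    }
}
```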

Unity scene with the model inside the Looking Glass render box.

The Interaction

Originally, the interaction between the user and the Looking Glass was image recognition handled by the Vuforia AR plugin for Unity. In that version of the application, a set of 4 unique image cards handled the scene triggers in Unity. Each card was designed with high contrast between the image and the background; a minimalist design with a pure white background isolated the image target and helped the camera identify the images efficiently.

UX design was also considered in the aesthetic of the cards, with a blue square placed at the bottom left of each card to show users where to place their finger when holding the card up to the camera. This was mainly done to minimize the chance of image obstruction, which could negatively impact the user experience. A common back was also designed for each card, which acted as a way to dismiss the active scene in the Looking Glass at any time. Each card and the common back can be seen below.

The image recognition worked very well for the most part, but it was dependent on lighting conditions, which in some situations interfered with the camera’s ability to identify the image.
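For reference, scene triggering in that version looked roughly like the sketch below. It assumes the ITrackableEventHandler interface from the Vuforia SDK of that era, and the model reference is a placeholder; treat it as an approximation rather than my exact code.

```csharp
using UnityEngine;
using Vuforia;

// Approximate sketch of the Vuforia-era scene trigger: when an image card
// is recognized, activate that card's model in the Looking Glass scene.
// Assumes the ITrackableEventHandler interface from the Vuforia SDK of that era.
public class CardTrigger : MonoBehaviour, ITrackableEventHandler
{
    [SerializeField] private GameObject model; // placeholder: this card's model
    private TrackableBehaviour trackable;

    private void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        // DETECTED or TRACKED means the camera currently sees the card.
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED;
        model.SetActive(found);
    }
}
```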

The main issue with this interaction was whether someone could understand it without any explanation. In most cases it was not apparent how the interaction worked, because the cards offered little context when left unattended on a table. In design terms, if an interaction needs to be explained to the user, it usually means the interaction could be designed better. So I took a step back and approached the interaction from another perspective.

Vuforia image recognition on the Looking Glass.

To create more responsive, tactile feedback, RFID recognition was then chosen as the primary method of spawning models in the Looking Glass. This also let users interact with a digital platform through a physical interaction. Taking inspiration from amiibo and the game Skylanders, I created an RFID reader that identifies a specific set of 4 RFID tags. These tags were embedded into 3D printed footprints so users could learn which footprint belongs to each model, and in one case, the print of a Ginkgo tree leaf. These artifacts can be seen below.

The RFID reader was made from 2 Arduinos, with one acting as the RFID identifier and the other as the Unity communicator. Because Arduinos understand C++ and Unity understands C#, there was a language barrier to address before the two could communicate. That translation can get messy, especially if you are new to both Arduino and Unity. Thankfully, the Unity Asset Store is a magical place where you can download a Unity plugin called “Uduino” by a gentleman named Marc Teyssier. Marc’s Uduino plugin lets Unity and an Arduino communicate seamlessly, saving you the hassle of bridging the two languages.

Evolution of the RFID reader from sketch to final product.

From here, I faced another problem: how would I get the Arduino to recognize RFID tags and then communicate with Unity to activate the corresponding models? This functionality took some time to get right, and what I ultimately built is most likely not the most efficient way of accomplishing it. This is where the second Arduino comes into play: Arduino A acts solely as the RFID identifier and sends Arduino B an output of either 1 or 0 (on or off) when a tag has been identified. In its simplest form, the interaction works as follows:

  1. A 3D printed object with an RFID tag is placed on the RFID reader.
  2. The RFID tag is identified by Arduino A, which sends a signal of 1 (HIGH) to Arduino B through 1 of 4 unique pins.
  3. Arduino B receives the signal and activates 1 of 4 models in Unity, depending on which tag Arduino A identified. At the same time, all other models are deactivated, with a signal of 0 (LOW) sent from Arduino A to Arduino B on all other pins.

For better context, a small snippet of this code can be seen in the image below.
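Since that snippet lives in an image, here is a rough sketch of what the Unity side of this logic might look like: a script polls the four pins on Arduino B through Uduino and activates whichever model’s pin reads HIGH. The pin numbers and model array are placeholders, and the pinMode/digitalRead calls are assumed from Uduino’s documentation rather than copied from my project.

```csharp
using UnityEngine;
using Uduino;

// Rough sketch of the Unity side: poll four pins on Arduino B via Uduino
// and activate the model whose pin reads HIGH, deactivating the rest.
// Pin numbers and the models array are placeholders; the pinMode/digitalRead
// calls are assumed from Uduino's documentation.
public class RFIDModelSwitcher : MonoBehaviour
{
    [SerializeField] private GameObject[] models = new GameObject[4]; // one per tag
    private readonly int[] pins = { 2, 3, 4, 5 }; // placeholder pin numbers

    private void Start()
    {
        foreach (int pin in pins)
            UduinoManager.Instance.pinMode(pin, PinMode.Input);
    }

    private void Update()
    {
        for (int i = 0; i < pins.Length; i++)
        {
            // HIGH on a pin means Arduino A identified that tag.
            bool active = UduinoManager.Instance.digitalRead(pins[i]) == 1;
            models[i].SetActive(active);
        }
    }
}
```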

The Reason

The idea behind this application was to re-imagine how content can be presented in educational institutions, with an emphasis on how this technology could be used in a museum. It was designed to conceptualize how an educational application using this technology could stimulate new forms of engagement with educational content, addressing the 4 learner types of the VARK model (Visual, Aural, Read/Write, Kinesthetic).

The actual learning comes both from the 3D printed artifacts, through working out which footprint belongs to which model, and from the content on the Looking Glass itself. After a model is spawned in the Looking Glass, users can trigger description boxes, audio narration, and rotation for each specific model. Educational institutions could use this concept to teach about extinct plants and animals in a way that is not currently available, by increasing engagement between students and the content.
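To give a flavor of those triggers, the sketch below rotates the active model and toggles its narration; the key bindings and the narration AudioSource are placeholders rather than my exact implementation.

```csharp
using UnityEngine;

// Illustrative sketch: rotate the active model and toggle its narration.
// The key bindings and the narration AudioSource are placeholders.
public class ModelControls : MonoBehaviour
{
    [SerializeField] private AudioSource narration;       // per-model narration clip
    [SerializeField] private float degreesPerSecond = 30f;

    private void Update()
    {
        // Hold the arrow keys to rotate the model around its vertical axis.
        float direction = Input.GetAxis("Horizontal");
        transform.Rotate(Vector3.up, direction * degreesPerSecond * Time.deltaTime);

        // Tap space to start or stop this model's narration.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            if (narration.isPlaying) narration.Stop();
            else narration.Play();
        }
    }
}
```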

This is not an absolute solution to the battle for attention; that battle is ongoing. However, I think it is important to find new ways to stimulate engagement with educational content so that teaching practices can evolve alongside technology. The final version of my application, with its full functionality, can be seen below, demonstrating the interaction between the RFID reader and the Looking Glass.

How this technology will evolve is uncertain; it is so new that we have yet to see trends in its use. But I do feel that if developers continue to innovate with it, we can home in on its ideal use cases. Although we may still be a long way from the holograms we see in science fiction, the future is optimistic. The Looking Glass is one of the first technologies with real potential to make the dream of the hologram a reality.

But that’s just one person’s perspective…

To learn more about the Looking Glass, click here.

To learn more about Michael and his work, check out his website here.

To get in contact with Michael, you can find him on Instagram and Twitter!
