This Entrepreneur Is Turning Science Fiction Into Science Fact

Shawn Frayne and the quest for the hologram.

Like the Death Star taking out Alderaan, Star Wars blew America’s collective mind when it hit theaters in 1977. The movie became the defining myth for a generation, fascinating audiences with high adventure in a galaxy far, far away and inspiring kids to dream of mastering the Force or taking the helm of the Millennium Falcon.

In our own backwater galaxy, we don’t yet have lightsabers that can deflect blaster bolts or hyperdrives to power us through the Kessel Run in less than 12 parsecs, but Shawn Frayne is building a real-world version of one piece of iconic Star Wars technology: the hologram of Princess Leia that R2-D2 projects to deliver her famous line, “Help me, Obi-Wan Kenobi. You’re my only hope.”

Shawn embarked on his mission to build a hologram in high school and, after nearly 20 years of experimentation, cofounded a company, Looking Glass Factory, to make good on that dream. The latest iteration of their volumetric display just launched on Kickstarter, and appears to do just that:

The Looking Glass in action.

Shawn was kind enough to answer a few questions about human-computer interaction, media innovation, interface battles among the tech giants, and the inspirational power of science fiction.

In your view, what were the key turning points in the history of human-computer interaction? What new opportunities did each paradigm shift unlock? What hidden assumptions govern the status quo of our relationship to computers, and how might the future be different?

I think a lot of the chase for new interfaces over the past century has been focused on bringing the interaction between people and computers more closely into alignment with how our species has evolved to work and communicate in the physical world, to the end of enabling non-specialists to interact with and use the power of computing platforms and networks. The standard example of this sort of shift is the transition from the abstract command line toward the popularized virtual desktop GUI with the Macintosh in 1984, but I also consider things like the transition from telegraph to telephone to be a closely related phase transition, since it allowed regular folks, the non-specialists, to communicate at a distance for the first time.

Each paradigm shift like this — and these seem to only come along every few decades — gives more people access to greater and greater powers of creation and communication.

Our work at Looking Glass Factory is focused on getting more information from computers into people’s brains through three-dimensional display and interaction. This is analogous to the command-line to GUI transition in a lot of ways. For instance, to simply place a virtual light in a desired position behind a virtual character — let’s say Woody from Toy Story — using a 2D screen requires a number of rotations of the 3D scene on that 2D monitor, just to understand where that light is in relation to Woody in virtual three-dimensional space. That placement might take 15–20 seconds for a specialist now, and a minute or two for an amateur. But with a holographic desktop display like the Looking Glass, that same specialist can place the light into position in three-dimensional virtual space in a split second — and the amateur can too, since the movement of the virtual light is now much closer to the experience of moving a real object or a real light around in the real world, which we’ve evolved to do with great efficacy.

This may seem like a trivial example, but it has major implications: solving that simple 3D placement problem ripples through the design of almost the entire modern world, since most 3D design of buildings, rockets, and virtual characters is already being done in 3D. It’s just not being done in a way that is aligned with how our monkey brains are used to working, so it’s been slow and has required significant training, but that’s about to change.

Human-computer interfaces are the main tool of our species at this point, and their power and value ultimately lies in letting more of the world communicate and create in more powerful ways than was possible before. Making interfaces that give new power of creation and communication to folks with less and less specialized training is, in my view, the goal of the modern toolmaker.

Virtual and augmented reality systems are getting a lot of attention, even if few applications have gone mainstream. What’s special about Looking Glass’s approach to changing how we interact with virtual worlds?

Our approach at Looking Glass Factory, and how we designed the Looking Glass holographic display that just launched, is governed by two simple principles. One, people want a low-friction experience. We don’t want to put something on our heads if we don’t have to. And two, people enjoy doing things in small groups. That’s it. Those two things, making interfaces that are instantaneously useful without the friction of putting on a headset or glasses of any sort, and making that experience something small groups of people can share at the same time, are very different goals from the ones held by the big companies developing AR and VR headsets.

We take an approach that respects the ways in which people have evolved to live in the real world and has more resonance with how we’ve interacted with information and with each other in the past. You know, sitting around a campfire and telling stories, or listening to the radio in a living room. We believe the Looking Glass is the next step in that evolution, and I think it’s important because it means that folks can connect with richer content without sacrificing all the things that are great about the real world and person-to-person interactions. Being able to evaluate a 3D thermal flow simulation for, say, a rocket engine part, or to play with a virtual character with your kids, is something best done in groups, where a glance between the 3D virtual world and the person sitting right next to you carries a huge amount of information. VR and AR headset-based approaches give that up, and in that way I think they are less powerful tools, ones that ignore all that is powerful about real-world interaction.

Also, I personally don’t want Mark Zuckerberg to own the high-speed port to my brain — my eyes — with a VR headset, and the approach we take to HCI at Looking Glass Factory is meant to counter that dystopian future. But that’s another conversation. 1984 won’t be like 1984, I hope.

How has media innovation shaped the art and entertainment we create and consume? What are the most surprising things people have done with Looking Glass technology?

We dream in ways that have been heavily shaped by the 2D moving-screen media of the past century, and no doubt this has shaped what we create. I find I mostly dream in the third person, and I think this is a direct result of the movies I watch and the home videos I see, in which I’m watching a character or myself in the third person. That’s pretty insane if you think about it: the interfaces and displays of the past century have actually rewired how we dream. I don’t think it’s necessarily a bad thing, but it’s something I think will continue to happen as interfaces continue to change. I believe this will be just as true for the shift from the 2D illusion of life, as the Disney folks call it, to the 3D illusion of life that’s happening now.

We’ve seen folks making incredible virtual characters that live in the Looking Glass, in some cases ones you can feel — literally feel — by tacking a haptic feedback array onto the Looking Glass. I think we are just at the very beginning of seeing what sort of holographic apps folks make, though, now that the Looking Glass is launching and a world of 3D creators can finally have a holographic display on their desks.

A big application for me that is in the works is lightfield capture direct to lightfield display (which is the technical category of device the Looking Glass falls into). This will open up applications like holo-Skype, where it can feel as if someone from across the world is right there in front of you, as real as if they were actually there. Again, without the hindrance of a headset. I travel a lot between our labs in Brooklyn and Hong Kong, and my family is spread all over the world, so this one is a particularly personal and important application for me.

The best computer interfaces eventually become communication interfaces, and that’s what I hope happens with the Looking Glass.

My immediate reaction to seeing Looking Glass in action was, “Our science fiction future has arrived.” Have you found inspiration in science fiction books or movies? How has science fiction influenced your ideas or worldview, and what role do you think it plays in our culture?

Absolutely. I started making conventional laser-based holograms in high school after seeing the holographic shark swallow up Marty in Back to the Future II. More recently, I saw this holo-message player in Adventure Time and thought, damn, that looks exactly like a Looking Glass. So, I think the dreams of sci-fi inform the technologies of the future and vice versa.

Shawn’s daughter with a real Looking Glass on the left vs. Adventure Time’s hologram display on the right.

What do advances in holographic displays mean for tech giants betting heavily on AR/VR?

A lot of folks ask me why this sort of holographic display is possible now, and how a small startup of 20 people can have the gall to take on giant companies like Microsoft and Apple and Facebook in the interface wars.

I usually say that commodity pixel density on LCDs and OLEDs is at a level that allows for a 45-view superstereoscopic display like the Looking Glass for the first time (thanks to the emergence of retina-resolution tablets), and that the computers on the desks of most 3D creators are fast enough to generate those 45 views at 60fps for the first time as well (thanks to the games industry and the fast evolution of GPU power).
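To get a feel for what “45 views at 60fps” implies, here is a minimal sketch of the per-frame workload of a generic multi-view renderer. The view count and frame rate come from the interview; the viewing-cone angle, focal distance, and camera math are illustrative assumptions, not Looking Glass Factory’s actual pipeline.

```python
import math

NUM_VIEWS = 45        # horizontal views, as quoted in the interview
CONE_DEGREES = 40.0   # assumed total viewing cone (illustrative)
FPS = 60              # target frame rate, as quoted in the interview

def view_angles(num_views=NUM_VIEWS, cone=CONE_DEGREES):
    """Evenly spaced horizontal camera angles spanning the viewing cone."""
    half = cone / 2.0
    return [-half + cone * i / (num_views - 1) for i in range(num_views)]

def camera_offset(angle_deg, focal_distance=1.0):
    """Horizontal camera shift for one view, aimed at a shared focal plane."""
    return focal_distance * math.tan(math.radians(angle_deg))

angles = view_angles()
offsets = [camera_offset(a) for a in angles]

# 45 views at 60 fps means the GPU renders 2,700 viewpoints per second,
# which is why commodity GPU power is a prerequisite for this display.
renders_per_second = NUM_VIEWS * FPS
print(len(angles), renders_per_second)  # 45 2700
```

Each frame, the scene is rendered once per camera offset and the results are interleaved for the lens; the sketch only shows the camera layout and the raw render budget that makes modern GPUs a prerequisite.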

But I also think there’s a unique misdirection that has happened, one that gives us a rare opportunity to make a fundamental leap in the chase for the next interface after 2D screens before the big companies do. After Facebook acquired Oculus, a lot of the brainpower and financial resources in the big companies that had historically been spent on holographic/lightfield/volumetric displays was rerouted to systems that go on a person’s head. With all the big companies looking one way, at headsets, a small startup like ours has the unbelievably rare opportunity to make this advance. Hopefully the misdirection lasts a bit longer before they catch on.

Eliot Peper is the author of Bandwidth, Cumulus, True Blue, Neon Fever Dream, and the Uncommon Series. He’s helped build technology businesses, survived dengue fever, translated Virgil’s Aeneid from the original Latin, worked as an entrepreneur-in-residence at a venture capital firm, and explored the ancient Himalayan kingdom of Mustang. His books have been praised by The New York Times, The Verge, Popular Science, Businessweek, TechCrunch, io9, and Ars Technica, and he has been a speaker at places like Google, Qualcomm, and Future in Review.


