A Flick of the Wrist: Defining the Next Generation of Human-Computer Interaction

The following is a written version of a talk I gave at TEDx Goldey-Beacom in January 2019. They filmed it; you can watch the recording here!

Alina Christenbury
Apr 29, 2019

--

For years, we’ve been enchanted by the idea of magic. The thought that someone, with the wave of a wand, snap of a finger, or some special words, can completely change the world around them in an instant is an idea that has captured minds throughout history.

Any sufficiently advanced technology is indistinguishable from magic.
— Sir Arthur C. Clarke

Right now, we live in a world where we manipulate the entirety of human knowledge on screens barely bigger than credit cards. I’d be willing to bet that everyone in this room has a device in their pocket that has more computing power than the technology we used to send people to space. And relative to the rest of human history, it’s all happened in the blink of an eye. It’s really difficult to overstate how far we’ve already come, even within my lifetime.

St. Peter’s Square in 2005 via NBC
St. Peter’s Square in 2013 via NBC

With the advent of extended reality, machine learning, and other emerging technologies, the way we work with computers and each other is going to evolve drastically over the next several years. We are increasingly able not only to perceive digital worlds in three dimensions but to interact with them, and be seen by them in return. This is, in essence, what the field of human-computer interaction is developing.

Today, I’m going to show you some HCI research projects that are pushing the boundaries of technology. I invite you to look ahead to how they’ll fundamentally change the way we interact with computers, information, and each other.

But first, take a moment to dream with me. It’s 20XX, a Tuesday.

Imagine being a kid taking chemistry for the first time. Remember learning molecular geometry? The thing where atoms are arranged in various 3D shapes like tetrahedra and trigonal pyramids? You had to draw them out on paper, using nothing but a pencil and ruler to visualize these abstract shapes that make up fundamental pieces of our world.

Or maybe you were a bit luckier and did an activity where you arranged playdough and toothpicks, like I did in high school. Mine were really sad and droopy.

Instead, kids in 20XX play with holograms, building out octahedra and seesaw shapes with their bare hands in midair. They can manipulate atomic bonds intuitively, playing with digital representations that behave the way we understand matter to behave at the atomic level.

Imagine you’re out hiking and see a gorgeous landscape of mountains. Inspired by their beauty, you whip out your sketchbook, but you don’t exactly have an entire collection of paints on your person. It doesn’t matter: as you draw thin, wobbly lines, they transform into a picturesque landscape painting right before your eyes.

You get home from work and jump into a game that basically puts you into the Matrix. You stand alone as the hero in a hostile world, swarmed by agents, dodging bullets in slow motion using your entire body.

Now, you probably want to tell me that these all sound crazy and require tech we don’t have, right? But it turns out “20XX” is actually 2018. These are some of the things we did *last year*.

The chemistry application is Project Pupil at Carnegie Mellon.

Project Pupil Chemistry Demo via Yujin Ariza

The painting? An application by Memo Akten.

Learning to See via Memo Akten

The slow-motion shooter? SUPERHOT VR, which you can literally go to a VR arcade and play *right now*.

via SUPERHOT VR Trailer

So, what are these things anyway? How are we doing this?

For the uninitiated, XR (extended reality) is an umbrella term for a continuum of combinations of real and virtual objects interacting in tandem. This includes technologies like virtual reality, where your entire environment is digital; augmented reality, where flat images are overlaid onto the real world; and everything in between.

Leap Motion Mirrorworlds Concept Video

Maybe you’ve played with primitive augmented reality systems like Pokémon Go…

… or are lucky enough to have tried virtual reality system sellers like Beat Saber.

Beat Saber via LIV and SwanVR

The one thing XR technologies have in common is that they use computers to shape your perception. As a spectrum, XR can put you in wholly new and different environments, or simply add information to the real world.

Machine learning is, in essence, teaching computers to solve problems by learning from examples rather than following hand-written rules (there’s a toy example below). It’s used in all sorts of applications, from mastering Go

… to powering the brains of self-driving cars,

… to generating cats from a handful of lines.

via pix2cats

I drew that last one; he’s probably okay.
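To make that definition concrete, here’s a toy sketch in Python using scikit-learn (my own example, not anything from the projects above): instead of hand-writing rules to tell flower species apart, we show the computer labeled examples and let it learn the rule itself.

```python
# A toy example of machine learning: rather than hand-writing rules to
# tell flower species apart, we show the computer labeled examples and
# let it learn the rule itself.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # flower measurements and species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # the "teaching" step
print(f"Accuracy on flowers it has never seen: {model.score(X_test, y_test):.0%}")
```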

There’s a lot of exciting work using machine learning to see the world through a computer’s eyes.

We can take an artificial intelligence and show it parts of the world: our bodies, our paintings, how objects interact. We can see what it comes up with, and use that to shape our perception. Machine learning can make sense of the vast amount of information in reality, while XR helps us see it more clearly.

All watched over by machines of loving grace: Deep-dream edition (2015) via Memo Akten

I feel like some of the most exciting developments have been through open source and publicly funded research projects.

OpenPose is a research project at Carnegie Mellon that uses machine learning to detect human body keypoints in single images (there’s a quick usage sketch below).

It’s been used as the backbone for other work, including research projects that help put your whole body in virtual reality

Deep Learning for VR/AR: Body Tracking with Intel RealSense Technology

… and help you (or at least, a video of you) perform intricate ballet dances.

Everybody Dance Now via Caroline Chan
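If you want to poke at OpenPose yourself, the core loop is small. Here’s a minimal sketch, assuming you’ve built OpenPose’s Python bindings (pyopenpose) and downloaded its pretrained models; the exact API varies a little between versions, so treat this as a starting point rather than gospel.

```python
# Minimal sketch: run OpenPose on one image and print body keypoints.
# Assumes the pyopenpose bindings are built and models are downloaded;
# the model folder path here is just an example.
import cv2
import pyopenpose as op

wrapper = op.WrapperPython()
wrapper.configure({"model_folder": "openpose/models/"})
wrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("person.jpg")
wrapper.emplaceAndPop(op.VectorDatum([datum]))

# One row per detected person, 25 body keypoints each, as (x, y, confidence).
print(datum.poseKeypoints)
```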

Pix2Pix is a project at Berkeley that uses conditional generative adversarial networks to translate one kind of image into another, learning the mapping from pairs of example images (a simplified sketch of the training idea follows below).

Pix2Pix via Affinelayer

It’s been further remixed into applications that turn your webcam feed into flowers

Learning to See: Gloomy Sunday

… or turn photos of Wilmington’s skyline into gorgeous paintings that emulate Van Gogh.

via Deep Dream Generator
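Under the hood, Pix2Pix is a conditional GAN: a generator learns to translate the input image, a discriminator learns to tell real input/output pairs from generated ones, and an L1 term keeps outputs close to the training targets. Here’s a heavily simplified sketch of one training step in PyTorch; the real project uses a U-Net generator and a PatchGAN discriminator, which I’ve swapped for single-layer stand-ins, and the image batches are random placeholders.

```python
# Heavily simplified Pix2Pix training step (conditional GAN).
# `G` and `D` are single-layer stand-ins for the real U-Net generator
# and PatchGAN discriminator; the image batches are random placeholders.
import torch
import torch.nn as nn

G = nn.Conv2d(3, 3, 3, padding=1)   # generator: input image -> output image
D = nn.Conv2d(6, 1, 3, padding=1)   # discriminator: sees (input, output) pair
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

edges = torch.randn(1, 3, 256, 256)    # e.g. a line drawing of a cat
photos = torch.randn(1, 3, 256, 256)   # the matching real photo
real = torch.ones(1, 1, 256, 256)      # "this pair is real"
fake_label = torch.zeros(1, 1, 256, 256)

# Discriminator step: real pairs should score 1, generated pairs 0.
fake = G(edges)
d_loss = bce(D(torch.cat([edges, photos], 1)), real) + \
         bce(D(torch.cat([edges, fake.detach()], 1)), fake_label)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the photo.
g_loss = bce(D(torch.cat([edges, fake], 1)), real) + 100 * l1(fake, photos)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```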

Project North Star is an augmented reality headset, open-sourced by Leap Motion, that you can literally 3D print anywhere in the world. There’s a community growing around sourcing and building these headsets, and I think we’ll see some interesting applications as it becomes more accessible.

These are all open source, so anyone can take this work and build on top of it to make all sorts of applications. And they have.

“They” includes me.

Me in my North Star via UDaily

I’m currently building my own North Star. Some of the parts I was able to 3D print back at the University of Delaware; others were sourced from the community that has cropped up around the project. Most of this happened over UD’s Summer Scholars program, where I spent 10 weeks learning the basics of XR development. After the semester started, I turned that experience into an undergraduate research project focused on getting cross-disciplinary students together to develop XR applications.

Just last week I went to Reality Virtually, a hackathon at the MIT Media Lab, where I got together with over 400 other developers, artists, and designers to make XR applications.

The University of Delaware Human-Computer Interaction group at Reality Virtually. Right to left: Zhang Guo, Dr. Roghayeh Barmaki, Alina Christenbury, Yan-Ming Chiou

The one rule for all projects at the hackathon was that they had to be open source, so that anyone around the world could take what they made and create new and interesting applications. Together, we made just under 100 XR projects, including tools for physical therapy and accessibility as well as games and interactive art. My team made a VR escape room in under five days, and my advisor Dr. Barmaki’s physical therapy project won “Best VR Application”.

Escape the Witch’s Grotto

In my mind, this technology really comes together in the concept of Mirrorworlds.

Illustration by Anna Mill

Rather than you ever leaving your physical space, this technology will transform it around you into a parallel dimension. Chairs become mountains, walls become sunsets, and “the floor is lava” transforms from a simple kids’ game into a visceral experience. You can interact with digital objects the same way you would with physical ones, and with physical objects to even greater effect. Your environment can show you how it works, as items show you how to use them. A guitar could teach you how to play it, showing you where best to hold it for particular chords. Or objects could change altogether, as tables turn into touch screens and pencils into wands.

It’s time to shift the conversation from what an AR system should look like, to what an AR experience should feel like.
— David Holz

The question is no longer “How can we make this work?” but rather “How should this feel?” We’re at a point in history where what would once have been considered “magic” is real. It’s here, and it’s now. And so, I leave you with this:

What will you do with it?

I read many, many, *many* things to create something mildly resembling a narrative for this talk. I did my best to save sources, most of which are linked throughout, but feel free to check the source for a full list. This post is also available on my website at alinac.me.

I also have a mailing list! Sign up if you want occasional emails from me at alinac.me/subscribe
