(Science) Fiction and Design

This is an embellished transcript of a talk I gave at Eyeo Festival on June 2, 2015. A video of the presentation is available on Vimeo.

This teddy bear of a man is Jules Verne.

In 1865, he wrote From the Earth to the Moon. The Baltimore Gun Club launches a projectile from Florida with three men on board. In the sequel, Around the Moon, they return safely to Earth, landing in the Pacific Ocean.

In 1902, Georges Méliès turned it into a major motion picture.

About 60 years later, President John F. Kennedy addressed Congress:

“I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth.” — JFK

In 1969, we launched a projectile from Florida with three men on board. The Baltimore Gun Club was not involved.

They made it to the moon and returned safely to Earth, landing in the Pacific Ocean.

I work at the Jet Propulsion Laboratory in Pasadena, California.

JPL actually predates NASA, tracing its history back to early rocketry experiments in 1936.

One of JPL’s early claims to fame was the first successful U.S. satellite: Explorer 1.

It looked something like this:

Oh, sorry — that’s an illustration from the Jules Verne novel. It looked something like this:

It’s no secret that science fiction has inspired the work we do. In fact, here’s a recruiting ad that JPL ran in Scientific American in the ’60s.

When I go to work, I walk past this sign. (Really, it’s real!)

And then I walk past this building, Building 264:

This is the home of the Mars rover programs, which means it’s home base for the team that decides what the Curiosity rover does every day.

Now, despite looking like the coolest RC car ever created, we can’t just drive it with a joystick. There are two reasons.

The first is time delay. Even at the speed of light, radio commands take 4 to 22 minutes to reach Mars, depending on the relative positions of the two planets.
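To get a feel for those numbers, here is a small sketch computing one-way light time at a couple of representative Earth-Mars distances. The distances are rough illustrative figures, not mission ephemeris data.

```python
# One-way signal travel time from Earth to Mars.
# The two sample distances below are approximate, illustrative values.

C_KM_PER_S = 299_792.458  # speed of light in km/s

def light_time_minutes(distance_km: float) -> float:
    """Return one-way radio travel time in minutes for a given distance."""
    return distance_km / C_KM_PER_S / 60.0

for label, km in [("near closest approach", 75e6),
                  ("near maximum separation", 380e6)]:
    print(f"{label}: {light_time_minutes(km):.1f} minutes")
```

Running this prints roughly 4 minutes for the close case and 21 for the far one, which is why joystick-style driving is off the table.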

But an even bigger problem is complexity and safety. Curiosity is a science laboratory — it’s not just a question of where it drives, but what scientific discoveries it’s supporting. To make informed, considered decisions, and to do so safely, requires a day-long process. It looks something like this:

We get new data down early in the morning and start looking for the most interesting locations. Throughout the day, we move from high level science discussions to specific activity plans, to command sequences, and then everything is validated before getting uplinked to the rover.

The next day, the resulting data is downlinked, and the process repeats.

For images, they come down in individual frames.

These frames are then stitched together into panoramas, like this:

These cylindrical projection mosaics are what much of the science team spends their time looking at to assess science targets.
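As a rough illustration of what “cylindrical projection” means here, this toy sketch places individual frames into a panorama by mapping each frame’s pointing direction to mosaic coordinates. The frame IDs, pointing angles, and resolution are all invented; the real mosaicking pipeline is far more involved.

```python
import math

PIX_PER_DEG = 4.0  # assumed mosaic resolution, pixels per degree of azimuth

def mosaic_position(azimuth_deg: float, elevation_deg: float):
    """Cylindrical projection: azimuth maps linearly to x, and elevation
    maps to y via tan(), as if projecting the view onto a cylinder
    wrapped around the camera mast."""
    x = (azimuth_deg % 360.0) * PIX_PER_DEG
    y = math.degrees(math.tan(math.radians(elevation_deg))) * PIX_PER_DEG
    return x, y

# Hypothetical frames: (frame id, azimuth, elevation) of each camera pointing.
frames = [
    ("NAV_001", 10.0, 0.0),
    ("NAV_002", 40.0, 0.0),
    ("NAV_003", 40.0, 15.0),
]
for fid, az, el in frames:
    x, y = mosaic_position(az, el)
    print(f"{fid}: paste at ({x:.0f}, {y:.0f})")
```

The key property of this projection is that azimuth is linear: a full 360-degree sweep around the rover unrolls into a long horizontal strip, which is exactly the format the science team scans for targets.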

This is the question that we often ask ourselves in the Human Interfaces Group, where I work. How will we control space robots in the future?

Last year at Eyeo, Scott Davidoff and I talked about an experiment that our team ran. We had a hunch that immersive 3D would aid in the perception of these Mars environments.

Each participant saw a panorama with marked locations, and a 3D, immersive version of the same scene in an Oculus Rift. After each experience, we asked them to draw an overhead map showing the marked locations. These plots show the results:

In short, people were far more accurate with the immersive view, and they also were in closer agreement with each other.

This stark difference gave us the evidence and confidence to embark on a much more audacious project: OnSight.

The goal of OnSight is this:

But to tell this story properly, we have to rewind a bit.

This is L. Frank Baum. You know him as the author of The Wonderful Wizard of Oz, which he wrote in 1900.

But what you might not know is that just a year later, he published a book called The Master Key: An electrical fairy tale.

In this book, a genie gives some technological wonders to an unsuspecting (and unprepared) young boy. One of these gifts is called the Character Marker.

This is precisely the idea of augmented reality: a layer, visible only to the viewer, added on top of the real world.

So that was in 1901. About 60 years later, in 1968, Ivan Sutherland created the first head mounted 3D display.

“Our objective in this project has been to surround the user with displayed three-dimensional information.” — Ivan Sutherland

The diagrams were fantastic.

In 1980, Steve Mann created the first wearable computer. He’s been wearing some version of it pretty much ever since.

In 1990, Tom Caudell coined the term Augmented Reality. In 1992, he published a paper with David Mizell about the setup they’d been working on at Boeing.

The diagrams were fantastic.

In 2013, the Google Glass beta started.

And this year, in 2015, Microsoft announced HoloLens.

Microsoft describes HoloLens as a holographic computer. It’s self contained and untethered. It can map the room you’re in and track your position within it, all without any external hardware.

And it can layer information on top of your real environment.

This means we can use it to put stable virtual objects into the world around you.

Which brings us back to OnSight.

We’ve been using the HoloLens hardware and collaborating with Microsoft on the project, as well.

But what is it? What’s the vision?

I came on to the project slightly late, so my first real experience of it was an internal vision video. I can’t share the full video with you, but I can show some stills.

The scientist puts on the HoloLens and launches OnSight. Mars fades into view in his office.

Everywhere he looks, he sees the Mars terrain. Except for his desk and computer, which are masked out so he can still use them.

He seamlessly moves between his computer and the terrain, using his mouse to identify new targets.

Or, if he chooses, he can get up from his desk and walk around.

And sure, why not: he can invite his colleagues to join him, and discuss the Martian surface as a group.

When I first saw this, I thought:

“Wow, this feels like science fiction.”

And then, I thought:

“Uh oh, this feels like science fiction.”

What do I mean by this? It means I agree with Frederik Pohl.

It means that I agree with Erika Hall.

And it means that I agree with Public Enemy.

But what is the hype, exactly? What should we be wary of?

One is utopias. The problem with utopias isn’t that they’re imaginary, but that they’re incomplete. They don’t address the traffic jam so much as completely fail to anticipate it.

And then you have this problem:

Utopias always come with a point of view, and there’s a pretty good chance that there are some people who have a dramatically different idea of what the future should look like.

“All paradises, all utopias are designed by who is not there, by the people who are not allowed in.” — Toni Morrison

And though utopias are frequently problematic, at least they’re trying to picture a better future. What I find even more insidious is a blind faith that technology will make things better, and if it does cause problems, more technology will solve those problems, too.

It’s our job to fight that line of thinking, to make sure we’re making informed decisions.

On a less grandiose scale, there’s another issue, at least for me, personally.

I distrust shiny things. And you should, too!

When I started working in the field of information visualization at IBM Research with Martin Wattenberg and Fernanda Viégas, I noticed something: visualization demos were universally well received.

At first I thought this was great. And then I realized it was kind of terrible.

When this happens, you can’t distinguish good work from bad. And, more importantly, you can’t distinguish an idea that will work from an idea that will fail.

That enthusiasm is noise, not signal.

But cynicism and pessimism aren’t the answer here. Pessimists don’t build the future.

I still firmly agree with Erika Hall, but Helen Keller is right, too.

So how do we reconcile these things?

So, back to the time machine.

It’s April 20, 1961, a month before JFK addresses Congress. The President sends something across LBJ’s desk.

No, it’s not a Jules Verne novel. It’s something much more pragmatic.

It’s a memo.

It’s only a page long, and it asks pointed questions about where we are, what is practical, and what can be done to accelerate progress.

Do we have a chance of beating the Soviets? Are we working 24 hours a day? Are we achieving necessary results?

These kinds of focused questions are a great start, but we also have other tools at our disposal.

Fiction can help us here, this time in its form, not its content. It can help us to clarify our own thoughts, help us test our ideas before we build them, and help our teams stay focused on the things that matter.

First, let’s talk about fiction for thinking.

We all have our methods for coming up with new ideas. It can be difficult to keep those ideas grounded and relatable in the early stages, though.

How do you stay concrete and specific before you even know what form your idea will take?

I draw inspiration from objects that encode possibility.

Like Sun Ra’s business card.

Can you imagine Sun Ra handing this to you? You’d probably have no idea what Afrofuturist avant garde jazz sounds like, but you’d certainly realize you were in for something unlike anything you’d heard before.

Not to mention the brilliant copy. Oh my god, you’re right, I’ve been buying old sounds this whole time!

Or consider the Golden Record that’s carried on Voyager.

We shot this into space 37 years ago, and it’s still going.

The fact that it’s a record makes it concrete and relatable, but it’s not really about that. It’s an object that encodes the possibility of intelligent life beyond Earth, an object that makes us think about what that might mean.

So how do we help our designs embody possibility? How do we think about our ideas in a concrete way before we know what form they take?

Fortunately, we don’t have to be as clever as these objects. We can use existing fictional forms to help us.

The power of fiction lies in its ability to embody the specific, the concrete, the everyday, without having to create the entire universe it inhabits.

Even though we’re usually creating software in the end, we also write narratives, craft storyboards, and make movies. This is useful for our own thought process, not just for broader communication.

I’ll show some examples of those later, but I’d like to highlight one technique here that I find particularly sharp.

Werner Vogels, from Amazon, calls it working backwards. He says that the first step in your product definition process should be writing the press release.

You could view this as the most utilitarian science fiction possible.

It has the nice properties of forcing a customer focus, and making the experience and the high level advantages the first things you clarify, leaving the details of the solution for later.

As he says, this is about achieving clarity of thought, prior to the building stage.

So this is helpful, but it’s just the beginning. Let’s get back to Ray Bradbury.

He might as well have been talking about design when he said this.

In this context, I consider “possible” to mean an idea that actually succeeds and finds its place in the world, not just the question of technical feasibility.

Assuming we can figure out how to build it, is it the right thing to build?

So how do we actually know if something is possible or not?

I’m pretty sure that sufficiently advanced design is indistinguishable from clairvoyance, but we don’t have that option available to us.

So what do we do instead? We simulate the future.

This has the advantages of being more practical than both clairvoyance and time travel, and it’s way cheaper than building the thing in its entirety.

Test the future you’re imagining before it’s real.

We have a number of existing techniques in this realm.

Paper prototyping is a great way to do this. Alexandra Holloway made this paper prototype for an advanced document editor concept we were working on.

By walking a potential user through a real task with a paper interface, we can simulate future use. It makes the real work and real context unavoidably present before we write a single line of code. We can spot problems in the assumptions of the interface and how it fits with existing workflow.

We can also step back even further by using storyboards.

These are storyboards made by Garrett Johnson, also in our group, to evaluate some concepts around collaborative engineering models.

We use these storyboards to make sure we’re understanding the problems we’re trying to solve, and to see if our high level approaches make sense.

Storyboards can help our potential users to think about a specific context that’s familiar to them, and to imagine a very particular scenario and how it might change with new tools.

And then there’s the Wizard of Oz technique. Our friend L. Frank Baum is back!

This is an ingenious technique where you build out part of the software, but leave some element to be controlled manually.

A classic example is testing software with voice input. Instead of writing a speech recognition system, you can simply create keyboard shortcuts for the various behaviors. One person, the wizard, listens for the speech keywords and presses the corresponding button.

Since it’s the experience of using it that we want to test, it’s fine (and actually preferred) to fake major elements of the software.

The last category is fiction for leading, to help focus and motivate.

This is Nichelle Nichols. She sang with Duke Ellington and Lionel Hampton, but I think it’s safe to say you know her from Star Trek.

If you were to guess that Star Trek influenced a huge number of NASA employees, including astronauts, you would be right. And that influence shaped the future.

After the first season, she was going to leave the show. Martin Luther King, Jr. encouraged her to stay.

He said, “You’re an image for us. We look on that screen and we know where we’re going.”

But Nichelle Nichols was more than a broadcast from the future. She also worked as a recruiter for NASA.

She recruited Guion Bluford and Sally Ride.

And Judith Resnik, and Ronald McNair.

And Charlie Bolden, who is currently the head of NASA.

But you don’t have to write or act in a television show to lead using fiction.

Another tried and true technique is the Audacious Goal.

This starts as fiction, and may even be somewhat arbitrary, but it can be quite effective.

Here are a few examples:

Or, to be more accurate:

Now, JFK knew exactly what he was doing with this statement. As he said, we do these things

“…not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skill.”

And so to this list, I’ll add, immodestly:

So let’s talk about the role of fiction in the development of OnSight.

Let’s rewind one last time.

Let’s go back to August 2012, right about the time of the Curiosity landing.

It was an exciting time. In the midst of this excitement, Jeff Norris started to write a piece of science fiction.

And this image went with it:

While the idea was partially inspired by the Holodeck, this image was inspired just as much by Chesley Bonestell.

Bonestell was an artist and space enthusiast who started creating images like this one in the 1940s.

His images transport you to a first person view of a possible future, and inspired a whole generation of scientists.

(There’s a crater named after him on Mars.)

Once we heard about the HoloLens, the fiction got slightly more specific, and we pitched the project. As part of that pitch, Garrett Johnson drew up some storyboards.

You put on an augmented reality visor, and Mars comes into view in your office.

You walk away from your desk, wandering around in the terrain as if it were on Earth.

You select new potential targets, and collaborate with your colleagues.

These storyboards were a critical step in communicating the potential of the project. We got the green light, and OnSight moved forward.

The initial development goal was a demo, in the true sense: a focused, scripted experience meant to convey the potential of a future platform. Not yet a product, this version still gave you the experience of working on Mars.

There’s no way for me to show you here what that demo felt like. Instead, I’ll show you some quotes from some of the science team members who saw it firsthand.

And this is where your skeptic’s ears should be perking up.

That one word: could.

Could transform. Now we need to make that happen. We need it to work for real, in context.

Honestly, the team lost some momentum after the demo. That laser focus was gone, replaced with the burden of building a complete product.

How could we regain our focus? Back to fiction.

Jeff wrote a new narrative, still relatively brief, but getting into the actual workflow of our still-fictional tool.

Now we could focus again, even though we still didn’t know how everything would look or behave.

The next breakthrough came from Laura Massey, one of our collaborators at Microsoft. It was her idea to track our progress in the document itself.

Green means we’ve completed it, yellow is in progress, red means we haven’t started, and gray means it’s just part of the narrative and doesn’t imply features.
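The same idea can be sketched in a few lines: tag each sentence of the fiction with a status, then tally how much of the envisioned workflow actually exists yet. The tagged sentences below are invented stand-ins, not the real OnSight narrative.

```python
# Tracking progress by color-coding a narrative document: each sentence
# of the fiction carries a status, and a quick tally shows how far the
# real product is from the story. Sentences here are illustrative only.

from collections import Counter

STATUSES = {"green": "done", "yellow": "in progress",
            "red": "not started", "gray": "narrative only"}

tagged_narrative = [
    ("The scientist puts on the headset and Mars fades into view.", "green"),
    ("She walks up the hill to inspect tomorrow's drive path.", "yellow"),
    ("She flags a target and it syncs to the planning tool.", "red"),
    ("Sunlight rakes across the crater rim.", "gray"),
]

tally = Counter(status for _, status in tagged_narrative)
for status, label in STATUSES.items():
    print(f"{label}: {tally.get(status, 0)}")
```

The point isn’t the script; it’s that the fiction itself becomes the backlog, so the team’s progress is always measured against the experience they set out to build.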

This got us back on track.

Now I’d like to show you something special. It’s a third person capture of a group of scientists using OnSight in a collaborative session for the first time.

The design has changed a bit since then, and there are some artifacts from the third person capture, but this is worth sharing because it captures a key moment.

Sometimes, when you build a product, you get to see people use it in a way that is unexpected, and even superior to your intention.

Not only did this happen, but we were lucky enough to record it when it happened.

So here we are on Mars.

You’ll see a bunch of targets near the rover. This nearby workspace is where we anticipated most of the work happening. The scientists started by talking about different targets and planning activities close to the rover.

Then they started talking about a “drive direction” Mastcam observation. Mastcam is the rover’s high resolution color camera, and “drive direction” means that they’re capturing images of a possible drive path for the upcoming day.

In this video, you can see them planning a new grid of images, looking over the capture area.

Then, the first unexpected thing happens: someone walks right up into the Mastcam grid. Immediately, we start hearing exclamations: “Oh! I didn’t realize how big that area was!”

But then, the really surprising thing happens.

“I wonder what we’ll see when we get there.”

Someone walks right past the Mastcam footprint, and onto the hill.

From that crest, they see where the rover will be on the next day.

You can see that it’s lower resolution, but we still have data. That’s because we have orbital imagery and elevation of the whole planet, thanks to the Mars Reconnaissance Orbiter.

With OnSight’s integrated terrain, seeing from that perspective is as easy as walking up a hill. (Or teleporting, if you’d rather.)

So where are we now? Well, we’re back to Building 264.

We’re in the middle of our first beta test — the first time members of the science team can use OnSight in their own offices, on their own time.

While we have plenty of improvements to make, the short version is that it’s working. People are getting a richer experience of the Mars terrain, they’re seeing new things, and they have increased confidence in their decisions.

But while we’re designing for space exploration, these techniques can be used in any domain.

So let’s keep using fiction, not just to be inspired by the ideas and the stories, but also by the methods.

They allow us to stand on that hill and see where we’re going.

And let’s do our best to make the right things, and to be pragmatic futurists, whether we’re designing for space or Earth.