NASA’s “New Apollo Moment”: Naturally Guiding Robotic Avatars In Space Exploration

How NASA integrates video game technologies into human-guided robotic space exploration


No matter how completely technics relies upon the objective procedures of the sciences, it does not form an independent system, like the universe: it exists as an element in human culture and it promises well or ill as the social groups that exploit it promise well or ill.

— Lewis Mumford


I had the opportunity a couple of weeks ago to speak with NASA’s Jeff Norris about the agency’s ongoing development of human-robotic interfaces for space exploration. Norris heads up the Planning Software Systems Group (Planning and Execution Systems) at NASA JPL. His group is doing the most advanced (and the coolest) research in the world on how to integrate video game and consumer technologies into the technical framework of human-guided, robotic space exploration.

Norris was able to confirm that a Sony Magic Lab interactive space exploration module on the PS4 would be a “major part of the strategy that we’re pursuing in this area.” He added that many of the technologies being developed in the video game industry are highly applicable to the work being done at NASA. So what kind of research is his group at NASA JPL up to?

Norris leads a set of projects within a group known as HRS (Human-Robotic Systems), the group designing the future of human-robotic communication. One of its products is R.A.P.I.D., a communication protocol NASA has developed that establishes a consistent way for robots to communicate with the systems that control them. Norris says that as we consider the future of human space exploration, it looks more like a cooperation between humans and their robotic tools: robots supporting us in our human exploration.

The video below shows the JACO robot arm being manipulated in real time using the Xbox One Kinect. By combining the Kinect’s position tracking with the Oculus’s rotational tracking, the operator receives a first-person view. Future work will include integrating sensor array data into the scene and deploying the Robonaut 2 humanoid on the ISS with the same technology.

Courtesy NASA/JPL-Caltech
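
To make the pairing concrete, here is a minimal sketch, in Java, of the fusion step: head position from a Kinect-style tracker, head orientation from an Oculus-style headset, merged into one first-person camera pose each frame. All class and method names are hypothetical illustrations; no actual SDK calls are shown.

```java
// Minimal sketch of the sensor fusion described above: position from
// a Kinect-style skeletal tracker, orientation from an Oculus-style
// head-mounted IMU, combined into one camera pose per frame.
// All names are illustrative stand-ins, not real SDK classes.

public class PoseFusionSketch {

    /** A camera pose: position in meters plus an orientation quaternion. */
    record HeadPose(double x, double y, double z,
                    double qw, double qx, double qy, double qz) {}

    /** Take position from one sensor and rotation from the other. */
    static HeadPose fuse(double[] kinectHeadPosition, double[] riftQuaternion) {
        return new HeadPose(
            kinectHeadPosition[0], kinectHeadPosition[1], kinectHeadPosition[2],
            riftQuaternion[0], riftQuaternion[1], riftQuaternion[2], riftQuaternion[3]);
    }

    public static void main(String[] args) {
        // Fake samples standing in for live sensor reads.
        double[] kinectPos = {0.1, 1.6, 2.3};          // x, y, z in meters
        double[] riftQuat  = {0.99, 0.0, 0.14, 0.0};   // w, x, y, z
        System.out.println(fuse(kinectPos, riftQuat)); // feed this to the renderer
    }
}
```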

The expectation, Norris says, is that there will be many kinds of robots specialized for different purposes. To make mission robots easier for astronauts and mission control to operate, one of the protocols developed lets robots speak in a consistent fashion. R.A.P.I.D. was developed by NASA’s JPL, Ames and Johnson Space Centers and open-sourced, so that anyone with an interest in robotics can adapt it to their own robots. Within the Human-Robotic Systems project, Norris maintains specialized areas of interest. He is involved with the development of interfaces, the visualization technologies that will make human beings more effective when they are controlling robots and when they are interacting with the data those robots return to mission control.
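
A minimal sketch, in Java, of the idea behind such a protocol. These interfaces are hypothetical illustrations, not R.A.P.I.D.’s actual message definitions (which live in NASA’s open-source release): the point is simply that every robot reports through the same message shape, so one control station can talk to them all.

```java
// Hypothetical illustration (NOT the real R.A.P.I.D. API) of a uniform
// telemetry contract: any robot, from a rover to a humanoid, reports
// its state through the same interface.

import java.util.Map;

interface RobotTelemetry {
    String robotId();                  // e.g. "ATHLETE-1" or "R2"
    long timestampMillis();            // when the sample was taken
    Map<String, Double> jointAngles(); // joint name -> angle in radians
}

// One concrete robot's adapter. A Robonaut adapter would implement the
// same interface, and the control station wouldn't know the difference;
// that uniformity is the whole point of a consistent protocol.
class AthleteTelemetry implements RobotTelemetry {
    public String robotId() { return "ATHLETE-1"; }
    public long timestampMillis() { return System.currentTimeMillis(); }
    public Map<String, Double> jointAngles() {
        return Map.of("hip-1", 0.42, "knee-1", -0.17);
    }
}

public class ControlStation {
    public static void main(String[] args) {
        RobotTelemetry sample = new AthleteTelemetry();
        System.out.println(sample.robotId() + " @ " + sample.timestampMillis()
                + " joints=" + sample.jointAngles());
    }
}
```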

I asked Norris which specific video game technologies NASA is using in its most advanced research and development efforts. These involve some core video game technologies and some that are, as he puts it, “on the fringe” of video game tech. The list includes projects with the Oculus Rift head-mounted display and a long-term working relationship with Microsoft: NASA did software development work with the first-generation Kinect sensor before it was released to the general market. That relationship led to a number of projects, among them the Xbox Live video game Mars Rover Landing, NASA’s first console video game, released in July 2012, just before the Curiosity rover’s landing. NASA has also had some high-level discussions with Nintendo and is working with Sony’s Magic Lab, as mentioned previously.

NASA is currently working with Sixense, the company behind the STEM System wireless motion-tracking technology. Norris mentioned that NASA had been sharing data with the company, along with some of the applications his group had been working on. Sixense took some of that and adapted their STEM sensors to let an animated astronaut model walk around a Martian scene with the rover. In the video below, the STEM System’s motion tracking and control are added to the virtual Martian landscape, showing the level of immersion and interaction the user/operator can experience.

Courtesy NASA/JPL-Caltech
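
As a rough sketch of what driving an avatar from tracked motion can look like in code (hypothetical names, not Sixense’s actual SDK), each tracking frame might turn controller motion into a step along the avatar’s current heading:

```java
// Rough sketch (hypothetical; not the Sixense SDK) of walking an
// avatar through a virtual Mars scene from tracked controller motion.

public class AvatarLocomotionSketch {
    private double avatarX, avatarZ;   // avatar position on the terrain
    private double lastControllerZ;    // previous tracked depth sample, meters

    /** Each tracking frame, turn controller motion into an avatar step. */
    void update(double controllerZ, double headingRadians) {
        double stride = controllerZ - lastControllerZ;  // how far the hand moved
        lastControllerZ = controllerZ;
        avatarX += stride * Math.sin(headingRadians);   // step along the
        avatarZ += stride * Math.cos(headingRadians);   // current heading
    }

    public static void main(String[] args) {
        AvatarLocomotionSketch avatar = new AvatarLocomotionSketch();
        avatar.update(0.05, 0.0);   // two fake tracking frames
        avatar.update(0.12, 0.0);
        System.out.printf("avatar at (%.2f, %.2f)%n", avatar.avatarX, avatar.avatarZ);
    }
}
```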

Norris also mentions NASA/JPL’s work with Leap Motion, which is not, strictly speaking, a device restricted to gaming: it’s a 3D motion and gesture controller for 3D games, instructional 3D music applications, 3D design and 3D learning environments. He emphasizes that NASA/JPL’s core focus is not gaming in particular but consumer technology more broadly. These are devices that have had so much money invested in making them highly usable, and that NASA is finding quite applicable to current projects.

Another company that NASA/JPL is working with, Norris adds, is zSpace, a company that makes 3D holographic imaging displays that allow interaction with simulated objects in virtual environments. Both Leap Motion and zSpace are being tested as interface technologies for future NASA robots such as the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) Rover, pictured below in an advanced research task of “rough and steep terrain lunar surface mobility.”

ATHLETE climbs a rough hill near Meteor Crater, Arizona. Courtesy NASA/JPL-Caltech

Norris also mentions that NASA/JPL makes extensive use of PrimeSense’s 3D depth-sensing (range-sensing) technology, which he says is very similar to the sensor inside the first-generation Kinect. NASA/JPL is also a contributing member of the Google Glass development program.

Radhakrishnan, the chairman of India’s ISRO, had commented in November 2013 (to much brouhaha in the press) on how limiting comprehensive ground tests helped the Indian space agency operate on a lean, cost-effective budget for the Mangalyaan mission.

I asked Norris how feasible it would be for NASA to use virtual reality technology to conduct simulated ground tests and perhaps save money during the costly functional testing of Mars mission craft. Norris told me he remains passionate about using virtual reality in space exploration for both testing and mission execution. But he hesitates to say that VR has the potential to replace many of NASA’s spacecraft ground tests. Many of those tests deal with the performance of the physical components of the system and the ways the software, the avionics and the hardware of the vehicle interact with each other, especially in the extreme environmental conditions encountered in space and in places like Mars.

Virtual reality’s promise is to engage a human operator with a task in a way that is highly natural to them, in a way, Norris adds, that is very similar to how they engage and interact with the natural world. NASA has used virtual reality in mission rehearsals for crew members at the Johnson Space Center for many years. It’s reaching a little too far for VR technology to replace, for example, an environmental test for a spacecraft, Norris says. It is difficult to justify or to declare how much testing is enough, he adds. But, Norris points out, NASA engineers still discover unexpected things in those tests, things that have threatened and even ended missions. He cautions that those tests, while they may seem expensive and cumbersome, do have a purpose.

I asked Norris how NASA could use modular, multi-use robotic exploration techniques (as in the Modular Common Spacecraft Bus) to reach the “faster, better, cheaper” quality ideal while minimizing risks to mission astronauts and hardware. He is quick to point out several examples of “creative reuse” of mission components at NASA, such as the Phoenix Mars Mission (launched August 2007), which was very similar in many ways to the failed Mars Polar Lander mission (launched January 1999). Some of the flight-spare components from the Mars Polar Lander program were reused: pieces of hardware that were meant to be discarded flew again. Norris adds that other things about the spacecraft had changed, including many of the instruments, so the system had to accommodate the reuse of those components. One can find many other examples throughout NASA missions where engineers had to get creative in this way simply to control costs and get more done.

Norris’s specialty, however, lies in developing “ground software”; he has spent 15 years building the systems that control NASA spacecraft. In that area, his teams have embraced and directly benefited from the “modular architecture” I mentioned earlier. For example, Norris points out that a significant portion of the control software for Spirit and Opportunity, Phoenix and the Curiosity Mars Science Laboratory rover is built on top of an architecture called OSDR, based on Eclipse, a component-based framework for the Java programming language. NASA is now looking toward more web-based architectures, which again emphasize component-based design, and his team is evaluating which of them best suits its needs.

Norris reminds me of one of the philosophies floating around NASA: extract what NASA workers call “multiple-mission capabilities” (the parts of the operations systems that apply to multiple missions) into core packages, and then isolate the mission-specific code needed for a particular mission: things that might pertain to a particular instrument, or a particular destination. He adds that they have had a lot of success reducing costs with that strategy.
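
A minimal sketch, in Java, of the separation Norris describes, with illustrative names rather than NASA’s actual component APIs: mission-agnostic logic lives in a core package, and each mission contributes only small adapters.

```java
// Sketch of the "multi-mission core vs. mission-specific code"
// separation described above. Names are illustrative, not NASA's
// actual component APIs.

// Multi-mission core: planning logic shared by every mission.
interface Instrument {
    String name();
}

class ActivityPlanner {
    void scheduleObservation(Instrument instrument, long startMillis) {
        System.out.printf("Scheduling %s at t=%d%n", instrument.name(), startMillis);
    }
}

// Mission-specific code: one small adapter per instrument or
// destination, kept out of the core packages.
class MastCamera implements Instrument {
    public String name() { return "MastCam"; }
}

public class MissionDemo {
    public static void main(String[] args) {
        // Only the adapter classes change from mission to mission;
        // the planner and the rest of the core are reused as-is.
        new ActivityPlanner().scheduleObservation(new MastCamera(), 0L);
    }
}
```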

I asked Norris what he would tell the wo(man) on the street about what robots will be able to do for us in space. How does he manage to characterize the promises of robotic space exploration to the public? This is clearly the thing Norris is most passionate about: thinking about how people are going to interact with robots in the future of space exploration.

We want to go to a lot of different places. Mars is interesting, and we want to go there very much, but there are so many other places in the solar system. The ability to build a robot that is perfectly suited to a potentially very hazardous environment, one that’s going to go swimming in the rings of Saturn, or something like that. The ability to build a robot that is optimized for that task, and then to control it in a way that makes you feel like you are there, to me feels like a very powerful capability. Because here we are, able to use technologies that make us feel present in that environment, but by inhabiting a robotic avatar that is perfectly attuned to that environment. That’s pretty phenomenal.

Norris is quick to emphasize that, right now, with our existing technologies, it wouldn't be desirable to put a human being, no matter how nice their space suit, in the rings of Saturn. But NASA can put a robot there and control it in a way that makes us feel like we are right there, floating in the midst of the Saturnian rings.

Courtesy NASA and E. Karkoschka (University of Arizona)

Norris says that robotic avatars allow exploration to scale, to the point of including many more people in the experience, in the journey of exploration. He adds that he would love to see a future, as he puts it,

looking back on the 1969 Apollo moment when 600 million people sat in front of a television and watched Neil Armstrong taking his first steps on the moon. I think there was a magic about that — that was wrapped up in the fact that not only were we doing something that had never been done before, but so many people were there with us. And they were there because we found a medium, in television, that engaged them and let them feel a part of it, in a way that they had never experienced before.

Looking to the future, Norris believes that robots and the kinds of interfaces NASA is working on can deliver us a “new Apollo moment” and envisions how

We look forward to the day when we put human boots on the soil of Mars. It will be a human accompanied by robots who are supporting them. And I want a billion human beings to be standing right there beside the astronaut, inhabiting those robotic avatars, almost welcoming them to the surface of Mars. That, I think, is the promise of these technologies.

Norris adds that when we look forward to exploring beyond the solar system, to other places, one of the nice things about using robots is that we can send them in many different directions at once. He explains that even if the robots take many years to reach their destinations, we can essentially wait for each robot to arrive and then just flit, just jump between them and consume the data they are returning for us. A kind of robotic data-hopping, as it were.

Norris sees robots as “marvelous tools” for exploration: a great support to, and companion for, human space exploration, which he also finds very exciting.

I asked Norris about the SuperBall Bot Tensegrity Planetary Lander project and its space exploration robots, known as “tensegrity robots.” Norris sees this NASA Ames project as a great example of the diversity and ingenuity of the people who develop robots at NASA. He muses that he doesn't spend his time thinking up new kinds of robots; he thinks about how to drive them.

Courtesy Adrian Agogino/NASA ARC

Norris maintains that one of the things that makes his job very fun is learning that there is no limit to the ingenuity of the people who think up new kinds of robots for him to learn how to drive. The tensegrity robot is at an early stage of development and must still pass through the phases of actually locomoting, being packed for flight and being deployed on a mission. When the team reaches the point of thinking seriously about how to accomplish missions, Norris says, we had better believe he will be very excited about helping them control the robot and interact both with it and with the environment it is exploring!

I asked Norris what he would say to those who are skeptical about the great promise of robotic technology. He says that humans are marvelous explorers, and recalls that we have a great history of exploring, of simply being drawn to unknown situations. He calls attention to the amazing ability humans have to rapidly understand environments just by being in them, likening it to our experience of turning on a light in a dark room and quickly orienting to the size and configuration of that physical space:

When we think about exploring other places, part of the reason is our natural abilities. The challenge of exploring other environments, distant environments, is that we have to think about environments that are not safe: the radiation and the distances make them not appropriate places for humans to go right now. When we think about sending a robot there, if we want humans to be as effective as explorers at that distance, in that environment, as they are here on Earth, then we have to find a way to engage all those natural abilities that humans are endowed with, as effectively in this task as if they were exploring a canyon in Arizona. The way we do that, I believe, is by building interfaces that connect the features of the robot and the abilities of the robot to a human in a way that is so natural that the human’s natural abilities work to their advantage and not against them.

If we look at unimaginative human-robotic interfaces (someone staring at pictures on a screen and using a general-purpose interface like a mouse and keyboard to try to control a robot), Norris contends that those interfaces are not designed to engage the natural abilities of a human as effectively as interfaces that use virtual reality, body tracking or other natural input. What’s happening to people using those more traditional interfaces, Norris concludes, is that they must constantly convert the abstraction they see on the screen, or the abstract input they produce through a keyboard or mouse, into what is really happening on the other side.

We are trying to remove that abstraction. It’s not that we’re trying to fool them into thinking that they’re there; that’s not the point. We just want to let their natural abilities operate as if they were there. That’s what it’s about. I see this as the way we make our robotic assets something that allows us to explore space, and the distant environments we’re visiting, as naturally as we explore the places we explore on Earth.

It’s worth reminding ourselves of NASA’s continuing commitment to cutting-edge technologies for robotic space exploration. The list includes Robonaut 2, the humanoid robot with “dexterous manipulation” (the ability to do work with one’s hands with a dexterity superior to a suited astronaut’s); NASA and CSA’s Dextre, the ISS robotic handyperson that refuels satellites on the exterior of the station; and RASSOR (Regolith Advanced Surface Systems Operations Robot), the robotic moon miner designed to drive autonomously around the Moon. It can scoop and haul up to 40 pounds of regolith to a processing plant on a larger lander that, in turn, extracts water, hydrogen and oxygen from the lunar soil.

NASA’s Super Ball Bots are deployable robots that bounce, deform and roll to any location on a planetary surface during exploration missions. NASA’s second-generation Centaur 2 rover (integrated with the Robonaut R2A torso) combines robotic mobility with the world’s most advanced dexterous “mobile manipulation” system of hybrid rover/arm manipulation. R2’s climbing legs were tested in December for movement on the ISS; future features include prospecting sensors, deeper excavation implements and devices for converting planetary raw materials into usable products.

NASA’s car-sized Curiosity rover just completed a 329-foot (100.3-meter) backward drive over Martian terrain.

Courtesy NASA/JPL-Caltech

The series of nine images in the animation was taken by the rear Hazard-Avoidance Camera (rear Hazcam) on the rover as it drove over a dune spanning the “Dingo Gap” area. Curiosity has driven 937 feet (285.5 meters) on the Martian surface since the rover’s Feb. 9 dune crossing, for a total odometry of 3.24 miles (5.21 kilometers) since its August 2012 landing.


There is a great movement of researchers all over the world working to realize the dream of operating robots with “telepresence” to lead the future exploration of space. Norris speaks of “telexploration” in a 2013 von Kármán lecture: making low-cost holodecks that let every NASA scientist, and all of us who want to explore the Great Expanse, see, hear and touch other, distant worlds.

We will be accompanied by robotic explorers guided by human operators via teleoperation, remote-controlled with something like Norris’s human-machine interfaces, and we will be freely immersed in distant planetary environments. Human researchers will perceive the colors, light, sound and touch (with interactive haptic technology) of planets that are light-years away, acting naturally in them as if they were physically present.

George Bernard Shaw once said, “You see things; and you say, ‘Why?’ But I dream things that never were; and I say, ‘Why not?’”

This seems to be the modus operandi of NASA’s new way of doing business, bent on using advanced technologies that help humans scale up to the future challenges of deep space exploration. Of an agency that, by default, researches, creates, tests, deploys and quality-checks technologies for their future potential, seeking to make real what was once a figment of our cultural imagination.

I am looking forward to this cooperative, human-robot future. A new Apollo moment, of human explorers fulfilling the great technological promise on the Martian surface. Where NASA astronauts might brush the rust-colored regolith off their boots and, in the dim sunlight of the early morning mist, join their colleagues, the vanguard of NASA’s robotic explorers, standing on a red, sand-covered rocky mound. The astronauts approach their robotic avatar colleagues and say, “Do you see, my friends — we are still dreaming of tomorrow!”