AI x Space: The Future Of Space Exploration

Here’s what can be achieved when we put together two of the most out-of-this-world technologies—artificial intelligence and space tech



Photo by David Menidrey on Unsplash

I’m sure we’ve all heard crazy theories about artificial intelligence robots taking over the Earth. But as crazy as it sounds, AI is taking over space, too.

Artificial intelligence is now being integrated into so many parts of space exploration as NASA starts to partner with companies like IBM, Intel and more to build more AI solutions.

It’s expected that AI solutions will facilitate space exploration in many ways in the coming years. The ideas and expectations around this new technology are endless, but let’s look at four different ways in which AI is expected to change space exploration.

Before we get into the applications, let’s take a quick look at what AI is!

A Quick Look at Artificial Intelligence

I’m sure you’ve heard the term artificial intelligence (AI) thrown around everywhere over the past few years. We’ve heard of its applications from recommending new Netflix shows to early cancer detection. But despite what many think, AI is not a magical cure to humanity’s every problem. So what is it?

Here’s how John McCarthy, a founder of this field, put it in one of his papers:

It is the science and engineering of making intelligent machines, especially intelligent computer programs.

In our endeavour as human beings to give machines “human intelligence” (a term that itself requires discussion, since there’s some disagreement about what intelligence truly means, but that’s a topic for another time), we’ve been able to get them to plan, learn, reason, problem-solve, and make decisions.

Now’s the time to talk about a few important distinctions in the AI field. Most of what we’ll be talking about with regards to space tech would be considered Artificial Narrow Intelligence (ANI). This type of AI is the only one that is currently in use; it’s very good at the task it’s designed to do but it can’t go beyond that.

Some examples of ANI being used include Siri (or other virtual assistants), autonomous vehicles, facial recognition software, Google recommendation boxes and search result rankings, email spam filters, and services that recommend what you’ll likely enjoy (like Netflix, Spotify, and Amazon recommendations).

The next category—Artificial General Intelligence (AGI)—is where things get really interesting. If you thought Amazon “knowing” exactly what product you’d like is cool, then prepare to have your mind blown.

AGI would have the same intellectual capacities as a human and be able to apply its skills to any problem, just like a human being. The concept may seem scary to many, and it definitely brings up countless ethical considerations. But for now, no one has been able to build an AGI.

Artificial Narrow Intelligence vs. Artificial General Intelligence. Image by Author.

The last category is Artificial Super Intelligence (ASI), where machines would surpass the intelligence of humans. This is purely hypothetical but if you’re interested in learning more, I recommend checking this blog post out.

One last distinction before we talk about space applications: the difference between AI and machine learning (ML). These terms are often used interchangeably, but that’s incorrect!

Put simply, AI is an umbrella term that encompasses ML. We already discussed what AI is so what’s ML?

Machine learning algorithms can “learn” from the data that’s fed to them and recognize patterns within it. In other words, we humans aren’t explicitly programming the rules into these algorithms; they pick the rules up from examples.
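To make that concrete, here’s a tiny sketch of learning from labelled examples instead of hand-written rules: a one-nearest-neighbour classifier in plain Python. The data and labels are completely made up for illustration.

```python
# A minimal sketch of "learning from data": a 1-nearest-neighbour
# classifier. We never write a rule like "big and heavy means rock";
# the algorithm infers the pattern from labelled examples.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# (size_cm, weight_kg) -> "rock" or "pebble" -- toy labelled data
training_data = [
    ((30.0, 12.0), "rock"),
    ((25.0, 9.0), "rock"),
    ((2.0, 0.1), "pebble"),
    ((3.0, 0.2), "pebble"),
]

print(nearest_neighbour(training_data, (28.0, 10.0)))  # → rock
print(nearest_neighbour(training_data, (2.5, 0.15)))   # → pebble
```

Notice that nothing in the code mentions what makes something a rock; that comes entirely from the examples.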

Some examples of current applications of ML include image and speech recognition, algorithms that can generate new songs, cancer detection through medical imaging, and much more.

With all of that background out of the way, it’s time to look at how AI intersects with space exploration.

Autonomous Rovers

What Are They?

One of the most crucial applications of AI is creating autonomous rovers. These rovers can run without direct human navigation.

As Marco Pavone, the Director of the Autonomous Systems Lab at Stanford University, explains, one of the biggest ways in which AI can help space exploration is through these autonomous systems. The goal is to move from automated machines to autonomous ones.

Automated machines are able to complete repetitive, predictable tasks very well. Think of machines that assemble cars in factories as an example.

Autonomous machines can make decisions on their own even when faced with an unexpected situation. Here, think of self-driving cars aka autonomous vehicles.

Pavone’s work focused on landing and moving rovers on smaller space bodies like comets and moons rather than planets, so that’s what we’ll be focusing on too. Something interesting to discuss here, before AI comes into play, is the robotics.

Movement on these smaller bodies brings an interesting challenge: their gravitational pull is much weaker than that of larger bodies. This means using wheels, as we often do with Mars rovers for instance, isn’t an option. What is? Hopping!

Here’s Where AI Comes In

There are three major capabilities that AI enables to help these rovers move and make good decisions.

  1. Dynamics: “How actions influence future states”
  2. Localization: “Estimating the rover’s position on the surface”
  3. Planning: “Choosing good actions given a set of mission objectives”


Let’s first talk about dynamics. The rovers Pavone worked on, appropriately named hedgehogs since they appear to have spikes, use internal flywheels that propel them up and forward. Of course, this is very hard to test on Earth since our planet has a much stronger gravitational pull, as you’ve probably noticed.

This is what a hedgehog looks like. Image by NASA/JPL-Caltech/Stanford.

Using traditional methods to model the complex interactions between the movement of the rover, its interaction with the environment/terrain, and how this action influences the next is very difficult. AI makes it much easier to model such things.
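To get a feel for why these dynamics are so unusual, here’s a toy back-of-the-envelope calculation (not NASA’s actual model!) comparing the range of the same ballistic hop on Earth versus on a small, comet-like body. The gravity figure for the comet and the launch speed are rough illustrative assumptions, and the model ignores terrain, tumbling, and everything that makes the real problem hard.

```python
import math

# Toy model: treat a hop as an idealized ballistic arc (no drag, flat
# terrain) and see how far the same push carries the rover under
# different gravities.

def hop_range(speed_m_s, angle_deg, gravity_m_s2):
    """Horizontal distance of an idealized ballistic hop."""
    angle = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * angle) / gravity_m_s2

push = 0.5    # m/s launch speed (illustrative)
angle = 45.0  # degrees

print(f"Earth (g=9.81 m/s^2):   {hop_range(push, angle, 9.81):.3f} m")
print(f"Comet (g=0.0001 m/s^2): {hop_range(push, angle, 0.0001):.0f} m")
```

The same gentle push that moves the rover a couple of centimetres on Earth would fling it kilometres on a very low-gravity body, which is exactly why naive Earth intuition (and Earth testing) breaks down.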


Next up is localization. As we’ll talk about a bit more in the “Space Navigation” section, we need AI, and more specifically ML, so these rovers can estimate where they are on the surface. The same is true of self-driving cars; they use ML to figure out their location so they know where to go.


Finally, planning. This is the most interesting capability in my opinion, and a rather complicated one too. The rover needs to be able to plan how best to achieve the mission objectives given its environment, or as Pavone puts it:

What is the best hop to perform, given a set of mission objectives, an estimate of the rover’s location, an understanding of its capabilities, and a model of the world?

The rover is able to run simulations of how different courses of action will or will not lead it to its mission objectives, something it can’t achieve without the use of AI.
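Here’s a heavily simplified sketch of that idea: simulate each candidate action, score the outcome against the objective, and pick the best. The world model, candidate hops, and target below are entirely made up, and a real planner would simulate far richer dynamics.

```python
# Hypothetical "plan by simulating" sketch (not Pavone's actual code):
# the rover scores a few candidate hops against a mission objective --
# here, ending up close to a target position -- and picks the best one.

def simulate_hop(position, hop):
    """Toy world model: a hop just shifts the position."""
    return (position[0] + hop[0], position[1] + hop[1])

def plan_best_hop(position, target, candidate_hops):
    """Simulate each candidate hop and choose the one ending nearest the target."""
    def distance_sq_to_target(hop):
        new_pos = simulate_hop(position, hop)
        return (new_pos[0] - target[0]) ** 2 + (new_pos[1] - target[1]) ** 2
    return min(candidate_hops, key=distance_sq_to_target)

rover_at = (0.0, 0.0)
target = (3.0, 4.0)
hops = [(1.0, 0.0), (2.0, 3.0), (0.0, 1.0)]

print(plan_best_hop(rover_at, target, hops))  # → (2.0, 3.0)
```

Swap the toy `simulate_hop` for a learned dynamics model and the single objective for a full set of mission goals, and you have the shape of the real problem.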

Space Navigation

Though we have GPS to navigate roads on Earth, there is no such technology for space. To tackle this issue, researchers from NASA and Intel collaborated in 2018 on AI-based tools for exploring and navigating planetary surfaces.

Just think about how difficult life was before Google or Apple Maps was widely available or even existed—or maybe you don’t even remember. It becomes clear just how important this shift could be for space navigation.

Astronaut Assistants: Assistive Free-Flyers

Microsoft’s Cortana and Apple’s Siri are two examples of popular, well-known AI assistants. But did you know that researchers predict that in the coming years, “AI-based assistants” will help astronauts with tasks like retrieving spacecraft knowledge, improving space missions, and more?

They can also help alleviate some of the astronauts’ stress by completing routine tasks such as unloading cargo, and even help with servicing and repairing space technology.

Shown in orange is one type of assistive free-flyer called SPHERES. Image by NASA.

Another task these free-flyers could help with is clearing space “junk,” or debris. Space junk is one of the biggest challenges faced during space exploration, and AI solutions are actively being designed to combat this problem by disintegrating the junk and thereby avoiding collisions.

Analyzing Telescope Images

You may remember having or seeing those small telescopes as a kid, or maybe the ones you asked your parents for every year. Well, those are not the telescopes we’re talking about.

We’re talking about telescopes like NASA’s James Webb Space Telescope, which has a primary mirror 21.3 feet across and will be launching in just a month (Dec. 2021). It will allow astronomers to observe the nearby universe and even determine the chemical compositions of exoplanets.

The data we gather from these telescopes is noisy: there’s a lot of it, and only a small fraction contains the signal astronomers actually care about. Finding the important data is difficult and time-consuming, but it’s something AI can help speed up.

A certain ML model could even tell astronomers the certainty of its prediction. In other words, the model could say “Here’s my prediction but I’m not so sure that it’s correct” (in more mathematical terms of course), which is very valuable.
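As a rough illustration (with made-up data, nothing like a real astronomy pipeline), here’s a tiny classifier that reports a confidence alongside its prediction by looking at how its nearest neighbours vote:

```python
from collections import Counter

# Toy sketch of a model that says how sure it is: a k-nearest-neighbour
# classifier whose confidence is the fraction of neighbours that agree
# with the predicted label.

def predict_with_confidence(train, query, k=3):
    """Return (label, confidence), confidence being the neighbour vote share."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    label, count = votes.most_common(1)[0]
    return label, count / k

# (brightness, size) -> "star" or "galaxy" -- invented labelled detections
train = [
    ((0.9, 0.1), "star"), ((0.8, 0.2), "star"), ((0.85, 0.15), "star"),
    ((0.3, 0.9), "galaxy"), ((0.2, 0.8), "galaxy"), ((0.25, 0.85), "galaxy"),
]

label, conf = predict_with_confidence(train, (0.87, 0.12))
print(f"{label}, confidence {conf:.2f}")  # → star, confidence 1.00

label, conf = predict_with_confidence(train, (0.6, 0.5))
print(f"{label}, confidence {conf:.2f}")  # → star, confidence 0.67
```

The second query sits near the boundary between the two classes, so the model still makes a prediction but flags that it’s less certain — exactly the kind of signal an astronomer can use to decide what deserves a closer look.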

3D Modelling Asteroids in Real-Time

Another way NASA has been employing ML is in creating 3D models of asteroids, a task that takes quite a long time when done by humans. As Bill Diamond, president and CEO of the SETI Institute — an organization partnering with NASA — said:

An adept astronomer with standard compute resources, could shape a single asteroid in one to three months.

Using ML, astronomers and engineers were able to bring this number down to four days. But they weren’t done. Right now, in an observatory in Puerto Rico, asteroid modelling is being done nearly in real time.

This topic was brainstormed by Brianna Clark. This blog post was researched, written, and edited by Velika Freesia, Asmita Malakar, and Parmin Sedigh.