Landing on Asteroids: The Value Story of Deep Learning

Taylor Jensen
5 min read · Sep 13, 2022

--

Over 18,000 hours spent annotating images. Synthetic data combined with deep learning could change this.

OSIRIS-REx Asteroid Bennu 3D Shape Model Credit: NASA/Goddard/University of Arizona

Asteroids can hold clues about the origin of the solar system and have potential commercial applications, including mining, space stations, and commerce. The asteroid Bennu was selected by NASA in 2016 as part of the OSIRIS-REx mission because it may provide information on the origins of our solar system and improve our understanding of asteroids that could impact Earth.

However, landing a probe on an asteroid can require thousands of hours of work to identify safe areas on the surface. This is even more complex when the surface is strewn with rocks ranging from the size of your fist to the size of a house or larger.

Historically, gathering the necessary image data has been difficult because of the scarcity of images annotated with rocks, the diverse nature of asteroid surfaces, and the human interpretation bias introduced by manually identifying rocks. What counts as a "rock" can mean different things to different people.

In this article, I review the value of solving this problem. In the next article in this series, I evaluate several deep learning architectures and their ability to find landing areas on the asteroid Bennu while being trained only on synthetic image data. In the end, I built an image segmentation model that outperforms the industry-standard U-Net while being only 4.5% of its size and complexity.
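To get a rough sense of what "4.5% of the size" means in parameters, here is a back-of-envelope sketch. The ~31 million parameter figure for a standard U-Net is an assumption based on the commonly quoted count for the original architecture; the exact number depends on the implementation.

```python
# Back-of-envelope for the "4.5% of U-Net's size" claim.
# The 31M parameter count for a standard U-Net is an assumption
# (commonly quoted for the original architecture); exact counts
# vary by implementation.
unet_params = 31_000_000        # assumed parameter count of a standard U-Net
fraction = 0.045                # 4.5% of U-Net's size
small_model_params = unet_params * fraction
print(f"{small_model_params:,.0f} parameters")  # → 1,395,000 parameters
```

In other words, a model in this size class would have on the order of 1.4 million parameters, small enough to be attractive for onboard spacecraft compute.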

Nightingale Boulder Count — Northern Region Credit: NASA/Goddard/University of Arizona

This is a Big Deal

The OSIRIS-REx spacecraft launched to Bennu on September 8th, 2016. The asteroid was expected to have a sandy surface, like a beach. However, when OSIRIS-REx arrived at Bennu in 2018, the surface turned out to be extremely rocky. This posed a difficult challenge for NASA. To identify potential landing sites, individual boulders across the surface of Bennu had to be counted by eye, one image at a time. To address this massive task, NASA utilized about 3,500 volunteers from the CosmoQuest Bennu Mappers citizen science campaign.

Volunteers took up to 45 minutes per image, and the top 98 volunteers each marked at least 250 images. At 45 minutes per image, that works out to at least 18,375 hours spent manually labeling rocks.
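The 18,375-hour estimate can be reproduced directly from those figures (note that the per-volunteer image count of 250 is back-computed from the stated total, so treat it as an assumption):

```python
# Reproducing the labeling-effort estimate behind the 18,375-hour figure.
# The 250 images-per-volunteer minimum is back-computed from the stated
# total and should be treated as an assumption.
volunteers = 98          # top contributors in the Bennu Mappers campaign
images_each = 250        # minimum images marked per top volunteer (assumed)
minutes_per_image = 45   # upper-bound annotation time per image

total_hours = volunteers * images_each * minutes_per_image / 60
print(total_hours)  # → 18375.0
```

And that is only the floor: it counts just the top 98 of roughly 3,500 volunteers, at their minimum image counts.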

Since the annotation process relies on collectively built data sets and repetitive manual work, it becomes apparent that machine learning can have a massive impact on the speed, quality, and effectiveness of NASA missions.

Why Synthetic Data for Asteroid Missions

You might be wondering: why use synthetic images? There are two main reasons — the accessibility and the quality of the available images.

Synthetic Images and Classification Masks, gathered from Romain Pessia’s and Genya Ishigami’s Artificial Lunar Landscape Dataset
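To make the image/mask pairing concrete, here is a minimal sketch of how a synthetic image and its classification mask relate. The arrays below are randomly generated stand-ins for real files from a dataset such as the Artificial Lunar Landscape Dataset; the labeling convention (1 = rock, 0 = safe terrain) and the thresholding are illustrative assumptions, not the dataset's actual generation process.

```python
import numpy as np

# Minimal sketch of a synthetic image / classification mask pair.
# The mask labels each pixel (1 = rock, 0 = safe terrain), and the
# rock-free fraction hints at landing suitability. The 64x64 arrays
# here are random stand-ins for real dataset files.
rng = np.random.default_rng(0)
image = rng.random((64, 64))             # fake grayscale surface image
mask = (image > 0.8).astype(np.uint8)    # fake rock mask: bright pixels = rocks

rock_fraction = mask.mean()              # fraction of pixels labeled rock
print(f"Rock coverage: {rock_fraction:.1%}")
```

A segmentation model trains on many such (image, mask) pairs, learning to predict the mask from the image alone — which is exactly what lets it flag rock-free regions on real surface photos it has never seen.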

Quality of Space Images

As you might imagine, given the relative infrequency of missions to asteroids and other celestial bodies, image quality varies greatly. This can be due to the technology available at the time of the mission, the type of mission performed (exploration, sample extraction, etc.), and the spacecraft's distance from the asteroid itself.

The NEAR Shoemaker mission even ended in 2001 with a controlled descent onto the surface of the asteroid 433 Eros. Images were taken on the way down, and the probe continued to operate after touchdown. On top of all this, imagery from a 2001-era camera on a moving probe in space, transmitted back to Earth, is not entirely conducive to machine learning.

In fact, there have been only three major successful missions to asteroids in history. This contributes to the scarcity of available images of asteroid surfaces.

Those missions are the:

  • NEAR Shoemaker mission to Eros (~2001)
  • Hayabusa mission to Itokawa (~2005)
  • Hayabusa2 mission to Ryugu (~2018–2019)

Accessibility of Space Images

While most images are openly available in NASA mission archives, not all programs and data-gathering efforts are created equal. Different asteroid missions have different goals: some aim to collect samples, take photographs, run scientific tests, study the behavior of asteroids, or some combination of these. If visual images are not of sufficient quality, they might not be published at all. All of this also depends on which space agency performed the study. Many papers are published whose datasets and images are not openly available for use.

However, the OSIRIS-REx photos of the surface of the asteroid Bennu are the highest-resolution images of a celestial body gathered thus far. With NASA manually scouring the surface of the asteroid in search of landing areas, and synthetic data readily available, the conditions are optimal to train a machine learning model and test it on the real surface of an asteroid. Hopefully, this mission sets the standard for image collection quality.

The Global Mosaic of the Surface of Bennu Credit: NASA/Goddard/University of Arizona

To see how I used images of the surface of Bennu from the USGS Astrogeology Science Center and the Artificial Lunar Landscape Dataset from Kaggle to build a model to find rock-free areas on Bennu, check out my next article in this series: Deep Learning & Synthetic Data for Asteroid Landings.

If you would like an in-depth dive, you can find my full paper here: Autonomous Rock Detection for Asteroid Missions Using Deep Learning and Synthetic Data | Northwestern University

Interested in more data science content? Follow me here on Medium or connect with me on LinkedIn.


Taylor Jensen

Data Scientist and dedicated nerd in Chicago. All views are my own. LinkedIn: https://bit.ly/3Mq2DYI