The Moon made twice, at home.

About making 3D ray-traced model of the Moon, in Python, using real measurements.

1. Traveling Moon.

This summer a giant, 7m-diameter Moon installation by Luke Jerram traveled around European cities. It also visited Poland, where it was presented outdoors at the Copernicus Science Centre in Warsaw. The event had a great atmosphere, with people relaxing, meeting friends, and taking photos. I was also there, to explore lunar craters. The detail level of this 7m construction means that 5km of the real Moon surface is projected onto one centimeter of the model. When I approached it closely, I had the impression that below the centimeter scale there are not many details, although it looked great from a distance. The installation made me curious about what data is available to make a Moon model of this size. I decided to make a data-viz project at RnD Team Studio and share the results here and on Behance.

Here is what I have found: the publicly available resources, from NASA obviously, are incredibly detailed. The highest resolution of the elevation map is 59 meters! The actual colors of the surface are also available, at a high resolution of 480m. Shadows of the surface features are removed there, and the brightness is corrected for the inclination with respect to the light (Sun) direction. So, this is all you need to make a stunning 3D model for yourself. If anything, you should worry whether it is not too much.
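As a quick sanity check on those figures, the expected size of a whole-sphere map at 118m resolution can be estimated from the Moon's mean radius (the equirectangular layout and exact dimensions are my assumption, not taken from the data description):

```python
import math

R_km = 1737.4                                 # Moon mean radius, km
res_m = 118.0                                 # elevation map resolution, m

# equirectangular map: width covers the full equator, height is half of that
width_px = 2 * math.pi * R_km * 1e3 / res_m
height_px = width_px / 2

size_gb = width_px * height_px * 2 / 1024**3  # 16-bit grayscale = 2 bytes/px
print(f"{width_px:.0f} x {height_px:.0f} px, ~{size_gb:.1f} GB")
```

This lands at roughly 8 GB, consistent with the download size mentioned above.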

In this tutorial, I am going to make the Moon model using these public domain data and a few Python packages: Numpy for basic data preprocessing, OpenCV for 2D image scaling, Matplotlib for displaying images, and PlotOptiX for ray-tracing in 3D. Code presented in this tutorial is available on GitHub, in Jupyter notebooks. You can run notebooks on your local computer, or remotely, e.g. on the Google Cloud Platform for which I’ll give installation instructions on GitHub.

The tutorial aims to let you create a Moon as detailed as your hardware can stand.

First, I made the Moon from a standard triangular mesh, with nodes at positions corresponding to the sphere surface modified according to the elevation data. This approach turned out to be memory-consuming, and I was not satisfied with close-ups made of heavily downscaled data. Therefore I tried another way and used the elevation map to determine the sphere surface displacement on the fly, while the image is computed. This worked perfectly. In the tutorial, I'll show both approaches and explain the differences in the algorithms used behind the scenes.

2. Collect data.


Surface elevation.

The Moon surface elevation model that I found on the web is based on data from the Lunar Orbiter Laser Altimeter (LOLA). Data is provided by the LOLA Science Team in 118m resolution (8GB), and by the LOLA/Kaguya Team in 59m resolution (22GB), if you'd like to go for a crazy detailed Moon (and you have computing resources appropriate for such a data volume). Data is stored as a single image file, in 16-bit grayscale TIFF format, so you'll need a good tool to read it into memory and do some preprocessing. The Python code to grab this file is simple (note, it may take a while to get these gigabytes):

import urllib.request

elevation_file = "Lunar_LRO_LOLA_Global_LDEM_118m_Mar2014.tif"
# url is the download address (elided in the original):
urllib.request.urlretrieve(url, elevation_file)

Surface color.

Next, let's get the Moon's surface color. If you look at the sky, the Moon does not seem to have many colors, maybe just brighter and darker areas. But this is due to Earth's atmosphere. There are colors to take into account if we want photo-realistic images, and some features were even surprising to me. I did not expect to see bright, plaster-like spots on some craters.

The color data comes from the Lunar Reconnaissance Orbiter Camera (LROC). It covers the area between +/-70 degrees latitude. I completed the polar regions with lower-resolution textures and prepared a single file. Color details are most significant when viewed from the light source direction. With side lighting, shadows of the terrain quickly bring out much more detail, so we are fine with the ~480m resolution of the color map. Let's download the file from my Google Drive with the PlotOptiX utility function:

from plotoptix.install import download_file_from_google_drive

color_file = "moon_color_10k_8bit.tif"
# fid is the Google Drive file id (elided in the original):
download_file_from_google_drive(fid, color_file)

Star map.

The last image, the 4𝜋 map of stars in celestial coordinates, is for decoration. We'll use it in the background, as the environment map. It is also real data and, like the rest of the resources, available on NASA pages. Credit goes to NASA/Goddard Space Flight Center Scientific Visualization Studio. A few lines to download the map:

import urllib.request

starmap_file = "starmap_16k.tif"
# url is the download address (elided in the original):
urllib.request.urlretrieve(url, starmap_file)

You can find the full data-downloading code in the notebooks, section 1. Before we proceed to the data preprocessing, let's discuss the two options for modeling the Moon surface in 3D.

3. Surface shading algorithms.

The most common solution for modeling an irregular object in 3D is to describe it with a mesh of triangles, small enough to give the impression that the surface is smooth. The shading algorithm needs to know the surface orientation relative to traced rays and light sources. This orientation is described by the surface normal vector. The object would look edgy if its normals were constant over patches of the mesh. To make the surface look smooth across triangle edges, and hide the mesh structure, one can interpolate the orientation of the surface normals (k) over the triangle area, based on normal vectors pre-calculated at vertices (v). The normal at each vertex, in turn, can be taken as the mean of the normals (n) of the triangles attached to that vertex. This kind of shading is usually flawless, unless you look at the object edges too closely, where the triangulation may be visible.
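A minimal sketch of that interpolation, assuming plain barycentric weighting of the three vertex normals (an illustration of the idea, not PlotOptiX's actual shader code):

```python
import numpy as np

def shading_normal(vertex_normals, bary):
    """Weighted average of the three vertex normals (v), renormalized."""
    k = bary @ vertex_normals
    return k / np.linalg.norm(k)

# three unit normals, slightly tilted off the +Z axis:
v = np.array([[0.1, 0.0, 1.0],
              [0.0, 0.1, 1.0],
              [-0.1, -0.1, 1.0]], dtype=np.float32)
v /= np.linalg.norm(v, axis=1, keepdims=True)

# at the triangle center all barycentric weights are equal:
k = shading_normal(v, np.array([1/3, 1/3, 1/3]))
print(k)  # a unit vector, close to [0, 0, 1]
```

Moving the weights toward one vertex smoothly tilts the shading normal toward that vertex's normal, which is what hides the flat triangles.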

Triangle normals (n), normals at vertices (v), and interpolated normals (k) on the mesh.

That is what the vast majority of 3D software does. The Turing architecture of NVIDIA GPUs has the calculation of ray-triangle intersections encoded in the hardware, which speeds up ray tracing enormously. And that's why my first choice was to translate the elevation data into a mesh. But let's see how much memory is needed for one point on the Moon surface: 3x 4-byte single-precision values for the mesh node coordinates, again 3x 4-byte values for the normal vector at each node, and 8x 4-byte integer values for indexes encoding which triangles are using the node. To cover the mesh with colors we also need 2D texture coordinates for each node, or RGB values assigned directly to nodes (so another 2 or 3 numbers). That's a lot; one can squeeze only about 15M mesh points into 1GB of GPU memory this way… and there must be some space left for the stars in the background and the output image buffers… I had to heavily downscale the original elevation map.
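The arithmetic above, written out explicitly (the field list is my reading of the paragraph, not PlotOptiX's actual memory layout):

```python
# per-node memory, all 4-byte values, as listed above:
position   = 3 * 4   # x, y, z coordinates
normal     = 3 * 4   # pre-calculated normal vector
faces      = 8 * 4   # indexes of triangles sharing the node
texture_uv = 2 * 4   # 2D texture coordinates

bytes_per_node = position + normal + faces + texture_uv   # 64 bytes
nodes_per_gb = 1024**3 // bytes_per_node
print(f"{bytes_per_node} B/node -> {nodes_per_gb / 1e6:.1f}M nodes per GB")
```

That is ~16-17M nodes per GB in the ideal case, close to the ~15M quoted above once some overhead is accounted for.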

Normals (k) calculated at intersection points (p), from the surface displacement map.

The other technique applicable in our case is called surface displacement. In this approach, I start with an ideal sphere, and the 2D elevation map is used to position the Moon surface at any given point relative to that sphere. The normal vector (k) is calculated on the fly, using surface positions at small angular distances around the given point (p). This saves a lot of memory: it takes just one 4-byte, single-precision value per data point, which means about 260M points / 1GB. Taking into account the color data, projected on the surface independently from the displacement calculations, I managed to display more than 20x more data! But it does not come for free. The ray-surface intersection algorithm is a long loop, progressing in small steps from the ideal sphere to the intersection with the displaced surface. The ray tracing frame rate drops a lot, but the result is worth the slowdown.
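A toy 2D sketch of such an intersection loop: step along the ray until it dips below a "displaced" circle, then report the hit. The bump function here is made up for illustration, and real shaders refine the hit with bisection and adaptive step sizes, which I omit:

```python
import numpy as np

def displaced_radius(angle):
    # toy "elevation map": a unit circle with 2% bumps
    return 1.0 + 0.02 * np.sin(8 * angle)

def march(origin, direction, steps=10000, dt=1e-3):
    """Step along the ray until it goes below the displaced surface."""
    p = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(steps):
        r = np.linalg.norm(p)
        if r <= displaced_radius(np.arctan2(p[1], p[0])):
            return p            # hit: at or inside the displaced surface
        p = p + dt * d
    return None                 # miss

hit = march(origin=[2.0, 0.1], direction=[-1.0, 0.0])
print(hit)  # a point at radius ~1.01, on a bump of the circle
```

The cost is visible even in the toy: hundreds of steps per ray instead of one hardware-accelerated ray-triangle test, which is why the frame rate drops.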

4. Prepare data.

Let’s start with the elevation data. OpenCV did not manage to read it for me, which I attribute to the 8GB size of the file. The PlotOptiX utils, fortunately, can handle such sizes:

from plotoptix.utils import read_image
elev_src = read_image(elevation_file)

8GB can be too much, even for modern gaming GPUs. Let's downscale the map by a factor of 3 for the surface displacement algorithm, or more, e.g. 20x, for the mesh approach. Since the data is signed short integers, we'll assign this type and normalize the values to the [-1; 1] range.

import numpy as np

elev_src.dtype = np.int16
scale = 1. / np.iinfo(np.int16).max

downscale = 20
h = elev_src.shape[0] // downscale
w = elev_src.shape[1] // downscale
elevation = elev_src.reshape(1, h, downscale, w, downscale).mean(4, dtype=np.float32).mean(2, dtype=np.float32).reshape(h, w)
elevation *= scale # in-place, save memory!
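The reshape/mean pair above computes block averages without any Python loops; here is the same trick on a tiny toy array, just to show the mechanics:

```python
import numpy as np

a = np.arange(16, dtype=np.float32).reshape(4, 4)
d = 2                                  # downscale factor
h, w = a.shape[0] // d, a.shape[1] // d

# split each axis into (blocks, block size), then average the block axes:
small = a.reshape(1, h, d, w, d).mean(4).mean(2).reshape(h, w)
print(small)
# [[ 2.5  4.5]
#  [10.5 12.5]] -- each output pixel is the mean of one 2x2 block
```

The `dtype=np.float32` arguments in the real code above keep the intermediate sums in single precision, which matters for an 8GB input.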

Now the map is ready to show in the figure:

import matplotlib.pyplot as plt

plt.figure(1, figsize=(9.5, 5.5))
imgplot = plt.imshow(elevation)

The surface color file is smaller, so it works well with the OpenCV package:

import cv2

color_src = cv2.imread(color_file)

# convert from BGR to RGB, 32-bit floating point:
color_src = color_src[..., ::-1].astype(np.float32)
# adjust brightness range, to your taste:
color_src = 0.2 + (0.75 / 255) * color_src

I rescale the color map to the mesh size, so it is easier to feed data to PlotOptiX (skip this step in the displacement map approach):

color_map = cv2.resize(color_src, dsize=elevation.shape[::-1], interpolation=cv2.INTER_CUBIC).astype(np.float32)

# clamp out-of-range interpolation results:
_ = np.clip(color_map, 0, 1, out=color_map)

And display the map:

plt.figure(2, figsize=(9.5, 5.5))
color_plot = plt.imshow(color_src)

The star map is treated in the same way, with downscaling if needed to save memory:

# target for downscaling from the original 16k width:
star_map_width = 8192

star_src = cv2.imread(starmap_file)
star_src = star_src[..., ::-1].astype(np.float32)
star_src *= 1 / 255

if star_map_width < star_src.shape[1]:
    star_map_height = int(star_src.shape[0] * star_map_width / star_src.shape[1])
    star_map_size = (star_map_width, star_map_height)
    star_map = cv2.resize(star_src, dsize=star_map_size, interpolation=cv2.INTER_CUBIC).astype(np.float32)
    np.clip(star_map, 0, 1, out=star_map) # clamp out-of-range interpolation results
else:
    star_map = star_src

plt.figure(3, figsize=(9.5, 5.5))
# plot sqrt to boost faint lights:
color_plot = plt.imshow(np.sqrt(star_map))

5. Setup ray tracing.

The Moon, ray-traced in natural colors.

Now that all data is ready, let’s prepare a 3D scene, and compare the two modeling approaches.

The Moon elevation values start at ~9.1km below the mean radius and reach ~10.8km above it. The mean radius of the Moon is 1737km. That gives an elevation spread of about 1.15% of the mean radius. I'll use a somewhat higher value of 2%, so the terrain features are more prominent.
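As a quick check of those numbers (values taken from the paragraph above):

```python
low, high = -9.1, 10.8   # km, elevation extremes from the LOLA data
mean_radius = 1737.0     # km, Moon mean radius

spread = (high - low) / mean_radius
print(f"elevation spread: {100 * spread:.2f}% of the mean radius")  # ~1.15%
```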

In both variants, I align the Moon in the same way and put the camera in the same position. A single, quite distant, spherical light mimics the Sun. The star map is configured as the background texture.

The ray tracer's initial setup is common to the two approaches. This part looks slightly different in the notebooks prepared for running on a remote server, where instead of the Tkinter GUI you'll see a headless NpOptiX object updating the output in a figure inlined between cells.

from plotoptix import TkOptiX
from plotoptix.utils import make_color_2d

rt = TkOptiX()

# accumulate up to 50 frames to remove the noise:
rt.set_param(max_accumulation_frames=50)

# add tonal correction:
exposure = 0.9; gamma = 2.2
rt.set_float("tonemap_exposure", exposure)
rt.set_float("tonemap_gamma", gamma)
rt.add_postproc("Gamma")

# star map in the background:
rt.set_background(star_map, gamma=gamma)

# setup camera with a good point of view:
rt.setup_camera("cam1", cam_type="DoF",
                eye=[-13, -22, 3],
                target=[0, 0, 0],
                up=[0, 0.9, 0.5])

# add the Sun to light up the scene:
rt.setup_light("sun", pos=[-50, 0, 0], color=60, radius=6)

From this point, follow path A or B.

A. Triangular mesh.

Let's calculate the 3D positions of the mesh nodes. I use the parametric formula for a spherical surface, with the radius at each node modulated by the elevation, rescaled to vary at the level of 2% of the sphere radius.

def sphere(u, v, r):
    sinu = np.sin(u)
    x = sinu * np.cos(v)
    y = sinu * np.sin(v)
    z = np.cos(u)
    return r * np.array([x, y, z], dtype=np.float32)

iu = np.arange(elevation.shape[0], dtype=np.float32)
iv = np.arange(elevation.shape[1], dtype=np.float32)
V, U = np.meshgrid(iv, iu)
U *= np.pi / elevation.shape[0]
V *= 2 * np.pi / elevation.shape[1]

rmin = np.min(elevation)
rmax = np.max(elevation)
rv = rmax - rmin

# rescale "in place": 0.98 + (0.02/rv)*(elevation - rmin)
elevation -= rmin
elevation *= 0.02 / rv
elevation += 0.98
elevation *= 10 # max radius

S = sphere(U, V, elevation).T

Add the mesh nodes from the S array to the scene. PlotOptiX will create the required structures: arrays connecting triangles to vertices and the corresponding normals. It will also assign colors from the map to the nodes (remember? the color map was rescaled to the elevation map shape in the data preparation section).

Color correction applied here preserves the original values that would be affected by the gamma correction otherwise.

Try disabling the pre-calculation of normals; you'll see the mesh structure in close-ups.

rt.set_surface("moon", S,
               c=np.swapaxes(make_color_2d(color_map, gamma=gamma), 0, 1),
               make_normals=True)

And… start ray tracing:

rt.start()

Close-up with the mesh normals interpolation off (left) and on (right).

Here is a summary of the mouse and key actions for the Tkinter GUI. Have fun, and close the ray tracing at the end:

rt.close()

B. Surface displacement.

Let’s start with an ideal sphere. I explicitly set the diffuse material, as I am going to modify it right after.

rt.set_data("moon", pos=[0, 0, 0],
            r=10, # sphere radius, matching the mesh variant
            mat="diffuse", # actually, a default material
            geom="ParticleSetTextured", # spherical, texture-ready shape
            geom_attr="DisplacedSurface", # use displacement-ready shader
            u=[0, 0, 1],
            v=[-1, 0, 0])

Use the color map as a texture on the diffuse material:

from plotoptix.materials import m_diffuse
from plotoptix.utils import make_color_2d

# upload the surface color texture to GPU, accounting for gamma correction:
rt.set_texture_2d("moon_color",
    make_color_2d(color_map, gamma=gamma, channel_order="RGBA"))

# apply the texture to the material:
m_diffuse["ColorTextures"] = [ "moon_color" ]
rt.update_material("diffuse", m_diffuse)

Actually… the scene is ready to start. You should see a smooth, spherical Moon at this point.

rt.start()

Finally, apply the displacement map to make it almost real:

rmin = np.min(elevation)
rmax = np.max(elevation)
rv = rmax - rmin

# rescale "in place": 0.98 + (0.02/rv)*(elevation - rmin)
elevation -= rmin
elevation *= 0.02 / rv
elevation += 0.98

rt.set_displacement("moon", elevation, refresh=True)
The maximum level of detail I got on a 6GB GPU with the mesh (left) and the displacement map (right) approaches.

Reminder: the mouse and key actions for the Tkinter GUI. Note, the view is more detailed, but the frame rate is lower! Close the ray tracing at the end:

rt.close()

6. Summary.

The tutorial shows how to prepare the Moon model for ray tracing in Python. The steps, from data collection, through preprocessing, up to the ray tracing setup, are covered. I'm sure you can find data to make the Earth or other planets in the same way (and I'm going to do this myself!).

I encourage you to explore the moonscape. There are many interesting regions and you can create high-quality images, printable in large formats!

If you'd like to see some more Moon pictures I've made with RnD Team for this project, you're welcome to visit the Behance page.

The Moon with artificial colors: elevation mapped to color, RdYlBu palette.

Written by: Generative and machine learning art. Data science. Particle physics background.
