Introducing Astrape — Volume 2/3

Sude @ Vorticity
Published in Vorticity · Sep 18, 2024

In the first part of this series, we introduced Astrape, the world’s fastest and highest-resolution AEM inversion software, built on well-established scientific principles such as the moving-footprint technique for domain decomposition and finite volume methods for simulation. In this second installment, we focus on demonstrating Astrape’s processing capabilities. In the final part, we will demonstrate how this acceleration allows us to carry out highly valuable, accurate inversions.

A GPU-based inversion algorithm for modern geophysics

Over the last decade, advancements in chip manufacturing, particularly in Graphics Processing Units (GPUs), have significantly enhanced computational capabilities. Originally designed for graphical calculations, GPUs have proven exceptionally effective for non-graphic, embarrassingly parallel problems due to their inherent parallel structure. Astrape harnesses these modern computational advancements, enabling rapid processing of survey data. As Moore’s law scaling comes to an end, computational scientists have turned to these massively parallel architectures to push the boundaries of scientific computing speeds.

CPU Benchmark

To benchmark Astrape against existing CPU-based codes, we reference a 2018 inversion of a 2015 time-domain SkyTEM survey conducted in the northeastern corner of Western Australia. This case involved six survey lines with a total of 1,177 transmitter/receiver pairs. The forward meshes consisted of approximately 12,000 cells, while the inversion mesh contained around 3.9 million cells, with the smallest cell measuring 7m x 7m x 3.5m. The inversion took approximately 48 hours to complete on 48 cores of an Intel Xeon E5-2690 v3 processor (2.60 GHz) with 256 GB of memory.

GPU Benchmark

For the GPU benchmark, we used a comparable setup to the CPU result. The inversion was performed with 1,194 source-receiver pairs, each associated with a local mesh of 12,320 cells. The global inversion mesh covered a 1 km x 3 km area with 100m line spacing, sampling source-receivers every 25m. This inversion was processed using an NVIDIA DGX A100 server equipped with 8 Tensor Core A100 GPUs and 640GB of GPU memory. This hardware is readily available on major cloud platforms like AWS, Azure, and Google Cloud, which are designed to meet the stringent security requirements of highly sensitive operations.

In our example, we perform 5 iterations of the inversion process. For each forward and gradient calculation, we solve the quasi-static Maxwell’s equations on every 3D local mesh, discretizing time into 261 time steps. We use a projected conjugate gradient Gauss-Newton algorithm for the nonlinear optimization.
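The core of such a scheme can be sketched in a few lines. The NumPy snippet below shows a single Gauss-Newton model update solved with conjugate gradients; it is a minimal illustration under our own assumptions, not Astrape’s implementation, which additionally projects onto bound constraints and never forms the sensitivity matrix explicitly. All names here (`J`, `r`, `beta`, `m_ref`) are ours.

```python
import numpy as np

def gauss_newton_cg_step(J, r, beta, m, m_ref, n_cg=20, tol=1e-10):
    """One (unprojected) Gauss-Newton update solved with conjugate gradients.

    Hypothetical sketch: J is the sensitivity matrix, r the data residual
    (predicted minus observed), beta the regularization weight, m the
    current model, and m_ref the reference model.
    """
    def H(v):
        # Gauss-Newton Hessian applied to a vector: (J^T J + beta I) v
        return J.T @ (J @ v) + beta * v

    g = J.T @ r + beta * (m - m_ref)  # gradient of the objective
    dm = np.zeros_like(m)             # model update, solving H dm = -g
    res = -g.copy()                   # CG residual (dm starts at zero)
    p = res.copy()
    rs = res @ res
    for _ in range(n_cg):
        Hp = H(p)
        alpha = rs / (p @ Hp)
        dm += alpha * p
        res -= alpha * Hp
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return dm
```

Because the Hessian is only ever applied matrix-free through `H(v)`, each CG iteration reduces to forward/adjoint products, which is what makes the method amenable to GPU parallelism across local meshes.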

Scaling: Leveraging GPU Architecture for Efficient Inversion

In this example, we selected a small section of the full survey to perform an inversion comparable to the CPU benchmark. One of the primary advantages of a GPU-based approach is the ease with which the software can scale. The architecture of GPU hardware is inherently suited for distribution across many servers, similar to the architectures used in training large neural networks. Due to the design of our algorithms, the inversion time depends only on the local mesh size and the number of local meshes, rather than the global mesh size, allowing us to scale distribution both effectively and rapidly.
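As a toy illustration of why runtime tracks the number and size of local meshes rather than the global mesh, consider spreading the benchmark’s source-receiver pairs across the eight GPUs. The round-robin scheduler below is our own simplification, not Astrape’s actual distribution logic; only the pair and device counts come from the text.

```python
def distribute_pairs(n_pairs, n_devices):
    """Round-robin assignment of source-receiver pairs to devices.

    Each pair carries its own small local mesh (moving footprint),
    so pairs are independent units of work.
    """
    buckets = [[] for _ in range(n_devices)]
    for pair_id in range(n_pairs):
        buckets[pair_id % n_devices].append(pair_id)
    return buckets

# Benchmark numbers from the text: 1,194 pairs on 8 A100 GPUs.
buckets = distribute_pairs(n_pairs=1194, n_devices=8)
per_device = [len(b) for b in buckets]
# Work per device scales with pairs-per-device x local-mesh cost,
# independent of the global inversion mesh size.
print(per_device)  # each device holds 149 or 150 pairs
```

Adding servers simply adds buckets: doubling the device count roughly halves the pairs per device, which is the scaling behavior described above.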

To achieve a comparable result in a similar time frame using a CPU-based algorithm would require over 9,200 CPU cores running in parallel, a configuration that is rarely practical and would be prohibitively expensive.
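The arithmetic behind this figure can be reconstructed from the CPU benchmark. The only assumption, which we make here for illustration, is the GPU wall time: the >9,200-core claim implies a run of roughly 15 minutes.

```python
# Back-of-the-envelope check of the ">9,200 CPU cores" claim.
cpu_core_hours = 48 * 48           # 48 cores for 48 hours = 2,304 core-hours
gpu_wall_hours = 0.25              # assumption: ~15-minute GPU run (inferred)
cores_needed = cpu_core_hours / gpu_wall_hours
print(cores_needed)  # 9216.0 -> "over 9,200 cores"
```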

Why do we care about acceleration?

This demonstration highlights how a GPU-based approach not only achieves faster inversion times but also offers more efficient scaling. This capability significantly benefits mineral exploration, enabling faster turnaround times for complex survey inversions, thus meeting urgent exploration deadlines.

Check back next Wednesday for the third and final case study, where we will explore how these advanced capabilities can enhance the value of voxel-based inversions through model-space exploration and uncertainty quantification.

References

  • McMillan, M., Haber, E., & Marchant, D. (2018). Large scale 3D airborne electromagnetic inversion: Recent technical improvements. ASEG Extended Abstracts, 2018, 1. https://doi.org/10.1071/ASEG2018abT6_1F.
  • Haber, E., & Schwarzbach, C. (2014). Parallel inversion of large-scale airborne time-domain electromagnetic data with multiple OcTree meshes. Inverse Problems, 30(5), 055011. https://doi.org/10.1088/0266-5611/30/5/055011.
  • Cox, L. H., Wilson, G. A., & Zhdanov, M. S. (2010). 3D inversion of airborne electromagnetic data using a moving footprint. Exploration Geophysics, 41(4), 250–259. https://doi.org/10.1071/EG10003.
