Unveiling the Dynamic Universe with Neural Radiance Fields: A Journey into Black Hole Tomography

Ansh Mittal
7 min read · Apr 27, 2023


This article is the first in a series about Neural Radiance Fields and their applications in Astronomy.

DALL·E 2 image for the prompt “Universe as observed across the event horizon of a medium sized black hole, quantum decoherence, gravitational lensing with colors on accretion disk using Neural Rendering with Neural Radiance Fields in 3D Computer Vision”

In the exciting world of astronomical research, one of the most fascinating areas of study centers around black holes. These elusive celestial bodies, known for their immense gravitational pull, have captivated scientists and the public alike. Recent advancements in artificial intelligence and computer vision have opened up new avenues for exploration and understanding, particularly in the field of black hole tomography. This blog delves into one such advancement: Neural Radiance Fields and their application to study the dynamic environments around black holes. Drawing from a number of recent studies and technological advancements, we will journey through the extraordinary science of black hole tomography and its groundbreaking contributions to our understanding of the universe.

Earlier this year, scientists studying supermassive black holes at the centers of elliptical galaxies uncovered a cosmological coupling between the increasing mass of central black holes and the phenomenon known as Dark Energy [1] (also referred to as quantum vacuum energy, a possible driver of the Universe's accelerating expansion), excluding the zero-coupling case at 99.98% confidence. This discovery prompted me to contemplate how AI and Computer Vision can be instrumental in determining such unknown or unobservable relationships.

As a passionate reader of astronomy, computer vision, and computer graphics, I was thrilled to explore the possibilities opened by new advancements in reconstructing the 3D emissive structures surrounding black holes (such as the matter in a black hole’s accretion disk), presented in the research “Gravitationally Lensed Black Hole Emission Tomography” [2]. These 3D emissive structures can help us more closely estimate the masses and accretion disks of the supermassive black holes mentioned above, providing a way to validate such results. In recent years, the Event Horizon Telescope (EHT) has provided groundbreaking images of black holes (such as Sgr A* and M87*), opening a new frontier for understanding these enigmatic objects. In the context of this research, emission tomography maps the 3D distribution of light emitted by the matter surrounding a black hole, such as its accretion disk and X-ray jets.

Gravitational lensing is a phenomenon where the gravity of a massive object, like a black hole, bends light from a distant source, causing the light to follow a curved path.
The Event Horizon Telescope is a global network of radio telescopes that work together to capture high-resolution images of black holes.
Emission Tomography refers to a technique that enables researchers to reconstruct the distribution of emitted radiation, such as light or X-rays, in a three-dimensional space.

Fig 1. The authors formulate and present a novel tomographic approach: recovering the emission distribution of flares in orbit around a black hole using observations captured from a single viewpoint over time. [IMAGE SOURCE: [2]]

The Black Hole Neural Radiance Fields (BH-NeRF) model, as proposed by the authors of this paper [2], paves the way for delving deeper into the dynamic environments around black holes, such as their accretion disks. The image and measurement formation of the forward model draws from previous work on Neural Radiance Fields while utilizing the concepts of gravitational lensing and EHT measurements to create a comprehensive model for reconstructing the 3D emission around black holes. At the heart of this approach is a “forward” model whose emission-dynamics component describes evolution over time via a continuous emission function:

e(t, x) = e₀(Rᵪ,ᵩ x)

where Rᵪ,ᵩ is the rotation matrix of angle φ about axis χ, and the position- and time-dependent angle is:

φ(t, r) = t·ω(r)

where r is the distance of the emitting matter from the black hole’s center, t is the time, and ω(r) is the angular velocity at radius r. The model then relies on gravitationally lensed ray tracing to integrate the 3D emission along curved rays, forming a 2D image. For N×N pixels, the vector I(t) represents the (discretized) image plane:

I (t) = [p¹(t), p²(t), …, pᴺ×ᴺ(t)]

The model computes the intensity along the ray parameter s (ray paths in General Relativity are 4D: time plus 3D space), given the ray path for the nᵗʰ pixel, Γₙ = (t(s), x(s)):

pⁿ(t) = ∫<sub>Γₙ</sub> e(t, x) ds
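The two equations above can be sketched in a few lines of NumPy. This is a toy illustration, not the paper's implementation: `e0`, `omega`, and the straight-line ray samples are stand-ins chosen by me (the paper traces pre-computed curved geodesics and uses a neural network for e₀), and the rotation is implemented via Rodrigues' formula.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula: rotation by `angle` about the unit vector `axis`."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def emission(t, x, e0, axis, omega):
    """e(t, x) = e0(R_{axis, phi} x), with phi(t, r) = t * omega(r)."""
    r = np.linalg.norm(x)
    phi = t * omega(r)
    # Map the query point back to its t = 0 position, then sample e0 there.
    return e0(rotation_matrix(axis, phi) @ x)

def pixel_intensity(t, ray_points, ds, e0, axis, omega):
    """p^n(t): numerically integrate e(t, x) along the sampled ray path.
    (Straight-ray stand-in for the curved geodesic Gamma_n in the paper.)"""
    return sum(emission(t, x, e0, axis, omega) for x in ray_points) * ds
```

Because ω(r) decreases with radius (e.g. Keplerian, ω ∝ r^(-3/2)), inner points rotate farther than outer ones for the same t, producing exactly the shearing of hotspots shown in Fig 2 below.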

Fig 2. 2D Illustration of an emission hot-spot orbiting around a black hole and the resulting 1D image plane projection. As time progresses, t₀ → t₁, the spot is sheared due to the inner radii moving faster than the outer radii. [IMAGE SOURCE: [2]]

Hence, by accounting for the gravitational lensing effect, the BH-NeRF method can accurately model how light is bent in the vicinity of a black hole, thereby helping to reconstruct the 3D emissive structures around it. Further, by modeling the orbital motion of the emitting matter with a continuous emission representation, this approach demonstrated marked improvements over existing techniques used in astronomy. A measurement model describing how EHT measurements relate to images then completes the image and measurement formation model. The complex visibilities measured by the EHT are represented by:

y(t) = FₜI(t) + ε

where Fₜ is the 2D discrete-time Fourier transform (DTFT) matrix containing the frequency components sampled by the EHT at time t, and ε is Gaussian-distributed thermal measurement noise whose magnitude depends on the sensitivities of the telescope pairs in the EHT.
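The measurement model can be sketched as follows. This is a simplified illustration of y(t) = FₜI(t) + ε, assuming a small set of sampled (u, v) frequencies and a single noise scale for all baselines (in practice each EHT baseline has its own sensitivity, and the sampled frequencies change as the Earth rotates):

```python
import numpy as np

def dtft_matrix(freqs_uv, n):
    """Rows: the (u, v) spatial frequencies sampled by telescope-pair
    baselines. Columns: the n*n pixels of the image plane."""
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    pix = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (n*n, 2)
    phase = -2j * np.pi * (freqs_uv @ pix.T) / n       # (m, n*n)
    return np.exp(phase)

def measure(image, freqs_uv, noise_sigma, rng):
    """y = F @ I + eps, with complex Gaussian thermal noise eps."""
    F = dtft_matrix(freqs_uv, image.shape[0])
    eps = noise_sigma * (rng.standard_normal(len(freqs_uv))
                         + 1j * rng.standard_normal(len(freqs_uv)))
    return F @ image.ravel() + eps
```

The key point is that the EHT does not measure the image I(t) directly: it measures a sparse set of its Fourier components, which is what makes the reconstruction an ill-posed inverse problem.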

Fig 3. A map of the EHT stations: sites active in 2017 and 2018 (shown with connecting lines and labeled in yellow), sites in commission (green), and legacy sites (red). [IMAGE SOURCE: [3]]

BH-NeRF represents the emission as a neural network, with a positional encoding that uses exponentially increasing sinusoid frequencies. The loss function incorporates both the EHT measurements (ground truth for the 2D image plane) and the gravitational-lensing effects discussed earlier for 3D emission tomography; Fig 4 shows the full schematic of this approach. The specific details of the BH-NeRF model and its objective function are discussed in the paper [2].
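A minimal sketch of the NeRF-style positional encoding referred to above, assuming the standard formulation with frequencies 2ᵏ (the exact number of frequencies and scaling used in BH-NeRF may differ; see the paper [2]):

```python
import numpy as np

def positional_encoding(x, num_freqs=8):
    """Map low-dimensional coordinates to sinusoids at exponentially
    increasing frequencies 2^k, so the MLP can fit high-frequency detail."""
    x = np.asarray(x, dtype=float)
    freqs = 2.0 ** np.arange(num_freqs)            # 1, 2, 4, ..., 2^(L-1)
    angles = np.pi * np.outer(freqs, x).ravel()    # (num_freqs * dim,)
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

Without this encoding, a plain MLP on raw 3D coordinates tends to learn only smooth, low-frequency functions; the sinusoidal features let it resolve sharp emission structure such as compact hotspots.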

Fig 4. BH-NeRF models an initial 3D emission e₀ as a continuous function using an MLP (parameterized by θ). The input to the network is the 3D coordinate x (transformed by a positional encoding). The orbit dynamics, parameterized by a rotation axis, dictate how the initial emission evolves: e₀(x) → e(t, x) (Sec. 3.1). BH-NeRF solves for θ and χ jointly using a physically motivated loss that accounts for material orbit and gravitational lensing effects. EHT measurements constrain the optimization over θ and χ. [IMAGE SOURCE: [2]]

The authors also carry out a series of experiments to showcase the effectiveness of the BH-NeRF method in various scenarios. Their results reveal that BH-NeRF outperforms alternative emission representations and is robust to model mismatch, even when Gaussian noise perturbs the assumed velocity profile.

Fig 5. (Left) Comparison of BH-NeRF with two alternative representations. The image displays the recovered emission for three experimental setups where the ground-truth emission increases in complexity from one to four to eight hotspots (four flaring up before t = 0). In all approaches, emission is recovered from image-domain measurements (projection plane indicated in green) and visualized at time t = 0. (Right) Simulation results for three experiments with different rotation axes and emission patterns. The top two experiments show recoveries of Gaussian emission hotspots at two times. The third experiment (bottom) shows the recovery of 3D digits, illustrating the flexibility of this approach in modeling arbitrary 3D emission. [IMAGE CREDIT: [2]]

However, every scientific breakthrough comes with limitations that, in turn, lead to new breakthroughs in the way we understand the world around us. The BH-NeRF method discussed by the authors is not free of such limitations and assumptions. It relies on assumed values for the black hole’s spin and mass (used to pre-compute the curved ray trajectories and the velocity profile for ray tracing), on a Keplerian dynamics model, and on the absence of new flares during the observation window. Addressing these limitations may open up new possibilities for future research and pave the way for further discoveries in black hole research. For instance, future work could backpropagate through the gravitationally lensed ray tracing itself, obviating the need for explicit knowledge of the black hole’s spin and mass.

In conclusion, the BH-NeRF method represents a breakthrough in our ability to study black holes and their environments. By leveraging a forward model that integrates gravitational lensing, emission dynamics, and EHT measurements, this novel approach marks a significant improvement over existing techniques. As scientists gather more data from the EHT and new generations of 3D computer vision models are released, our understanding of black holes will continue to grow. Neural Rendering and Computer Vision are becoming crucial tools in our exploration of the cosmos, and methods like BH-NeRF promise both to deepen our understanding of these mysterious celestial objects and to inspire further innovation across Astronomy, Artificial Intelligence, Computer Vision, and Machine Learning. As we continue to explore the Universe, we must develop and refine such tools to probe its remaining mysteries.

In the years to come, we can expect even more fascinating discoveries and breakthroughs as researchers continue to develop advanced methods like BH-NeRF. By working at the intersection of Technology, Artificial Intelligence, and Astronomy, we can push the boundaries of human knowledge and unlock the secrets of our Universe.

ACKNOWLEDGEMENTS

A special thanks to the authors of the paper [2], who allowed me to write this blog about their work.

REFERENCES

[1] Farrah, D., Croker, K. S., Zevin, M., Tarlé, G., Faraoni, V., Petty, S., … & Weiner, J. (2023). Observational evidence for cosmological coupling of black holes and its implications for an astrophysical source of dark energy. The Astrophysical Journal Letters, 944(2), L31.

[2] Levis, A., Srinivasan, P. P., Chael, A. A., Ng, R., & Bouman, K. L. (2022). Gravitationally lensed black hole emission tomography. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19841–19850).

[3] Akiyama, K., Alberdi, A., Alef, W., Asada, K., Azulay, R., Baczko, A. K., … & Ramakrishnan, V. (2019). First M87 event horizon telescope results. II. Array and instrumentation. The Astrophysical Journal Letters, 875(1), L2.



Ansh Mittal

USC Grad | AI/ML/CV Engineer | Astronomy Enthusiast | Reading and Following Astronomy and Physics News