When Neural Networks saw the first image of a Black Hole.

Anuj shah (Exploring Neurons)
Analytics Vidhya
Published in
6 min read · Apr 19, 2019

On April 10th, scientists and engineers from the Event Horizon Telescope team achieved a remarkable breakthrough in their quest to understand the cosmos by unveiling the first image of a black hole. This further strengthens Einstein's theory of general relativity — "massive objects cause a distortion in space-time, which is felt as gravity".

Well, I am not a physicist or an astronomer who can explain all this in detail, but like me there are millions of people who, despite working in other fields, are fascinated by the cosmos and especially by black holes. The first image of a black hole has sent waves of excitement all over the world. I am a deep learning engineer who mainly works with convolutional neural networks, and I wanted to see what AI algorithms make of the black hole picture — this blog is about that.

This excerpt from The Epoch Times describes a black hole — black holes are made up of "a great amount of matter packed into a very small area," mostly formed from "the remnants of a large star that dies in a supernova explosion." Their gravitational fields are so strong that not even light can escape. The image of the M87 black hole is shown below. The picture is explained very well in the Vox article — How to make sense of the black hole image, according to 2 astrophysicists.

Black Hole — M87 — Event Horizon Telescope
Different Regions of a black hole. Screenshot from vox video — Why this black hole photo is such a big deal

Kindly visit that blog post, which features a cool animation explaining what the black hole picture shows — How to make sense of a black hole.

1. What does a CNN think about the black hole image

CNN — Convolutional Neural Network — is a class of deep learning algorithms that is quite efficient at recognizing real-world objects. CNNs are the best neural nets for interpreting and understanding images. These networks are trained on millions of images and have learned to recognize about 1,000 different kinds of real-world objects. I thought of showing the black hole image to a few such trained CNNs and seeing what they interpret from it — which real-world object the picture of the black hole resembles according to each network. This is not an entirely fair test, since the black hole image was generated by interpreting and integrating many different signals from space, but I just wanted to see what the networks make of the picture alone, without any other signal information.

Prediction by VGG-16 Network — Match Stick
Prediction by VGG-19 Network — Match Stick
Prediction by ResNet-50 Network — Candle

As we can see from the pictures above, the pre-trained VGG-16 and VGG-19 predict the black hole image as a matchstick, while ResNet-50 thinks it is a candle. By analogy, this makes some sense: both a burning matchstick and a candle have a dark center surrounded by strong, bright yellow light.
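For reference, the prediction step all three networks share is the same "preprocess → scores → softmax → top-k" pipeline. The sketch below implements that pipeline in plain NumPy; in the real experiment the raw scores would come from a pretrained network such as `tf.keras.applications.vgg16.VGG16(weights="imagenet")` and Keras's `decode_predictions` would map class indices to ImageNet names. The stub logits and the tiny four-class label map here are illustrative only, not VGG's actual output.

```python
import numpy as np

# Tiny illustrative label map; the real ImageNet map has ~1,000 classes.
LABELS = {0: "matchstick", 1: "candle", 2: "volcano", 3: "jack-o'-lantern"}

def softmax(logits):
    """Convert raw network scores into probabilities."""
    z = logits - logits.max()      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def top_k(logits, k=3):
    """Return the k (label, probability) pairs with the highest scores."""
    probs = softmax(np.asarray(logits, dtype=float))
    idx = np.argsort(probs)[::-1][:k]
    return [(LABELS[i], float(probs[i])) for i in idx]

# Stub scores standing in for a network's output on the black hole image.
logits = [4.0, 3.2, 0.5, 0.1]
for label, p in top_k(logits):
    print(f"{label}: {p:.3f}")
```

With a real model, the only change is where `logits` comes from: the preprocessed image is passed through `model.predict`, and the same top-k decoding gives answers like "matchstick" or "candle".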

2. What features a CNN learns from the black hole image

Another thing I did was visualize what the intermediate layers of VGG-16 were generating. Deep learning networks are called deep because they have a number of layers, and each layer learns some representation and features of the input image. So let's see what the different layers of the network learn from the input image. The results are quite beautiful.
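What a convolution layer "generates" can be sketched without any trained weights: each filter slides over the image, and its responses form one feature map. The toy below builds a bright-ring-on-dark image (a stand-in for the black hole photo) and convolves it with two hand-set Sobel-style edge filters; VGG-16's first layer does the same thing, just with 64 learned filters instead of these hand-picked ones.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "black hole": a bright ring on a dark background.
n = 21
y, x = np.mgrid[:n, :n] - n // 2
r = np.hypot(x, y)
img = np.where((r > 4) & (r < 8), 1.0, 0.0)

# Two hand-set edge filters; a real first conv layer learns its own.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
sobel_y = sobel_x.T
feature_maps = [np.abs(conv2d(img, k)) for k in (sobel_x, sobel_y)]

# The ring's edges light up strongly; the dark center stays at zero.
fmap = feature_maps[0]
print("max response:", fmap.max())
print("center response:", fmap[fmap.shape[0] // 2, fmap.shape[1] // 2])
```

This is exactly the pattern visible in the figures below: filters respond where the image has structure (the bright ring and its edges) and stay dark where it does not.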

64 feature maps of the first convolution layer of VGG16

If you look closely, you can see that the lower bright region of the black hole is a strong feature and is learned by many of the filters. Some of the interesting filter outputs are shown below, and they already look like celestial objects.

4 of the above 64 feature maps from the first convolution layer
64 feature maps of the second convolution layer of VGG-16

Let's zoom in on some of the interesting feature maps of the second convolution layer.

6 of the above 64 feature maps from the second convolution layer

Now let's go deeper and have a look at the third convolution layer.

128 feature maps of the 3rd convolution layer of VGG16

Zooming in, we see a similar kind of pattern.

8 of the above feature maps from 3rd convolution layer

Further going deeper, we get something like this

6 of the 128 feature maps from the 4th convolution layer of VGG-16

As we go deeper, we get higher-level, more abstract information; when we visualize the 7th, 8th, and 10th convolution layers, we see only high-level information.

Feature map of 7th convolution layer

As we can see, many of the feature maps are dark and only learn the specific high-level features required for recognizing that class. This becomes more prominent in deeper layers. For now, let's zoom in and look at some of the filters.

6 of the above feature maps

Now let's see the 512 feature maps of the 10th convolution layer

Feature maps of the 10th convolution layer. Most of the filters are dark and only learn the specific higher-level information required for recognizing this object

Now you can clearly see that in most of the output feature maps, only a region of the image is learned as a feature. These are the high-level features seen by the neurons. Let's look at some of the above feature maps at a slightly larger size.

Some of the feature maps of the 10th convolution layer, increased in size.

Now that we have seen what a CNN tries to learn from the black hole image, let's try passing the image to some other popular neural network algorithms: Neural Style Transfer and DeepDream.

3. Trying out Neural Style Transfer and Deep Dream on the black hole image

Neural style transfer is a clever technique that transfers the style of one image onto a source image and generates an artistic image out of it. If that doesn't make sense, the results below will elucidate the concept. I used the website deepdreamgenerator.com to generate different kinds of artistic images out of the original black hole image. The pictures are quite alluring.
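Under the hood, style transfer (in the classic Gatys et al. formulation) works by matching Gram matrices — the correlations between a layer's feature maps — of the generated image against the style image, while matching raw feature maps against the content image. A minimal sketch of those two losses, with random arrays standing in for real CNN feature maps:

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) feature maps from one CNN layer.
    The Gram matrix captures which channels fire together -> 'style'."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(gen_feats, style_feats):
    """Mean squared difference between Gram matrices."""
    g1, g2 = gram_matrix(gen_feats), gram_matrix(style_feats)
    return float(np.mean((g1 - g2) ** 2))

def content_loss(gen_feats, content_feats):
    """Mean squared difference between raw feature maps."""
    return float(np.mean((gen_feats - content_feats) ** 2))

rng = np.random.default_rng(0)
content = rng.normal(size=(8, 16, 16))  # stand-in for content-image features
style = rng.normal(size=(8, 16, 16))    # stand-in for style-image features

# Style transfer iteratively adjusts the generated image's pixels to
# minimize a weighted sum of these two losses.
print(style_loss(content, style), content_loss(content, style))
```

Identical inputs give zero loss, so the optimization pulls the generated image toward the style image's texture statistics while keeping the black hole's overall content.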

Style Transfer. It was generated using the website deepdreamgenerator.com
Style Transfer. It was generated using the website deepdreamgenerator.com

DeepDream as mentioned in Wikipedia -

DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like hallucinogenic appearance in the deliberately over-processed images.
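The core trick behind DeepDream is gradient ascent on the input image: instead of adjusting the network's weights, you nudge the pixels so that some chosen layer's activation gets stronger, amplifying whatever patterns that layer already "sees". The toy below uses a single linear filter standing in for a CNN layer, purely to illustrate the ascent loop; a real DeepDream backpropagates through a full network.

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(size=64)           # stand-in for one neuron's weights
img = rng.normal(size=64) * 0.1   # stand-in for the input image's pixels

def activation(img):
    """The quantity DeepDream maximizes (here: one neuron's response)."""
    return float(img @ w)

history = []
lr = 0.05
for step in range(50):
    grad = w                # d(activation)/d(img) for this linear "layer"
    img = img + lr * grad   # gradient ASCENT: move pixels UP the gradient
    history.append(activation(img))

# The activation grows step by step as patterns the filter "likes" are
# amplified in the image -- the essence of the dream-like look.
print(history[0], history[-1])
```

Each step adds a little more of the pattern the filter responds to, which is why over-processed DeepDream images look like the network's favorite shapes hallucinated everywhere.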

Deep Dream. It was generated using the website deepdreamgenerator.com

This video of deep dreams has quite a hallucinatory effect — Journey on the Deep Dream.

Well, that's it for now. I was quite excited to see the first picture of a black hole, hence this blog post. It may not be that useful, but the pictures generated above are totally worth it. Enjoy the pictures!

If you find my articles helpful and wish to support them — Buy me a Coffee
