When Neural Networks Saw the First Image of a Black Hole

On April 10th, scientists and engineers from the Event Horizon Telescope team achieved a remarkable breakthrough in the quest to understand the cosmos by unveiling the first image of a black hole. This further strengthens Einstein's theory of general relativity: "massive objects cause a distortion in space-time, which is felt as gravity".

Well, I am not a physicist or an astronomer who can comprehend and explain this in detail, but like me there are millions of people who, despite working in different fields, are fascinated by the cosmos and especially by black holes. The first image of a black hole has sent a wave of excitement all over the world. I am a deep learning engineer who mainly works with convolutional neural networks, and I wanted to see what AI algorithms think about the black hole picture. This blog post is about that.

This excerpt from The Epoch Times describes black holes: they are made up of "a great amount of matter packed into a very small area," mostly formed from "the remnants of a large star that dies in a supernova explosion." Their gravitational fields are so strong that even light can't escape. The pictured M87 black hole is shown below. The picture is very well explained in the Vox post "How to make sense of the black hole image, according to 2 astrophysicists."

Black Hole — M87 — Event Horizon Telescope
Different regions of the black hole. Screenshot from the Vox video "Why this black hole photo is such a big deal"

Kindly visit this blog post, which has a cool animation explaining why the black hole picture looks the way it does: How to make sense of the black hole.

1. What a CNN thinks about the black hole image

CNNs (convolutional neural networks) are a class of deep learning algorithms that are quite efficient at recognizing real-world objects; they are the best neural nets for interpreting and understanding images. These networks are trained on millions of images and learn to recognize the 1,000 object categories of ImageNet. I thought of showing the black hole image to a few such trained CNNs and seeing what they interpret from it: which real-world object the picture of the black hole resembles according to the network. This is not entirely fair, since the black hole image was generated by interpreting and integrating many different signals from space, but I just wanted to see what the interpretation would be from the picture alone, without any other signal information.
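A minimal sketch of how this can be done with Keras' pre-trained models. The filename `blackhole.jpg` is a hypothetical local copy of the EHT image; swap in `VGG19` or `ResNet50` (with their own `preprocess_input`) to try the other networks.

```python
# Sketch: classify an image with a pre-trained Keras CNN.
# Assumes TensorFlow/Keras is installed; "blackhole.jpg" is a
# hypothetical local copy of the black hole picture.
import numpy as np
from tensorflow.keras.applications.vgg16 import (
    VGG16, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = VGG16(weights="imagenet")  # downloads ImageNet weights on first run

# VGG16 expects a 224x224 RGB input, preprocessed the same way as in training.
img = image.load_img("blackhole.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 (class id, name, score) tuples
```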

Prediction by VGG-16 Network — Match Stick
Prediction by VGG-19 Network — Match Stick
Prediction by ResNet-50 Network — Candle

As we can see above, the pre-trained VGG16 and VGG19 predict the black hole image as a matchstick, and ResNet50 thinks it's a candle. By analogy this makes some sense: both a burning matchstick and a candle have a dark center surrounded by strong, bright yellow light.

2. What features a CNN learns from the black hole image

Another thing I did was visualize what the intermediate layers of VGG16 were generating. Deep learning networks are called deep because they have many layers, and each layer learns some representation and features of the input image. So let's see what the different layers of the network learn from the input image. The results are quite beautiful.
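One way to pull out these intermediate activations is to build a second Keras model that shares VGG16's input but ends at the layer of interest. A hedged sketch, again assuming a hypothetical local `blackhole.jpg`:

```python
# Sketch: grab the feature maps of an intermediate VGG16 layer.
# Assumes TensorFlow/Keras; "blackhole.jpg" is a hypothetical local file.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing import image

base = VGG16(weights="imagenet")
# "block1_conv1" is the first convolution layer; swap in "block2_conv1",
# "block3_conv1", ... to probe deeper layers.
probe = Model(inputs=base.input, outputs=base.get_layer("block1_conv1").output)

img = image.load_img("blackhole.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

maps = probe.predict(x)
print(maps.shape)  # (1, 224, 224, 64): one 224x224 feature map per filter
```

Each of the 64 channels of the output is one of the feature maps shown in the grids below.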

64 feature maps of the first convolution layer of VGG16

If you look closely, you can see that the lower bright region of the black hole is a strong feature and is being learned by many of the filters. Some of the interesting filter outputs are shown below, and they already look like celestial objects.

4 of the above 64 feature maps from the first convolution layer
64 feature maps of the second convolution layer of VGG16

Let's zoom in on some of the interesting feature maps of the second convolution layer.

6 of the above 64 feature maps from the second convolution layer

Now let's go deeper and have a look at the third convolution layer.

128 feature maps of the 3rd convolution layer of VGG16

Zooming in, we see a similar kind of pattern.

8 of the above feature maps from 3rd convolution layer

Going deeper still, we get something like this:

6 of the 128 feature maps from the 4th convolution layer of VGG16

As we go deeper we get higher-level, more abstract information, and when we visualize the 7th, 8th, and 10th convolution layers we see only high-level information.

Feature maps of the 7th convolution layer

As we can see, many of the feature maps are dark and are only learning the specific high-level features required for recognizing a class. This becomes more prominent in the deeper layers. For now, let's zoom in and look at some of the filters.

6 of the above feature maps

Now let's see the 512 feature maps of the 10th convolution layer.

Feature maps of the 10th convolution layer. Most of the filters are dark, learning only the specific higher-level information required for recognizing this object

Now you can clearly see that in most of the output feature maps only a region of the image is being learned as a feature. Those are the high-level features seen by the neurons. Let's look at some of the above feature maps at a larger size.

Some of the feature maps of the 10th convolution layer, enlarged

Now that we have seen what a CNN tries to learn from the black hole image, let's try passing it to some other popular neural network algorithms: Neural Style Transfer and DeepDream.

3. Trying out Neural Style Transfer and DeepDream on the black hole image

Neural style transfer is a clever technique that transfers the style of a style image onto a source image and generates an artistic picture out of the two. If that doesn't make sense, the results below should make the concept clear. I used the website deepdreamgenerator.com to generate different kinds of artistic images from the original black hole picture. The results are quite alluring.
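Under the hood, the classic approach (Gatys et al.) captures "style" as the correlations between a layer's feature maps, summarized in a Gram matrix. A minimal NumPy sketch of that core idea, using random arrays as stand-ins for real CNN activations:

```python
# Sketch: the Gram matrix at the heart of neural style transfer.
# Style is captured as channel-to-channel correlations of a layer's
# activations; matching Gram matrices matches "style".
import numpy as np

def gram_matrix(feature_maps):
    """feature_maps: (height, width, channels) activations of one layer."""
    h, w, c = feature_maps.shape
    flat = feature_maps.reshape(h * w, c)  # each column is one flattened map
    return flat.T @ flat / (h * w)         # (c, c) correlation matrix

# Toy activations standing in for a real conv layer's output.
style = np.random.rand(56, 56, 64)
generated = np.random.rand(56, 56, 64)

# Style loss for this layer: mean squared difference of Gram matrices.
# The full algorithm minimizes this (plus a content loss) over the pixels
# of the generated image.
style_loss = np.mean((gram_matrix(style) - gram_matrix(generated)) ** 2)
print(style_loss)
```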

Style Transfer. It was generated using the website deepdreamgenerator.com
Style Transfer. It was generated using the website deepdreamgenerator.com

DeepDream, as described on Wikipedia:

DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev which uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like hallucinogenic appearance in the deliberately over-processed images.
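The "enhance patterns" step is gradient ascent on the input pixels: instead of adjusting weights to reduce a loss, DeepDream adjusts the image to increase a chosen layer's activation. A toy NumPy sketch of just that loop, with a random linear "layer" standing in for a real CNN:

```python
# Sketch: the core DeepDream loop is gradient *ascent* on the input
# image, nudging pixels to amplify whatever a layer already responds to.
# Toy stand-in: a random linear "layer" K instead of a real CNN.
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(32, 64))   # toy "layer": 32 filters over 64 pixels
img = rng.normal(size=64)       # toy "image" (a flattened 8x8 patch)

def activation(x):
    return 0.5 * np.sum((K @ x) ** 2)  # the quantity DeepDream maximizes

before = activation(img)
for _ in range(100):
    grad = K.T @ (K @ img)                              # d(activation)/d(pixels)
    img += 0.01 * grad / (np.abs(grad).mean() + 1e-8)   # normalized ascent step

print(activation(img) > before)  # True: the "dream" amplified the pattern
```

The real program does the same thing with a deep network's gradients, usually across several image scales, which is what produces the hallucinogenic textures.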

Deep Dream. It was generated using the website deepdreamgenerator.com

This video of DeepDream in action has quite a hallucinatory effect: Journey on the Deep Dream.

Well, that's it for now. I was quite excited to see the first picture of a black hole, hence this blog post. It may not be that useful, but the pictures generated above are totally worth it. Enjoy the pictures!