Solar Event Classification using Convolutional Neural Networks

This blog post is about my recent publication, “Solar Event Classification using Convolutional Neural Networks”. The article stays at a high level and will probably not be useful to an expert, but it should be interesting as another application area of deep learning.

Over the last decade, deep learning has taken over the computer vision world. The story starts with Yann LeCun and his invention of convolutional neural networks (CNNs) (see Yoshua Bengio and Geoffrey Hinton if you want to trace back further). When LeCun invented CNNs in the 90s, the computational capacity of computers was restricted, so they could only be used for small tasks like handwritten digit recognition. As computational power increased, we became able to train deep models in a reasonable time. The first widely recognized success of deep learning was the 2012 ImageNet competition, in which Alex Krizhevsky applied a convolutional neural network to the ImageNet dataset using GPU acceleration.

Until the ImageNet success, deep learning was not as popular as it is today, but a great deal of research had already been done by groups led by Bengio, LeCun, and others. Those publications built the foundation for most of the ongoing research today (e.g., the development of the back-propagation algorithm). This must be a wonderful feeling for Bengio and the others, who devoted a significant amount of their time to a topic that then became the most prominent path toward real artificial intelligence.

In this post, I will not give more background on deep learning, but if you are interested, I recommend a few keywords: Generative Adversarial Networks, OpenAI, and TensorFlow.

Deep learning has started to be used in almost every domain that deals with images. However, nobody had tried to use deep learning on solar images. In this publication, we applied deep convolutional neural networks to solar images.

SDO AIA image at wavelength 193 Å

NASA launched a satellite called the Solar Dynamics Observatory (SDO) into Earth’s orbit in 2010 to capture high-resolution images of the Sun without any blocking by clouds. SDO produces 4K images every 10 seconds in 10 different wavelengths. Solar physicists work on those images to get a better understanding of the Sun’s dynamics. Beyond scientific curiosity, a better understanding of the Sun is a must because of dangerous solar activity, which may hit the Earth and have fatal consequences. To that end, physicists annotate the events happening on the Sun using the images taken by SDO. As you can imagine, it is very hard to annotate every image taken by the satellite, so automated modules have been developed to detect those events.

Our approach was to classify those annotated images using well-known deep learning architectures (LeNet, CifarNet, AlexNet, and GoogLeNet) and to compare the architectures against each other. To classify the images, we extracted small patches (the bounding box of each solar event) from full-disk images and resized them to fit the models’ input sizes. Our results show that all of the models perform well on the classification task, while LeNet and AlexNet perform slightly better when accuracy and training time are considered together.
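The patch-extraction step can be sketched as follows. This is a minimal illustration, not the code from the paper: the `extract_patch` function, the bounding-box format `(x, y, width, height)`, and the toy image are all my assumptions here, and a real pipeline would use a proper image library for resampling.

```python
import numpy as np

def extract_patch(full_disk, bbox, out_size=(32, 32)):
    """Crop a solar-event bounding box from a full-disk image and
    resize it (nearest-neighbor) to the input size a CNN expects.

    bbox is assumed to be (x, y, width, height) in pixel coordinates.
    """
    x, y, w, h = bbox
    patch = full_disk[y:y + h, x:x + w]
    # Nearest-neighbor resize: map each output pixel back to a source pixel.
    rows = (np.arange(out_size[0]) * patch.shape[0] / out_size[0]).astype(int)
    cols = (np.arange(out_size[1]) * patch.shape[1] / out_size[1]).astype(int)
    return patch[np.ix_(rows, cols)]

# Toy stand-in for a full-disk AIA image (real ones are 4096x4096).
full_disk = np.arange(256 * 256, dtype=np.float32).reshape(256, 256)
patch = extract_patch(full_disk, bbox=(40, 60, 100, 50), out_size=(32, 32))
print(patch.shape)  # (32, 32)
```

Each fixed-size patch can then be fed to whichever architecture is being compared; the only part that changes between models is `out_size`.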

If you want to learn more about the project, you can visit the link (available soon). Even if this project seems like a small step in solar imaging, it was enough to make me believe that one day we will have a model for solar images that can recognize events perfectly and predict them in advance!

Note: To investigate the Sun further, you can check out our new tool, ISD: