Solar Event Classification using Convolutional Neural Networks
This blog post is about one of my recent publications, “Solar Event Classification using Convolutional Neural Networks”. The article is kept at a high level and will probably not be useful to an expert, but it should be interesting as a look at another application area of deep learning.
Over the last decade, deep learning has literally taken over the computer vision world. The story starts with Yann LeCun and his invention of convolutional neural networks (CNNs) (see Yoshua Bengio and Geoffrey Hinton if you want to trace back further). When LeCun invented CNNs in the 1990s, the computational capacity of computers was limited, so they could only be used for small tasks like handwritten digit recognition. As computational power increased, we became able to train deep models in reasonable time. The first widely recognized success of deep learning was the 2012 ImageNet competition, in which Alex Krizhevsky applied a convolutional neural network to the ImageNet dataset using GPU acceleration.
Until the ImageNet success, deep learning was not as popular as it is today, but a large body of research had already been done by the deep learning groups led by Bengio, LeCun, and others. Those publications built the foundation for most of the ongoing research today (e.g., the development of the back-propagation algorithm). It must be an excellent feeling for Bengio and the others, who devoted a significant amount of their time to a topic, to see it become the most prominent path toward real artificial intelligence.
In this essay, I will not give more background on deep learning, but if you are interested, I would recommend a couple of keywords: Generative Adversarial Networks, OpenAI, and TensorFlow.
Deep learning has started to be used in almost every domain that deals with images. However, nobody had tried to use deep learning in the solar image domain. In this publication, we applied deep convolutional neural networks to solar images.
NASA sent a satellite called the Solar Dynamics Observatory (SDO) into Earth’s orbit in 2010 to capture high-resolution images of the Sun without any obstruction from clouds. SDO produces a 4K image every 10 seconds in 10 different wavelengths. Solar physicists work on those images to better understand the Sun’s dynamics. Beyond scientific curiosity, a better understanding of the Sun is a must because dangerous solar activity may hit the Earth with fatal consequences. To this end, physicists annotate the events happening on the Sun using the images taken by SDO. As you can imagine, it is very hard to annotate every image taken by the satellite, so automated modules have been developed to detect those events.
Our approach was to classify those annotated images using well-known deep learning architectures (LeNet, CifarNet, AlexNet, and GoogLeNet) and to compare the architectures against each other. To classify the images, we extracted small patches (the bounding box of each solar event) from the full-disk images and resized them to fit the models. Our results show that all models perform well on the classification task, while LeNet and AlexNet perform slightly better when you consider accuracy and training time together.
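The preprocessing step above (crop the event’s bounding box from a full-disk image, then resize it to a model’s fixed input size) can be sketched roughly as follows. This is only an illustration, not the paper’s actual code: the bounding-box coordinates, image sizes, and the nearest-neighbor resize are made-up assumptions for the example.

```python
import numpy as np

def extract_patch(full_disk, bbox):
    """Crop a solar-event bounding box (x, y, width, height) from a full-disk image."""
    x, y, w, h = bbox
    return full_disk[y:y + h, x:x + w]

def resize_nearest(patch, out_size):
    """Nearest-neighbor resize to the fixed input size a CNN expects
    (e.g. roughly 32x32 for LeNet/CifarNet-style models)."""
    in_h, in_w = patch.shape[:2]
    out_h, out_w = out_size
    rows = np.arange(out_h) * in_h // out_h   # source row index for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column index for each output column
    return patch[rows[:, None], cols]

# Example with a synthetic 4096x4096 "full-disk" image and one hypothetical event bbox.
full_disk = np.random.rand(4096, 4096).astype(np.float32)
patch = extract_patch(full_disk, bbox=(1200, 900, 180, 140))
sample = resize_nearest(patch, (32, 32))
print(sample.shape)  # (32, 32)
```

In practice a library resize (e.g. bilinear interpolation) would be used instead of the hand-rolled nearest-neighbor lookup, but the idea is the same: every event patch, whatever its original aspect ratio, is mapped to the one fixed shape the network was designed for.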
If you want to learn more about the project, you can visit the link (available soon). Even if this project is only a small step in solar imaging, it was enough to make me believe that one day we will have a model for solar images that can recognize events perfectly and even predict them in advance!
Note: To investigate the Sun more, you can check our new tool ISD: http://isd.dmlab.cs.gsu.edu/