Basic neural network patterns a researcher should know, by platform

Each deep learning platform has its own implementations of the basic neural network patterns, so it is worth knowing the differences and making the best choice each time.

From 27 February to 10 April 2017, I followed the “Practical Deep Learning for Coders” course with Professors Jeremy and Rachel through their FAST AI program as an international fellow. It requires about 10 hours of work a week (70 hours for the 7 lessons) spent on notebook implementations, reading papers, and trying out ideas. The course itself takes 2–3 hours each Monday.
The course encourages teamwork: one of its requirements is to complete at least one project in a team. Another valuable particularity of this course is that Professor Jeremy shows his students hints on how to convert research papers into projects.

Several platforms let you implement deep learning models, but each has its pros and cons. Sometimes built-in functions exist that let you move fast instead of writing new ones; as an example, TFLearn provides a residual block as a built-in function. There are other differences, such as how loop operations are performed on the GPU or CPU: TensorFlow computes loop operations entirely inside the GPU, while PyTorch prefers to run the loop outside the GPU (on the CPU) and then send the result to it. So it is important to know the basic patterns employed with each framework and make the best choices according to what you want to do, whether that is a research paper replication or a personal project.
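To make the residual block example concrete: the pattern that TFLearn ships as a built-in (`tflearn.layers.conv.residual_block`) boils down to transforming a block's input and then adding the original input back through a skip connection. Here is a minimal, framework-free NumPy sketch of that idea (an illustration of the pattern, not the TFLearn implementation itself):

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x) elementwise
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # Sketch of the residual pattern: transform the input,
    # then add the original input back (the "skip connection").
    out = relu(x @ w1)    # first transformation
    out = out @ w2        # second transformation
    return relu(out + x)  # skip connection, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8) — the skip connection requires matching shapes
```

In TFLearn the same pattern is a one-liner; in lower-level frameworks you wire the addition yourself, which is exactly the kind of difference the comparison sheet below captures.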

Basic patterns in ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation

There are several families of basic patterns in neural networks: activations (ReLU, PReLU, Sigmoid, Tanh, etc.), nets (conv layers, residual blocks, etc.), operations (sums, products, dropout, softmax, etc.), and so on.

The key point is that when you implement a research paper with a chosen framework, you should perform the conversion line by line and test it step by step until the work is finished. You should also be able to judge whether to use built-in functions or to create your own, depending on the complexity of the research paper. Comparing the basic patterns lets you select the best framework and functions for each task, and thereby drastically reduce the number of lines of code.

So I am sharing with you the basic patterns for the following frameworks: TENSORFLOW, TFLEARN, KERAS, PYTORCH and THEANO.
You can find them here: https://docs.google.com/spreadsheets/d/1PxU7_ZRVjnhZY8tMpEYxNSWoqNqysWVni9c6eGkbuaw
I originally made this sheet for private use, but the need for it is often expressed on various forums, so I decided to share it with everyone.
Please feel free to send any comments you find necessary to complete the list (I continue to update it).

Finally, I personally like TensorFlow, but I also appreciate how it can be used alongside other platforms such as Keras and PyTorch (Keras is now even integrated into TensorFlow through tf.contrib.keras, and you can find TFLearn patterns in tf.contrib.learn).
So my intention here is not to say which is best, but to show the equivalents of the basic patterns when you move from one framework to another. It is your own job to analyze and pick the framework that suits your work, or eventually to combine several of them, as we did during this course.

For the teamwork requirement, I am working with my team colleague Gidi on age and gender classification in video. We will share the final results with you later in another post.

Thanks for reading. If you liked this, click the 💚 below so other people will see it here on Medium.

Kouassi Konan Jean-Claude

Machine Learning Engineer (Udacity), Passionate of Cognitive Computing Research, Artificial Intelligence Ph.D. student at BIU.