Writing a deep learning repo #1
I have been actively using a lot of deep learning techniques recently. There were quite a few papers I ended up implementing and using in various ways. So I thought: why not set up a deep learning library that records these results?
I will be covering the following implementations:
- GANs, EBGANs, and WGANs
- LSTM- and convolution-based tweet-processing DL modules
- Character- and word-level language models
- Variational Autoencoder GANs and their variants
- Basic information retrieval methodologies in DL
The implementations will include:
- Basic optimizations for learning using TF
- Training methodologies (hacks and theory) for better learning
- Results (in most cases), and why an approach did or did not work
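To give a flavor of the kind of training hacks the posts will record, here is a minimal sketch (not code from the library) of one idea from the WGAN paper listed above: the critic is updated by gradient ascent and its weights are then clipped to a small box. The toy linear critic, the data, and all names here are illustrative assumptions.

```python
import numpy as np

def critic_step(w, real, fake, lr=0.01, clip=0.05):
    """One WGAN-style critic update for a linear critic f(x) = w . x.

    Maximizes E[f(real)] - E[f(fake)] by gradient ascent, then applies
    the weight clipping the WGAN paper prescribes to keep the critic
    (roughly) Lipschitz-bounded.
    """
    grad = real.mean(axis=0) - fake.mean(axis=0)  # gradient of the objective w.r.t. w
    w = w + lr * grad                             # ascent step
    return np.clip(w, -clip, clip)                # WGAN weight clipping

rng = np.random.default_rng(0)
w = np.zeros(3)
real = rng.normal(1.0, 0.1, size=(64, 3))  # "real" samples centered at 1
fake = rng.normal(0.0, 0.1, size=(64, 3))  # "fake" samples centered at 0
for _ in range(100):
    w = critic_step(w, real, fake)
print(w)  # every weight saturates at the clip value, 0.05
```

The point the later posts will dig into is exactly this kind of intricacy: the clip value is a hyper-parameter that strongly affects training, and it is rarely discussed in as much detail as the loss itself.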
I will be covering every module of the library in subsequent blog posts as I build them. Each post will be numbered "#n" in its title, as this one is.
All implementations will use TF. The code will be available under the MIT License, and I will cite the relevant paper(s) along with the code and results.
Why do this? Because many people out there looking to get into machine learning often struggle to understand the code that is freely available. The only way to handle that is to set up good documentation. More importantly, it will help me learn and contribute back to the community along the way.
But don't all papers release their code? Yes, true, but that is not enough. Some of the code is in Torch / PyTorch rather than TF, and vice versa. There are also many intricacies that need handling but may have been overlooked, left undiscussed, or not paid attention to.
Why open source? Because none of the work I am implementing is my own, and I am myself thankful to the open-source community for making it available to me.
What results, and how to retrain? The models will be plug-and-play to retrain. I will report my hyper-parameter details, note which settings worked best, and present the results clearly.
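One simple way to make reported hyper-parameters reusable for retraining is to store them in a machine-readable file next to each model. The schema below is purely a hypothetical illustration, not the library's actual format; the WGAN-flavored keys and values are assumptions.

```python
import json

# Hypothetical record of the hyper-parameters for one reported run;
# the keys and values are illustrative, not the library's real schema.
hparams = {
    "model": "wgan",
    "learning_rate": 5e-5,
    "batch_size": 64,
    "clip_value": 0.01,   # critic weight-clipping bound
    "n_critic": 5,        # critic updates per generator update
}

# Serialize for storage alongside the checkpoint, then load it back
# exactly as a retraining script would.
blob = json.dumps(hparams, indent=2)
restored = json.loads(blob)
print(restored == hparams)  # True
```

Anything like this lets a reader swap in their own values and rerun a reported experiment without digging through the training script.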
Let the games begin!
