It has been a great four years contributing to open source. I've had the opportunity to work with organizations like Mozilla, DuckDuckGo, OpenMF and, of course, TensorFlow. One of my main motives behind contributing to a project is learning. So a year back, when I was exploring the field of Machine Learning, I stumbled upon a project called deeplearn.js. It had recently been open sourced by Google's PAIR initiative and focused on bringing machine learning to the web. I decided to get involved in the project for multiple reasons:
- Machine Learning on the client side is great for privacy and would enable real-time experiences and visualizations. Also, +1 for federated learning.
- It was fairly new and didn't have many operations and algorithms implemented yet, which would give me the opportunity to implement them.
- It made use of technologies I didn't have experience with and wanted to learn at some point.
Working on a new codebase is always overwhelming at first. I started by reading the documentation and trying to understand how individual methods work and how they exploit WebGL for fast computation. While doing this, I found a few bugs on the website and decided to fix them. The contribution process was like that of any other open source project:
- Open an issue elaborating on the bug you found and how you think it can be fixed.
- Prepare a well-written, well-tested and well-documented pull request explaining what the fix does exactly.
The community was very welcoming. I had my first pull request merged in no time, which motivated me to fix more and more bugs. Once I was comfortable with the codebase, I started implementing methods missing from the API, including mathematical, logical and control flow operations.
deeplearn.js provided a graph-based API (similar to TensorFlow's) which was replaced by eager mode during the migration to TensorFlow.js. I worked on implementing gradients for various operations and optimizers, enabling them to be invoked eagerly. A good part of my contributions went into improving the documentation and the development workflow of the project.
With more people getting involved and building projects on top of TensorFlow.js, the project became more developer focused. This also required building examples and models to showcase its capabilities, so I shifted my focus to developing them for the project website. The team shared a well-curated list of examples with me. While building these examples, I found some loss and reduction operations to be missing and implemented them.
It has been an amazing learning experience contributing to TensorFlow.js and I’ve really learned a lot during the process. Shout out to the whole team, especially Nikhil Thorat, Daniel Smilkov, Shanqing Cai and Stanley Bileschi for building such a wonderful project and guiding me throughout my contributions. You all are awesome! I’m really excited to see new contributors getting involved in the project and other projects being built over TensorFlow.js. The future of machine learning on the web is bright and I’m happy to be a small part of it.
Finally, thanks to the Google Open Source team for recognizing my efforts, honoring me with the Google Open Source Peer Bonus award and giving me the opportunity to share my experience with everyone through this post.
EDIT: I’m currently looking for research roles in Machine Learning. Please feel free to reach out if you think I can add value to your research. I’m manrajgrover on GitHub and @manrajsgrover on Twitter.