High School | TEDx, TED-Ed Speaker | Mentor @tfugmumbai | Microsoft Student Ambassador | International Speaker

Hello developers 👋! If you have built deep neural networks before, you know that doing so involves setting a lot of different hyperparameters. In this article, I will share some tips and guidelines you can use to better organize your hyperparameter tuning process, which should make it much more efficient for you to find a good setting for the hyperparameters.

Very simply, a hyperparameter is a parameter that is external to the model: it cannot be learned by the estimator, and its value cannot be calculated from the data.

Many models have important parameters which cannot be directly estimated from the data. This type of model parameter is referred to as a tuning parameter because there is no analytical formula available to calculate an appropriate value.…
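To make this concrete, here is a minimal sketch of random search over such tuning parameters. The search space, the `sample_config` helper, and the dummy `evaluate` function are all hypothetical stand-ins (in practice `evaluate` would train a model with those settings and return validation accuracy):

```python
import random

# Hypothetical search space: each hyperparameter has a set of candidate values.
search_space = {
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
    "hidden_units": [32, 64, 128],
    "batch_size": [16, 32, 64],
}

def sample_config(space, rng):
    # Random search: sample each hyperparameter independently.
    return {name: rng.choice(values) for name, values in space.items()}

def evaluate(config):
    # Dummy objective standing in for "train a model, return validation score".
    return 1.0 / (1.0 + abs(config["learning_rate"] - 1e-2))

rng = random.Random(0)
trials = [sample_config(search_space, rng) for _ in range(5)]
best = max(trials, key=evaluate)
```

Random search like this is often a better use of a fixed trial budget than an exhaustive grid, because it does not waste trials on unimportant hyperparameter dimensions.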

This blog demonstrates how you can get started with on-device ML using tools and plugins launched specifically with Android 11. If you have worked with ML on Android before, you will discover easier ways to integrate ML models into your Android apps. If you have not, this could be your starting point for super-powering your Android app with Machine Learning. In this blog, I mainly demonstrate the two biggest updates that come with Android 11: the ML Model Binding Plugin and the new ML Kit. …

- Get started with TensorFlow and Deep Learning
- Computer Vision with TensorFlow
- Using Convolutional Neural Networks with TensorFlow
- Extending what Convolutional Neural Nets can do
- Working with Complex Image data for CNNs

All the code used here is available in my GitHub repository.

This is the fifth part of the series where I post about TensorFlow for Deep Learning and Machine Learning. In the earlier blog post, you saw how to apply a Convolutional Neural Network for Computer Vision to some real-life data sets, and it did the job pretty nicely. This time you’re going to work with more complex data and do even more with it. I believe in hands-on coding, so we will have many exercises and demos which you can try yourself too. I recommend playing around with these exercises, changing the hyperparameters, and experimenting with the code. If you have not read the previous article, consider reading it before you read this one. …
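One technique that comes up when working with more complex image data is augmentation. As a minimal NumPy sketch (the function name and shapes here are illustrative, not taken from the series), random horizontal flipping expands the effective training set and makes a CNN more robust to left/right variation:

```python
import numpy as np

def augment(batch, rng):
    # Randomly flip each image in a (N, H, W, C) batch horizontally.
    # Reversing the W axis produces the mirrored version of every image.
    flipped = batch[:, :, ::-1, :]
    # Each image independently has a 50% chance of being flipped.
    mask = rng.random(batch.shape[0]) < 0.5
    return np.where(mask[:, None, None, None], flipped, batch)
```

In a real pipeline you would typically let the framework do this for you (e.g. via Keras preprocessing layers), but the underlying idea is exactly this per-image random transform.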

Model creation is definitely an important part of AI applications, but it is also very important to know what comes after training. I will show how you can serve TensorFlow models over HTTP and HTTPS, and handle things like model versioning and model server maintenance easily with TF Model Server. You will also see the steps required for this and the process you should follow. We will also take a look at Kubernetes and GKE to autoscale your deployments.
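As a small taste of what querying a served model looks like: TF Model Server exposes a REST predict endpoint of the form `/v1/models/<name>[/versions/<v>]:predict` that accepts a JSON body with an `instances` list. The helper below (a hypothetical name, not from the post) just assembles that request; it does not contact a server, so you can inspect the URL and payload before sending them with any HTTP client:

```python
import json

def build_predict_request(host, model_name, instances, version=None):
    # TF Serving's REST API: POST to /v1/models/<name>[/versions/<v>]:predict
    # with a JSON body of the form {"instances": [...]}.
    path = f"/v1/models/{model_name}"
    if version is not None:
        path += f"/versions/{version}"
    url = f"http://{host}{path}:predict"
    body = json.dumps({"instances": instances})
    return url, body

url, body = build_predict_request("localhost:8501", "my_model", [[1.0, 2.0]], version=1)
```

Omitting `version` hits the latest servable version, which is what makes versioned rollouts with TF Model Server so convenient.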

All the code used for the demos in this blog post and some additional examples are available at this GitHub repo-

I also delivered a talk about this at GDG (Google Developers Group) Ahmedabad, find the recorded version…

One of the best things about AI is that there is a lot of open-source content, and we use it quite frequently. I will show how TensorFlow Hub makes this process much easier and allows you to seamlessly use pre-trained convolutions or word embeddings in your application. We will then see how to perform Transfer Learning with TF Hub models, and how this extends to other use cases.

All the code used for the demos in this blog post is available at this GitHub repo-

I also delivered a talk about this at GDG (Google Developers Group) Nashik, find the recorded version…

Overfitting is a huge problem, especially in deep neural networks. There are quite a few ways to figure out that you are overfitting the data: maybe you have a high-variance problem, or you plot train and test accuracy and see the gap between them. If you suspect your neural network is overfitting your data, one of the first things you should try is regularization.
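As a minimal sketch of the most common form, L2 regularization adds a penalty proportional to the squared weights to the loss, which shows up as an extra `lam * w` term in the gradient (the function name here is illustrative):

```python
import numpy as np

def l2_loss_and_grad(w, data_loss, data_grad, lam):
    # L2 regularization adds (lam / 2) * ||w||^2 to the loss, and therefore
    # lam * w to the gradient. Large weights get shrunk toward zero, which
    # discourages the network from fitting noise in the training set.
    loss = data_loss + 0.5 * lam * float(w @ w)
    grad = data_grad + lam * w
    return loss, grad
```

In Keras the same effect is obtained by passing a `kernel_regularizer` to a layer; the math underneath is exactly this weight-decay term.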

All the LaTeX code used in this blog is compiled in the GitHub repo for this blog:

The other way to address high variance is to get more training data, which is quite reliable. You can think of it this way: with more training data, you are pushing your weights to generalize across more situations. That solves the problem most of the time, so why try anything else? The huge downside is that you cannot always get more training data: it can be expensive to collect, and sometimes it is simply not accessible. …

When implementing a neural network, backpropagation is arguably the step most prone to mistakes. So how cool would it be if we could implement something that lets us debug our neural nets easily? Here, we will look at the method of Gradient Checking. Briefly, this method approximates the gradient using a numerical approach; if the approximation is close to the calculated gradients, then backpropagation was implemented correctly. But there is a lot more to this, so let us see. Sometimes the network gets stuck for a few epochs and then suddenly continues to converge quickly. We will also see how we can solve this problem. …
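The numerical approach boils down to a central-difference approximation per coordinate. Here is a minimal sketch, checked against a function whose gradient we can derive by hand (for f(w) = Σ wᵢ², the analytic gradient is 2w):

```python
import numpy as np

def numerical_grad(f, w, eps=1e-5):
    # Central difference: df/dw_i ≈ (f(w + eps*e_i) - f(w - eps*e_i)) / (2*eps),
    # computed one coordinate at a time.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e.flat[i] = eps
        g.flat[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

# Check a hand-derived gradient for f(w) = sum(w**2); analytic grad is 2w.
w = np.array([0.5, -1.0, 2.0])
approx = numerical_grad(lambda v: np.sum(v ** 2), w)
analytic = 2 * w
# Relative difference: a tiny value means backprop (here, the analytic
# formula) matches the numerical approximation.
diff = np.linalg.norm(approx - analytic) / (np.linalg.norm(approx) + np.linalg.norm(analytic))
```

In practice you run this check on a small network once, with gradient checking disabled during actual training, since the per-coordinate loop is far too slow to use every step.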

One of the problems of training neural networks, especially very deep neural networks, is vanishing or even exploding gradients. What that means is that when you are training a very deep network, your derivatives or slopes can sometimes get either very, very big or very, very small (maybe even exponentially small), and this makes training a lot more difficult. It can also take much longer to reach convergence. …
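One standard remedy is careful weight initialization. The sketch below (a toy setup, not from the post) pushes activations through a stack of random ReLU layers and compares He initialization, which scales each layer's weight std by sqrt(2 / fan_in), against a naive unit-scale initialization whose activations blow up exponentially with depth:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 256))  # a batch of 64 random inputs

def forward(x, n_layers, scale_fn):
    # Push activations through n_layers random ReLU layers, with the
    # weight std of each layer chosen by scale_fn(fan_in).
    h = x
    for _ in range(n_layers):
        fan_in = h.shape[1]
        W = rng.normal(scale=scale_fn(fan_in), size=(fan_in, fan_in))
        h = np.maximum(0.0, h @ W)  # ReLU
    return h

stable = forward(x, 10, lambda n: np.sqrt(2.0 / n))  # He initialization
exploding = forward(x, 10, lambda n: 1.0)            # naive unit-scale init
```

With He initialization the activation magnitudes stay in a trainable range across all ten layers, while the naive version grows by roughly a factor of sqrt(fan_in / 2) per layer; the gradients mirror this behavior during backpropagation, which is exactly the exploding-gradient problem described above.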

I formulated these tips while participating in the 10 Days of ML Challenge, a wonderful initiative by TensorFlow User Group Mumbai to encourage people to learn and practice more ML, not necessarily TensorFlow. The flow was that you were given a task at the start of each day and were expected to complete it and tweet about your results. Tasks ranged from pre-processing data to building a model to deploying it. I would advise you to go over the tasks of the challenge and try them out yourself; it does not matter whether you are a beginner or a practitioner. Below I share my major takeaways from these tasks. …