AI Saturdays Week 3 — It’s not too late
OK, we get it. This week went by a lot faster than you expected, and guess what, we will be zipping fast into AI with the help of Fast.ai in coming weeks.
If you haven’t started yet and are a bit worried about what you should know and be able to do, we’ve got you covered! What follows is a step-by-step explanation of how to catch up in the least amount of time. But before that, let’s reflect on what we were up to last week.
Thanks to NVIDIA India for providing us a wonderful venue for the meetup.
At the week 3 meetup, we went through Course 3 of deeplearning.ai, Structuring Machine Learning Projects. We learned how to build a successful machine learning project. If you aspire to be a technical leader in AI and want to set a direction for your team’s work, this course will show you how. By the end of it, you will:
- Understand how to diagnose errors in a machine learning system
- Be able to prioritize the most promising directions for reducing error
- Understand complex ML settings, such as mismatched training/test sets, and comparing to and/or surpassing human-level performance
- Know how to apply end-to-end learning, transfer learning, and multi-task learning
We discussed:
Orthogonalization: Andrew Ng discusses the importance of orthogonalization in machine learning strategy. The basic idea is that you would like controls that each affect a single component of your algorithm’s performance at a time. For example, to address a bias problem you could use a bigger network or a more robust optimization technique, and you would like these controls to affect only bias and not other issues such as poor generalization. An example of a control that lacks orthogonalization is stopping your optimization procedure early (early stopping), because it simultaneously affects the bias and variance of your model.
Importance of a single number evaluation metric: Andrew Ng stresses the importance of choosing a single number evaluation metric to evaluate your algorithm. You should only change the evaluation metric later on in the model development process if your target changes.
Precision: A metric for classification models. Precision identifies the frequency with which a model was correct when predicting the positive class. That is: Precision = TP / (TP + FP), where TP and FP are the counts of true and false positives.
Recall: A metric for classification models that answers the following question: out of all the possible positive labels, how many did the model correctly identify? That is: Recall = TP / (TP + FN), where FN is the count of false negatives.
In binary classification, accuracy has the following definition: Accuracy = (TP + TN) / (TP + TN + FP + FN), where TN is the count of true negatives.
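As a quick sanity check, all three metrics (plus F1, a common way to fold precision and recall into a single-number metric) can be computed from raw counts. This is a minimal sketch; the function name is our own, not from the course:

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```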
Train/dev/test distributions: Always ensure that the dev and test sets have the same distribution. This ensures that you are aiming at the correct target during the iteration process. This also means that if you decide to correct mislabeled data in your test set then you must also correct the mislabeled data in your development set.
Size of the dev and test sets:
- An old way of splitting the data was 70% training, 30% test or 60% training, 20% dev, 20% test.
- The old way was valid when datasets were smaller, roughly under 100,000 examples.
- In the modern deep learning era, if you have a million or more examples, a reasonable split would be 98% training, 1% dev, 1% test.
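A minimal sketch of such a modern split; the shuffling approach and function name are our own illustration, not prescribed by the course:

```python
import random

def split_dataset(examples, train_frac=0.98, dev_frac=0.01, seed=0):
    """Shuffle and split a dataset into train/dev/test portions."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    train = shuffled[:n_train]
    dev = shuffled[n_train:n_train + n_dev]
    test = shuffled[n_train + n_dev:]
    return train, dev, test

# With a million examples, dev and test each hold about 10,000 items,
# which is plenty to detect meaningful differences between models.
train, dev, test = split_dataset(list(range(1_000_000)))
```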
Approximating Bayes optimal error: Andrew Ng explains how human level performance could be used as a proxy for Bayes error in some applications. For example, for tasks such as vision and audio recognition, human level error would be very close to Bayes error. This allows your team to quantify the amount of avoidable bias your model has. Without a benchmark such as Bayes error, it’s difficult to understand the variance and avoidable bias problems in your network.
Improving your model performance:
The two fundamental assumptions of supervised learning:
- You can fit the training set pretty well. This is roughly saying that you can achieve low avoidable bias.
- The training set performance generalizes pretty well to the dev/test set. This is roughly saying that variance is not too bad.
To improve your deep learning supervised system follow these guidelines:
- Look at the difference between human level error and the training error — avoidable bias.
- Look at the difference between the dev/test set error and the training error — variance.
- If avoidable bias is large you have these options:
Train bigger model.
Train longer/better optimization algorithm (like Momentum, RMSprop, Adam).
Find better NN architecture/hyperparameters search.
- If variance is large you have these options:
Get more training data.
Regularization (L2, dropout, data augmentation).
Find better NN architecture/hyperparameters search.
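The guidelines above can be condensed into a tiny decision helper. The function, its tolerance threshold, and the example error figures are our own illustration, not from the course:

```python
def diagnose(human_error, train_error, dev_error, tolerance=0.005):
    """Suggest the next direction to work on based on avoidable bias vs. variance."""
    avoidable_bias = train_error - human_error
    variance = dev_error - train_error
    if avoidable_bias > variance and avoidable_bias > tolerance:
        return "reduce bias: bigger model, longer training, better architecture"
    if variance > tolerance:
        return "reduce variance: more data, regularization, better architecture"
    return "both are small: revisit the metric or do deeper error analysis"

# Human-level error 1%, training error 8%, dev error 10%:
# avoidable bias (7%) dominates variance (2%), so attack bias first.
next_step = diagnose(0.01, 0.08, 0.10)
```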
Error analysis: Andrew Ng shows a somewhat obvious but very effective technique for improving your algorithm’s performance: error analysis. The basic idea is to manually examine and categorize your misclassified examples, then focus your efforts on the error category that contributes the most to your misclassified data.
For example, in the cat recognition example, Ng determines that blurry images contribute the most to errors. This sensitivity analysis lets you see how much each category of error is worth fixing relative to the total error. It may be the case that fixing blurry images is extremely demanding, while other errors are obvious and easy to fix. Both the sensitivity and the approximate work involved should be factored into the decision-making process.
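In practice, error analysis often amounts to tallying hand-assigned categories over a sample of misclassified examples. The categories and counts below are invented for illustration:

```python
from collections import Counter

# Hand-assigned categories for 100 sampled misclassified images.
mislabeled_categories = (
    ["blurry"] * 43
    + ["dog mistaken for cat"] * 27
    + ["great cat (lion/tiger)"] * 18
    + ["incorrect label"] * 12
)

counts = Counter(mislabeled_categories)
for category, count in counts.most_common():
    # A category's share of total errors bounds how much fixing it can help.
    print(f"{category}: {count}% of sampled errors")
```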
Transfer learning: Transfer learning is the transfer of knowledge from a pre-trained task, say A, to another task B of the same type (such as images) that generally has much less data. Implementing transfer learning involves taking a network trained on a similar application domain with much more data and retraining only its last few layers.
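A minimal PyTorch sketch of that idea, using a toy stand-in for the pre-trained network (a real project would start from something like a torchvision ResNet):

```python
import torch.nn as nn

# A stand-in for a network pre-trained on task A.
pretrained = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),   # early layers: general features
    nn.Linear(32, 16), nn.ReLU(),   # later layers
    nn.Linear(16, 10),              # original head for task A's 10 classes
)

# Freeze everything, then replace the head for task B (say, 2 classes).
for param in pretrained.parameters():
    param.requires_grad = False
pretrained[-1] = nn.Linear(16, 2)   # new layers default to requires_grad=True

# Only the new head's weight and bias would be updated during fine-tuning.
trainable = [p for p in pretrained.parameters() if p.requires_grad]
```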
When to use multi-task learning? Multi-task learning forces a single neural network to learn multiple tasks at the same time (as opposed to having a separate neural network for each task). Andrew Ng explains that the approach works well when the set of tasks could benefit from having shared lower-level features and when the amount of data you have for each task is similar in magnitude.
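A sketch of the shared-trunk idea in PyTorch; the layer sizes and the two task heads are invented for illustration:

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """One trunk of shared lower-level features feeding two task-specific heads."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        self.head_a = nn.Linear(32, 5)   # e.g. detect 5 object types
        self.head_b = nn.Linear(32, 1)   # e.g. predict a single score

    def forward(self, x):
        shared = self.trunk(x)           # features reused by both tasks
        return self.head_a(shared), self.head_b(shared)

net = MultiTaskNet()
out_a, out_b = net(torch.randn(8, 64))
```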
Over the next weeks, we will be following the fast.ai Deep Learning MOOC week by week.
What is fast.ai? And what is this workshop series about?
According to the co-founders of fast.ai, their objective is to teach ‘how to build state-of-the-art models without needing graduate-level math, but also without dumbing anything down’.
And the first step in this whole process is hands-on training in building neural networks using the fast.ai library.
Over the next seven weeks, we shall be following the 2018 version of the fast.ai course, Practical Deep Learning for Coders, Part 1. (http://course.fast.ai/lessons/lessons.html)
(Please note that we have no official connection with fast.ai whatsoever; we are just using their materials as a basis for our own workshops.)
How are these workshops going to work?
We will briefly introduce each of the key topics from that week’s lecture, and then invite everyone to contribute their insights and questions about it and work through the important aspects of the notebook.
What would my role be as a participant in the workshop?
Self study is hard. Studying with the community makes the process easier and even more rewarding! We hope that everyone will be able to contribute something while they are learning.
Please note that we are not formally “teaching” deep learning in these workshops, but rather providing a participation-oriented environment for everyone to learn together.
What kind of time commitment is involved?
Prior to each workshop, we would request that you at least watch the lecture for that week (typically these are around 2 hours long; shorter if you play them back at 1.25x speed!). If you are able to spend a few more hours each week working on the supplementary course materials, that would be ideal.
What tools does the course use?
- Python (familiarity with numpy and pandas needed)
- fast.ai (a PyTorch library)
I’m in. What do I need to do before the first workshop?
- Sign up!
- Watch the first video at http://course.fast.ai/lessons/lessons.html
- Install the PyTorch and fastai packages.
- (Ideally!) Prepare any questions you may have and/or insights that you would like to share with the group.
Tell us how your week 03 went in our #bangalore Slack channel or in the comments below.
Don’t forget to:
- Sign up here to attend the next meetups.
- Check out all the materials from the meetup on our Github repo.
- Follow AISaturdays Bangalore on Twitter.
- You can find me on Twitter.
See y’all next Saturday.