Codelabs from #GoogleMLSummit

Here are the codelabs that I led today at the Google ML Summit in Tokyo.

Lab 1 — Hello World

In this colab, the student learns the basics of creating a neural network and how it can learn the rules that define a behavior. The scenario is deliberately simple, with limited data: a set of numbers in a linear relationship. They'll see how the neural network fits the inputs to the outputs, producing a model that can then infer the correct values for data it hasn't seen.

Run the colab here.
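The lab itself uses TensorFlow/Keras, but the core idea fits in a few lines of plain NumPy: a single-neuron model (y = w·x + b) trained by gradient descent on points drawn from y = 2x − 1. This is a sketch of the concept, not the lab's actual code.

```python
import numpy as np

# Six points from the linear rule y = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

# One "neuron": a weight and a bias, both starting at zero
w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    err = (w * xs + b) - ys
    # Gradients of mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * xs)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))   # converges to w ≈ 2.0, b ≈ -1.0
print(round(float(w * 10 + b)))   # inference for x = 10 gives 19
```

The training loop is the whole trick: the model never sees the rule, only examples of it, yet it recovers the slope and intercept well enough to predict unseen values.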

Lab 2 — Basic Computer Vision

This colab takes the student through a basic computer vision problem, using exactly the same techniques as Lab 1: they will train a neural network to recognize various items of clothing. It uses a dataset of 70,000 images, yet trains in just a few seconds! They'll see that the neural network definition and the training loop are very similar to those in the 'hello world' lab from earlier.

Run the colab here
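The shape of the computation is worth seeing up front. Here's a NumPy sketch (with random, untrained weights) of what a Flatten → Dense(128, relu) → Dense(10, softmax) classifier does to one 28x28 image; the layer sizes are the common choices in introductory Keras labs, assumed here rather than taken from the lab.

```python
import numpy as np

rng = np.random.default_rng(0)

image = rng.random((28, 28))           # one 28x28 grayscale "clothing" image
x = image.reshape(-1)                  # Flatten: 784 values

W1 = rng.standard_normal((784, 128)) * 0.05
b1 = np.zeros(128)
h = np.maximum(0.0, x @ W1 + b1)       # Dense(128) with ReLU activation

W2 = rng.standard_normal((128, 10)) * 0.05
b2 = np.zeros(10)
logits = h @ W2 + b2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax: one probability per class

print(probs.shape)                     # (10,) - one score per clothing class
```

Training then adjusts W1, b1, W2, b2 exactly as in Lab 1, just with many more numbers.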

Lab 3 — What are Convolutions and Pooling?

The computer vision problem in the previous lab was powerful, but limited: it required the images to be centered, and the network never really learned what distinguishes a shoe from a shirt. That distinction comes from extracting the features that make a shoe a shoe, or a shirt a shirt. In this lab they'll look at what convolutions are, and at how pooling compresses an image. Together these give them the basics they need to move on to convolutional neural networks (in Lab 4), which train on those extracted features.

Run the colab here
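Both operations are simple enough to write by hand. Below is a NumPy sketch of a 3x3 convolution and 2x2 max pooling; the vertical-edge filter is a classic demonstration choice, an assumption rather than the lab's exact filter.

```python
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Multiply the neighbourhood by the filter and sum
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(image, size=2):
    oh, ow = image.shape[0] // size, image.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Keep the largest value in each size x size block
            out[i, j] = image[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # a hard vertical edge
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])

features = conv2d(image, edge_filter)    # responds strongly at the edge
pooled = max_pool(features)              # same features, half the size
print(features.shape, pooled.shape)      # (4, 4) (2, 2)
```

The filter outputs large values exactly where the edge sits, and pooling keeps those strong responses while shrinking the image, which is the compression the lab demonstrates.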

Lab 4 — Enhance your Computer Vision with Convolutional Neural Networks

In Lab 2, you built a basic deep neural network (DNN) that classified clothing images. Then, in Lab 3, you saw what convolutions and pooling were. In this lab you'll add convolutional layers on top of that network to make it more efficient at classifying the contents of an image based on the features it spots!

Run the colab here
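A quick bit of arithmetic shows what the convolutional layers do to the data before the dense layers see it. Assuming the common two-block architecture used in introductory Keras labs (Conv 3x3, MaxPool 2x2, repeated twice; this is an assumption, not confirmed lab code), the 28x28 input shrinks like this:

```python
def conv_out(size, kernel=3):
    # A 'valid' convolution trims (kernel - 1) pixels in each dimension
    return size - kernel + 1

def pool_out(size, window=2):
    # Max pooling keeps one value per non-overlapping window
    return size // window

size = 28
size = conv_out(size)   # 26 after the first 3x3 convolution
size = pool_out(size)   # 13 after the first 2x2 pooling
size = conv_out(size)   # 11 after the second convolution
size = pool_out(size)   # 5 after the second pooling
print(size)             # 5: the dense layers see 5x5 feature maps
```

So instead of 784 raw pixels, the classifier works on small feature maps that already encode things like edges and textures.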

Lab 5 — Using complex images

In Lab 4 you used a convolutional neural network to improve classification of the fashion images. But those images were still neatly centered in a 28x28 frame. In this lab you'll build on that and train with more complex data: a set of horses and humans. Here the extracted features drive the classification, because the subject may not be centered in the image and may appear in a variety of poses.

Run the colab here

Lab 6 — Avoiding overfitting with large data sets

In Lab 5 you trained a horses-v-humans classifier, and despite it being very accurate on the training set, you likely saw a number of mistakes when you tried real-world images. That's overfitting: when the training set is small, the network becomes highly specialized, able to recognize only the type of image it was trained on. There are several techniques for avoiding it, but nothing beats a large data set! So in this lab you'll see the impact of training on much bigger datasets. Warning: training in this lab might take a very long time!

Run the colab here

Note again that this lab might still make a lot of classification mistakes, despite being trained on larger datasets. Avoiding overfitting is a massive part of AI research; to tackle it, explore augmentation and regularization as strategies. There's a great resource on that here.
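To make augmentation concrete, here's a NumPy sketch of two classic transforms (a random horizontal flip and a small wrap-around shift). Real pipelines would use tf.image or Keras preprocessing layers; this only illustrates the principle that each training image can yield several plausible variants, so the network can't just memorize pixel positions.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image, rng):
    out = image
    if rng.random() < 0.5:
        out = out[:, ::-1]              # mirror left-to-right
    shift = int(rng.integers(-2, 3))    # shift by up to 2 pixels
    out = np.roll(out, shift, axis=1)   # wrap-around shift, a crude stand-in
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
variants = [augment(image, rng) for _ in range(3)]

# Same pixel content, different layouts
print(all(v.shape == image.shape for v in variants))
print(all(np.isclose(v.sum(), image.sum()) for v in variants))
```

Each epoch can then present slightly different versions of the same photos, which is a cheap way to stretch a small dataset further.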