Image Classification with ImageMonkey
In this short blog post I would like to show you how to export data from the ImageMonkey dataset and train your own simple image classifier based on cat and dog images.
What’s ImageMonkey?
ImageMonkey is a public, open source image dataset that contains over 100k CC0-licensed images, more than 120k labeled objects and about 100k annotated objects.
One of the most powerful features of ImageMonkey is its tight integration with existing machine learning frameworks. This allows you to train your own neural net with just a handful of commands.
In case you want to read more about ImageMonkey itself, have a look at this blog post.
Explore Dataset
Before we train our own image classifier, it makes sense to get a feeling for the training data first. We will therefore use ImageMonkey's browse mode to visually examine our training data (e.g. browse dog images). If we stumble across any invalid data, we can correct it directly online in the label mode.
After we’ve confirmed that the data we are using for training is sane, we can start with the actual training process.
Train Image Classifier
First, pull the latest imagemonkey-train Docker image from Docker Hub and start the container.
CPU Version:
docker pull bbernhard/imagemonkey-train:latest
docker run -it bbernhard/imagemonkey-train:latest
GPU Version:
docker pull bbernhard/imagemonkey-train:latest-gpu
docker run --runtime=nvidia -it bbernhard/imagemonkey-train:latest-gpu
Inside the docker container, use the monkey script to start the training of your image classifier.
monkey --help
usage: PROG [-h] [--verbose VERBOSE]
            {train,list-labels,list-validations,list-annotations,test-model}
            ...

positional arguments:
  {train,list-labels,list-validations,list-annotations,test-model}
    train               train your own model
    list-labels         list all labels that are available at ImageMonkey
    list-validations    list the validations together with their count
    list-annotations    list the annotations together with their count
    test-model          test your model

optional arguments:
  -h, --help            show this help message and exit
  --verbose VERBOSE     verbosity
So, in order to train our image classifier on cat and dog images, we use the following command:
monkey train --labels="cat|dog" --type="image-classification"
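If you ever want to build that command programmatically (say, for a longer list of labels), note that the --labels argument is simply the label names joined with a pipe character. A minimal Python sketch, assuming only the monkey invocation shown above (the label list is the one input):

```python
import shlex

# the labels we want the classifier to distinguish
labels = ["cat", "dog"]

# the monkey script expects all labels pipe-separated in a single argument
cmd = [
    "monkey", "train",
    "--labels=" + "|".join(labels),
    "--type=image-classification",
]

# print the command exactly as you would type it in the shell
# (shlex.quote protects the "|" from being interpreted as a shell pipe)
print(" ".join(shlex.quote(part) for part in cmd))
```

Running this prints a shell-safe version of the same monkey train command used above.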
The monkey script then automatically downloads all images tagged with cat or dog and uses transfer learning on a pre-trained inception-v3 model to teach the neural net about cats and dogs.
After the training is done, the trained model (together with a TensorBoard screenshot of the model's characteristics) can be found in the /tmp/image_classification/output folder.
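The two files we will need in the next step are the frozen graph (graph.pb) and the label list (labels.txt), both written to that output folder. A small sanity-check helper, assuming only those two file names from above:

```python
from pathlib import Path

def find_model_artifacts(output_dir):
    """Return the trained-model files present in the output folder.

    Looks for the two files the training step writes: the frozen
    TensorFlow graph (graph.pb) and the label list (labels.txt).
    Missing files are simply absent from the returned dict.
    """
    wanted = ("graph.pb", "labels.txt")
    out = Path(output_dir)
    return {name: out / name for name in wanted if (out / name).is_file()}

# e.g.: find_model_artifacts("/tmp/image_classification/output")
```

If the returned dict contains both entries, the training run completed and you can move on to testing the model.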
Test Image Classifier
Okay, so we’ve successfully trained our image classifier on cat and dog images. It’s now time to put the model to the test and feed it some test images.
We can use the monkey script’s test-model functionality to test individual images.
Use curl to download a dog image inside the docker container, e.g.:
curl 'https://images.pexels.com/photos/850602/pexels-photo-850602.jpeg?auto=compress&cs=tinysrgb&dpr=2&h=650&w=940' -o /tmp/test_image_dog.jpg
Then, we use the following monkey script command to test the model:
monkey test-model --type="image-classification" --model=/tmp/image_classification/output/graph.pb --labels=/tmp/image_classification/output/labels.txt --image=/tmp/test_image_dog.jpg
As you can see, our image classifier successfully classified the downloaded dog image as dog.
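Under the hood, a classifier like this produces one confidence score per line of labels.txt, in the same order as the labels. A minimal sketch (plain Python, no TensorFlow; the score vector here is a made-up example) of turning such a vector into a human-readable prediction:

```python
def top_prediction(scores, labels):
    """Pair each score with its label and return the most confident one.

    `labels` is the label list as read from labels.txt (one label per
    line, in the same order as the model's output vector `scores`).
    """
    return max(zip(labels, scores), key=lambda pair: pair[1])

# hypothetical softmax output for our two-label cat|dog model
label, confidence = top_prediction([0.07, 0.93], ["cat", "dog"])
print(f"{label}: {confidence:.2f}")  # dog: 0.93
```

This is essentially what the monkey test-model command does for us before printing its result.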
If you prefer a visual output like the one below, you can pass the --output-image argument to the above monkey script call (e.g. --output-image=/tmp/test_model_output.jpg):
That’s it. With just a handful of commands we were able to successfully train our first little image classifier.
If you found this blog post helpful, please share it on your favorite forums (Twitter, LinkedIn, Facebook).