This part continues Part 1: Data Preparation and Part 2: Modelling.
Step 1. Download the evaluation and training output from the GCP bucket.
# From the directory where you want to copy the files
# create local folder for evaluation output
mkdir eval
# copy evaluation output to local folder
gsutil cp gs://vivienne-artifacts/object_detection_rat/eval/* eval/
# create local folder for training output
mkdir train
# copy training output to local folder
gsutil cp gs://vivienne-artifacts/object_detection_rat/train/* train/
The training output is composed of .index, .data, and .meta files for each checkpoint. Make sure that the checkpoint trio is complete.
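As a quick sanity check, you can verify the trio programmatically. This is a minimal sketch; the helper name and the `train/model.ckpt-20002` prefix are my own assumptions, not part of the official workflow:

```python
import glob
import os

def checkpoint_complete(prefix):
    """Check that the .index, .meta, and .data-* files all exist for a
    checkpoint prefix, e.g. prefix = "train/model.ckpt-20002"."""
    has_index = os.path.exists(prefix + ".index")
    has_meta = os.path.exists(prefix + ".meta")
    # .data files are sharded, e.g. .data-00000-of-00001, so match a pattern
    has_data = bool(glob.glob(glob.escape(prefix) + ".data-*"))
    return has_index and has_meta and has_data
```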
Step 2. Check the latest checkpoint
Download and open the checkpoint file. The first line reveals the latest checkpoint; in this case, model.ckpt-20002 is the latest.
# Checkpoint file contains:
model_checkpoint_path: "model.ckpt-20002"
all_model_checkpoint_paths: "model.ckpt-12316"
all_model_checkpoint_paths: "model.ckpt-14794"
all_model_checkpoint_paths: "model.ckpt-17271"
all_model_checkpoint_paths: "model.ckpt-19740"
all_model_checkpoint_paths: "model.ckpt-20002"
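If you prefer not to open the file by hand, the same information can be read programmatically. The small parser below is my own sketch; with TensorFlow installed, `tf.train.latest_checkpoint("train/")` does this for you:

```python
def latest_checkpoint_from_file(checkpoint_text):
    """Return the checkpoint prefix named by model_checkpoint_path in a
    TensorFlow "checkpoint" file, e.g. "model.ckpt-20002"."""
    for line in checkpoint_text.splitlines():
        if line.startswith("model_checkpoint_path:"):
            # the value is quoted: model_checkpoint_path: "model.ckpt-20002"
            return line.split('"')[1]
    return None
```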
Step 3. Convert a model checkpoint to a Protobuf model
Convert a model checkpoint to Protobuf; in my case, I chose the latest one.
# from /PATH/TO/tensorflow/models/research
python object_detection/export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path /PATH/TO/ssd_mobilenet_v1_coco.config \
--trained_checkpoint_prefix /PATH/TO/model.ckpt-20002 \
--output_directory /PATH/TO/rat_object_detection/vivi_model
The code above produced a vivi_model directory that contains the following files:
checkpoint
frozen_inference_graph.pb
model.ckpt.data-00000-of-00001
model.ckpt.index
model.ckpt.meta
pipeline.config
saved_model/saved_model.pb
variables/
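To confirm that the export succeeded, you can check that the expected files are present. This is a small sketch of my own; the file list simply mirrors the directory listing above:

```python
import os

# Files that export_inference_graph.py is expected to produce
EXPECTED_EXPORT_FILES = [
    "checkpoint",
    "frozen_inference_graph.pb",
    "model.ckpt.data-00000-of-00001",
    "model.ckpt.index",
    "model.ckpt.meta",
    "pipeline.config",
    os.path.join("saved_model", "saved_model.pb"),
]

def missing_export_files(export_dir):
    """Return the expected exported files that are missing from export_dir."""
    return [f for f in EXPECTED_EXPORT_FILES
            if not os.path.exists(os.path.join(export_dir, f))]
```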
Step 4. Use model for object detection
I used the object detection tutorial to test whether my model was exported correctly and whether it could detect rats. This tutorial assumes that you know how to use Jupyter notebooks. Make sure that your Jupyter notebook runs in the virtual environment for your object detection project.
# How to use your python virtual environment for your jupyter notebook
workon virtualenv_name
pip install ipykernel
ipython kernel install --user --name=virtualenv_name
Replace the images in /PATH/TO/tensorflow/models/research/object_detection/test_images
Edit the paths to your model and label map
Run everything, but skip the Download Model part
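Since the notebook needs your label map path, here is a minimal sketch of what a single-class label map looks like and how it maps ids to names. The naive parser and the 'rat' class name are my own illustration; the Object Detection API's `label_map_util.load_labelmap` is the official way to read one:

```python
import re

# A single-class label map (.pbtxt) for a rat detector might look like:
#   item {
#     id: 1
#     name: 'rat'
#   }

def parse_label_map(pbtxt_text):
    """Naively parse a label map into {id: name}; assumes single-quoted names."""
    ids = [int(i) for i in re.findall(r"id:\s*(\d+)", pbtxt_text)]
    names = re.findall(r"name:\s*'([^']+)'", pbtxt_text)
    return dict(zip(ids, names))
```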
Here are the outputs for sample input images
Check out the previous parts as well: Part 1: Data Preparation and Part 2: Modelling.
Thanks!
Sources
- https://towardsdatascience.com/build-a-taylor-swift-detector-with-the-tensorflow-object-detection-api-ml-engine-and-swift-82707f5b4a56
- https://medium.com/coinmonks/part-1-2-step-by-step-guide-to-data-preparation-for-transfer-learning-using-tensorflows-object-ac45a6035b7a
- https://medium.com/coinmonks/modelling-transfer-learning-using-tensorflows-object-detection-model-on-mac-692c8609be40
- https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
- https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
- https://github.com/tensorflow/models/issues/2714
- Credit to: http://www.newcastleshow.com.au/wp-content/uploads/2015/01/mouse1.jpg, Date accessed: July 8, 2018
- Shutterstock, July 8, 2018: https://www.shutterstock.com/image-photo/funny-white-rat-looking-out-cage-592100393?src=NgjpowHkxggr3KzCoYKH2w-1-26
- Shutterstock, July 8, 2018: https://www.shutterstock.com/image-photo/two-black-white-rats-on-human-425507344?src=MT9S2CGbz5rFw1cCcTiu9Q-1-70