TensorFlow Object Detection on Ubuntu

This article walks through running TensorFlow object detection on an Ubuntu workstation. (The author has verified every step below.)

First example: Jupyter notebook for off-the-shelf inference.

First, get the code and start the container by running the following (replace /your-work-dir with your actual working directory).

cd /your-work-dir/tensorflow/
git clone https://github.com/tensorflow/models.git
nvidia-docker run --rm -it -p 8888:8888 -p 6006:6006 -e PASSWORD=1234 -v /your-work-dir/tensorflow:/workspace ray800/dl:tensorflow-ssd

Before the TensorFlow Object Detection API can be used, the Protobuf libraries must be compiled.

# Use docker to attach to the container started by nvidia-docker above (its id can be found as shown below)
docker exec -it <container id> bash
cd /workspace/models/research/
protoc object_detection/protos/*.proto --python_out=.
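
If the container id is not known, docker ps lists the running containers and their ids:

docker ps   # note the CONTAINER ID of the ray800/dl:tensorflow-ssd container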

Add models/research and models/research/slim to the PYTHONPATH environment variable.

cd /workspace/models/research/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
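
To confirm the API is importable, the test script that ships with the research models repo can be run (a commonly used check from the Object Detection API installation guide):

python object_detection/builders/model_builder_test.py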

In a browser, go to http://127.0.0.1:8888/ (log in with the password 1234 set above) to reach Jupyter notebook, then open /workspace/models/research/object_detection/object_detection_tutorial.ipynb.

After running all the cells, you get two images with detection results.
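
For reference, the notebook's inference flow boils down to roughly the following (a condensed sketch using the TF 1.x frozen-graph API; the model directory name is an assumption, since the notebook downloads its own COCO-pretrained model):

# Minimal sketch of frozen-graph inference, not the notebook verbatim
import numpy as np
import tensorflow as tf
from PIL import Image

PATH_TO_FROZEN_GRAPH = 'ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb'  # assumed model dir

# Load the frozen detection graph into a new tf.Graph
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        graph_def.ParseFromString(fid.read())
    tf.import_graph_def(graph_def, name='')

# Run detection on one of the bundled test images
with detection_graph.as_default(), tf.Session() as sess:
    image = np.expand_dims(np.array(Image.open('test_images/image1.jpg')), axis=0)  # add batch dim
    fetches = {name: detection_graph.get_tensor_by_name(name + ':0')
               for name in ('detection_boxes', 'detection_scores',
                            'detection_classes', 'num_detections')}
    output = sess.run(fetches,
                      feed_dict={detection_graph.get_tensor_by_name('image_tensor:0'): image})
    print(output['detection_scores'][0][:5])  # confidence of the top detections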

Second example: training a pet detector.

Getting the Oxford-IIIT Pets Dataset.

cd /your-work-dir/tensorflow/models/research/
wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
tar -xvf images.tar.gz
tar -xvf annotations.tar.gz

Run the create_pet_tf_record script to convert the raw Oxford-IIIT Pet dataset into TFRecords.

cd /your-work-dir/tensorflow/models/research/
python object_detection/dataset_tools/create_pet_tf_record.py \
--label_map_path=object_detection/data/pet_label_map.pbtxt \
--data_dir=`pwd` \
--output_dir=`pwd`
# Note: It is normal to see some warnings when running this script. You may ignore them.

Two sets of 10-sharded TFRecord files, named pet_faces_train.record-* and pet_faces_val.record-*, should be generated.
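
As an optional sanity check, the records in the generated shards can be counted with the TF 1.x tf.python_io API (a sketch; run from the models/research directory where the shards were written, with TensorFlow available):

python -c "import glob, tensorflow as tf; print(sum(1 for f in glob.glob('pet_faces_train.record-*') for _ in tf.python_io.tf_record_iterator(f)))"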

Next, copy the TFRecord files.

# Copy the following files (20 in total)
/your-work-dir/tensorflow/models/research/pet_faces_train.record-*
/your-work-dir/tensorflow/models/research/pet_faces_val.record-*
# into
/your-work-dir/tensorflow/models/research/object_detection/data/
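
The copy can be done with a single cp, for example:

cp /your-work-dir/tensorflow/models/research/pet_faces_train.record-* \
   /your-work-dir/tensorflow/models/research/pet_faces_val.record-* \
   /your-work-dir/tensorflow/models/research/object_detection/data/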

Downloading a COCO-pretrained Model for Transfer Learning.

cd /your-work-dir/tensorflow/models/research/
wget http://storage.googleapis.com/download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_11_06_2017.tar.gz
tar -xvf faster_rcnn_resnet101_coco_11_06_2017.tar.gz
# Copy the following files from
# /your-work-dir/tensorflow/models/research/faster_rcnn_resnet101_coco_11_06_2017/
#   frozen_inference_graph.pb
#   graph.pbtxt
#   model.ckpt.data-00000-of-00001
#   model.ckpt.index
#   model.ckpt.meta
# into
# /your-work-dir/tensorflow/models/research/object_detection/data/
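
The copy step could look like this (a sketch matching the file list above):

cd /your-work-dir/tensorflow/models/research/
cp faster_rcnn_resnet101_coco_11_06_2017/frozen_inference_graph.pb \
   faster_rcnn_resnet101_coco_11_06_2017/graph.pbtxt \
   faster_rcnn_resnet101_coco_11_06_2017/model.ckpt.* \
   object_detection/data/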

Configuring the Object Detection Pipeline.

# Edit the file faster_rcnn_resnet101_pets.config
/your-work-dir/tensorflow/models/research/object_detection/samples/configs/faster_rcnn_resnet101_pets.config
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
# change to
fine_tune_checkpoint: "/workspace/models/research/object_detection/data/model.ckpt"
input_path: "PATH_TO_BE_CONFIGURED/pet_faces_train.record-?????-of-00010"
# change to
input_path: "/workspace/models/research/object_detection/data/pet_faces_train.record-?????-of-00010"
label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
# change to
label_map_path: "/workspace/models/research/object_detection/data/pet_label_map.pbtxt"
input_path: "PATH_TO_BE_CONFIGURED/pet_faces_val.record-?????-of-00010"
# change to
input_path: "/workspace/models/research/object_detection/data/pet_faces_val.record-?????-of-00010"
label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
# change to
label_map_path: "/workspace/models/research/object_detection/data/pet_label_map.pbtxt"
# Then copy
/your-work-dir/tensorflow/models/research/object_detection/samples/configs/faster_rcnn_resnet101_pets.config
# to
/your-work-dir/tensorflow/models/research/object_detection/data/faster_rcnn_resnet101_pets.config
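
All five substitutions above share the same pattern, so they can also be applied with a single sed before copying the file (a sketch; the replacement value is the in-container path used in the edits above):

sed -i "s|PATH_TO_BE_CONFIGURED|/workspace/models/research/object_detection/data|g" \
    /your-work-dir/tensorflow/models/research/object_detection/samples/configs/faster_rcnn_resnet101_pets.config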

Starting Training.

# Edit the file faster_rcnn_resnet101_pets.config
/your-work-dir/tensorflow/models/research/object_detection/data/faster_rcnn_resnet101_pets.config
batch_size: 1
# change to
batch_size: 2
# Run the following to start the container
nvidia-docker run --rm -it -p 8888:8888 -p 6006:6006 -e PASSWORD=1234 -v /your-work-dir/tensorflow:/workspace ray800/dl:tensorflow-ssd
# Use docker to attach to the container started by nvidia-docker above
docker exec -it <container id> bash
cd /workspace/models/research/object_detection/
# Run the training command (GPUs 0 and 1, matching num_clones=2)
PIPELINE_CONFIG="/workspace/models/research/object_detection/data/faster_rcnn_resnet101_pets.config"
MY_MODEL_DIR="/workspace/models/research/object_detection/data/my_model"
CUDA_VISIBLE_DEVICES=0,1 python3 /workspace/models/research/object_detection/legacy/train.py \
--logtostderr \
--pipeline_config_path=${PIPELINE_CONFIG} \
--train_dir=${MY_MODEL_DIR}/train \
--num_clones=2 --ps_tasks=1

Use the third GPU card to run evaluation.

# Use docker to attach to the container started by nvidia-docker above
docker exec -it <container id> bash
cd /workspace/models/research/object_detection/
# Run the evaluation command (GPU 2)
PIPELINE_CONFIG="/workspace/models/research/object_detection/data/faster_rcnn_resnet101_pets.config"
MY_MODEL_DIR="/workspace/models/research/object_detection/data/my_model"
CUDA_VISIBLE_DEVICES=2 python3 /workspace/models/research/object_detection/legacy/eval.py \
--logtostderr \
--pipeline_config_path=${PIPELINE_CONFIG} \
--checkpoint_dir=${MY_MODEL_DIR}/train \
--eval_dir=${MY_MODEL_DIR}/eval \
--num_clones=1 --ps_tasks=1

Monitoring Progress with Tensorboard.

cd /workspace/models/research/object_detection/data/my_model/train
# Run the command
tensorboard --logdir='/workspace/models/research/object_detection/data/my_model/train' --port=6006

In a browser, go to localhost:6006 to view TensorBoard.

After training completes, it looks like the following.

Exporting the Tensorflow Graph.

cd /workspace/models/research
# Copy the following files from /workspace/models/research/object_detection/data/my_model/train/
#   model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001
#   model.ckpt-${CHECKPOINT_NUMBER}.index
#   model.ckpt-${CHECKPOINT_NUMBER}.meta
# to /workspace/models/research/
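# For example (a sketch, assuming the final checkpoint number is 200000, matching CHECKPOINT_NUMBER below):
cp /workspace/models/research/object_detection/data/my_model/train/model.ckpt-200000.* \
   /workspace/models/research/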
# Run the following to export the trained model
PIPELINE_CONFIG="/workspace/models/research/object_detection/samples/configs/faster_rcnn_resnet101_pets.config"
MY_MODEL_DIR="/workspace/models/research"
CHECKPOINT_NUMBER=200000
CKPT_PREFIX=${MY_MODEL_DIR}/model.ckpt-${CHECKPOINT_NUMBER}
python3 /workspace/models/research/object_detection/export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path=${PIPELINE_CONFIG} \
--trained_checkpoint_prefix ${CKPT_PREFIX} \
--output_directory my_exported_graphs

Use the new model for prediction.

# Edit the file
/workspace/models/research/object_detection/object_detection_tutorial.ipynb
# Set the model and label map file paths
"MODEL_NAME = 'faster_rcnn_nas_coco_2017_11_08'\n",
# change to
"MODEL_NAME = '/workspace/models/research/my_exported_graphs'\n",
# Delete the following two lines
"MODEL_FILE = MODEL_NAME + '.tar.gz'\n",
"DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'\n",
"PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')"
# change to
"PATH_TO_LABELS = os.path.join('data', 'pet_label_map.pbtxt')"
"NUM_CLASSES = 37"
# Find the following download code and comment it all out as shown (the model is already exported locally)
# "opener = urllib.request.URLopener()\n",
# "opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)\n",
# "tar_file = tarfile.open(MODEL_FILE)\n",
# "for file in tar_file.getmembers():\n",
# "  file_name = os.path.basename(file.name)\n",
# "  if 'frozen_inference_graph.pb' in file_name:\n",
# "    tar_file.extract(file, os.getcwd())"
"PATH_TO_TEST_IMAGES_DIR = 'test_images'\n",
# change to
"PATH_TO_TEST_IMAGES_DIR = 'test_images/images'\n",
# Copy the test images from the directory below (a cp sketch follows this block)
/workspace/models/research/my_exported_graphs/images/
# to
/workspace/models/research/object_detection/test_images/images/
# Then run
/workspace/models/research/object_detection/object_detection_tutorial.ipynb
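
The test-image copy mentioned above (done before running the notebook) could look like this, assuming the images sit where the original steps place them:

mkdir -p /workspace/models/research/object_detection/test_images/images
cp /workspace/models/research/my_exported_graphs/images/* \
   /workspace/models/research/object_detection/test_images/images/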

The results of the run are shown below.

https://medium.com/ran-ai-deep-learning,ran1988mail@gmail.com
