I am using an NVIDIA GeForce GTX 1060 6GB GPU. However, when you run your detection_predict.py (from Stack Overflow), it takes time because it loads the model into memory on every run. The model in this case is huge; mine was 180 MB. That is why you should load the model into memory once and run every detection against the already-loaded model. This way only the first run is slow; the following detections are much faster. You can do this using a Jupyter notebook.
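As a minimal sketch of this load-once pattern (the `expensive_load` function below is a hypothetical stand-in for loading your large frozen graph from disk, not the actual TensorFlow loading code):

```python
import time

_model = None  # module-level cache, so the model survives across calls


def expensive_load():
    """Hypothetical stand-in for loading a large frozen graph from disk."""
    time.sleep(0.1)  # simulate slow disk I/O and graph construction
    return {"name": "detection_graph"}


def get_model():
    """Load the model on the first call only; reuse the cached one afterwards."""
    global _model
    if _model is None:
        _model = expensive_load()
    return _model


def detect(image):
    model = get_model()  # fast on every call after the first
    # ... run the actual detection with `model` here ...
    return model["name"], image
```

In a notebook, the equivalent is to run the loading cell once and keep the resulting object alive; every later cell reuses it instead of reloading from disk.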
Also, creating the session inside a with statement for every detection adds overhead. In the given notebook you have
with tf.Session(graph=detection_graph) as sess:
change this to,
sess = tf.Session(graph=detection_graph)
Put this in a separate cell and run it once; then do the detection every time in another cell:
# Define input and output tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')

for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # The array-based representation of the image will be used later to prepare
    # the result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: image_np_expanded})
    # Visualization of the results of a detection.
This removes the per-image session setup and greatly reduces detection time.
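To see why the with statement costs you per image, here is a small illustration with a fake context manager standing in for tf.Session (the counter simply records how many times the expensive setup runs; this is not TensorFlow code):

```python
from contextlib import contextmanager

setup_count = 0  # how many times the "session" was created


@contextmanager
def fake_session():
    """Hypothetical stand-in for tf.Session: entering is the expensive part."""
    global setup_count
    setup_count += 1
    yield "session"


# Pattern A: a with statement per detection -> one setup per image
for _ in range(5):
    with fake_session() as s:
        pass  # run detection here

print(setup_count)  # 5 setups for 5 images

# Pattern B: one long-lived session -> a single setup for all images
setup_count = 0
cm = fake_session()
s = cm.__enter__()       # like sess = tf.Session(graph=detection_graph)
for _ in range(5):
    pass                 # run detection here with s
cm.__exit__(None, None, None)  # like sess.close() when you are done

print(setup_count)  # 1 setup for 5 images
```

If you drop the with statement in the notebook, remember to call sess.close() yourself when you are finished, so GPU memory is released.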