Published in Geek Culture

Large Scale Object Detection & Tracking with YOLOv5 Package

You are only a 'pip install yolov5' away…

Simple as that.
  • Are you having a hard time installing the latest YOLO object detector on Windows/Linux?
  • Are you getting errors during training or inference with your custom YOLOv5 models?
  • Are you looking for a real-time object tracker that takes only a few lines of code?
  • Do you want to perform large-scale (drone surveillance/satellite imagery/wide-area surveillance) object detection in one click?

Keep reading this post and you will be able to handle all of these in seconds.

YOLOv5 Object Detector

YOLOv5 is the fastest and most accurate YOLO release to date, and you can use it for any object detection problem you need.

Installation is simple: run pip install yolov5 in a Windows/Linux terminal and you are ready to go.

Basic Usage

import yolov5

# load a pretrained model (e.g. 'yolov5s.pt'; downloaded automatically)
model = yolov5.load('yolov5s.pt')

# or load a custom model (path to your fine-tuned weights)
model = yolov5.load('path/to/best.pt')

# set model parameters
model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.max_det = 1000  # maximum number of detections per image

# image (path or URL)
img = 'path/to/image.jpg'

# inference
results = model(img)

# inference with larger input size
results = model(img, size=1280)

# inference with test time augmentation
results = model(img, augment=True)

# parse results
predictions = results.pred[0]
boxes = predictions[:, :4]  # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]

# show results
results.show()

# save results into the 'results/' folder
results.save(save_dir='results/')
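The parsed tensors above can be filtered further, for example to keep only confident detections of a single class. A minimal sketch, using a hypothetical hand-written predictions tensor in the same (x1, y1, x2, y2, score, class_id) layout as results.pred[0]:

```python
import torch

# hypothetical parsed output, same layout as results.pred[0]:
# each row is x1, y1, x2, y2, score, class_id
predictions = torch.tensor([
    [ 10.0,  20.0, 110.0, 220.0, 0.92, 0.0],  # confident person
    [300.0,  40.0, 420.0, 180.0, 0.55, 2.0],  # car
    [ 50.0,  60.0,  90.0, 120.0, 0.30, 0.0],  # low-confidence person
])

PERSON_CLASS_ID = 0  # COCO class index for "person"

# keep only "person" rows with score >= 0.5
mask = (predictions[:, 5] == PERSON_CLASS_ID) & (predictions[:, 4] >= 0.5)
persons = predictions[mask]

print(persons.shape)  # one detection survives the filter
```

The same boolean-mask pattern works for any class id or score threshold.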


Fine-tune one of the pretrained YOLOv5 models using your custom data.yaml. For detailed info on the training arguments, refer to the yolov5 repository:

$ yolov5 train --data coco.yaml --weights '' --batch-size 16

Visualize your experiments via Neptune.AI:

$ yolov5 train --data data.yaml --weights yolov5s.pt --neptune_project NAMESPACE/PROJECT_NAME --neptune_token YOUR_NEPTUNE_TOKEN


The yolov5 detect command runs inference on a variety of sources, downloading models automatically from the latest YOLOv5 release and saving results to runs/detect:

$ yolov5 detect --source 0  # webcam
                         file.jpg  # image
                         file.mp4  # video
                         path/  # directory
                         'path/*.jpg'  # glob


You can export your fine-tuned YOLOv5 weights to formats such as torchscript, onnx, coreml, pb, tflite, tfjs:

$ yolov5 export --weights path/to/best.pt --include 'torchscript,onnx,coreml,pb,tfjs'

State-of-the-art Object Tracking with YOLOv5

You can create a real-time custom multi-object tracker in a few lines of code; here is a minimal example:

State-of-the-art YOLOv5 object tracker in a few lines of code.
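The embedded code from the original post is not reproduced here, but the core idea of detection-based tracking can be sketched with a toy greedy centroid matcher. This is illustrative only and not the tracker used in the article; in practice YOLOv5 detections per frame are fed to a dedicated tracking library:

```python
import math
from itertools import count

class CentroidTracker:
    """Toy multi-object tracker: greedily matches each new detection to the
    nearest existing track by centroid distance. Illustrative only."""

    def __init__(self, max_distance=50.0):
        self.max_distance = max_distance  # max pixels a track may move per frame
        self.tracks = {}                  # track_id -> (cx, cy)
        self._ids = count()

    def update(self, boxes):
        # boxes: list of (x1, y1, x2, y2) detections for one frame
        centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
        assigned = {}
        unmatched = set(self.tracks)
        for c in centroids:
            # nearest not-yet-matched existing track, if close enough
            best = min(unmatched,
                       key=lambda tid: math.dist(self.tracks[tid], c),
                       default=None)
            if best is not None and math.dist(self.tracks[best], c) <= self.max_distance:
                unmatched.discard(best)
                tid = best
            else:
                tid = next(self._ids)  # start a new track
            self.tracks[tid] = c
            assigned[tid] = c
        # drop tracks that received no detection this frame
        for tid in unmatched:
            del self.tracks[tid]
        return assigned

tracker = CentroidTracker()
frame1 = tracker.update([(0, 0, 10, 10), (100, 100, 120, 120)])
frame2 = tracker.update([(2, 2, 12, 12), (101, 99, 121, 119)])
print(sorted(frame2))  # the same two track ids persist across frames
```

Real trackers add motion models, appearance features, and track lifetimes, but the match-detections-to-tracks loop is the same shape.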

And here is the output:

YOLOv5 Object Tracking Demo.

In this Colab notebook you can find a YOLOv5 object tracker in action. It performs high-accuracy pedestrian and car tracking on any YouTube video! Refer here for the full YOLOv5 tracking code.

Large Scale Object Detection with YOLOv5

If you are working with huge satellite images or wide-area surveillance images, inference at standard input sizes is not practical. This is where the SAHI package comes in with its sliced inference feature:

Sliced inference from SAHI.

In this demo notebook, you can see how to perform large-scale sliced inference with YOLOv5 in a few lines!
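Conceptually, sliced inference tiles the large image into overlapping windows, runs the detector on each slice, and merges the per-slice detections back into full-image coordinates. A minimal sketch of the window computation only (illustrative, not SAHI's actual implementation; the 512 px slice size and 20% overlap are assumed example values):

```python
def slice_windows(image_w, image_h, slice_w=512, slice_h=512, overlap=0.2):
    """Compute overlapping slice boxes (x1, y1, x2, y2) that cover an image,
    mirroring the idea behind sliced inference. Illustrative only."""
    step_x = int(slice_w * (1 - overlap))  # horizontal stride between slices
    step_y = int(slice_h * (1 - overlap))  # vertical stride between slices
    windows = []
    y = 0
    while True:
        y2 = min(y + slice_h, image_h)
        x = 0
        while True:
            x2 = min(x + slice_w, image_w)
            # re-anchor edge slices so every window keeps the full slice size
            windows.append((max(0, x2 - slice_w), max(0, y2 - slice_h), x2, y2))
            if x2 >= image_w:
                break
            x += step_x
        if y2 >= image_h:
            break
        y += step_y
    return windows

# e.g. a 1024x1024 satellite image with 512 px slices and 20% overlap
wins = slice_windows(1024, 1024, 512, 512, 0.2)
print(len(wins))  # 3x3 grid of overlapping windows
```

Each window is then small enough for the detector's standard input size, and the overlap keeps objects on slice borders from being cut in half by every window.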

Or you can perform YOLOv5 sliced inference from the CLI:

sahi predict --model_type yolov5 --source image/file/or/folder --model_path path/to/model

For details on the CLI arguments, refer here.


With this article, we have covered: installing the YOLOv5 package, running inference, training on custom data, exporting weights, real-time object tracking, and large-scale sliced inference with SAHI.

Feel free to ask questions if you have trouble at any step!


