Porting a PyTorch-based object detection model to a mobile-optimized format

Madhur Zanwar
Published in Eumentis
Feb 22, 2024
Porting web models to mobile devices

This article is the second in a series of four articles on object detection on Edge devices. The first article in the series covered selecting an object detection model based on Edge device constraints.

In this article, we’ll talk about how to convert an object detection model trained for the web into a mobile-optimized format.

Our object detection model on the web was built using the YOLOv8 framework by Ultralytics. Training produces a .pt file containing the best model weights. To run the model on mobile devices, we need to convert this file into the .ptl (PyTorch Lite) format. To achieve this, we first convert our best.pt file into the TorchScript format. YOLOv8 provides a built-in export command for this.

# Converting .pt file to torchscript format.
yolo export model=path/to/best.pt format=torchscript

Next, we convert the torchscript format into .ptl format.


import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

torchscript_model = "path/to/best.torchscript"  # path to the TorchScript file
export_model_name = "path/to/best.ptl"          # path to save the .ptl file

# Load the TorchScript model and apply mobile-specific optimizations
model = torch.jit.load(torchscript_model)
optimized_model = optimize_for_mobile(model)

# Save the optimized model in a format the PyTorch Lite interpreter can run
optimized_model._save_for_lite_interpreter(export_model_name)

print(f"mobile optimized model exported to {export_model_name}")

We now have a .ptl file that can run on mobile devices. Check out our next post on how to run inference on mobile devices using this .ptl file.
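Before shipping the file to a device, it can be worth sanity-checking that the exported .ptl actually loads under the lite interpreter and produces output. A minimal sketch of that check, using a tiny stand-in module rather than the full YOLOv8 model (the file name tiny_model.ptl and the TinyModel class are illustrative assumptions, not part of the original pipeline):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
from torch.jit.mobile import _load_for_lite_interpreter

# A tiny stand-in module, just to exercise the export pipeline end to end
class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

# Script the model, optimize it, and save it for the lite interpreter,
# mirroring the steps applied to the real TorchScript file above
scripted = torch.jit.script(TinyModel().eval())
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("tiny_model.ptl")

# Reload the .ptl file the way a mobile runtime would and run a dummy input
mobile_model = _load_for_lite_interpreter("tiny_model.ptl")
out = mobile_model(torch.zeros(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 32, 32])
```

The same reload-and-run pattern applies to the real exported model; only the input shape changes to match what the detector expects.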
