Converting PyTorch models to a mobile-optimized format

Madhur Zanwar
Published in Eumentis
2 min read · Apr 17, 2024
Optimizing code for mobile. Image credit: Adobe Firefly

In this article, we’ll talk about converting PyTorch models trained on the web to a mobile-optimized format.

Our goal was to convert an object detection model built with the YOLOv5 framework into a mobile-optimized format.

Training with YOLOv5 produces a best.pt file containing the best-performing model weights. To run this model on mobile devices, it needs to be converted into a mobile-optimized version, the .ptl format. We first attempted the conversion from .pt to .ptl using the following code steps (shown here with a pretrained DeepLabV3 model from torch.hub):

import torch
# load the model
model = torch.hub.load('pytorch/vision:v0.7.0', 'deeplabv3_resnet50', pretrained=True)
# set it to evaluation mode
model.eval()

scripted_module = torch.jit.script(model)
# Export full JIT version of the model (not compatible with the lite interpreter), kept here for comparison
scripted_module.save("deeplabv3_scripted.pt")
# Export lite interpreter version model (compatible with lite interpreter)
scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")

The above script generates the .ptl file with a warning. For the time being, we ignored the warning and loaded the model using the react-native-pytorch-core library in our React Native app.

Converts the model to .ptl format with a warning.

Loading the model in our React Native app failed with an error, indicating that the above process does not work for our use case.

Error while loading the model in react-native app.

We tried the same approach with our custom YOLOv5 model, which failed even before being converted to the .ptl format.

Error while converting custom models to .ptl format

Hence, we tried a different approach. Since YOLOv5’s export.py script can convert the .pt file to various other formats, we first converted our yolov5s.pt file to TorchScript.

!python export.py --weights yolov5s.pt --include torchscript
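In our run, export.py wrote the TorchScript file next to the input weights, swapping the extension (yolov5s.pt became yolov5s.torchscript). A small helper sketching that naming convention (the file names here are assumptions based on our run, not a documented guarantee of export.py):

```python
from pathlib import Path

def torchscript_path(weights: str) -> str:
    """Derive the TorchScript file name that export.py is expected to
    write for a given .pt weights file, e.g. yolov5s.pt -> yolov5s.torchscript."""
    return str(Path(weights).with_suffix(".torchscript"))

print(torchscript_path("yolov5s.pt"))  # yolov5s.torchscript
```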

After converting the .pt file to TorchScript, we proceeded to convert it to the .ptl format using the following code snippet.

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

torchscript_model = "yolov5s.torchscript"
export_model_name = "yolov5s.torchscript.ptl"

model = torch.jit.load(torchscript_model)
optimized_model = optimize_for_mobile(model)
optimized_model._save_for_lite_interpreter(export_model_name)

print(f"mobile optimized model exported to {export_model_name}")

This approach successfully converts the model to the .ptl format without any warnings. Additionally, the model loads successfully in react-native-pytorch-core and runs inference without any errors.
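As a quick sanity check outside the app, the lite-interpreter round trip can also be exercised from desktop Python. This is a minimal sketch with a toy module standing in for the real model (the module and file name are illustrative, not our YOLOv5 model):

```python
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Toy module standing in for a real model.
class Doubler(torch.nn.Module):
    def forward(self, x):
        return x * 2

# Script the module and save it for the lite interpreter, as in the snippets above.
scripted = torch.jit.script(Doubler())
scripted._save_for_lite_interpreter("doubler.ptl")

# Reload the .ptl the way the mobile runtime would, and run an input through it.
mobile_model = _load_for_lite_interpreter("doubler.ptl")
out = mobile_model(torch.tensor([1.0, 2.0]))
print(out.tolist())  # [2.0, 4.0]
```

If this round trip fails on the desktop, there is no point shipping the .ptl file to the app.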
