Deep learning on the edge series: Part 2 — Deploying on Raspberry Pi 4

Ibrahim Essam · Published in YonoHub · 6 min read · Aug 9, 2020

This article is part of a series. Check out the full series: Part 1

Welcome to the second part of our Deep Learning for Embedded Linux series. TFLite models are well suited to many different hardware types, including mobile phones, embedded Linux devices, and microcontrollers, thanks to their low latency and small binary size.

In this pipeline, I'm running the COCO SSD MobileNet v1 TFLite model on a Raspberry Pi 4. The model's input comes from a camera connected to the Raspberry Pi.

The cool thing about this, besides the exceptional performance of the TFLite model, is that I'm here in Cairo, Egypt, while the Raspberry Pi is running in Munich, Germany. Yet I can quickly deploy my Blocks and access the hardware remotely from the other side of the world.

In this part, we will create a YonoArc Block for object detection that uses TFLite and runs on our Raspberry Pi 4 instead of one of YonoArc's machines. Then we can easily integrate it into our existing pipelines.

Deployment Region (DR):

Managing access to shared resources (e.g., servers and robots) and maximizing their utilization are key challenges faced by many teams. An even harder challenge is to set up these resources and deploy complex systems, such as autonomous vehicles. Wouldn't it be great if your algorithms that run on simulation data in Yonohub could run on real-life data on the vehicle/robot with no changes? That's exactly what you can achieve using Yonohub's deployment regions. You can even run some blocks (e.g., data acquisition or simple processing) in a deployment region while the rest of the pipeline runs on Yonohub.

1. Create a region

Before starting, we need to create a deployment region on Yonohub to connect our RPI4 to. The flow is effortless using the Yonohub UI: from the main view, open Deployment Regions, or go directly through this link. Then press "Create Region" at the top right, fill in the region info as below, and press "Add."

2. Edit the created region

Now that we have our deployment region, let's select it and press the "Edit" button at the top left. Then we can create a new node, specify the following properties, and "Save" our DR.

3. Add resources to the DR

The next step is to add resources to our DR; these resources can be devices (cameras, sensors) or mounted folders. From the deployment region page, choose the "Special Resources" tab, and at the top right press "Create Resource." We will use this resource to mount the folder where our TFLite model is saved.

Let's return to our DR, "Edit" it, and attach the new resource to our RPI node.

The mount path is the TFLite model path we wish to mount.

4. Start the created region

Now our DR is configured and ready to be launched. Let's "Save" it and press "Express Launch" to start it.

After the DR's status changes from "Waiting" to "Active," press "View." You will notice that our RPI4 node is inactive; we need to configure our RPI to connect to the DR.

5. Run the DR script on our hardware

Press the Download button next to the Node name and copy the downloaded script to your RPI.

This script installs all the dependencies on the RPI and connects it to our DR.

Use the following command to run it.

chmod +x connect_script.sh && sudo ./connect_script.sh --connect

The script will start preparing everything for us, and after some minutes, the node status will change from “Inactive” to “Active.”

Congratulations! Now you have your RPI connected to Yonohub, and you can use it as a machine for anything on Yonohub, such as running and building custom apps, and more.

6. Building the environment:

Let's create an ARM32 environment and install the TFLite runtime. From "Base Name," choose an ARM32 base with Python 3. Add a pip install command and give it the URL of the TFLite runtime wheel from the official TFLite Python quickstart page.

We don’t need any more dependencies for our current block, so let’s build the environment.
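For reference, at the time of writing the quickstart page listed per-platform wheel URLs rather than a plain PyPI package. For a 32-bit Raspberry Pi OS with Python 3.7, the install command looked roughly like this (the version and URL below are illustrative; copy the exact URL for your Python version and architecture from the quickstart page):

```shell
# Illustrative wheel URL for Python 3.7 on 32-bit ARM (armv7l).
# Take the current URL for your platform from the TFLite Python quickstart page.
pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
```

This is the same command you paste into the environment builder's command field.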

7. Object Detection Block:

By now, we have prepared all the dependencies for our Object detection block.

The source code of my block is here. We can create a block based on the same code, with one simple change: the paths of the model and the labels (lines 27 and 28).

self.model_path = "/home/yonohub/tflite/detect.tflite"

self.labels_path = "/home/yonohub/tflite/labelmap.txt"

This is where I mounted my special resource in the previous step. You have to change it to your own mount path.
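The labelmap.txt mounted above is a plain text file with one class name per line; the standard COCO SSD labelmap starts with a "???" background placeholder, which the official TFLite examples strip before use. A minimal loader (a hypothetical helper, not the block's exact code) might look like:

```python
def load_labels(path):
    """Read a COCO-style labelmap.txt: one class name per line."""
    with open(path) as f:
        labels = [line.strip() for line in f]
    # The standard COCO SSD labelmap begins with a "???" background
    # placeholder; drop it so class id 0 maps to the first real class.
    if labels and labels[0] == "???":
        labels = labels[1:]
    return labels
```

With the paths above, you would call `load_labels("/home/yonohub/tflite/labelmap.txt")` once when the block starts.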

Then we can build our block from the Block Manager, as in the previous part.

Define the following properties, which are used in our block to control the detection threshold, input mean, and standard deviation.
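As a sketch of how these properties are typically used with a floating-point SSD MobileNet model (the function names here are illustrative, not the block's actual API): the camera frame is normalized with the input mean and standard deviation before inference, and the detection threshold filters out low-confidence results afterwards.

```python
import numpy as np

def preprocess(frame, input_mean=127.5, input_std=127.5):
    """Add a batch dimension and normalize a HxWx3 uint8 frame to the
    [-1, 1] float range that floating-point MobileNet models expect.
    The 127.5 defaults are the common MobileNet convention."""
    data = np.expand_dims(frame, axis=0)
    return (np.float32(data) - input_mean) / input_std

def filter_detections(boxes, classes, scores, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [(box, int(cls), score)
            for box, cls, score in zip(boxes, classes, scores)
            if score >= threshold]
```

`boxes`, `classes`, and `scores` correspond to the output tensors of the COCO SSD MobileNet v1 TFLite model; raising the threshold property trades recall for fewer false positives.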

Also, from the Block Manager, we should add this special resource to our block, because we can't load it from MyDrive: MyDrive is not mounted in the DR for security reasons.

Now we are ready to save our block and release it.

About YonoHub:

Yonohub is a web-based cloud system for development, evaluation, integration, and deployment of complex systems, including Artificial Intelligence, Autonomous Driving, and Robotics. Yonohub features a drag-and-drop tool to build complex systems, a marketplace to share and monetize blocks, a builder for custom development environments, and much more. YonoHub can be deployed on-premises and on-cloud.

Get $25 free credits when you sign up now. For researchers and labs, contact us to learn more about Yonohub sponsorship options. Yonohub: A Cloud Collaboration Platform for Autonomous Vehicles, Robotics, and AI Development. www.yonohub.com

If you liked this article, please consider following us on Twitter at @yonohub, email us directly, or find us on LinkedIn. I’d love to hear from you if I can help you or your team with how to use YonoHub.
