Car license plate recognition with TinyML
From the Edge to the Cloud with Edge Impulse and Arduino
This article is based on a workshop organized by Italian Embedded. The recorded session (in Italian only, sorry 😅) is linked in the references at the end.
This article provides a detailed overview of a car license plate recognition system designed for detecting EU and Swiss plates. Leveraging computer vision and machine learning techniques, the system identifies and classifies license plates, displaying the results on a screen. This system is intended for use in a paid parking scenario where different rates are applied based on the vehicle’s origin.
Users can also view statistics on the number of entries into the parking area through a remote web dashboard.
How to do it?
The implementation of this system involves a combination of hardware, software, and cloud services using Arduino and Edge Impulse.
- Arduino is an open-source platform based on easy-to-use hardware and software. It also offers a cloud service, Arduino IoT Cloud, which provides easy connectivity, data storage, and remote management of devices.
- Edge Impulse is a platform to create and deploy machine learning models for embedded devices. It enables developers to create, train, and deploy machine learning models directly onto hardware like Arduino.
The hardware system is composed of three main nodes:
- Nicla Vision: It’s the node dedicated to image acquisition and plate recognition using the ML model created with Edge Impulse. It captures images and processes them through the trained machine learning model. The recognized plate type data is then transmitted to the Giga R1 WiFi board.
- Giga R1 WiFi: It receives data from the Nicla Vision node and handles communication with Arduino IoT Cloud to keep the remote dashboard updated. It also drives the display used for the local dashboard.
- Giga Display Shield: It is the display component connected to the Giga R1 WiFi board. It operates as the interface for local visualization of the dashboard.
From the Edge…
Edge computing involves processing data closer to its origin rather than sending it to remote servers for processing.
In the parking management system, the Nicla Vision acts as the edge node, processing images and executing the machine learning model locally.
The practice of running machine learning models on edge devices is called TinyML.
TinyML involves deploying machine learning models on small, low-power devices like microcontrollers.
The ML model of this project takes an image as input and classifies it into two categories: EU and Swiss.
This task is known as image classification and belongs to the category of supervised machine learning. Supervised machine learning involves training a model on labeled data, where each input is associated with a corresponding output label.
How to build an ML model?
Building a Tiny Machine Learning model involves four phases:
- Data Collection
- Data Preprocessing
- Model Training
- Deployment on Edge
Data Collection with OpenMV and Edge Impulse
During this phase, relevant data is gathered for training the machine learning model. In this project, that means collecting a set of images of EU and Swiss license plates using OpenMV.
OpenMV is a microcontroller-based platform designed for machine vision tasks, compatible with various camera sensors and running MicroPython. It offers a simple and efficient solution for capturing images directly on the edge device.
Using OpenMV, it’s possible to write quick MicroPython scripts that efficiently build large image datasets.
Edge Impulse
As described before, Edge Impulse supports the entire process of creating a TinyML model, assisting developers in all phases.
Data collection and preprocessing
OpenMV further simplifies the data collection process by integrating with Edge Impulse, allowing for direct uploading of image datasets without the need for external tools.
After collecting data, it is important to perform feature extraction: a process where raw data is transformed into a set of features.
In ML, features are specific characteristics extracted from raw data that are relevant for solving a particular task. In image processing, features may include edges, textures, shapes, or patterns present in the image. Edge Impulse can automatically extract features from images, which simplifies model training by identifying and focusing on the most relevant information within the data.
Model Training
Model training involves feeding the preprocessed data into the machine learning algorithm to learn relationships between input features (image characteristics of plates) and output labels (EU or Swiss).
Edge Impulse offers the possibility to select prebuilt or customized model architectures and adjust model settings such as training cycles and learning rate.
The effectiveness of a model is evaluated with metrics such as accuracy (the fraction of correct predictions) and the F1 score (the harmonic mean of precision and recall).
On the first attempt, we achieved an accuracy of 72%, which is not yet acceptable. To improve it, you can increase the number of training cycles, change the model architecture, or improve the quality of the dataset (reduce noise, vary the backgrounds, label accurately, collect more samples, …).
Deployment
Edge Impulse is hardware-agnostic, so the trained model can be deployed to a wide range of edge devices.
You can build and download an Arduino library compatible with most Arduino boards, containing the model translated into .c and .h files in the /src directory, along with various examples in the /examples directory.
The Edge sketch
The main functionalities of the sketch running on Nicla Vision include:
- Image acquisition from the on-board camera.
- Running the machine learning model to determine the license plate class.
- Sending the processing result via serial communication to the node for managing both local and remote dashboards.
// Include Edge Impulse model library
#include <eu-swiss-plate-recognition_inferencing.h>

void setup()
{
    // ...
    // Initializing on-board camera
    if (ei_camera_init() == false) {
        ei_printf("Failed to initialize Camera!\r\n");
        while (1);
    }
    // ...
    // Initializing serial communication with dashboard node
    Serial1.begin(115200);
}

void loop()
{
    // ...
    // Image acquisition from the on-board camera
    if (ei_camera_capture(...) == false) {
        ei_printf("Failed to capture image\r\n");
        return;
    }
    // ...
    // Running the ML model
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, debug_nn);
    // ...
    // Parsing the prediction result
    if (strcmp(..., "swiss") == 0) {
        pred_result = 'S';
        break;
    } else if (strcmp(..., "eu") == 0) {
        pred_result = 'E';
        break;
    }
    // ...
    // Sending processing result via serial to the dashboard node
    Serial1.print(pred_result);
}
…to the Cloud
Arduino IoT Cloud is Arduino’s solution for developing IoT projects in the cloud. It allows users to store and retrieve data sent by devices for real-time monitoring and control.
It offers notification features and provides a user-friendly interface for creating control dashboards with pre-built widgets, simplifying data visualization and management.
The platform is compatible with a wide range of devices, including Arduino boards and third-party devices like ESP32.
The Cloud sketch
In the Arduino IoT Cloud platform, the sketch for the device is automatically generated based on the configuration and settings specified by the user during the project design.
// File containing Cloud variables and network credentials
#include "thingProperties.h"

void setup() {
    // ...
    // Initialize properties defined in thingProperties.h
    initProperties();
    // Connect to Arduino IoT Cloud
    ArduinoCloud.begin(ArduinoIoTPreferredConnection);
    // ...
}

void loop() {
    ArduinoCloud.update(); // Refresh Cloud variables and communication
    // ...
    if (Serial1.available()) {
        char inByte = Serial1.read(); // Read result coming from the Edge node
        if (inByte == 'E') {
            region = REGION_EU; // 'region' is a Cloud variable
            // ...
        } else if (inByte == 'S') {
            region = REGION_SWISS;
            // ...
        }
        // ...
    }
    // ...
}
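For reference, the auto-generated thingProperties.h typically declares the Cloud variables and the connection handler. The fragment below is an illustrative sketch of its classic generated shape, not the project's actual file: the variable type, the REGION_* codes, and the credentials are placeholders.

```cpp
// thingProperties.h (illustrative; the real file is generated by
// Arduino IoT Cloud from the Thing configuration)
#include <ArduinoIoTCloud.h>
#include <Arduino_ConnectionHandler.h>

const char SSID[] = "...";   // Wi-Fi credentials (elided)
const char PASS[] = "...";

int region;                  // Cloud variable mirrored on the dashboard

void initProperties() {
  // Register 'region' so ArduinoCloud.update() syncs it to the Cloud
  // whenever its value changes.
  ArduinoCloud.addProperty(region, READ, ON_CHANGE, NULL);
}

WiFiConnectionHandler ArduinoIoTPreferredConnection(SSID, PASS);
```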
Dashboard
Arduino IoT Cloud offers the ability to create intuitive dashboards with ready-to-use widgets. Users can easily link these widgets to the cloud variables used by their devices, simplifying the process of visualizing and managing data.
About the GUI
The graphical user interface is managed by the Giga Display Shield, which shows a local dashboard with the recognized license plate type. The display is driven by the Arduino_GigaDisplay_GFX library.
Arduino_GigaDisplay_GFX is built on top of the Adafruit_GFX library. It provides functions for drawing individual pixels, lines, rectangles, and other geometric shapes, and it supports printing numeric values and strings.
#include "Arduino_GigaDisplay_GFX.h"

GigaDisplay_GFX display;

void setup() {
    // ...
    display.begin();
    display.setRotation(1);
    // ...
}

void loop() {
    if (Serial1.available()) {
        // ...
        if (inByte == 'E') {
            // ...
            drawEUFlag();
        } else if (inByte == 'S') {
            // ...
            drawSwissFlag();
        }
        // ...
    }
}
void drawEUFlag() {
    display.fillScreen(BLUE);
    display.setCursor(350, 400);
    display.setTextSize(10);
    display.print("EU");
    for (int i = 0; i < 12; i++) {
        drawStar(CENTER_X + (UE_FLAG_RADIUS * cos(i * PI / 6)),
                 CENTER_Y + (UE_FLAG_RADIUS * sin(i * PI / 6)));
    }
}

void drawStar(int x, int y) {
    display.setCursor(x, y);
    display.setTextColor(YELLOW);
    display.setTextSize(3);
    display.print("*");
    display.setTextColor(WHITE);
}

void drawSwissFlag() {
    display.fillScreen(RED);
    display.setCursor(350, 400);
    display.setTextSize(10);
    display.print("CH");
    display.fillRect(CENTER_X - SWISS_FLAG_THICKNESS / 2, CENTER_Y - SWISS_FLAG_LENGTH / 2,
                     SWISS_FLAG_THICKNESS, SWISS_FLAG_LENGTH, WHITE);
    display.fillRect(CENTER_X - SWISS_FLAG_LENGTH / 2, CENTER_Y - SWISS_FLAG_THICKNESS / 2,
                     SWISS_FLAG_LENGTH, SWISS_FLAG_THICKNESS, WHITE);
}
References
- Code: https://github.com/csarnataro/arduino-tinyml-plate-recognition
- Italian Embedded event: https://www.italianembedded.com/events/arduino-ml/