Santosh Reddy Vuppala
4 min read · Mar 31, 2020

How I built a number plate detector using just the FirebaseMLVision framework

Vehicle number plate (or registration plate, in some countries) detection has been a topic of interest for many AI/ML enthusiasts, most of whom use some form of object detection to locate a number plate in a given image.

Today I am going to write about how to detect and recognize number plates in an image using the onDeviceTextRecognizer included in the Firebase MLVision library (part of ML Kit), in an iOS app built with Xcode 11.3.1 and Swift 4.

Number plate recognizer using Swift, Xcode and FirebaseMLVision

This post is divided into the following sections:

  1. Creating project in Xcode
  2. Setting up UI elements for the project
  3. Integrate Firebase Vision Framework
  4. Fire up the framework to recognize and detect number plates in an image
  5. Alternate ways of achieving this

Creating project in Xcode

If you don't already have Xcode installed on your Mac, download it from developer.apple.com (sign up for an Apple ID if you don't have one) or get it from the App Store. Once you have Xcode, open it and click 'Create a new Xcode project'.

Click on 'Create a new Xcode project'
Select 'Single View App'
Fill in the details and click Next. On the next screen, click Create to save your project on your Mac

Setting up UI elements for your project

Use the storyboard to wire up the user interface

Without going into the details of how to add the UI elements: I have used an ImageView to load and display the vehicle image and to show a frame around the detected number plate, a button named Capture Image to load an image from the photo library using UIImagePickerController (the photo library in this case because I am using the simulator; you can use the camera as the source if you are on an iOS device), and the remaining labels and text fields to display details.
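As a rough sketch of that wiring (the outlet and action names imageView and captureImageTapped are my own placeholders, not necessarily what you name yours), the button presents a UIImagePickerController and the picked image lands in the ImageView:

```swift
import UIKit

class ViewController: UIViewController,
                      UIImagePickerControllerDelegate,
                      UINavigationControllerDelegate {

    @IBOutlet weak var imageView: UIImageView!

    // Present the photo library when the 'Capture Image' button is tapped.
    @IBAction func captureImageTapped(_ sender: UIButton) {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary   // use .camera on a physical device
        picker.delegate = self
        present(picker, animated: true)
    }

    // Show the picked image in the ImageView and dismiss the picker.
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        if let image = info[.originalImage] as? UIImage {
            imageView.image = image
        }
        picker.dismiss(animated: true)
    }
}
```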

Integrate Firebase Vision Framework

The first step is to include the ML Kit libraries in the project. For this you can use CocoaPods to add the dependency to your project. To understand more about how to use CocoaPods, see the reference: CocoaPods

Include the following in your Podfile, save it, run pod install, and open yourapp.xcworkspace to start working on your project.

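For reference, a minimal Podfile for the on-device text recognizer looks something like this (the target name YourApp is a placeholder, and the pod names reflect the Firebase SDK subspecs available at the time of writing):

```ruby
platform :ios, '11.0'

target 'YourApp' do
  use_frameworks!

  pod 'Firebase/Core'               # Firebase core
  pod 'Firebase/MLVision'           # ML Kit Vision APIs
  pod 'Firebase/MLVisionTextModel'  # on-device text recognition model
end
```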

Open your ViewController.swift file and add import Firebase at the top of the file.

Fire up the framework to recognise and detect number plates in an image

To recognize text in an image we can use either an on-device text recognizer or a cloud-based text recognizer. I have used the on-device text recognizer; feel free to explore the cloud-based text recognizer if that suits your app architecture.

Get an instance of the VisionTextRecognizer:
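Assuming you have added import Firebase as above, getting the recognizer looks like this:

```swift
// Keep these as properties on your view controller so they live
// for the duration of the recognition request.
let vision = Vision.vision()
let textRecognizer = vision.onDeviceTextRecognizer()
```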

Create a VisionImage object from the image you picked from the photo library (or captured with your phone camera) via UIImagePickerController
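For example, with the UIImage returned by the picker delegate (pickedImage is a placeholder name of my own):

```swift
// Wrap the UIImage from UIImagePickerController in a VisionImage.
let visionImage = VisionImage(image: pickedImage)
```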

Pass the image object created above to the process(_:completion:) method
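The call itself takes a completion handler:

```swift
// Run text recognition; the completion handler is called with a VisionText
// result (or an error) once processing finishes.
textRecognizer.process(visionImage) { result, error in
    guard error == nil, let result = result else {
        print("Text recognition failed: \(String(describing: error))")
        return
    }
    print(result.text) // full text recognized in the image
}
```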

Below is the complete method to recognize text in the image and draw a bounding box around the detected area. The method recognizes all text occurrences in the image, and to match the Indian number plate format I have used a regular expression. This helps filter the number plate text out of the rest of the text detected in the image.
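Here is a sketch of such a method, assuming the textRecognizer property from earlier plus imageView and plateLabel outlets; the method name findNumberPlate(in:), the exact regular expression, and the plateLabel outlet are my own illustrative choices, not the author's exact code:

```swift
// Recognize text in the image, keep blocks that look like an Indian number
// plate (e.g. "KA 01 AB 1234"), and draw a box around each match.
func findNumberPlate(in image: UIImage) {
    let visionImage = VisionImage(image: image)

    textRecognizer.process(visionImage) { [weak self] result, error in
        guard let self = self, error == nil, let result = result else { return }

        // Illustrative pattern for Indian plates; tweak it for your region's format.
        let platePattern = "^[A-Z]{2}\\s?[0-9]{1,2}\\s?[A-Z]{1,3}\\s?[0-9]{4}$"

        for block in result.blocks {
            let candidate = block.text.replacingOccurrences(of: "\n", with: " ")
            guard candidate.range(of: platePattern, options: .regularExpression) != nil else { continue }

            // block.frame is in the image's coordinate space.
            self.imageView.image = self.drawRectangleOnImage(image, around: block.frame)
            self.plateLabel.text = candidate
        }
    }
}
```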

If the text recognition operation (the process() method) succeeds, it returns a VisionText object that contains the full text recognized in the image and zero or more VisionTextBlock objects. You can drill down into a VisionTextBlock object (which represents a rectangular block of text) to get the recognized text and the bounding coordinates of the region. Also explore VisionTextLine and VisionTextElement for your reference. Here I used a regular expression to filter the detected text to match the Indian number plate format; you can change it to match your needs.

I have also used a drawRectangleOnImage method to draw a bounding box around the recognized text.
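One way to write such a helper (the signature here is my own guess; the original may differ) is to re-render the image and stroke a rectangle over the detected frame:

```swift
// Draw the original image and stroke a red rectangle around the given frame.
func drawRectangleOnImage(_ image: UIImage, around rect: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        context.cgContext.setStrokeColor(UIColor.red.cgColor)
        context.cgContext.setLineWidth(4)
        context.cgContext.stroke(rect)
    }
}
```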

And the final result is:

FirebaseMLVision reference for text recognition

Alternate ways of achieving this

To recognise number plates in an image you can alternatively:

  1. Train an object detection model in CreateML, export it to an .mlmodel, and use it in your iOS app with CoreML. This will be a separate post, please stay tuned
  2. Use YOLOv3 to train an object detection model on your custom dataset, export it to ONNX and then to an .mlmodel to use it in your iOS app. This will also be a separate post.