Implementing Google ML Kit in Flutter

Aayush Sharma
Published in DSC KIET · 5 min read · Nov 22, 2020

In this article, I’ll guide you through the implementation of Google’s ML Kit in Flutter by building a translator application that can recognise text from an image and translate it into the desired language.

Google’s ML Kit comes in handy as it can empower your application with smart features like detecting barcodes, text, faces, and objects. ML Kit’s processing happens on-device. This makes it fast and unlocks real-time use cases like processing of camera input. It can also work offline. 😲

We’ll be using the following APIs offered by Google’s ML Kit for our translator app:

  • Text Recognition
  • Language ID
  • On-Device Translation

Project Overview

I suggest you go through the Git repository alongside this article and try to understand the code and the folder structure.

Also, I assume that you have prior knowledge of Flutter and Firebase.

Startup 🚀

Well then, let’s get started.

The following images show the basic UI of the application.

Basic UI screens for our text recognition and translation app

You can get the base UI of the application here.

Firebase🔥

With the UI part of the application finished, we can move on to setting up a new Firebase project for our application.

  1. Head over to the Firebase console.
  2. Click on Add project.
  3. Follow the instructions.
  4. A prompt to add an app will appear after the project is created.
  5. Go through the instructions given, and voila, you have successfully added Firebase to your application. 🎉

ML Kit💡

  • First and foremost, add the required dependencies to the app-level build.gradle file (a sketch follows the note below).

NOTE: Remove the firebase-bom entry from the dependencies when using firebase_ml_vision, as it can cause a “dependency resolution failed” error.
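
A rough sketch of what the app-level android/app/build.gradle ends up containing; the artifact version here is illustrative, so check the firebase_ml_vision setup docs for the current one:

// android/app/build.gradle (app level)
apply plugin: 'com.android.application'
apply plugin: 'com.google.gms.google-services' // required for Firebase

dependencies {
    // Note: no firebase-bom entry here, as explained in the note above.
    implementation 'com.google.firebase:firebase-ml-vision:24.0.3' // version illustrative
}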

  • You can configure your app to automatically download the ML model to the device when you install your app from the Play Store.
  • To do so, add the following declaration to your app’s AndroidManifest.xml file. This snippet follows the firebase_ml_vision setup docs; the value “ocr” requests the text-recognition model:
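
<application ...>
  ...
  <!-- Download the OCR model at install time so text recognition is available offline. -->
  <meta-data
      android:name="com.google.firebase.ml.vision.DEPENDENCIES"
      android:value="ocr" />
</application>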

Packages Used 📦

Add these packages to the pubspec.yaml file under dependencies; a sketch of the list follows.
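
As a sketch based on what we use in this article (version constraints omitted; pick ones compatible with your Flutter SDK):

dependencies:
  flutter:
    sdk: flutter
  firebase_core:                    # Firebase initialisation
  firebase_ml_vision:               # text recognition
  flutter_language_identification:  # language ID
  translator:                       # GoogleTranslator
  image_picker:                     # capturing images
  flutter_bloc:                     # state management
  equatable:                        # value equality for bloc states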

Image Picker 📷

To implement text recognition, we first need to capture an image, and that’s where the image_picker package comes into play.

  • First, add the dependency to the yaml file.
  • Instantiate the ImagePicker.
final picker = ImagePicker();
  • Now call the getImage method on the picker object to capture the image. We can also specify the image source and the image quality.
final pickedFile = await picker.getImage(source: ImageSource.camera, imageQuality: 50);
  • Return the image file if it is not null. The whole method is sketched below.
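
Putting the steps together, a minimal sketch (pre-null-safety Dart, matching the getImage API used above):

import 'dart:io';

import 'package:image_picker/image_picker.dart';

final picker = ImagePicker();

Future<File> captureImage() async {
  final pickedFile =
      await picker.getImage(source: ImageSource.camera, imageQuality: 50);
  // Return the image file only if the user actually captured a picture.
  if (pickedFile != null) return File(pickedFile.path);
  return null;
}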

Text Recognition 🔍

  • First, get an instance of FirebaseVision.
  • Now we need to create two objects: a FirebaseVisionImage and a TextRecognizer. Pass the captured image as a file to create the FirebaseVisionImage.
  • Finally, to recognise the text from the image, we need to call the processImage method on our recogniser object and pass in the vision image.

This returns a VisionText object, which contains our recognised text as a string.

  • Return the recognised text and call the close method on the recogniser so we don’t hold on to resources unnecessarily. The whole flow is sketched below.
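
Put together, the recognition flow looks roughly like this (the helper’s name is my own):

import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';

Future<String> recogniseText(File image) async {
  // Create a vision image from the captured file and an on-device text recogniser.
  final visionImage = FirebaseVisionImage.fromFile(image);
  final textRecogniser = FirebaseVision.instance.textRecognizer();

  // processImage returns a VisionText object holding the recognised text.
  final VisionText visionText = await textRecogniser.processImage(visionImage);

  // Close the recogniser to free up resources.
  await textRecogniser.close();
  return visionText.text;
}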

Identifying the Language ID 📝

  • First, instantiate FlutterLanguageIdentification.
  • Now call the identifyLanguage method on languageIdentification and pass the recognised text to it.
  • To receive the result, you need to listen for the platform callback: call the setSuccessHandler method on languageIdentification and register a callback that gives you the BCP-47 code of the identified language.
  • Create a map of supported languages so that you can map the language code to the appropriate language name.
// for example
supportedLanguage['af'] // gives Afrikaans as output
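
A rough sketch of this step; identifyLanguage and setSuccessHandler are the methods named above, but the exact callback signature is an assumption, so check the flutter_language_identification docs:

import 'package:flutter_language_identification/flutter_language_identification.dart';

final languageIdentification = FlutterLanguageIdentification();

// A small excerpt of the supported-languages map (shape assumed).
const supportedLanguage = {
  'af': 'Afrikaans',
  'en': 'English',
  'hi': 'Hindi',
};

Future<void> identify(String recognisedText) async {
  // The success handler receives the BCP-47 code of the identified language.
  languageIdentification.setSuccessHandler((languageCode) {
    print(supportedLanguage[languageCode]); // e.g. 'af' -> Afrikaans
  });
  await languageIdentification.identifyLanguage(recognisedText);
}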

Translate 🔈

Now that we have the recognised text and its language code, we can finally move on to translating it into our desired language.

  • First, instantiate our translator object.
final translator = GoogleTranslator();
  • Now let the user select the target language from the drop-down menu. The DropdownButton is of type String: each drop-down item has a Text widget with the language name as its child, and the language code (from the supported-languages map) is passed in its value property.
  • Now call the translate method on the translator object.
  • Pass the recognised text, its language code, and the language code of the target language.
  • Return the translated text from the translatedText object.
return translatedText.text;
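
As a sketch, the whole step wrapped in a helper (the function name is my own):

import 'package:translator/translator.dart';

final translator = GoogleTranslator();

Future<String> translateText(
    String text, String fromLangCode, String toLangCode) async {
  final translatedText =
      await translator.translate(text, from: fromLangCode, to: toLangCode);
  return translatedText.text;
}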

Implementing the above functionality in the app

  • Firstly, when we tap the capture button, we should be directed to capture an image through the camera. To do that, let’s instantiate our capture class, which holds the image-capturing method.
  • In the onTap property of the capture button, the following lines of code are executed (a sketch follows this list).
  • After successfully capturing an image, we move to the translate screen, where text recognition and translation happen.
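
A sketch of that handler; Capture and TranslateScreen are placeholder names for the capture class and translate screen described above:

onTap: () async {
  final image = await Capture().captureImage();
  if (image != null) {
    // After a successful capture, move on to the translate screen.
    Navigator.push(
      context,
      MaterialPageRoute(builder: (_) => TranslateScreen(image: image)),
    );
  }
},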

BLOCs 🚧

  • To handle the text-recognition and translation events and the state of the app, I have used BLoC. You can use any state-management technique that you are comfortable with.
  • First, let’s add flutter_bloc to the project (I have also used equatable).

Text Recognition Bloc

  • Let’s create the RecogniseText bloc. If you have the VS Code extension for bloc installed, right-clicking in the directory panel gives you the option to create a new bloc.
  • Begin with defining the states of the bloc.
  • We only have one event in this bloc: the RecogniseText event.
  • The implementation of the RecogniseText event in the bloc looks like the sketch below:
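
A condensed sketch of the bloc in the bloc 6.x style (mapEventToState) that was current when this was written; the state names are my assumptions, and recogniseText is the helper from the Text Recognition section:

import 'dart:io';

import 'package:equatable/equatable.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

// The single event: ask the bloc to recognise text in a captured image.
class RecogniseText extends Equatable {
  const RecogniseText(this.image);
  final File image;

  @override
  List<Object> get props => [image];
}

// States: initial -> loading -> success/failure.
abstract class RecogniseTextState extends Equatable {
  const RecogniseTextState();

  @override
  List<Object> get props => [];
}

class RecogniseTextInitial extends RecogniseTextState {}

class RecogniseTextLoading extends RecogniseTextState {}

class RecogniseTextSuccess extends RecogniseTextState {
  const RecogniseTextSuccess(this.text);
  final String text;

  @override
  List<Object> get props => [text];
}

class RecogniseTextFailure extends RecogniseTextState {}

class RecogniseTextBloc extends Bloc<RecogniseText, RecogniseTextState> {
  RecogniseTextBloc() : super(RecogniseTextInitial());

  @override
  Stream<RecogniseTextState> mapEventToState(RecogniseText event) async* {
    yield RecogniseTextLoading();
    try {
      yield RecogniseTextSuccess(await recogniseText(event.image));
    } catch (_) {
      yield RecogniseTextFailure();
    }
  }
}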

Translator Bloc

  • Let’s create the Translator bloc.
  • Again, begin with defining the states of the bloc.
  • Then implement the TranslateText event; in the bloc it looks like the sketch below:
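
The Translator bloc follows the same pattern; the names are again my assumptions, and translateText is the helper from the Translate section:

import 'package:equatable/equatable.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

class TranslateText extends Equatable {
  const TranslateText(this.text, this.fromLangCode, this.toLangCode);
  final String text;
  final String fromLangCode;
  final String toLangCode;

  @override
  List<Object> get props => [text, fromLangCode, toLangCode];
}

abstract class TranslatorState extends Equatable {
  const TranslatorState();

  @override
  List<Object> get props => [];
}

class TranslatorInitial extends TranslatorState {}

class TranslatorLoading extends TranslatorState {}

class TranslatorSuccess extends TranslatorState {
  const TranslatorSuccess(this.translatedText);
  final String translatedText;

  @override
  List<Object> get props => [translatedText];
}

class TranslatorFailure extends TranslatorState {}

class TranslatorBloc extends Bloc<TranslateText, TranslatorState> {
  TranslatorBloc() : super(TranslatorInitial());

  @override
  Stream<TranslatorState> mapEventToState(TranslateText event) async* {
    yield TranslatorLoading();
    try {
      yield TranslatorSuccess(
          await translateText(event.text, event.fromLangCode, event.toLangCode));
    } catch (_) {
      yield TranslatorFailure();
    }
  }
}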

Implementing Blocs

  • As soon as the translate screen pops up, we want the text recognition to start, so we wrap the translate screen in a MultiBlocProvider and then add the text-recognition event to the bloc.
  • Now you can use the BlocBuilder to change the UI of the app as per the state emitted by the bloc.
  • To translate the text, we need to trigger the TranslateText event when the translate button is pressed. To do that, we grab the bloc instance with the help of BlocProvider.
final bloc = BlocProvider.of<TranslatorBloc>(context);
  • Now, in the onTap property of the button, we can trigger the TranslateText event:
onTap: () => bloc.add(
  TranslateText(text, fromLangCode, toLangCode),
),
  • Again, by using BlocBuilder, you can manage the UI changes to display the translated text (sketched below).
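
Roughly, the wiring looks like this; the screen and widget names are carried over from the sketches above:

MultiBlocProvider(
  providers: [
    // Kick off text recognition as soon as the translate screen is built.
    BlocProvider(
      create: (_) => RecogniseTextBloc()..add(RecogniseText(image)),
    ),
    BlocProvider(create: (_) => TranslatorBloc()),
  ],
  child: TranslateScreen(image: image),
)

// Rebuild the UI from the states emitted by the bloc.
BlocBuilder<TranslatorBloc, TranslatorState>(
  builder: (context, state) {
    if (state is TranslatorLoading) return CircularProgressIndicator();
    if (state is TranslatorSuccess) return Text(state.translatedText);
    return Text('Select a language and tap translate');
  },
)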

Aaaand we’re done! 🎉

Let’s have a look at the final application empowered with Google’s ML Kit.

Let me know in the comments if you have any doubts or face issues. You can also reach out to me on Twitter and Instagram.
