Hi Developer (or curious soon-to-be developer),
What do you think about machine learning? Cool? Complex? Exciting? I want to talk a little about machine learning and how we, as Android developers, can make use of it.
So…What is Machine Learning?
Machine learning is an application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. It focuses on the development of computer programs that can access data and use it to learn for themselves.
The primary aim is to allow computers to learn automatically, without human intervention or assistance, and to adjust their actions accordingly.
We already use machine learning on our devices every day; for example, Google Assistant, Siri, and Spotify.
We can examine ML workloads under two headings:
One of them is Cloud Intelligence, which runs in the cloud. It is ideal for heavy computations, but the request-response round trips can be slow.
The other is On-device Intelligence, which runs on the device itself. It is fast because there is no request-response round trip at all, but the device hardware may not be powerful enough for every workload.
In machine learning, a neural network is loosely modeled on the structure of the brain. It consists of three kinds of layers: data from the input layer is transformed in the hidden layer, and the result is passed to the output layer.
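That three-layer flow can be sketched in plain Kotlin. This is only a minimal illustration with made-up, fixed weights (a real network learns its weights from data); `layer` and `forward` are hypothetical helper names, not part of any SDK.

```kotlin
import kotlin.math.exp

// Squashes any value into (0, 1) -- a common activation function.
fun sigmoid(x: Double): Double = 1.0 / (1.0 + exp(-x))

// One layer: each output neuron is a weighted sum of all inputs,
// plus a bias, passed through the activation function.
fun layer(inputs: DoubleArray, weights: Array<DoubleArray>, bias: DoubleArray): DoubleArray =
    DoubleArray(weights.size) { i ->
        sigmoid(inputs.indices.sumOf { j -> inputs[j] * weights[i][j] } + bias[i])
    }

// Input layer (2 values) -> hidden layer (3 neurons) -> output layer (1 neuron).
// The weights here are arbitrary, just to show the data flow.
fun forward(input: DoubleArray): DoubleArray {
    val hiddenW = arrayOf(
        doubleArrayOf(0.5, -0.2),
        doubleArrayOf(0.1, 0.8),
        doubleArrayOf(-0.3, 0.4)
    )
    val hiddenB = doubleArrayOf(0.0, 0.1, -0.1)
    val outW = arrayOf(doubleArrayOf(0.6, -0.4, 0.9))
    val outB = doubleArrayOf(0.05)

    val hidden = layer(input, hiddenW, hiddenB) // hidden layer synthesizes the input
    return layer(hidden, outW, outB)            // output layer produces the result
}
```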
In this article, I’ll talk about two tools that help us use machine learning on Android… ML Kit and TensorFlow Lite!
What is ML Kit?
ML Kit is a mobile SDK that brings Google's machine learning to Android and iOS apps in a powerful package. It is production-ready for common use cases, offers both cloud-based and on-device features, and also lets you deploy custom models. ML Kit is in beta at the time of writing.
ML Kit makes it easy to apply ML techniques in your apps by bringing Google’s ML technologies, such as the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API together in a single SDK.
What can we do with ML Kit?
With ML Kit you can detect faces. Face detection, a well-known machine learning feature, becomes simple to use: its functions give us the coordinates of the eyes, ears, cheeks, nose, and mouth of a face (and its contours).
ML Kit can also classify faces: it can tell whether a face is smiling or has its eyes closed, and it can track detected faces across real-time video frames. Face detection runs on the device, so no internet connection is needed.
Landmark Recognition gives you the name and geographic coordinates of natural and man-made landmarks, along with the region of the image that contains them. It runs in the cloud. For example, suppose you are walking around Galata Tower: feed in a photo of it, and you would learn that the structure is located in Istanbul.
You can simply use it with the response returned from the visionCloudLandmarkDetector method.
Barcode Scanning reads most standard formats. It recognizes all the built-in barcode formats at once, without you having to specify which one to look for; alternatively, you can speed up scanning by restricting the detector to only the formats you expect.
Structured data is parsed automatically. Supported types include URLs, contact information, calendar events, email addresses, phone numbers, SMS message prompts, ISBNs, WiFi connection information, and geolocation information.
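ML Kit hands you these fields already parsed. As a rough illustration of what that structure looks like, here is a toy parser for the common WiFi QR payload format (`WIFI:S:<ssid>;T:<type>;P:<password>;;`); the function name is hypothetical, and edge cases such as escaped separators are ignored.

```kotlin
// Toy sketch: split a WiFi QR payload into its key/value fields.
// ML Kit does this for you; this only shows the shape of the data.
fun parseWifiPayload(payload: String): Map<String, String> {
    require(payload.startsWith("WIFI:")) { "not a WiFi payload" }
    return payload.removePrefix("WIFI:")
        .trimEnd(';')
        .split(';')
        .filter { it.contains(':') }
        .associate { field ->
            val (key, value) = field.split(':', limit = 2)
            key to value // e.g. "S" -> ssid, "T" -> security type, "P" -> password
        }
}
```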
You can also detect the language of a text. Language Identification supports more than one hundred different languages.
By default, ML Kit returns a value other than "und" (undetermined) only when it identifies the language with a confidence of at least 0.5. You can change this threshold by passing a FirebaseLanguageIdentificationOptions object to getLanguageIdentification().
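That threshold rule can be illustrated in plain Kotlin. This is not the ML Kit API itself, just a sketch of the behavior described above: pick the most confident candidate, and fall back to "und" when nothing reaches the threshold. The type and function names are hypothetical.

```kotlin
// Hypothetical stand-in for an ML Kit language candidate.
data class LanguageCandidate(val code: String, val confidence: Float)

// Return the best candidate's language code, or "und" (undetermined)
// when no candidate reaches the threshold -- ML Kit's default is 0.5.
fun identifyLanguage(candidates: List<LanguageCandidate>, threshold: Float = 0.5f): String {
    val best = candidates.maxByOrNull { it.confidence }
    return if (best != null && best.confidence >= threshold) best.code else "und"
}
```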
And it recognizes text. Ideally, for Latin text each character should be at least 16x16 pixels; for Chinese, Japanese, and Korean text (only supported by the cloud-based APIs), each character should be 24x24 pixels.
You can access the recognized text as blocks and lines.
Image Labeling is an API and model that can recognize entities in an image, and supply information about those entities in the form of labels. Each label has an accompanying score indicating how certain ML Kit is about this particular label.
The on-device API supports 400+ labels, while the cloud-based API supports 10,000+.
Let’s see how we can use these tools…
First, add the appropriate metadata to the manifest file (for example, "label" for image labeling, "barcode" for barcode scanning).
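For example, to have the on-device image-labeling and barcode models downloaded at install time, the metadata entry goes inside the `<application>` element of AndroidManifest.xml:

```xml
<application>
    <!-- Download the on-device models for these features at install time -->
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="label,barcode" />
</application>
```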
Then add your dependencies to the build file.
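In the app-level build.gradle, that looks roughly like this (the version numbers below are illustrative; check the Firebase release notes for the current ones):

```groovy
dependencies {
    // Core ML Kit vision APIs
    implementation 'com.google.firebase:firebase-ml-vision:24.0.3'
    // Optional: bundled model for on-device image labeling
    implementation 'com.google.firebase:firebase-ml-vision-image-label-model:20.0.1'
}
```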
Finally, define the photo and call the method of the tool you are using.
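Putting it together for on-device image labeling, the call looks roughly like this. This is a sketch against the beta firebase-ml-vision API; `labelImage` is a hypothetical helper name.

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: label a photo with the on-device image labeler.
fun labelImage(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val labeler = FirebaseVision.getInstance().onDeviceImageLabeler

    labeler.processImage(image)
        .addOnSuccessListener { labels ->
            // Each label comes with a confidence score.
            for (label in labels) {
                println("${label.text}: ${label.confidence}")
            }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

The call is asynchronous: the results arrive in the success listener, so any UI update belongs there.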
What is TensorFlow Lite?
TensorFlow Lite is TensorFlow’s solution for mobile and embedded devices.
TensorFlow models are trained first, then converted to the TF Lite format as a .tflite file. These models are compatible with both Android and iOS.
It also works alongside ML Kit: if ML Kit’s base models are not enough for our needs, we can integrate custom TFLite models.
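Running a converted .tflite file on Android goes through the TensorFlow Lite Interpreter. A rough sketch, assuming the org.tensorflow:tensorflow-lite dependency; the 1x10 output shape and the function name are made up for illustration and depend on your actual model:

```kotlin
import java.io.File
import org.tensorflow.lite.Interpreter

// Sketch: feed one input vector through a .tflite model.
fun runModel(modelFile: File, input: FloatArray): FloatArray {
    val interpreter = Interpreter(modelFile)
    // Output shape must match the model; 1x10 is just an example.
    val output = Array(1) { FloatArray(10) }
    interpreter.run(arrayOf(input), output) // batch of one input
    interpreter.close()
    return output[0]
}
```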
I hope this helps you with these topics. You can reach out to me :)
Thanks a lot…