TENSORFLOW AND USE CASES

Rishab Guggali · Published in Analytics Vidhya · 7 min read · Jun 30, 2020

TensorFlow

TensorFlow is an open-source library for deep learning and machine learning.

It was originally developed by the Google Brain team within Google's machine intelligence research organisation.

It provides primitives for defining functions on tensors and automatically computing their derivatives.
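As a quick illustration of that last point, here is a minimal sketch (not from the original article) of defining a function on tensors and letting TensorFlow compute its derivative with tf.GradientTape:

```python
import tensorflow as tf

# y = x^2 + 3x; TensorFlow records the operations and differentiates them.
x = tf.Variable(2.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 3 * x

dy_dx = tape.gradient(y, x)   # analytically 2x + 3 = 7 at x = 2
print(dy_dx.numpy())          # 7.0
```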

TensorFlow is mainly used for classification, perception, understanding, discovery, prediction, and creation.

Who uses TensorFlow?

Learn how TensorFlow solves real, everyday machine learning problems.

Explore how companies from a wide variety of industries implement ML to solve their biggest problems. From healthcare to social networks and even e-commerce, ML can be integrated into your industry and company.

Its ubiquity, openness, and large community have pushed TensorFlow into the enterprise for solving real-world applications such as analyzing images, generating data, natural language processing, intelligent chatbots, robotics, and more. For corporations of all types and sizes, the use cases that fit well with TensorFlow include:

  • Speech recognition
  • Image recognition
  • Object tagging in videos
  • Self-driving cars
  • Sentiment analysis
  • Detection of flaws
  • Text summarization
  • Mobile image and video processing
  • Air, land and sea drones

1. Google

Google has the most exhaustive database in the world, and they would obviously be more than happy to make the best use of it by exploiting it to the fullest. If all of the different teams working on artificial intelligence (researchers, programmers, and data scientists) could use the same set of tools and thereby collaborate with each other, their work would become much simpler and more efficient. As technology developed and our needs widened, such a tool-set became a necessity. Motivated by this necessity, Google created TensorFlow, a solution they had long been waiting for.

In 2015, Google launched Google Handwriting Input, which let users handwrite text on their Android device as an additional input method for any Android app. To provide a seamless user experience and eliminate the need to switch input methods, they later introduced support for handwriting recognition in more than 100 languages to Gboard for Android, Google's keyboard for mobile devices.

Since then, progress in machine learning has enabled new model architectures and training methodologies: they built a single machine learning model that operates on the whole input and brings down error rates substantially compared to the old version. They published the paper "Fast Multi-language LSTM-based Online Handwriting Recognition," which explains this research in more detail.
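The paper describes the actual architecture and training setup; purely as a hypothetical sketch of the general shape of such a model (a bidirectional LSTM running over a sequence of stroke features and emitting per-step character probabilities), it might look something like this:

```python
import tensorflow as tf

# Hypothetical shapes: each time step is a small feature vector describing a
# piece of the pen trajectory; the output is a per-step distribution over
# characters. This is NOT the actual Gboard model, just an illustrative shape.
NUM_FEATURES = 10
NUM_CLASSES = 100  # characters in the target alphabet (illustrative)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, NUM_FEATURES)),  # variable-length stroke sequence
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # per-step character probabilities
])
model.summary()
```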

Making it Work, On-device

To give the best user experience, accurate recognition models were not enough; they also had to be fast. To achieve the lowest possible latency in Gboard, they convert their recognition models (trained in TensorFlow) to TensorFlow Lite models. This includes quantizing all the weights during model training so that each weight uses one byte instead of four, which leads to smaller models as well as lower inference times. Moreover, TensorFlow Lite allows them to reduce the APK size compared to using a full TensorFlow implementation, because it is optimized for small binary size and only includes the parts required for inference.
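The article refers to quantizing weights during training; as a simpler, hedged sketch of the same idea (post-training weight quantization while converting a trained Keras model to TensorFlow Lite), the conversion step can look roughly like this:

```python
import tensorflow as tf

# Stand-in for an already-trained tf.keras model (hypothetical architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Default optimizations include weight quantization, shrinking 4-byte float
# weights to roughly 1 byte each, for smaller models and faster inference.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("recognizer.tflite", "wb") as f:
    f.write(tflite_model)
```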

2. GE Healthcare

Intelligent Scanning Using Deep Learning for MRI

MRI is a 3D imaging technique that allows clinicians to visualize structures in the body non-invasively and without ionizing radiation. MRI is a widely used and powerful imaging modality due to its superior contrast between “soft” tissues, e.g. gray matter, white matter and CSF, as well as its unique ability to not only visualize anatomical structures but also depict physiology and function, e.g. blood flow, perfusion, and diffusion.

They utilized the TensorFlow library with the Keras interface to implement the DL-based framework for intelligent slice placement (ISP). They chose TensorFlow since it provided ready support for 2D and 3D Convolutional Neural Networks (CNNs), which is the primary requirement for medical image volume processing, and the Keras API made it easy to rapidly develop and test their ideas.
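The network itself is not shown here; as a minimal, hypothetical illustration of the kind of 3D CNN that TensorFlow and Keras make easy to express for volumetric data (the input size and output head below are made up for illustration):

```python
import tensorflow as tf

# Hypothetical input: a low-resolution localizer volume (depth, height, width, channels).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 64, 64, 1)),
    tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling3D(pool_size=2),
    tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(6),  # e.g. scan-plane orientation/offset parameters (illustrative)
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```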

Unlike prior approaches to automating slice placement, their ISP framework uses deep learning (DL) to determine the necessary plane(s) without the need for explicit delineation of landmark structures or customization of the localizer images. It can also warn the user when the localizer images do not contain enough information to automatically determine the best scan planes.

The localizer images are very low-resolution, have limited brain coverage, and several fine structures are not easily identified in them. Moreover, compared to classical approaches, a DL-based approach is less affected by factors that alter MRI image quality or appearance. This makes their DL approach robust to differences in MRI hardware, site-specific parameter settings, and, more generally, patient positioning across different clinics and hospitals.

Benefits of TensorFlow

They chose TensorFlow as their development and deployment platform for the following reasons:

  • Support for 2D and 3D Convolutional Neural Networks (CNNs), which is the primary requirement for medical image volume processing.
  • Extensive built-in library functions for image manipulation and optimized tensor computations.
  • Extensive open-source user and developer community which supported latest algorithm implementations and made them readily available.
  • Continuous development with backward compatibility making it easier for code development and maintenance.
  • Stability of graph computations made it attractive for product deployment.
  • The Keras interface was available, which significantly reduced development time: this helped in generating and evaluating different models based on hyper-parameter tuning and determining the most accurate model for deployment.
  • Deployment was done using a TensorFlow Serving CPU-based Docker container and REST API calls to process the localizer once it is acquired (a minimal sketch of such a call follows this list).
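As mentioned in the last bullet, TensorFlow Serving exposes models over REST. Below is a minimal sketch of such a call; the model name, port, and input shape are placeholders rather than details from the article:

```python
import json

import numpy as np
import requests

# Hypothetical localizer volume; the real preprocessing is not described here.
localizer = np.zeros((1, 32, 64, 64, 1), dtype=np.float32)

# TensorFlow Serving REST endpoint: http://<host>:8501/v1/models/<name>:predict
response = requests.post(
    "http://localhost:8501/v1/models/isp_model:predict",
    data=json.dumps({"instances": localizer.tolist()}),
)
predictions = response.json()["predictions"]
```

TensorFlow Serving exposes this :predict endpoint on port 8501 by default when run from the official Docker image.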

3. Coca-Cola

For years Coke attempted to use off-the-shelf optical character recognition (OCR) libraries and services to read product codes with little success. Their printing process typically uses low-resolution dot-matrix fonts with the cap or fridge-pack media running under the printhead at very high speeds. All of this translates into a low-fidelity string of characters that defeats off-the-shelf OCR offerings (and can sometimes be hard to read with the human eye as well). OCR is critical to simplifying the code-entry process for mobile users: they should be able to take a picture of a code and automatically have the purchase registered for a promotional entry. They needed a purpose-built OCR system to recognize their product codes.

Their research led to a promising solution: convolutional neural networks (CNNs). CNNs are one of a family of “deep learning” neural networks that are at the heart of modern artificial intelligence products. CNNs also perform remarkably well at recognizing handwritten digits. These number-recognition use cases were a perfect proxy for the type of problem they were trying to solve: extracting strings from images that contain small character sets with lots of variance in the appearance of the characters.
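As a point of reference for the digit-recognition analogy (and not Coca-Cola's actual purpose-built model), a small Keras CNN for handwritten digits can be put together like this:

```python
import tensorflow as tf

# Classic MNIST-style setup: 28x28 grayscale images of the digits 0-9.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
```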

Advances in artificial intelligence and the maturity of TensorFlow enabled them to finally achieve a long-sought proof-of-purchase capability. Since launching in late February 2017, their product code recognition platform has fueled more than a dozen promotions and resulted in over 180,000 scanned codes; it is now a core component for all of Coca-Cola North America’s web-based promotions.

Coke saved millions of dollars by avoiding the need to update printers in their production lines to support higher-fidelity fonts that would work with existing off-the-shelf OCR software.

4. Twitter

As a global, public communications platform, Twitter strives to keep users informed with relevant, healthy content. Originally, Twitter presented Tweets in reverse-chronological order. As the community became more connected, the amount of content in users' home timelines increased significantly. Users might follow hundreds or even thousands of people on Twitter, and when they opened the app they would miss some of their most important Tweets.

To address this issue, they launched a “Ranked Timeline” which shows the most relevant Tweets at the top of the timeline, ensuring users never miss their best Tweets. A year later they shared how machine learning powers the ranked timeline at scale. Since then, they have re-tooled their machine learning platform to use TensorFlow.

When ranking, each candidate Tweet is scored by a relevance model in order to predict how relevant it is to each user. “Relevance” is defined by multiple factors including how likely a user is to engage with the Tweet, and how likely it is to encourage healthy public conversation. This model uses thousands of features from three entities: the Tweet, Author, and viewing User.
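The production model is far larger and its features are not public; as a toy sketch of the idea (a scorer that maps a combined Tweet/author/viewing-user feature vector to a relevance score, with the sizes below invented for illustration):

```python
import tensorflow as tf

NUM_FEATURES = 128  # stand-in for the thousands of real features

# A simple feed-forward scorer: features in, predicted engagement probability out.
relevance_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # relevance score in [0, 1]
])
relevance_model.compile(optimizer="adam", loss="binary_crossentropy")

# Candidate Tweets are scored and the timeline sorted by score (illustrative only).
candidate_features = tf.random.uniform((10, NUM_FEATURES))
scores = relevance_model(candidate_features)
ranking = tf.argsort(tf.squeeze(scores, axis=-1), direction="DESCENDING")
```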

In summary, adopting the TensorFlow platform for their ML models has unlocked significant wins for the Twitter Timelines team on multiple fronts. They have seen improved model quality and timelines quality for Twitter users, reduced training and model iteration time, and their ML engineering team benefits from the improved extensibility and maintainability of the platform itself. They look forward to further innovations in their next generation of models built on the capabilities of this powerful platform.
