Deploying your Machine Learning Project

The final frontier of your Machine Learning Model

Siddharth Varangaonkar
AI Graduate
6 min read · May 7, 2019


You worked hard on the initial steps of the ML pipeline to get the most precise results. You worked day and night gathering data, cleaning it, and building the model, and now you hope to pull off the last step - the endgame. Without deployment, these models are no good lying in your IDE or Jupyter notebook. Your creation needs to reach the customers to wield its full potential. Model deployment is the final but crucial step that turns your project into a product.


I have a model now what?

To deploy a machine learning model you need a trained model, and you then use that pre-trained model to make predictions after deployment. A pre-trained model is one you have trained on your training, validation and testing sets and whose parameters you have tuned to achieve good performance on your metrics. This post mostly deals with offline training. We can also keep training the model every time new data is encountered after the model is deployed. Imagine you want to build a face recognition system to be deployed at an ATM vestibule. The difference between online and offline training is that in offline training the recognition model is already trained and tuned and is simply performing predictions at the ATM, whereas in an online training scenario the model keeps tuning itself as it keeps seeing new faces.

We can deploy machine learning models on various platforms such as:

  • Websites - Flask framework with deployment on Heroku (free)
  • Websites - Django framework
  • Android apps
  • Python GUI - Tkinter
  • Cloud-Based Services - AWS, Azure, Google Cloud Platform

The list above is by no means exhaustive, and there are various other ways in which you can deploy a model. Python is the most popular language for machine learning, and besides its numerous frameworks for developing ML models it also has a standard module that helps with deployment: pickle. Pickle serializes Python objects to files and loads them back. This makes it easy to save a trained model to a file, load it again in other code, and so keep our model training code separate from the code that deploys our model.

The two basic operations are serializing our trained model to a file called model.pkl, and loading the model back from that file when the deployment code starts up.
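A minimal sketch of both operations with pickle, where a small scikit-learn estimator stands in for your real trained model (the estimator, data, and file name are illustrative):

```python
import pickle

from sklearn.linear_model import LogisticRegression

# Stand-in for your real training step: fit a tiny model
model = LogisticRegression().fit([[0], [1], [2], [3]], [0, 0, 1, 1])

# Serialize the trained model to model.pkl
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later, in the deployment code, load it back and predict
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored.predict([[3.0]]))
```

The training script writes model.pkl once; the deployment code only ever reads it, which is what keeps the two codebases separate.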

An easily approachable way is to BUILD AN API. Convert your machine learning model into an API using Django or Flask, so that clients can send data over HTTP and receive predictions back.

There are a few good resources online for converting your model to an API in Django and Flask.

Websites are the broadest deployment target for your model. Almost all e-commerce websites, social media platforms, search engines etc. use machine learning models to power them. A familiar example is the recommender system on amazon.com.


Flask deployment with Heroku web server

The Python Flask framework allows us to create web servers in record time. The Flask web server handles HTTP requests and responses, with the data carried as JSON objects.

Now there are two paths to deployment on Flask: the first is to load a pre-trained model from its pickle file into the server; alternatively, we can build the model directly into the Flask routes. Either way, define an app.route decorator in the Flask file and put your prediction code inside the decorated function. The app.route decorator connects a URL path to a function of the Flask application, so when you visit the route, or trigger it with an HTML form action, the machine learning model runs and returns its predictions. Refer to this video which explains the process with an example.
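A minimal sketch of the pickle-file path, with an illustrative /predict route and JSON field name (the tiny inline model is only there to keep the example self-contained; in practice model.pkl comes from your training script):

```python
import pickle

from flask import Flask, jsonify, request
from sklearn.linear_model import LogisticRegression

# Stand-in for your training script: fit and pickle a tiny model
with open("model.pkl", "wb") as f:
    pickle.dump(LogisticRegression().fit([[0], [1], [2], [3]], [0, 0, 1, 1]), f)

app = Flask(__name__)

# Load the pre-trained model once, when the server starts
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()  # e.g. {"features": [[3.0]]}
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(port=5000)
```

A client would then POST JSON like `{"features": [[3.0]]}` to /predict and get the prediction back as JSON.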


To deploy this Flask application with the ML model on the Heroku cloud server, you can refer to this article. Heroku is a cloud hosting service with a free tier.
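For the Heroku step itself, the repository also needs a Procfile telling Heroku how to start the server. A common one-line sketch, assuming the Flask app object is named app inside app.py and gunicorn is listed in requirements.txt:

```
web: gunicorn app:app
```

Heroku reads this file on deploy and runs gunicorn as the web process serving your Flask application.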

I would prefer Flask over Django for ML model deployment, as Flask is easier to pick up initially and its deployment is also straightforward.

Deployment on Django Framework

You can utilize Django's cache framework to store your model. First, activate the local-memory cache backend (Instructions).

Next, store your model in the cache and add the ML model to your Django views, similar to Flask. The purpose of the cache is to hold our model so we can fetch it when needed and load it to predict results. Refer to this for an example.

Django Deployment Flowchart

Mobile apps

Take a snap! Adding filters to your snap on Snapchat, Google Assistant recognizing music to find the song you want, Netflix recommendation notifications: all of these are examples of machine learning models deployed on mobile.

Object detection | source:oreilly.com

Object detection, face recognition, face unlock and gesture control are some widely used machine learning applications on every Android phone today.

Firebase and TensorFlow are very good frameworks for quick and easy development and deployment. You can use either TensorFlow Mobile or TensorFlow Lite.

TensorFlow Lite has an edge over TensorFlow Mobile: models have a smaller binary size, fewer dependencies, and better performance. It also works in both Android and iOS apps.
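The conversion step can be sketched in a few lines; the tiny Keras model below is only a placeholder for your real trained model:

```python
import tensorflow as tf

# Placeholder architecture standing in for your trained model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model to a TensorFlow Lite flat buffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# Ship this file inside the mobile app and run it with the TF Lite interpreter
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The resulting model.tflite is the smaller binary referred to above, loaded on-device by the TF Lite interpreter.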

Tensorflow Lite | source:youtube.com

Using the Python GUI library - Tkinter

The prerequisite for this deployment is working knowledge of the Tkinter GUI programming library. All you have to do is call your machine learning model from the functions defined in your code while designing a user interface with the library. These are some references for you with examples - Tkinter ML.
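A minimal sketch of the idea: a pickled model (created inline here only to keep the sketch self-contained) wired to a hypothetical one-field Tkinter window, with the prediction logic kept in its own function:

```python
import pickle
import tkinter as tk

from sklearn.linear_model import LogisticRegression

# Stand-in for your training script: fit and pickle a tiny model
with open("model.pkl", "wb") as f:
    pickle.dump(LogisticRegression().fit([[0], [1], [2], [3]], [0, 0, 1, 1]), f)

with open("model.pkl", "rb") as f:
    model = pickle.load(f)

def predict(value):
    # The model call that the GUI button triggers
    return int(model.predict([[float(value)]])[0])

def build_app():
    # Illustrative single-input window around the model
    root = tk.Tk()
    root.title("ML demo")
    entry = tk.Entry(root)
    result = tk.Label(root, text="prediction: ?")
    button = tk.Button(
        root, text="Predict",
        command=lambda: result.config(text=f"prediction: {predict(entry.get())}"),
    )
    entry.pack(); button.pack(); result.pack()
    return root

if __name__ == "__main__":
    build_app().mainloop()
```

Keeping predict separate from build_app makes the model logic testable without opening a window.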

Cloud-Based Services

There are cloud-based services made for ML deployment, such as Clarifai (vision AI solutions), Google Cloud AI (machine learning services with pre-trained models and a service to generate your own tailored models), the Amazon SageMaker service, and Microsoft Azure Machine Learning.

Amazon SageMaker is one of the most automated solutions on the market and the best fit for deadline-sensitive operations. Amazon has a large catalog of MLaaS (Machine Learning as a Service) offerings which help developers complete their tasks efficiently. A typical example is a trained model that predicts cats or dogs, deployed on the cloud.


I’ve tried to collate references and give you an overview of the various deployment processes on different frameworks. Hopefully this gets you started on converting your ML project to a product and helps you sail easily through the crucial final step of your ML project!

Additional Resources-

  1. Flask mega-tutorial Heroku
  2. Keras model on android
  3. Deploy Model with Kubernetes

X8 aims to organize and build a community for AI that not only is open source but also looks at its ethical and political aspects. More such simplified AI concepts will follow. If you liked this or have some feedback or follow-up questions, please comment below.
