Deploying Your Machine Learning Model as a REST API Using Flask

Emmanuel Oludare
Published in Analytics Vidhya
4 min read · Jan 26, 2019

Photo by Chris Ried on Unsplash

There are thousands of online courses that teach you how to build and train a machine learning model or deep neural network, but many of these tutorials end once the model is trained. That leaves you with a model file on your local computer, which isn't the end of the story: the model needs to be deployed in order to be used by clients or end users.

As a web developer and data scientist, I have a desire to build web apps to showcase my work. As much as I like designing the front-end, it becomes very overwhelming to handle both machine learning and web development myself. So I had to find a solution that could easily hand my machine learning models over to other developers who can build a more robust web app than I can.

By building a REST API for my model, I can keep my code separate from other developers'. There is a clear division of labour here, which is nice for defining responsibilities and prevents me from directly blocking teammates who are not involved with the machine learning aspect of the project. Another advantage is that my model can be used by multiple developers working on different platforms, such as web or mobile.

About Flask

Flask is a micro web framework for Python. It is called a micro framework because it does not require particular tools or libraries. While you would not guess it from its minimalist overview page, it is, for example, the main API technology of Pinterest. You can use a Flask web service to create API endpoints for a front-end to call, or even to build a full-on web application.

A very simple flask app would look like this:
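A minimal sketch along the lines of the official Flask quickstart; save it as app.py and start it from the terminal with `flask run`:

```python
from flask import Flask

app = Flask(__name__)   # create the application instance

@app.route('/')          # map the root URL to the view function below
def hello():
    return 'Hello, World!'   # response body sent back to the client
```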

In this article, I will build a simple Scikit-Learn model and deploy it as a REST API using Flask. This article is intended especially for data scientists who do not have an extensive computer science background.

Flask can be installed from the terminal using pip:

pip install flask

Getting the Dataset

In this article, I am using the YouTube Spam Collection Data Set, a public dataset of comments collected for spam research. It consists of five datasets comprising 1,956 real comments extracted from five videos that were among the 10 most viewed during the collection period. I will be using just one of the five datasets: Youtube01-Psy.csv.
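To get a feel for the data, here is the shape of the two columns used later, with toy rows standing in for the real file; the values are invented, but the column names and the 0/1 label coding (1 = spam, 0 = legitimate) match the dataset.

```python
import pandas as pd

# Toy rows mimicking the Youtube01-Psy.csv schema; only the column names
# and the 0/1 labels correspond to the real dataset.
df = pd.DataFrame({
    'CONTENT': ['Subscribe to my channel!!', 'Love this song',
                'Win a free iPhone, click here'],
    'CLASS':   [1, 0, 1],   # 1 = spam, 0 = legitimate comment
})

print(df['CLASS'].value_counts())
```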

Setting up your environment

I created a new file and saved it as app.py.

I then created two folders (templates and static) alongside app.py.

Inside the static folder, I created another folder called CSS, which houses all our CSS files (style.css).

A little overview of the file structure:

├── app.py                    # Flask REST API script
├── data/
│   └── Youtube01-Psy.csv     # data from the UCI Machine Learning Repository
├── templates/
│   ├── home.html
│   └── result.html
└── static/
    └── CSS/
        └── style.css

Import Libraries

The code below imports the libraries needed for the task:

from flask import Flask, render_template, url_for, request
import pandas as pd
import pickle
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23+

Initialize The App

app = Flask(__name__)

# Machine Learning code goes here

if __name__ == '__main__':
    app.run(debug=True)

Building the App

Creating the homepage

We need to create an HTML file in the templates folder; this will be our homepage. I saved mine as home.html.

Then I also updated app.py:


@app.route('/')
def home():
    return render_template('home.html')

Creating the Result Web Page

I created an HTML file that will display the result of the prediction, whether the comment is spam or not. I saved it as result.html.

I then updated app.py with another route whose handler contains the model.

@app.route('/predict', methods=['POST'])
def predict():
    df = pd.read_csv("data/Youtube01-Psy.csv")
    df_data = df[['CONTENT', 'CLASS']]
    # Features and labels
    df_x = df_data['CONTENT']
    df_y = df_data['CLASS']
    # Extract the features with CountVectorizer
    corpus = df_x
    cv = CountVectorizer()
    X = cv.fit_transform(corpus)
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, df_y, test_size=0.33, random_state=42)
    # Naive Bayes classifier
    clf = MultinomialNB()
    clf.fit(X_train, y_train)
    clf.score(X_test, y_test)
    # Save the model
    joblib.dump(clf, 'model.pkl')
    print("Model dumped!")
    # Load the saved model back
    clf = joblib.load('model.pkl')
    if request.method == 'POST':
        comment = request.form['comment']
        data = [comment]
        vect = cv.transform(data).toarray()
        my_prediction = clf.predict(vect)
    return render_template('result.html', prediction=my_prediction)
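Note that the route above retrains the model on every request. A common refinement is to train once, offline, and persist both the fitted vectorizer and the classifier so the API only has to load them. Here is a sketch of that idea; the helper name train_and_save and the vectorizer.pkl path are my assumptions, not from the article:

```python
import joblib
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def train_and_save(df, model_path='model.pkl', vectorizer_path='vectorizer.pkl'):
    """Fit CountVectorizer + MultinomialNB on the CONTENT/CLASS columns
    and persist both artifacts; the vectorizer's vocabulary is needed
    again at prediction time, so it must be saved alongside the model."""
    cv = CountVectorizer()
    X = cv.fit_transform(df['CONTENT'])
    clf = MultinomialNB().fit(X, df['CLASS'])
    joblib.dump(cv, vectorizer_path)
    joblib.dump(clf, model_path)
    return cv, clf
```

The predict() route would then just call joblib.load on both files at startup instead of reading the CSV and refitting per request.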

Here is the full code

app.py

Run app.py from the terminal with python app.py.

Our Flask server is now running on http://127.0.0.1:5000/

This is how it looks on a web browser:

Below is the result of the comment entered into it

Get the full code here


Emmanuel Oludare
Full stack web developer and experienced machine learning engineer.