Deploying a predictive Python ML model with Flask and Heroku: Part 3

Alyssa Liguori
4 min read · Dec 23, 2019


In this post, we’ll set up the model to take in the user’s form inputs, generate a prediction, and serve that prediction back to the user via HTML and Jinja2.

Check out Part 1 and Part 2 to get up to speed.


Navigate to your app.py file. It looks like this currently:

from flask import Flask

app = Flask(__name__)

@app.route('/', methods=['GET'])
def test():
    return 'Hello World!'

In place of the function test, we will declare a function called render_main which will display main.html when a user makes a GET request to the server (goes to our web app’s URL).

@app.route('/', methods=['GET'])
def render_main():
    return render_template('main.html')

The render_main function calls Flask’s render_template function (be sure to import it: from flask import render_template) and passes it the filename main.html. When a user navigates to the URL, render_template finds our HTML file, renders it as a Jinja2 template, and returns the result to the browser.

Now, let’s test render_main. In your terminal, from the root of your web app repo, run flask run, then open your browser to http://localhost:5000. When the server starts up, you should see your form in the browser. If you do not see it, go back to the terminal and check for error messages.

Next, let’s add a second route for POST requests made to the server, declaring a function make_prediction that will:

  1. Get values from the form fields we need
  2. Run the user’s inputs through our model
  3. Return a prediction to the user

Below, we do #1 and test it by printing the submitted values to the terminal. Note that request must also be imported from flask.

@app.route('/', methods=['POST'])
def make_prediction():
    res_age = request.form.get('age')
    res_sex = request.form.get('sex')
    res_cheese = request.form.get('cheese')
    res_tj = request.form.get('milk1')
    res_silk = request.form.get('milk2')
    res_zen = request.form.get('milk3')
    res_wf = request.form.get('milk4')
    print(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)

Next, we call generate_prediction from within the make_prediction function. But what is generate_prediction?

@app.route('/', methods=['POST'])
def make_prediction():
    res_age = request.form.get('age')
    res_sex = request.form.get('sex')
    res_cheese = request.form.get('cheese')
    res_tj = request.form.get('milk1')
    res_silk = request.form.get('milk2')
    res_zen = request.form.get('milk3')
    res_wf = request.form.get('milk4')
    print(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)
    prediction = generate_prediction(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)

generate_prediction is the function that wraps your ML model; it will live in a second .py file. Let’s take a moment to set that up and import it into app.py so that our make_prediction function runs.

Go to your terminal and create a second .py file; you can name it scripts.py. Take a moment to import all the libraries your ML model needs to run, for example pandas, numpy, and sklearn. Also import joblib, which will let us load the pickled model from our separate data-science repo.

Now, navigate to the separate repo you used to build your model. Import joblib then add the code to pickle your model. Here’s an example of what it might look like:

joblib.dump(clf, '../src/models/clf.pkl', compress=1)

The first argument is the model we want to pickle for use in our web app. The second argument is the filepath where the pickle file should be saved. The third argument is the keyword argument compress, set to 1 for light compression. Read the joblib docs for more information.
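For context, a minimal sketch of what this might look like in the data-science repo is below. The feature layout (age, sex, cheese, and four milk columns) mirrors the form fields used in this tutorial, and the training data here is entirely made up for illustration:

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per respondent,
# columns = [age, sex, cheese, milk1, milk2, milk3, milk4]
X = np.array([
    [25, 0, 1, 1, 0, 0, 0],
    [40, 1, 0, 0, 1, 0, 0],
    [31, 0, 1, 0, 0, 1, 0],
    [52, 1, 0, 0, 0, 0, 1],
])
y = np.array([0, 1, 0, 1])  # toy binary target

clf = LogisticRegression().fit(X, y)

# Persist the fitted model; compress=1 trades a little CPU for a smaller file
joblib.dump(clf, 'clf.pkl', compress=1)
```

In your own repo the filepath would point wherever you keep model artifacts, as in the example above.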

Now, copy the clf.pkl file and paste it into your web app repo manually. Navigate back to the scripts.py file and load clf:

clf = joblib.load('./clf.pkl')

Here’s an example of what your scripts.py file could look like:

[import libraries here]
import joblib
from sklearn.linear_model import LogisticRegression

clf = joblib.load('./clf.pkl')

def generate_prediction(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf):
    vals = [res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf]
    # predict expects a 2D array: one row per sample
    prediction = clf.predict([vals])
    return prediction

It is likely that you will need at least one additional function in scripts.py: one that converts the raw form values into a format your ML model can use. You can declare and define this cleaning function in the same file, but be sure to call it from within generate_prediction, because the web app only calls that function.
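A sketch of such a cleaning helper is below. The field names and encodings are assumptions for illustration: form values arrive as strings (or None for unchecked boxes), so they must be converted to the numeric types the model was trained on.

```python
def clean_inputs(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf):
    # Hypothetical encodings: adjust to match how your model was trained.
    age = int(res_age)                    # form values arrive as strings
    sex = 1 if res_sex == 'male' else 0   # assumed binary encoding
    # Checkbox-style fields come through as a truthy string when checked,
    # None when left unchecked
    flags = [1 if v else 0 for v in (res_cheese, res_tj, res_silk, res_zen, res_wf)]
    return [age, sex] + flags
```

Inside generate_prediction you would then call vals = clean_inputs(...) before passing vals to clf.predict.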

Back to our app.py file. When we last left off, it looked like this:

@app.route('/', methods=['POST'])
def make_prediction():
    res_age = request.form.get('age')
    res_sex = request.form.get('sex')
    res_cheese = request.form.get('cheese')
    res_tj = request.form.get('milk1')
    res_silk = request.form.get('milk2')
    res_zen = request.form.get('milk3')
    res_wf = request.form.get('milk4')
    print(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)
    prediction = generate_prediction(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)

Now, let’s import generate_prediction at the top.

from scripts import generate_prediction

@app.route('/', methods=['POST'])
def make_prediction():
    res_age = request.form.get('age')
    res_sex = request.form.get('sex')
    res_cheese = request.form.get('cheese')
    res_tj = request.form.get('milk1')
    res_silk = request.form.get('milk2')
    res_zen = request.form.get('milk3')
    res_wf = request.form.get('milk4')
    print(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)
    prediction = generate_prediction(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)

Lastly, let’s complete #3 and return a prediction to the user.

from scripts import generate_prediction

@app.route('/', methods=['POST'])
def make_prediction():
    res_age = request.form.get('age')
    res_sex = request.form.get('sex')
    res_cheese = request.form.get('cheese')
    res_tj = request.form.get('milk1')
    res_silk = request.form.get('milk2')
    res_zen = request.form.get('milk3')
    res_wf = request.form.get('milk4')
    print(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)
    prediction = generate_prediction(res_age, res_sex, res_cheese, res_tj, res_silk, res_zen, res_wf)
    return render_template('res.html', prediction=prediction)

Again, we used Flask’s render_template function, passing it the output HTML file along with the Jinja2 variable prediction, set to the value returned by generate_prediction. Now is a good time to edit our res.html file so we can see where prediction will be rendered.

<!DOCTYPE html>
<html lang='en'>
  <head>
    <meta charset='utf-8'>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Results</title>
  </head>
  <body>
    <div>
      <p>Based on your responses...</p>
      <p>The model predicted ...</p>
      <p><b>{{prediction}}</b></p>
    </div>
  </body>
</html>

At this point, all of the app’s functionality should be working. Be sure to test it and troubleshoot, paying special attention to the format of your arguments.
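One convenient way to exercise both routes without a browser is Flask’s built-in test client. The snippet below uses a tiny stand-in app defined inline so it runs on its own; in your project you would instead import the real app from app.py and POST the actual form field names:

```python
from flask import Flask, request

# Stand-in app for demonstration; replace with `from app import app`
app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        # Real app would call generate_prediction and render res.html
        return f"prediction for age {request.form.get('age')}"
    return 'form page'

with app.test_client() as client:
    # GET should render the form page
    resp = client.get('/')
    print(resp.status_code)

    # POST with form data should return a rendered prediction
    resp = client.post('/', data={'age': '30'})
    print(resp.data)
```

This kind of smoke test catches routing and form-name mismatches quickly before you move on to deployment.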

In the next post, we’ll deploy the app to Heroku so others can see your work and interact with the model!
