A production-grade Machine Learning API using Flask, Gunicorn, Nginx, and Docker — Part 3 (Flask Blueprints)

Aditya Chinchure
Published in Technonerds
3 min read · Apr 28, 2020

In Part 1, we implemented a very basic Flask API for our ML model. In Part 2, we added the Gunicorn WSGI server and Nginx, and deployed everything using Docker containers. In this part, we will clean up the API so that we can maintain multiple endpoints with ease.

At the end of this part, our folder structure will look like this:

flask-ml-api
|- api
|  |- __init__.py
|  |- endpoints
|  |  |- __init__.py
|  |  |- classification.py
|  |- models
|  |  |- model.pkl
|  |- app.py
|  |- wsgi.py
|  |- requirements.txt
|  |- Dockerfile
|- nginx
|  |- nginx.conf
|  |- Dockerfile
|- docker-compose.yml

Step 1: Each endpoint gets its own file

One of the biggest issues with managing multiple endpoints is the sheer amount of code needed to maintain each one. Stuffing it all into app.py is less than ideal, and Flask's blueprints solve exactly that problem.

So, let's first make a new directory called endpoints in our api folder. Initialize it with a blank file named __init__.py.

Next, move the model.pkl file to the models folder, if you haven’t already done so in part 2. You can also choose to add all your other model files in this folder.

We can now move our existing endpoint code from app.py to classification.py.

In Part 1, this is what we had in our app.py:

from flask import Flask, jsonify, request
from fastai.text import *
import json

app = Flask(__name__)

learner = load_learner('.', 'model.pkl')

@app.route("/classification")
def classification():
    sample = json.loads(request.data)["text"]
    return jsonify(str(learner.predict(sample)[0]))


if __name__ == '__main__':
    app.run(host='0.0.0.0')

Let us copy this into classification.py and make a few changes:

from flask import Blueprint, request, jsonify
from fastai.text import *
import json

classification_api = Blueprint('classification_api', __name__)

learner = load_learner('api/models/', 'model.pkl')

@classification_api.route("/classification")
def classification():
    sample = json.loads(request.data)["text"]
    return jsonify(str(learner.predict(sample)[0]))
  • We initialize our API as a Blueprint instead of a Flask app. I chose to call it "classification_api".
  • The model is now loaded from the models folder, so the path passed to load_learner changes accordingly.
  • The route decorator now comes from the blueprint: @classification_api.route instead of @app.route.
  • Since this is not the main file, we no longer call the app.run(host='0.0.0.0') function.

That’s it, the classification API is set up!

You can make similar endpoint files for each of your endpoints, using the blueprint's route decorator (e.g. @classification_api.route) to establish the endpoint path. This is explained in Part 1.
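For example, a second endpoint file would follow the same pattern as classification.py. This is a minimal sketch — the name regression.py, the blueprint name, and the stubbed-out prediction are all illustrative, not part of the series; you would swap in your own model loading and predict call:

```python
# endpoints/regression.py — a hypothetical second endpoint file,
# following the same pattern as classification.py
from flask import Flask, Blueprint, jsonify, request
import json

regression_api = Blueprint('regression_api', __name__)

# load your model here, e.g.
# learner = load_learner('api/models/', 'regression.pkl')

@regression_api.route("/regression", methods=['POST'])
def regression():
    sample = json.loads(request.data)["text"]
    # replace this stub with your model's prediction,
    # e.g. str(learner.predict(sample)[0])
    return jsonify({"input": sample, "prediction": "stub"})

# quick sanity check with Flask's test client
app = Flask(__name__)
app.register_blueprint(regression_api)
client = app.test_client()
resp = client.post("/regression", data=json.dumps({"text": "hello"}))
```

The sanity check at the bottom is just for local experimentation; in the real project, registration happens in app.py as shown in Step 2.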

Step 2: Register the endpoint in app.py

Now that we have the classification.py file to deal with the endpoint code, we can simply import the blueprint and use it in our Flask app.

Update app.py to this:

from flask import Flask
from .endpoints.classification import classification_api

app = Flask(__name__)
app.register_blueprint(classification_api)

if __name__ == '__main__':
    app.run(host='0.0.0.0')

Now, when Flask starts up, it will register the classification blueprint and enable the /classification endpoint.
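If you want to confirm the blueprint actually registered, Flask's url_map lists every route the app serves. A self-contained sketch (the blueprint is inlined here so the snippet runs on its own, and the trivial handler stands in for the real model call):

```python
from flask import Flask, Blueprint, jsonify

classification_api = Blueprint('classification_api', __name__)

@classification_api.route("/classification")
def classification():
    # stand-in for the real model prediction
    return jsonify("ok")

app = Flask(__name__)
app.register_blueprint(classification_api)

# every URL rule the app now serves; /classification comes from the blueprint
routes = sorted(str(rule) for rule in app.url_map.iter_rules())
```

Printing routes (or hitting the endpoint with app.test_client()) is a quick way to catch a blueprint you forgot to register.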

Step 3: Adding more endpoints

Now that you have the first endpoint set up, you can add additional endpoints by:

  1. saving your model in the models folder
  2. creating a new file in the endpoints folder, and creating a new Blueprint. You can implement your methods for prediction here, just like we have for classification.
  3. registering the new blueprint in app.py
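With several blueprints, app.py just accumulates register_blueprint calls. One option worth knowing about is register_blueprint's url_prefix argument, which namespaces every route in a blueprint. A sketch with both blueprints inlined for self-containment (regression_api and the "/v1" prefix are hypothetical, not from the series):

```python
from flask import Flask, Blueprint, jsonify

# two hypothetical endpoint blueprints, as they might live in endpoints/
classification_api = Blueprint('classification_api', __name__)
regression_api = Blueprint('regression_api', __name__)

@classification_api.route("/classification")
def classification():
    return jsonify("classification result")

@regression_api.route("/regression")
def regression():
    return jsonify("regression result")

app = Flask(__name__)
app.register_blueprint(classification_api)
# url_prefix prepends "/v1" to every route in this blueprint
app.register_blueprint(regression_api, url_prefix="/v1")

client = app.test_client()
# served at /classification and /v1/regression respectively
```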

Conclusion to Part 3

There you have it — a simple, easily extendable API to deploy your ML models. Making your API modular will allow you to maintain your code with ease.

As usual, you can find all the code I wrote on GitHub.

If you have any questions or feedback, feel free to drop them in the comments.

In this series:

Part 1: Setting up our API
Part 2: Integrating Gunicorn, Nginx and Docker
Part 3: Flask Blueprints — managing multiple endpoints
Part 4: Testing your ML API

Hello there! Thanks for reading. Here’s a tad bit about me. I am a Computer Science student at the University of British Columbia, Canada. I primarily work on machine learning projects, mostly NLP. I also do photography as a hobby. You can follow me on Instagram and LinkedIn, or visit my website. Always open to opportunities 🚀
