Building a REST API using Flask to deploy a Machine Learning Model on a back-end server

Prakharpandey · Published in DeepKlarity · 4 min read · Oct 8, 2020

This blog is part of the series Clean or Messy Classifier. Check out the entire series for a detailed implementation of deploying a machine learning classifier into a web app using various frameworks.

Why use a REST API to deploy a Machine Learning (or Deep Learning) model?

As a data scientist, if you want to showcase your work in a web app or a mobile app, you will need a back-end server where your model is loaded and can interact with your front-end. This is possible with the help of a REST API.
Another advantage of a REST API is that your model can be used by multiple developers working on different platforms, such as a web app or a mobile app.

About the model

For this article, we used a simple image classification model trained to predict whether the surroundings in an image are clean or not.

Working of the API:
When a POST request is made by any front-end server to this API, the following processes take place:
1) First, it is checked whether the file has a valid extension (in this case jpg, jpeg, or png) or is a base64 string.
2) The image is saved inside a folder for future use.
3) The saved image is then passed to a function where the machine learning model predicts whether the surroundings in the image are clean or not.
4) Once the result is computed, a JSON response is returned to the front-end.
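For example, a front-end (or a quick test script) could exercise this flow with a single POST request. Below is a minimal sketch using Python's requests library; the host, port, and the /predict path are assumptions for illustration and should match however you register the route in your Flask app.

import requests

# Hypothetical address; adjust host, port, and path to match your Flask app.
API_URL = 'http://localhost:5000/predict'

# The image is sent as multipart/form-data under the key 'file',
# which is what the handler later in this post reads via request.files['file'].
with open('room.jpg', 'rb') as f:
    response = requests.post(API_URL, files={'file': f})

print(response.status_code)  # 201 on success, 400 for a bad upload
print(response.json())       # e.g. {"is_clean": "...", "predictedVal": 0.87}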

NOTE
While working with image classifier models, we need to pass a numpy array (or a Pillow image, if using a wrapper) for prediction. It's easy to convert an image file into a numpy array, but what if the image is sent in the form of a base64 string or a base64 data URI?
This article also explains an easy approach to handling that case.
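For context, here is a minimal sketch of how such a base64 data URI can be produced from an image file on the client side (the file name is just an example):

import base64

# Read an image file and wrap it in a "data:image/...;base64," prefix,
# similar to what a browser's FileReader.readAsDataURL() produces.
with open('room.jpg', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('utf-8')
data_uri = 'data:image/jpeg;base64,' + encoded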

Creating the API and deploying the model is explained in the following stages:

1. Setting up the root directory, upload folder, model, and other constants.

from flask import Flask
from flask_cors import CORS
from flask_restful import Api
from fastai.vision.all import load_learner

app = Flask(__name__, root_path='cleanvsmessy/')
CORS(app)  # allow cross-origin requests from the front-end
api = Api(app)
model = load_learner('model/model_v0.pkl')  # pre-trained model exported for inference

UPLOAD_FOLDER = 'uploads'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}

def is_allowed_filename(filename):
    # accept only files whose extension is in the allowed set
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

The model is loaded using load_learner, a function provided by fastai to load a pre-trained model for inference.
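If you are wondering where model_v0.pkl comes from: in fastai, a trained Learner is typically exported for inference with Learner.export, roughly like this (the learn variable and the exact file path are assumptions for illustration):

# After training, export the fastai Learner so it can later be reloaded with load_learner.
# Note: fastai resolves this path relative to learn.path.
learn.export('model/model_v0.pkl')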

2. Logic for handling the POST request

This stage holds the logic for handling the POST request. First, we store the file from the request in a variable. The logic is then divided into the three conditions described below:
Condition 1: If the file received has an empty filename, a JSON response with an appropriate message is returned with status code 400.
Condition 2: If both the file and the filename are valid, we first save the image to a folder and then call the predict function to compute cleanliness. The return value of the predict function is sent back as the JSON response with status code 201.
Condition 3: If the selected file is not an image file, a JSON response with an appropriate message is returned with status code 400.

from flask import request, jsonify
from werkzeug.utils import secure_filename
import os

# The '/predict' route path and function name are illustrative.
@app.route('/predict', methods=['POST'])
def upload_image():
    file = request.files['file']
    if file.filename == '':
        resp = jsonify({'message': 'No file selected for uploading'})
        resp.status_code = 400
        return resp
    if file and is_allowed_filename(file.filename):
        filename = secure_filename(file.filename)
        file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
        path = UPLOAD_FOLDER + '/' + filename
        # predict() returns a plain dict, so wrap it in jsonify to build the response
        resp = jsonify(predict(path))
        resp.status_code = 201
        return resp
    else:
        resp = jsonify({'message': 'Allowed file types are png, jpg, jpeg'})
        resp.status_code = 400
        return resp

After receiving a valid image, we simply save it to the upload folder and call the predict function, passing the path of the image as an argument.

3. Handling base64 strings and the model function

After receiving an image with a valid extension, this is the final stage, where we pass the image to the model for the required results. The model function loads the image and converts it into a numpy array, which is passed as an argument for prediction. The image can arrive either as a base64 string (or base64 data URI) or as a jpg (or any other valid extension), so both cases are explained below:
base64 string: First, let us see what an image looks like as a base64 data URI. Here is an example:
“data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAoAAAAPACAYAAACl4s….”
The encoded part after “base64,” is what we decode using b64decode; the result can then easily be converted into a numpy array.

# (assumes: import base64 / from io import BytesIO, plus the PIL and numpy imports used below)
im = Image.open(BytesIO(base64.b64decode(image.split(',')[1])))  # keep only the data after "base64,"
im.save('image.png')
image_np = np.array(im)
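To wire this into the API, the handler can accept a base64 payload alongside file uploads. The following is a minimal sketch, assuming the front-end sends the data URI in a JSON body under an image key; the route path, function name, and key name are illustrative, and the base64/BytesIO imports from above are reused.

@app.route('/predict-base64', methods=['POST'])
def upload_base64_image():
    # Expect a JSON body such as {"image": "data:image/png;base64,...."}
    data = request.get_json(silent=True) or {}
    image = data.get('image', '')
    if ',' not in image:
        resp = jsonify({'message': 'Invalid base64 image'})
        resp.status_code = 400
        return resp
    im = Image.open(BytesIO(base64.b64decode(image.split(',')[1])))
    path = os.path.join(app.config['UPLOAD_FOLDER'], 'image.png')
    im.save(path)  # save for future use, same as with file uploads
    resp = jsonify(predict(path))
    resp.status_code = 201
    return resp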

JPG, JPEG, or PNG image: Images with these extensions are saved directly and converted to a numpy array.
The code given below is the predict function, which uses the loaded model for prediction.

from PIL import Image
import numpy as np

def predict(img_path):
    img = Image.open(img_path)
    img_np = np.array(img)
    # fastai's Learner.predict returns (predicted class, class index, probabilities)
    is_clean, _, probs = model.predict(img_np)
    prob = float(list(probs.numpy())[1])  # probability of the class at index 1 in the model's vocab
    return {"is_clean": is_clean, "predictedVal": prob}
[Screenshot: API tested on Postman]

You can refer to this link to learn how to send a POST request with an image or a data URI from a React front-end.

The code for the above Flask API is available at this link.

That’s it for this blog. Please comment if you face any issues or errors using this approach, and do share if you have used an alternative approach.
