Day 18 of 100DaysofML

Charan Soneji
100DaysofMLcode
Jul 4, 2020

Linking my front end to my model.
A lot of students try to link the ML models they create in TensorFlow (or any other framework) to their front end, and that was one of the main motivations for writing this blog. I’m going to simplify and break down the steps to do this, but keep in mind that you need to use Flask for the following:
1. Create a Flask web app
2. Download and check a model file, or use your own
3. Create a form to take input from the Flask web app
4. Pass the image to the model

1. Creating your Flask web app: Flask has a specific way of arranging your files: all the pictures used in your HTML pages go into one folder (static), and the HTML pages you refer to go into a separate folder (templates). The main file for linking your model to your web app is app.py. If you read about Flask’s project layout, you will understand the file architecture of the Flask-based web app that you create.
Flask file structure
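As a rough sketch (the folder names below follow the usual Flask convention; only static/ is explicitly referenced later in this post), the project might be laid out like this:

app/
    app.py          (routes; this is where the model gets linked to the web app)
    static/         (images, the downloaded .h5 model file, uploaded inputs)
    templates/      (the HTML pages referred to by your routes)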

2. Download and check your model file or use your own: A lot of web developers do not take on the hassle of developing their own model and instead grab a pre-trained model from the internet. It’s fine either way; you just need to know the input and output formats of your model, i.e. what kind of input it expects (image or text).

Save your model using Keras
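If you train your own model, saving it to a single .h5 file is one line in Keras. A minimal, self-contained sketch (the tiny model and the file name are only placeholders):

from tensorflow import keras

# A throwaway model just to show the save step.
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# ... train the model here ...
model.save("my_model.h5")  # writes the architecture and weights to one .h5 file

The .h5 file produced here plays the same role as the pre-trained weights file linked below.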

You can try using the pre-trained model from the link mentioned below, which gives you the corresponding .h5 file.

Or use the following link:

https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/resnet50_weights_tf_dim_ordering_tf_kernels.h5

To test our model, create a process.py file; it can live anywhere because it is only for testing. After testing, we can delete this file and move its code into app.py.

You can scroll to the end of the following page (https://imageai.readthedocs.io/en/latest/prediction/index.html) to see the code for testing the model on a given image. Once this is done, the model can be tested by copying the entire code into process.py.

Just make sure you have TensorFlow installed (or use Colab/Kaggle/Jovian) and run pip install imageai.

Process.py
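For reference, the test script from the ImageAI docs looks roughly like the sketch below. The file names are whatever you saved in the previous steps, and the API shown is the ImagePrediction interface from the ImageAI version current when this was written:

from imageai.Prediction import ImagePrediction
import os

execution_path = os.getcwd()

# Load the pre-trained ResNet50 weights downloaded earlier.
prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath(os.path.join(execution_path, "static/resnet50_weights_tf_dim_ordering_tf_kernels.h5"))
prediction.loadModel()

# Run the model on a test image placed in the static folder.
predictions, probabilities = prediction.predictImage(os.path.join(execution_path, "static/input_image_name.jpg"), result_count=10)
for name, prob in zip(predictions, probabilities):
    print(name, ":", prob)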

Copy and paste the model file into the app/static folder. Download any input image and place it in the app/static folder as well. Then edit line 10 and change the image location to your downloaded image:

predictions, probabilities = prediction.predictImage(os.path.join(execution_path, "static/input_image_name.jpg"), result_count=10)

Now run the process.py file and see the predictions that you get. Don’t be shocked to see a number of warnings in case you do not have a GPU.

3. Create a form to take input from the Flask web app: This is basic HTML; create a file input so that you can upload your image through the form. It is very basic and the code can easily be found online, but a small sketch is shown below.
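As a rough sketch (the route, field name and button label are all placeholders), the upload form can even be served straight from a Flask route; in a real project you would put the same HTML into a file under templates/ and use render_template instead:

from flask import Flask, render_template_string

app = Flask(__name__)

# Minimal upload form; the field name "image" must match what the upload route reads later.
UPLOAD_FORM = """
<form action="/upload" method="POST" enctype="multipart/form-data">
    <input type="file" name="image">
    <input type="submit" value="Predict">
</form>
"""

@app.route("/")
def index():
    return render_template_string(UPLOAD_FORM)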

Route for linking to html page

The above screenshot is for linking your model to the HTML page using Flask.
APP.PY
This is the most crucial element of your entire Flask architecture where you would be defining your routes and functions.

app route for upload

In the above code, we have defined a function that runs when a file is uploaded through the HTML form.
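A minimal sketch of such an upload route, continuing the app object from the earlier snippet and assuming the form field is called "image"; predict_img is the helper described in the next step:

import os
from flask import request

@app.route("/upload", methods=["POST"])
def upload():
    # Save the uploaded image into the static folder.
    uploaded = request.files["image"]
    path = os.path.join("static", uploaded.filename)
    uploaded.save(path)
    # Pass the saved image to the model; predict_img is your own helper (see step 4).
    result = predict_img(path)
    return str(result)

In a real project you would also sanitise the uploaded filename (for example with werkzeug’s secure_filename) before saving it.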

4. Pass the image to the model: After your routes have been saved successfully, the next step is to test the input given to your HTML form by passing it to the trained model. Check out the below GitHub link to get the routes directly in case you are a rookie with Flask.

Just keep in mind that predict_img is what is going to run your image input through the model and return the result. The result can be further refined by another function if you wish; otherwise the entire result can be displayed. In case you are having difficulty with TensorFlow because of GPU issues, try the code below and it should help.

NOTE: The following does not make use of any API calls. It is for linking a model present locally to a web app that you are creating with Flask. API calls could be used if you were using an online model such as AWS SageMaker or any of the pre-trained models on Azure or Google Cloud.

For CPU users: to hide TensorFlow warnings, add the following to process.py.

At the start:

import os
import tensorflow
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

Inside the predict_img function, where the model will be loaded:

tensorflow.compat.v1.logging.set_verbosity(tensorflow.compat.v1.logging.ERROR)

I have just given a rough overview of the process. Feel free to DM me in case you need help with any of the stages or want an overview of how to proceed with Flask. Keep learning.

Cheers.
