Creating a web application powered by a fastai model
--
using React, Flask, Render.com, and Firebase
Developing an end-to-end machine-learning-powered web app can appear quite daunting. However, with libraries like fastai, training models has become far easier than it used to be, and deployment takes less than ten minutes with services like Render.com and Firebase.
This article describes a step-by-step approach to building a simple flower classification app. It takes an image as input and returns the predicted flower category along with the probability of each class. Here’s a live demo of the application I’ve built:
Dataset
I’ve used the flower recognition dataset from Kaggle:
It consists of 700–1000 images each of five types of flowers: daisy, dandelion, rose, sunflower, and tulip. If you wish to build your own dataset using Google Images, follow the steps in the following article:
Let’s begin building our web app!
Step 1: Train your model
I’ve explained in detail how to train a model using fastai through transfer learning in the following article:
After training, the accuracy of my model was 94.4%.
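To give a rough idea of what that looks like, here’s a condensed sketch of the fastai v1 transfer-learning workflow (the folder name, architecture, and number of epochs below are placeholders rather than the exact values from that article):
from fastai.vision import *  # fastai v1
# Folder-per-class layout: flowers/daisy, flowers/dandelion, flowers/rose, ...
data = (ImageDataBunch.from_folder('flowers/', train='.', valid_pct=0.2,
                                   ds_tfms=get_transforms(), size=224)
        .normalize(imagenet_stats))
# Transfer learning: start from a ResNet-34 pretrained on ImageNet
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)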
Step 2: Export trained model
Once we have trained our model, we export the minimal state of our Learner so it can be put into production. This creates a PKL file using pickle, a Python module that serializes objects to files on disk and deserializes them back into the program at runtime.
learn.export('trained_model.pkl')
Once exported, download the trained_model.pkl file.
Step 3: Create a REST API using Flask
Flask is a popular Python web framework. To load our model, we use the load_learner function from the fastai.basic_train module to load the .pkl file we exported in step 2. We open the image received in the request using open_image from the fastai.vision module, then call predict on it to get our prediction. We return this result as a JSON object. The following code goes into our app.py file:
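Here’s a minimal sketch of that file. The /predict route, the file form field, and the trained_model.pkl path are my own naming choices, so adjust them to match your setup:
from flask import Flask, request, jsonify
from fastai.basic_train import load_learner
from fastai.vision import open_image

app = Flask(__name__)

# Load the exported Learner once at startup ('.' is the folder containing the .pkl)
learn = load_learner('.', 'trained_model.pkl')
classes = learn.data.classes

@app.route('/predict', methods=['POST'])
def predict():
    # Read the uploaded image straight from the request
    img = open_image(request.files['file'])
    # predict returns (predicted category, class index, tensor of probabilities)
    pred_class, pred_idx, probs = learn.predict(img)
    return jsonify({
        'prediction': str(pred_class),
        'probabilities': {c: float(p) for c, p in zip(classes, probs)}
    })

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)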
Additionally, so that our server knows which dependencies to install, we need a requirements.txt file consisting of:
Flask
gunicorn
fastai
torch
flask_cors
Step 4: Deploy the API on Render.com
For this step, our app.py, requirements.txt, and trained_model.pkl files need to be in a GitHub repo. Once we have pushed our code to a GitHub repo (let’s call it flower_classifier), we do the following:
- Go to render.com and log in / sign up.
- Go to Services -> New Web Service.
- Connect the flower_classifier GitHub repo to Render.
- Use the following values during creation:
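The exact fields may differ, but for a Flask app served with gunicorn the configuration looks roughly like this (app:app assumes the Flask object inside app.py is named app):
Environment: Python 3
Build Command: pip install -r requirements.txt
Start Command: gunicorn app:app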
Once it’s deployed, make sure to verify that the service is working the way it’s supposed to. I’ve used Postman to verify this:
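If you prefer testing from a script, the same check works with Python’s requests library (the URL below is a placeholder for whatever address Render assigns you, and /predict matches the route used in the app.py sketch above):
import requests

# Replace with your own Render URL and a local test image
url = 'https://flower-classifier.onrender.com/predict'
with open('test_rose.jpg', 'rb') as f:
    response = requests.post(url, files={'file': f})

print(response.json())  # e.g. {'prediction': 'rose', 'probabilities': {...}}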
Step 5: Build UI
I’ve used React to build my frontend, with a fetch call to the API to get the prediction. You can check out my code here:
Step 6: Deploy web-app on Firebase
We’re almost done! The final and simplest step is to deploy our web-app on Firebase. To do this, we first create a production build using:
npm run build
Then:
1. Go to https://firebase.google.com/ and log in / sign up.
2. Go to Console -> Add Project.
3. Enter the following commands on the command line:
$ npm install -g firebase-tools
$ firebase login
$ firebase init
4. Select ‘Hosting’ when prompted
5. Deploy using:
$ firebase deploy
And we’re done!
A few issues that I ran into:
1) I tried deploying the API using Heroku instead of Render (because it’s free); however, the dependencies were too large and I faced the following error:
Compiled slug size: 997.6M is too large (max is 500M)
2) I also came across a CORS issue while calling the API from my local machine during UI development:
No ‘Access-Control-Allow-Origin’ header is present on the requested resource
I handled it using flask_cors.
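The fix is only a couple of lines in app.py. This sketch allows all origins for simplicity; you can restrict it to your Firebase domain instead:
from flask_cors import CORS

# Allow cross-origin requests so the React frontend can call the API
CORS(app)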