What Every Data Scientist Should Know — Part 2

Deploying an ML-powered Flask API to AWS EC2

Gagandeep Singh
May 29 · 5 min read

If you haven’t read my previous post, I suggest taking a look at it before proceeding. In that post I talked about training an ML model and connecting it to Flask to serve it as a REST API. In this post we’ll deploy it to an AWS EC2 instance.

Let’s look at the steps:

  1. Create an EC2 instance.
  2. Create a virtual environment and install packages.
  3. Copy the model files to the instance.
  4. Configure the supervisor config file.
  5. Install Nginx and configure some settings.
  6. Configure the nginx config file.
  7. Send requests.

Launching an EC2 instance on AWS

So, here are the steps:

  1. Create an AWS account (if you already have one, skip this step).
  2. Create an EC2 instance. You will see a page like the one below. Click on Launch Instance.
AWS EC2 instance page

3. Select a free tier instance or any other instance of your preference. I’m using a free tier instance with Ubuntu Server 18.04 LTS.

Ubuntu Server 18.04 LTS instance

NOTE: AWS offers dedicated deep learning and ML instances, but they are not free tier eligible. Choose whichever instance suits your needs.

4. I’m using a t2.micro instance as it is free tier eligible.

t2 micro instance

Click on Review and Launch.

5. Now you will see the option to create a new key pair or use an existing one. I’m creating a new one: from the drop-down menu select Create a new key pair, type a name for your key, and download it. After that, click Launch Instances.

6. Your instance should be running now. Right-click on it, select Connect, and follow the steps to connect to it.

You’ll see something like this.

7. Install all the required packages:

$ sudo apt update
$ sudo apt install python3-pip
$ sudo apt install python3-venv
$ python3 -m venv root
$ source root/bin/activate
$ pip3 install sklearn flask gunicorn
$ sudo apt install supervisor

8. Now we need to copy app.py and the model to the server. From your local computer’s terminal, type the following:

ubuntu server

You should be able to see your model and app file on the server. To keep this post simple, I’m not moving them anywhere.
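The copy step above can be sketched as a single scp command. The key name, file names, and host are placeholders — substitute your own values from the instance’s Connect dialog:

```shell
# Copy the Flask app and the pickled model from your local machine
# to the instance's home directory. mykey.pem, app.py, model.pkl and
# the public DNS are placeholders -- use your own.
scp -i mykey.pem app.py model.pkl ubuntu@<ec2-public-dns>:/home/ubuntu/
```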

9. Now we’ll make sure our environment is set up correctly. Type python3 app.py; you should see your model running.

Creating config file for Supervisor

  1. The next step is to create a supervisor config to enable process monitoring: if your app.py fails, supervisor will restart it, and you can also set the number of retries. Type as follows:
$ sudo nano /etc/supervisor/conf.d/api.conf

Enter the following in the api.conf file:

api.conf file
  • directory — the directory where your app.py file is stored.
  • command — we use gunicorn to run the Flask app. With the virtual environment, the command is /home/ubuntu/root/bin/gunicorn app:app --bind localhost:8002 (note the double dash in --bind).
  • stderr_logfile — where the error log file is created.
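Putting those fields together, a minimal api.conf might look like the sketch below. The program name and log path are assumptions; the directory and command follow the earlier steps — adjust them to your setup:

```ini
[program:api]
; Directory containing app.py (copied to the home directory earlier).
directory=/home/ubuntu
; Run the Flask app with gunicorn from the virtual environment.
command=/home/ubuntu/root/bin/gunicorn app:app --bind localhost:8002
autostart=true
autorestart=true
; Error log location (the directory must exist).
stderr_logfile=/var/log/api.err.log
```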

2. Now type

$ sudo supervisorctl reread
$ sudo supervisorctl reload
$ sudo supervisorctl restart all

Now, you should see something like this

supervisor reloaded

Nginx for Reverse Proxy

  1. Change the inbound rules of the EC2 instance. The goal is to open port 80 to accept connections.
ec2 instance description

2. Click on launch-wizard-2 (you may see a different name) and make the following changes to the inbound rules.

inbound rules

3. Now, type

$ sudo apt update
$ sudo apt install nginx
$ sudo ufw app list
$ sudo ufw allow 'Nginx HTTP'

You can check that nginx is running by typing

$ sudo service nginx status

A problem occurred when I tried to run nginx on my server; it was related to the PID file. If you get the same error, see https://www.cloudinsidr.com/content/heres-fix-nginx-error-failed-read-pid-file-linux/

4. You can test it by opening your instance’s public IP in a browser.

nginx welcome page

You will see nginx welcome page.

5. The next step is to create the nginx config file. Type the following:

$ sudo nano /etc/nginx/nginx.conf

Now, type the following in that config file.

nginx.conf file

Note: make sure port 80 is open in the inbound rules, as discussed above.
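As a sketch, the reverse-proxy server block forwarding port 80 to gunicorn might look like this. It belongs inside the http { } block if you edit nginx.conf directly; the port matches the --bind value used in the supervisor command:

```nginx
server {
    listen 80;
    server_name _;

    location / {
        # Forward incoming requests to the gunicorn process
        # that supervisor keeps running on port 8002.
        proxy_pass http://localhost:8002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```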

6. Open a Python interpreter on your local machine and try sending a request to the AWS server.
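A client sketch, using only the standard library: the /predict endpoint path and the payload shape are assumptions — adjust them to match the routes defined in your app.py.

```python
import json
from urllib import request

def build_payload(features):
    """Serialize a feature vector into the JSON body the API expects."""
    return json.dumps({"features": features})

def predict(host, features):
    """POST a feature vector to the hypothetical /predict endpoint
    and return the parsed JSON response."""
    req = request.Request(
        f"http://{host}/predict",
        data=build_payload(features).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# With the server up (replace the placeholder with your instance's public IP):
# print(predict("<your-ec2-public-ip>", [5.1, 3.5, 1.4, 0.2]))
```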

Congrats, you have successfully productionized your ML model.

Following the same methodology you can deploy any model.

Thank you for reading!

Gagandeep Singh

Written by

Data Scientist at Zykrr. Geeky — https://www.linkedin.com/in/gaganmanku96/
