If you haven’t read my previous post, I’d suggest taking a look at it before proceeding. In that post I talked about training an ML model and connecting it to Flask to serve it as a REST API. In this post we’ll deploy it to an AWS EC2 instance.
Let’s look at the steps:
- Create an EC2 instance.
- Create a virtual environment and install packages.
- Copy the model files to the instance.
- Configure the supervisor config file.
- Install Nginx and configure some settings.
- Configure the nginx config file.
- Send requests.
Launching an EC2 instance on AWS
So, here are the steps:
1. Create an AWS account (if you already have one, just skip this step).
2. Create an EC2 instance. You will see a page like below. Click on Launch Instance.
3. Select the free tier instance or any other instance of your preference. I’m using the free tier instance with Ubuntu Server 18.04 LTS.
NOTE: AWS supports deep learning and ML instances but they don’t come under free tier. You can choose any instance as per your convenience.
4. I’m using a t2.micro instance as it is free tier eligible. Click on Review and Launch.
5. Now you will see the option to create a new key pair or use an existing one. I’m creating a new one: from the drop-down menu select Create a new key pair, type a name for your key, and download it. After that, click on Launch Instances.
6. Your instance should be running now. Right-click on it, select Connect, and follow the steps shown to connect to it.
You’ll see something like this.
7. Install all the required packages. Type as below:
$ sudo apt update
$ sudo apt install python3-pip
$ sudo apt install python3-venv
$ python3 -m venv root
$ source root/bin/activate
$ pip3 install scikit-learn flask gunicorn
$ sudo apt install supervisor
8. Now we need to copy app.py and the model file to the server. Type the following in your local computer’s terminal:
- $ scp -i ~/Downloads/key-pair.pem source destination
- Example: scp -i ~/Downloads/key-pair.pem ~/Documents/productionize/app.py ubuntu@ec2-x-x-x-x.compute-1.amazonaws.com:~/
- Once done, type ls on the server. You should see your model and app files there. To keep this post simple I’m not moving them anywhere.
9. Now we’ll make sure that our environment is set up correctly. Type python3 app.py; you should see your model running.
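For reference, here is a minimal sketch of what the app.py from the previous post might look like. The model filename (model.pkl), the /predict route, and the payload shape are assumptions for illustration, not the exact code from that post:

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model saved in the previous post (filename assumed).
try:
    with open("model.pkl", "rb") as f:
        model = pickle.load(f)
except FileNotFoundError:
    model = None  # allows a smoke test of the app without the model file


@app.route("/predict", methods=["POST"])
def predict():
    if model is None:
        return jsonify({"error": "model not loaded"}), 503
    features = request.get_json()["features"]
    prediction = model.predict([features])
    return jsonify({"prediction": prediction.tolist()})


if __name__ == "__main__":
    # Same port that gunicorn will bind to later in this post.
    app.run(host="0.0.0.0", port=8002)
```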
Creating config file for Supervisor
1. The next step is to create a supervisor config file to enable process monitoring. If your app.py process fails, supervisor will restart it, and you can also set the number of retries. Type as follows:
$ sudo nano /etc/supervisor/conf.d/api.conf
2. Enter the following in the api.conf file:
- directory — the directory where your app.py file is stored.
- command — we will use gunicorn to run the flask app. The command, using the virtual environment, is /home/ubuntu/root/bin/gunicorn app:app --bind localhost:8002
- stderr_logfile — the path where you want the log file to be created.
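Putting those fields together, a minimal api.conf sketch could look like the following. The program name, retry count, and log paths are assumptions; adjust them to your setup (and create the log directory first, e.g. sudo mkdir -p /var/log/api):

```ini
[program:api]
directory=/home/ubuntu
command=/home/ubuntu/root/bin/gunicorn app:app --bind localhost:8002
autostart=true
autorestart=true
startretries=3
stderr_logfile=/var/log/api/api.err.log
stdout_logfile=/var/log/api/api.out.log
```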
3. Now type
$ sudo supervisorctl reread
$ sudo supervisorctl reload
$ sudo supervisorctl restart all
Now, you should see something like this
Nginx for Reverse Proxy
1. Change the inbound rules of the EC2 instance. The goal is to open port 80 to accept connections.
2. Click on launch-wizard-2 (you may see a different name) and make the following changes in the inbound rules.
3. Now, type
$ sudo apt update
$ sudo apt install nginx
$ sudo ufw app list
$ sudo ufw allow 'Nginx HTTP'
You can verify that nginx is running by typing
$ sudo service nginx status
A problem occurred when I tried to run it on my server; it was related to the PID file. If you get the same error, see https://www.cloudinsidr.com/content/heres-fix-nginx-error-failed-read-pid-file-linux/
4. You can test it by opening your instance’s public IP in a browser.
You will see nginx welcome page.
5. The next step is to create the nginx config file. Type the following:
$ sudo nano /etc/nginx/nginx.conf
Now, type the following in that config file.
Note: make sure port 80 is open in the inbound rules, as discussed above.
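The config contents from the original screenshot aren’t reproduced here; a minimal server block sketch, assuming gunicorn is listening on localhost:8002 as configured above, would go inside the http { } block of nginx.conf:

```nginx
server {
    listen 80;
    server_name _;

    location / {
        # forward incoming requests on port 80 to the gunicorn server
        proxy_pass http://localhost:8002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

After editing, run sudo nginx -t to check the syntax and sudo service nginx restart to apply it.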
6. Open a python interpreter on your local machine and try sending a request to the AWS server.
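As a sketch, using the requests library — the /predict endpoint and the "features" payload shape are assumptions carried over from the example app, not necessarily what your app.py exposes:

```python
import requests


def send_prediction_request(host, features):
    """POST a feature vector to the deployed model and return the JSON reply."""
    # Endpoint and payload shape are assumptions; adapt them to your app.py.
    resp = requests.post(f"http://{host}/predict", json={"features": features})
    resp.raise_for_status()
    return resp.json()


# Example (replace with your own instance's public DNS):
# send_prediction_request("ec2-x-x-x-x.compute-1.amazonaws.com", [5.1, 3.5, 1.4, 0.2])
```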
Congrats, you have successfully productionized your ML model.
Following the same methodology you can deploy any model.
Thank You for reading…!