How to deploy your first website — The long & the short of it!

Anupama Nair
The Patr-onus Deployment Blog


We know you didn’t show up here to read an elaborate introduction, so let’s skip right to the good stuff & learn how to deploy.

Disclaimer: This blog is a step-by-step guide to deploying your site/app. It is one of many ways to go about the deployment process. It is also the traditional method, so if you’re looking to learn, this is the spot!

Currently, in order to automate the deployment of a simple React application or a webpage that you’ve created, you would need to go through the following processes:

Step 1: Setting up your GitHub repository

  • Go to a popular git hosting service like GitHub / GitLab, etc. & log in to your account.
  • In the case of GitHub, hit the “+” icon on the top right corner and click on “New repository”.
  • Now open VSCode on your desktop.
  • Open a new folder in VSCode & setup your website’s code in that folder.
  • Open up your website code (using npm start) & take a look at it in your browser to make sure everything’s good to go.

Step 2: Setting up your local git repo

  • In VSCode, go to the terminal option on the menu bar & open a new terminal.
  • Now in the terminal, type: git init (This initializes a new repository on your local machine)
  • Type git status (This tells you what the status of your local repo is)
  • Type git add . (This tells git to track all the files in your folder that have changed)
  • Now try checking git status again & it should show your files in green. This means it is tracking those changes in the mentioned files.
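If you want to rehearse these commands before touching your real project, you can run the whole step in a scratch folder first — a quick sketch, with scratch-site as a made-up folder name:

```shell
# Rehearse Step 2 in a throwaway directory (scratch-site is just an example)
cd "$(mktemp -d)"
mkdir scratch-site && cd scratch-site
echo "<h1>Hello</h1>" > index.html

git init      # initializes a new repository on your local machine
git status    # index.html is listed as untracked (shown in red)
git add .     # tells git to track all the files in the folder
git status    # index.html now shows up in green (staged)
```

Deleting the temporary folder afterwards leaves no trace; the same commands work identically inside your real project folder.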

Step 3: Pushing to GitHub

  • Type git commit (This commits all changes that are being tracked in git)
  • Key in a message for the commit, and exit the text editor to complete the commit.
  • Now go to the git repository you created with GitHub.
  • Copy the repository url you get from there
  • Get back to VS Code & type git remote add origin <repository-url> in the terminal (using the URL you just copied).
  • Type git push --set-upstream origin master (This pushes whatever is committed in your local repo to GitHub)
  • Go back to GitHub & refresh. You should see your code present there.
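If you’d like to rehearse this commit-and-push flow without touching GitHub, a local bare repository can stand in for origin. This is only a sketch — the folder names are made up, git init -b needs git 2.28+, and in real life the remote URL is the one you copied from GitHub:

```shell
# Rehearse Step 3 against a throwaway *local* "remote"
cd "$(mktemp -d)"
git init -b master website && cd website
git config "Your Name"            # git needs an identity
git config "you@example.invalid"  # before it will commit
echo "<h1>Hello</h1>" > index.html
git add .
git commit -m "Initial commit"           # -m skips the text editor

git init --bare ../origin.git            # stand-in for the GitHub repo
git remote add origin ../origin.git
git push --set-upstream origin master    # publish the local commits
```

Swapping ../origin.git for your real GitHub URL gives you exactly the flow described above.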

Step 4: Writing a Dockerfile

  • Build your code (usually done with npm run build) to generate your html files.
  • Create a file called Dockerfile (a Dockerfile tells Docker what commands to run when building the Docker image)
  • Copy the following content to the Dockerfile:
FROM nginx:latest
COPY build/ /usr/share/nginx/html/
  • Now in your terminal, type
    git add Dockerfile
    (This tells git to start tracking this Dockerfile)
  • Now commit your changes by typing
    git commit
    and key in your commit message.
  • Once the changes are committed, push the changes by typing
    git push
  • Now, if you refresh your GitHub repo, you’ll find the Dockerfile in it
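The Dockerfile above assumes you run npm run build yourself before every docker build. A common alternative is to let Docker do the build too, using a multi-stage Dockerfile. This is only a sketch, assuming a create-react-app-style project whose build output lands in a build/ folder:

```dockerfile
# Stage 1: install dependencies and build the static files
FROM node:lts AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: copy only the built files into a clean NGINX image
FROM nginx:latest
COPY --from=builder /app/build /usr/share/nginx/html
```

With this variant, docker build produces a fresh build every time, and the node toolchain never ends up in the final image.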

Step 5: Creating a Docker Repository

The next thing we need is a Docker repository: whenever we build a Docker image, we need a repository to push it to. On, sign in, hit “Create Repository”, give it a name, and create it — that name is the reponame used in the next step.

Step 6: Pushing to Docker Hub

  • Build the Docker image by typing
docker build . -t username/reponame

Here, username is your Docker Hub username and reponame is the name of the Docker Hub repository you created in step 5.

  • Generate an API Token from Docker Hub by navigating to:
    Account settings > Security > New Access Token
  • Login to dockerhub on your terminal by typing
docker login

and enter your Docker Hub username when the command prompts for a username, and your access token (which you just generated) at the password prompt.

  • Push the image to Docker Hub by typing
docker push username/reponame

Step 7: Testing our code

Now, let’s check if things are working fine.

  • For this, we will run the docker image locally using the command
docker run --rm -p 8080:80 username/reponame
  • Our goal is that whatever is served on port 80 inside the container is available on port 8080 on our host machine
  • And therefore, if you open a browser to http://localhost:8080, you should see your website running.

Step 8: Creating a DigitalOcean server

Now we need to be able to show this page or site to people on the internet & not simply on our localhost.

  • First, we’ll create a new server through DigitalOcean or your preferred cloud service provider. We’ll go with DigitalOcean for this tutorial because they’re awesome & we use their services regularly.
  • Click on Create > Droplets (these are virtual machines that we can rent from DigitalOcean)
  • We recommend choosing the latest LTS version of Ubuntu as it is extremely reliable. If you are new to deployments & are unsure about what you’re doing, you can choose the same. If you have a preference & are experienced, you can take your pick.
  • If you do not have an ssh key, we recommend creating one using the following command on your local machine
ssh-keygen
  • Once you have a key-pair generated, add the public key (the .ssh/ file) to DigitalOcean and select it for your server.
  • Choose the region closest to your users.

Step 9: Connecting to your server

The next step is authentication. For this, we need to use the ssh key we just generated. It is a way of authenticating with servers where your server can only be accessed with the key you hold. So make sure to keep your private key (the .ssh/id_rsa file) secure.

  • Once you have an IP address for your newly created server from DigitalOcean, open up a connection to your server through ssh on terminal using the command
ssh root@server-ip
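If you’ll be connecting to the server often, an entry in the ~/.ssh/config file on your local machine saves retyping the IP. A sketch — the host alias mysite, the IP address, and the key path are all placeholders for your own values:

```
# Append to ~/.ssh/config on your *local* machine (not the server).
Host mysite
    HostName
    User root
    IdentityFile ~/.ssh/id_rsa
```

After this, ssh mysite opens the same session as ssh root@server-ip.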

Step 10: Setting up the server

Now, we need to install docker on the server. In the ssh session that opened after the last command, run the following:

apt update && apt install -y

After this command completes, docker will be installed on the server.

Step 11: Installing NGINX

I know you’re tired but we need to keep going. The next thing you need to do is install nginx on the server to handle requests that are coming in. Use the following command to do that

apt install nginx -y

Step 12: Running the docker container on the server

Now you need to make sure that your container is exposed on the right port. Run the following command:

docker run -p 8080:80 username/reponame

(80 inside the container exposed on 8080)

  • This pulls the image we pushed to Docker Hub, runs it, and exposes port 80 inside the container to 8080 on our server.
  • Now open your browser to port 8080 on your server’s IP (http://server-ip:8080) & you’ll find your site there.


Step 13: Setting up DNS

It’s not the most elegant to be sharing an IP address with people on the internet for a website, so the next step is to purchase a domain if you do not already have one. Then, with your domain registrar,

  • Under DNS management, create an A record and set the “name” of the record to the subdomain you want your website accessible on (for example, with a domain like, site is the subdomain). If you want it accessible on your root domain (like on just, set the subdomain to @.
  • Paste the IP address of your server in the “value” section of the A record.
  • Save the DNS record.

Step 14: Setting up Docker Compose

You’d think you’re done, but wait… you’ll notice that as soon as you close the terminal, the website goes down, & we can’t possibly be expected to keep our terminal running throughout the day. We’ll now make the docker container run in the background. While this can be done with docker alone, an easier way of automating it is docker-compose. You start by installing docker-compose

apt install docker-compose -y
  • Create a file in your server by first connecting to it (using ssh root@server-ip) and then typing the following on the connected session:
  • nano docker-compose.yml (this opens a text editor on the server to a file called docker-compose.yml)
  • Enter the following into the file (app is just a label for the service; YAML indentation matters):
version: '3'
    image: username/reponame
      - 8080:80
    restart: always
  • To close the nano editor, press Ctrl+o to write the file, followed by Enter to save it, and then Ctrl+x to exit the file.
  • Notice that all we need to do is put in our docker image name & port and it will take care of the rest for us.
  • Type in docker-compose up -d on the connected ssh session.

Docker-compose will automatically create a container for us and keep it running in the background, restarting it in case it crashes.

Step 15: Configuring NGINX

We will now set up our NGINX configuration file so that NGINX forwards incoming traffic to our application.

  • Create an NGINX config file to direct your domain to the application with the following command (using your own domain as the file name)
nano /etc/nginx/sites-enabled/
  • Write your configuration. In this example it is as follows
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8080;
  • Once the file is written, type in the following command to reload NGINX:
nginx -s reload

Step 16: SSL certificates using LetsEncrypt

The next step is to generate your ssl certificate

  • First, you’ll need to install certbot on the ssh session using the command
apt install certbot -y
  • We are now gonna prove our ownership of the domain by placing a bunch of files in a part of the domain that certbot expects it to be in.
  • Then you need to instruct NGINX on where it should serve those files from.
  • Update your NGINX configuration to the following
    (using nano /etc/nginx/sites-enabled/ again)
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8080;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
  • Reload NGINX again with the command nginx -s reload
  • Create the folder that will have the ownership verification files (from the previous step) using the command
mkdir -p /var/www/letsencrypt
  • Now let’s generate the certificates using the command (with your own domain after -d)
certbot certonly --webroot -w /var/www/letsencrypt -d
  • Enter your email address when prompted.
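One thing to note: Let’s Encrypt certificates expire after 90 days, so you’ll want renewal automated. A sketch of a cron entry (added with crontab -e on the server) — the daily 03:00 schedule is just an example:

```
# Check daily whether certificates need renewing; --deploy-hook only
# runs when a certificate was actually renewed.
0 3 * * * certbot renew --quiet --deploy-hook "nginx -s reload"
```

certbot renew only contacts Let’s Encrypt when a certificate is close to expiry, so running the check daily is safe.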

Step 17: Redirecting http traffic to https

Now we need to make sure that even when a customer requests the http site, they are redirected to https.

  • Update your NGINX configuration as follows:
server {
    listen 80;
    listen [::]:80;

    return 301$request_uri;

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8080;

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt;
  • Then reload nginx again using the command
nginx -s reload

Now that’s a wrap. That being said, this is the most basic form of deployment. It does not yet handle falling back to our index.html on 404s. We also don’t have DDoS protection, a firewall, or automatic scaling. For all of that, you’ll need separate tools & many, many more steps…

Or you could switch to Patr & leave all your deployment worries to us!

Skip 10 out of 17 steps & jump right to the good stuff! Live websites or web apps in minutes!