Automate Ansible playbook deployment with Amazon EC2 and GitHub

Paul Zhao · Published in Paul Zhao Projects · 18 min read · Dec 21, 2020

Validating whether a line is present in a file, without making any modification

What is Ansible and what can it do?

Ansible, as an open-source automation tool, or platform, is used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning. In this project, we will focus on how to make deployments faster and scale to various environments using Ansible playbooks.

Why Ansible?

After learning what Ansible is, we will now dive into why we use Ansible playbooks. At the workplace, there are plenty of repetitive tasks that employees may need to work on on a daily basis. If these tasks can be, as we say in the world of DevOps, automated, then enormous human resources can be freed up for other work. This is where Ansible playbooks come into play, providing automated solutions for accomplishing repetitive tasks.

What will we accomplish by the end of the project?

At the end of the day, we will automate an Ansible playbook deployment using Amazon Elastic Compute Cloud (Amazon EC2) and GitHub to validate whether a line is present in a file without making any modification.

How can this project be used for your purposes at work?

By replacing the playbook .yml file, you can automate a great number of tasks at work as you wish, as sketched below. For your reference, please find more tasks that can be automated with Ansible playbooks here.
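For instance, a minimal sketch of a playbook that keeps a package installed and its service running might look like the following (the package and service names are illustrative, not part of the original walkthrough):

---
- name: Keep nginx installed and running
  hosts: web
  become: true
  tasks:
    - name: Ensure nginx is installed
      yum:
        name: nginx          # illustrative package
        state: present

    - name: Ensure nginx is started and enabled
      service:
        name: nginx          # illustrative service
        state: started
        enabled: true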

Infrastructure & Automation

Ansible is an open-source automation tool that uses playbooks to enable you to make deployments faster and scale to various environments. Think of playbooks as recipes that lay out the steps needed to deploy policies, applications, configurations, and IT infrastructure. You can use playbooks repeatedly across multiple environments. Customers who use Ansible playbooks typically deploy periodic changes manually. As complex workloads increase, you might be looking for ways to automate them. In this post, we show you how to automate an Ansible playbook deployment using Amazon Elastic Compute Cloud (Amazon EC2) and GitHub.

In this project, we go through the whole process step-by-step. In the section “Alternative procedure: Use an AWS CloudFormation template,” we present a quick and repeatable solution. You can modify either method to suit your requirements.

Process overview

Diagram of infrastructure

The diagram above shows how the CI/CD pipeline works.

A push event triggers a webhook request, which is sent to an Amazon EC2 instance. We use NGINX to route the request to an Express server running on the EC2 instance. The Express server then runs an Ansible command to pull and run the newly pushed playbook.

First, set up Ansible on an Amazon EC2 instance running an Amazon Linux 2 Amazon Machine Image (AMI) connected to a GitHub repository that stores your playbooks.

Second, configure a webhook, which is a way for an app to send other applications real-time information during a push event. This allows you to automatically configure multiple environments, which saves you the time and energy that would have otherwise been spent on manual processes.

Prerequisites

For this walkthrough, you need the following:

  • An AWS account — with a non-root user (take security into consideration)
  • AWS CLI installed
  • For the local system, RHEL 8.3 in Oracle VirtualBox on Windows 10, accessed with PuTTY
  • An Amazon EC2 instance running an Amazon Linux 2 AMI (in this walkthrough we use a Red Hat 8 AMI instead), with an Amazon EC2 key pair and a security group that allows SSH (Secure Shell) and HTTPS access — we will provision the instance with Terraform
  • A GitHub repository to store playbooks

Let us work on them one by one.

Creating a non-root user

Based on AWS best practices, the root user is not recommended for performing everyday tasks, even administrative ones. Rather, the root user is used to create your first IAM user, groups, and roles. Then you need to securely lock away the root user credentials and use them to perform only a few account and service management tasks.

Notes: If you would like to learn more about why we should not use root user for operations and more about AWS account, please find more here.

Log in as the root user
Create a user under the IAM service
Choose programmatic access
Create the user without tags
Keep the credentials (Access key ID and Secret access key)

Installing AWS CLI

Visit here and download the macOS pkg installer

Download MacOS pkg installer

Install it successfully

To verify your AWS CLI installation:

$ aws --version
aws-cli/2.0.46 Python/3.7.4 Darwin/19.6.0 exe/x86_64

To use the AWS CLI, we need to configure it with the AWS access key, AWS secret access key, default region, and output format:

$ aws configure
AWS Access Key ID [****************46P7]:
AWS Secret Access Key [****************SoXF]:
Default region name [us-east-1]:
Default output format [json]:

Set up RHEL 8.3 in Oracle VirtualBox on Windows 10, using PuTTY

First, we will download Oracle VirtualBox for Windows 10. Please click Windows hosts.

Second, we will also download the RHEL ISO.

Let us make it work now!

Open the Oracle VirtualBox application and follow the instructions here; you will install RHEL 8.3 as shown below.

Oracle VM VirtualBox

Notes: In case you are unable to install RHEL 8.3 successfully, please find solutions here. Also, after you create your developer’s account with Red Hat, you have to wait for some time before registering it. Otherwise, you may receive errors as well.

Now it’s time for us to connect to RHEL 8.3 from Windows 10 using VirtualBox.

Login RHEL 8.3

Click activities and open terminal

Open terminal

Notes: In order to be able to connect to RHEL 8.3 from Windows 10 using PuTTY later, we must enable the Bridged Adapter setting shown below.

Bridged Adapter selected

Now we will get the IP address that we will use to connect to RHEL 8.3 from Windows 10 using PuTTY (the highlighted IP address for enp0s3 is the one to use).

IP address

Then we will install PuTTY.

ssh-keygen with a password

Creating a password-protected key looks something like this:

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/pzhao/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/pzhao/.ssh/id_rsa.
Your public key has been saved in /home/pzhao/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RXPnUZg/fGgRGTOxEfbo3VOMo/Yp4Gi80has/iR4m/A pzhao@localhost.localdomain
The key's randomart image is:
+---[RSA 3072]----+
| o . %X.|
| . o +=@ |
| . B++|
| . oo==|
| .S . o...=|
| . .oo o . ..|
| o oo=.. . o |
| +o*o. . |
| .E+o |
+----[SHA256]-----+

To view the private key:

$ cat .ssh/id_rsa
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAwoavXHvZCYPO/sbMD0ibtkvF+9/NmSm2m/Z8wRy7O2A012YS98ap
8aq18PXfKPyyAMNF3hdG3xi1KMD7DSIb/C1gunjTREEJRfYjydOjFBFtZWY78Mj4eQkrPJ
.
.
.
-----END OPENSSH PRIVATE KEY-----

Notes: You may take advantage of the RHEL GUI to send the private key to yourself as an email, then open the mail and copy the private key from it.

Open Notepad in Windows 10 and save the private key as an ansiblekey.pem file

Ansiblekey.pem

Then open PuTTY Key Generator and load the private key ansiblekey.pem

Load private key in putty key generator

Then save it as a private key named ansible.ppk

We now open PuTTY and enter the IP address we saved previously as the Host Name (or IP address): 192.168.0.18

Load private key in putty

We then move on to Session and input the IP address

IP address saved

For convenience, we may save it as a predefined session as shown below

Saved session

You should see the pop up below if you log in for the very first time

First time log in

Then enter your username and password to log in. You will see the image below after logging in.

Login successfully

At this point, we have successfully logged into RHEL 8.3 from Windows 10.

Terraform EC2

Next, we will create an Amazon EC2 instance running Red Hat 8, with an Amazon EC2 key pair and a security group that allows SSH (Secure Shell) and HTTPS access — we will provision the instance with Terraform.

To install Terraform, use the following commands:

Install yum-config-manager to manage your repositories.

$ sudo yum install -y yum-utils

Use yum-config-manager to add the official HashiCorp Linux repository.

$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo

Install.

$ sudo yum -y install terraform

Notes: In case of a wrong symbolic link setup, please check out this link. Also, you may need to log in again after changing the symbolic link.

To verify the Terraform installation:

$ terraform version
Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/aws v3.21.0

Create a new directory named terraform_ec2 and change directory into it

$ mkdir terraform_ec2
$ cd terraform_ec2/

In this directory, we will first create the main.tf file for our EC2 instance

vim main.tf

provider "aws" {
profile = "default"
region = "us-east-1"
}
resource "aws_key_pair" "Redhat" {
key_name = "Redhat"
public_key = file("key.pub")
}
resource "aws_security_group" "Redhat" {
name = "Redhat-security-group"
description = "Allow HTTP, HTTPS and SSH traffic"
ingress {
description = "SSH"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTPS"
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "terraform"
}
}
resource "aws_instance" "Redhat" {
key_name = aws_key_pair.Redhat.key_name
ami = "ami-096fda3c22c1c990a"
instance_type = "t2.micro"
tags = {
Name = "Redhat"
}
vpc_security_group_ids = [
aws_security_group.Redhat.id
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("key")
host = self.public_ip
}
ebs_block_device {
device_name = "/dev/sda1"
volume_type = "gp2"
volume_size = 30
}
}
resource "aws_eip" "Redhat" {
vpc = true
instance = aws_instance.Redhat.id
}

Also, we need to copy the public key and private key created previously into the terraform_ec2 folder. Note that main.tf references the private key as file("key"), so the copy must be named key.

$ cp ~/.ssh/id_rsa.pub ./key.pub
$ cp ~/.ssh/id_rsa ./key
$ ls
key  key.pub  main.tf

To start with, we will use terraform init

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Installing hashicorp/aws v3.21.0...
- Installed hashicorp/aws v3.21.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Then we may validate the configuration using

$ terraform validate
Success! The configuration is valid.

Next, we run terraform plan:

$ terraform plan
.
.
.
Plan: 4 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Finally, we run terraform apply:

$ terraform apply
.
.
.
Plan: 4 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes.
.
.

Now we will double-check that our instance was created.

Redhat server

The last prerequisite is a GitHub repository. In case you don’t have one yet, please visit here.

Create a repo

Step 1: Set up webhook processing

To use Ansible with GitHub webhooks, set up webhook processing on the EC2 instance. This procedure uses NGINX as a reverse proxy to route the request to an Express server. Git is not required to process the webhook, but it is necessary for Ansible to pull the playbook from the repository.

First, we need to locate the Redhat server we created.

Locate Redhat Server

Second, we press Connect to open the EC2 instance connection page.

Press Redhat

SSH to the Redhat server from RHEL 8.3 using PuTTY

Notes: The tricky part here is that we never created a Redhat.pem key; instead, we registered the public key of our existing key pair with AWS (copied into the Terraform folder as key.pub). The command below references a key.pem identity file that does not exist, so SSH warns about it and falls back to the default ~/.ssh/id_rsa, which matches the key pair attached to the instance, and the login still succeeds: ssh -i "key.pem" ec2-user@ec2-54-236-160-101.compute-1.amazonaws.com

Here is the output

$ ssh -i "key.pem" ec2-user@ec2-54-236-160-101.compute-1.amazonaws.com
Warning: Identity file key.pem not accessible: No such file or directory.
Last login: Sun Dec 20 01:56:19 2020 from 72.137.76.221
[ec2-user@ip-172-31-2-47 ~]$

So we are now inside the Redhat server.

We now need to enable the Extra Packages for Enterprise Linux (EPEL) repository by running the following commands.

Install the EPEL release package for RHEL 8. Enable both the EPEL and CodeReady Builder repositories. The CodeReady Builder repository contains development tools required by many EPEL packages.

$ sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
$ sudo dnf config-manager --set-enabled codeready-builder-for-rhel-8-rhui-rpms

Apply the updates to the packages.

$ sudo yum update -y

Install Ansible, NGINX, and Git.

$ sudo yum install ansible -y
$ sudo yum install nginx -y
$ sudo yum install git -y

Your webhook processing is now set up.

Notes: sudo is required to give these commands the necessary privileges. Otherwise, the installation will not go through.

Step 2. Install Node.js and set up the Express server

  1. Install Node.js.
$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13226  100 13226    0     0   106k      0 --:--:-- --:--:-- --:--:--  106k
=> Downloading nvm from git to '/home/ec2-user/.nvm'
=> Cloning into '/home/ec2-user/.nvm'...
remote: Enumerating objects: 278, done.
remote: Counting objects: 100% (278/278), done.
remote: Compressing objects: 100% (245/245), done.
remote: Total 278 (delta 31), reused 100 (delta 20), pack-reused 0
Receiving objects: 100% (278/278), 142.25 KiB | 10.94 MiB/s, done.
Resolving deltas: 100% (31/31), done.
=> Compressing and cleaning up git repository
=> Appending nvm source string to /home/ec2-user/.bashrc
=> Appending bash_completion source string to /home/ec2-user/.bashrc
=> Close and reopen your terminal to start using nvm or run the following to use it now:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
$ . ~/.nvm/nvm.sh
$ nvm install node
Downloading and installing node v15.4.0...
Downloading https://nodejs.org/dist/v15.4.0/node-v15.4.0-linux-x64.tar.xz...
######################################################################### 100.0%
Computing checksum with sha256sum
Checksums matched!
npm notice
npm notice New minor version of npm available! 7.0.15 -> 7.3.0
npm notice Changelog: https://github.com/npm/cli/releases/tag/v7.3.0
npm notice Run npm install -g npm@7.3.0 to update!
npm notice
Now using node v15.4.0 (npm v7.0.15)
Creating default alias: default -> node (-> v15.4.0)

2. Choose a location for the Express server. In this example, we create a directory called server to store the relevant files.

$ mkdir server && cd server
$ npm install express
added 50 packages, and audited 50 packages in 3s

found 0 vulnerabilities

3. When the installation completes, create a JavaScript file that contains the code to handle the webhook request. We create a sample named app.js that runs the ansible-pull command to pull and run the playbook.yml file from a GitHub repository. The server is configured to listen on port 8080; the NGINX configuration needs to know this port in order to route the traffic it receives. The port number is arbitrary, but the port specified in the NGINX configuration file must match the port defined in the Express server code. In this example, replace <GitHubUser>, <repo-name>, and <playbook> with your information.

// Express server that receives the GitHub webhook and triggers ansible-pull
var express = require('express');
var app = express();
const { exec } = require('child_process');

app.post('/', function (req, res) {
  try {
    console.log('executing deployment...');
    // Pull the latest playbook from the repository and run it
    exec("ansible-pull -U git@github.com:<GitHubUser>/<repo-name>.git <playbook>.yml", (error, stdout, stderr) => {
      if (error) {
        console.log(`error: ${error.message}`);
        return;
      }
      if (stderr) {
        console.log(`stderr: ${stderr}`);
        return;
      }
      console.log(`stdout: ${stdout}`);
    });
  } catch (e) {
    console.log(e);
  }
  res.json({ received: true });
});

app.listen(8080, '127.0.0.1', () => console.log('Started on PORT 8080'));

Notes: In order to make everything clear, we will use RHEL 8.3 via PuTTY to install Git and push our local playbook.yml file (sketched below) to GitHub.

After logging into RHEL 8.3

$ mkdir playbooks
$ cd playbooks/
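Before pushing, the repository needs a playbook.yml. The walkthrough does not show the file's contents, so here is a minimal sketch consistent with the output we will see later (a play named "Examples of lineinfile" targeting the web host pattern). It uses the lineinfile module in check mode, so the line is validated but the file is never modified; the path and line are illustrative placeholders, not values from the original project.

$ vim playbook.yml

---
- name: Examples of lineinfile
  hosts: web
  tasks:
    - name: Check that the line is present without modifying the file
      lineinfile:
        path: /etc/example.conf            # illustrative path, use your own file
        line: "example_setting = true"     # illustrative line to look for
      check_mode: yes
      register: line_check

    - name: Report the result
      debug:
        msg: "Line is {{ 'missing' if line_check.changed else 'present' }}"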

To initialize the local repository, commit the playbook, and point it at our GitHub remote (replace <USERNAME>, <PASSWORD>, and the repository path with your own), follow the commands below.

$ git init
$ git add .
$ git commit -m "add playbook"
$ git remote add origin https://<USERNAME>:<PASSWORD>@github.com/path/to/repo.git
$ git push --set-upstream origin master
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 437 bytes | 437.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
To https://github.com/lightninglife/ansible-project.git
* [new branch] master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.

Finally, we check the playbook.yml file in GitHub

Repo

By this point, please check out your folder structure, it should be like what is shown below:

/home/ec2-user 
├───server (folder)
│ app.js
│ nginx.conf
├───node_modules (folder)
package.json
package-lock.json

In the app.js file, we need to make the update below.

Specify the GitHub user and repository where the playbooks are stored. This is required by the ansible-pull command. In this example, replace <GitHubUser>, <repo-name>, and <playbook> with your information.

exec("ansible-pull -U https://github.com/<GitHubUser>/<repo-name>.git <playbook>.yml")

Please use the example below as a reference

exec("ansible-pull -U https://github.com/lightninglife/ansible-project.git playbook.yml")

Run the Express server

$ node app.js

Step 3. Set up a deploy key for your repository

  1. Create an SSH key on your instance. In this example, replace <your_email@example.com> with your email address
$ ssh-keygen -t rsa -b 4096 -C <your_email@example.com>

Keep the default settings, and press Enter to skip through the prompts.

Then we cat our public key and paste it into a code editor or notepad for use later.

$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC90E1c4TRoSjhoS/Zd4vAtk2g3TQaUgYsWk2q5dBt6nIb1bo
.
.
.
/C2VEuO7FVOeNzcPK5GmpH+rDEn9r3Kqgm4vHKS5P8syAQhtbTs3F5mzHWHJT3YoaA+oGi8gpwfVBbu4GEMkWwEWhKiCx7rb1etYmhir65+8z3y23i+uqOekwRZsf4b+SlqEdjbPIWCYm1aAldySgj33LiNTuP3kJiREB8SmVT+jp+vsWfAAcuWBdThNlU/VhrwKr8Z3kHnyZmPYEz8Q== zhaofeng8711@gmail.com

2. When the key is created, run the following code

$ eval "$(ssh-agent -s)"

The output looks similar to this:

Agent pid 7329

You will use this deploy key later in the procedure.

Step 4. Configure NGINX to route traffic

  1. Use the following basic configuration to listen on port 80 and route traffic to the port that the Express server listens on. (Place this server block where NGINX loads it, for example as a file under /etc/nginx/conf.d/, which the default nginx.conf includes.)

vim nginx.conf

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

2. Start NGINX

$ sudo systemctl start nginx 
$ sudo systemctl enable nginx

Step 5. Set up GitHub to configure the webhook

  1. Log in to your GitHub account, and navigate to your repository settings. The page looks like this:
Deploy keys

2. On the Deploy keys page, choose Add deploy key.

3. Paste the public key we created in Step 3 into the Key field and fill in a title. Then choose Add key.

Add key

4. On the Webhooks tab, choose Add webhook.

Webhooks page

5. Locate the public IP address of the instance in use, as shown on the EC2 instances page.

Locate IP address

6. Add the instance's IP address and port as the Payload URL (for example, http://<public-ip>:80/).

Add IP address and port

7. After creation, Recent Deliveries will show a 200 success response.

200 success

Now let us test it out

On our server, we run:

$ node app.js
Started on PORT 8080

As soon as a redelivery is triggered, as shown below, the following output appears in our PuTTY terminal.

Redelivery triggered
$ node app.js
Started on PORT 8080
executing deployment...
stdout: Starting Ansible Pull at 2020-12-21 04:02:08
/usr/bin/ansible-pull -U https://github.com/lightninglife/ansible-project.git playbook.yml
[WARNING]: Could not match supplied host pattern, ignoring: ip-172-31-6-63
[WARNING]: Could not match supplied host pattern, ignoring:
ip-172-31-6-63.ec2.internal
localhost | CHANGED => {
"after": "aa7b606ac2a890b3a468d8e9c56caa9caf9f84bf",
"before": null,
"changed": true
}
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: ip-172-31-6-63
[WARNING]: Could not match supplied host pattern, ignoring:
ip-172-31-6-63.ec2.internal
[WARNING]: Could not match supplied host pattern, ignoring: web
PLAY [Examples of lineinfile] **************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************
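The [WARNING] lines and “skipping: no hosts matched” appear because the play’s host pattern (web) does not match anything in the instance’s local inventory, so ansible-pull runs but the play is skipped. If the intent is for the playbook to configure the instance it runs on, one option (an assumption on our part, not part of the original run) is to target localhost instead:

---
- name: Examples of lineinfile
  hosts: localhost        # matches the implicit localhost that ansible-pull provides
  connection: local
  tasks:
    - name: Check that the line is present without modifying the file
      lineinfile:
        path: /etc/example.conf            # illustrative path
        line: "example_setting = true"     # illustrative line
      check_mode: yes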

To minimize our workload, why not use CloudFormation? The CloudFormation template below covers everything up to npm install express.

Alternative procedure: Use an AWS CloudFormation template

Use the following AWS CloudFormation template to provision the Ansible stack. (See Creating a stack on the AWS CloudFormation console.) This stack does not create the web server code or the NGINX configuration file. For a sample configuration and sample code, see the walkthrough in the previous sections. This AWS CloudFormation template runs only in the US East (N. Virginia) Region, and you must use a public subnet with internet access. To use this template in another Region, configure the Mappings section to match your Region with the latest AMI ID.

AWSTemplateFormatVersion: 2010-09-09
Parameters:
  SubnetID:
    Type: AWS::EC2::Subnet::Id
    Description: Subnet to deploy EC2 instance into
  SecurityGroupIDs:
    Type: List<AWS::EC2::SecurityGroup::Id>
    Description: List of Security Groups to add to EC2 instance
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
    Description: >-
      Name of an existing EC2 KeyPair to enable SSH access to the instance
  InstanceType:
    Description: EC2 instance type
    Type: String
    Default: t2.micro
Mappings:
  AWSRegionToAMI:
    us-east-1:
      AMIID: ami-0a887e401f7654935
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !FindInMap
        - AWSRegionToAMI
        - !Ref AWS::Region
        - AMIID
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
      SecurityGroupIds: !Ref SecurityGroupIDs
      SubnetId: !Ref SubnetID
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -ex
          amazon-linux-extras install epel
          yum update -y
          yum install ansible -y
          yum install nginx -y
          yum install git -y
          systemctl start nginx
          systemctl enable nginx
          curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
          export NVM_DIR="$HOME/.nvm"
          [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
          nvm install node
          cat <<EOF >> /home/ec2-user/.bashrc
          export NVM_DIR="/.nvm"
          [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
          EOF
          cd /home/ec2-user
          mkdir server && cd server
          npm install express
      Tags:
        - Key: Name
          Value: Ansible - CloudFormation
        - Key: Environment
          Value: Development

Cleanup

  1. As we provisioned the EC2 instance with Terraform, we can now run terraform destroy to remove the resources created previously, as shown below.
$ terraform destroy 

Notes: This should be run in the terraform_ec2 folder we created previously on RHEL 8.3 in Oracle VirtualBox.

2. In case you used CloudFormation, you can delete the stack using the following command.

$ aws cloudformation delete-stack --stack-name <your-cloudformation-name>

For more information regarding CloudFormation stack deletion, please visit here.

Notes: In case of deletion issues, please check the stack’s Events page to find out the reasons; dependencies are very likely involved.

Conclusion:

Let us now recap what we learned throughout this project.

Firstly, we set up a number of prerequisites

  • An AWS account — with a non-root user (take security into consideration)
  • AWS CLI installed
  • RHEL 8.3 in Oracle VirtualBox on Windows 10, accessed with PuTTY
  • An Amazon EC2 instance with an EC2 key pair and a security group that allows SSH (Secure Shell) and HTTPS access — provisioned with Terraform
  • A GitHub repository to store playbooks

Secondly, we created our EC2 instance using Terraform.

Lastly, we completed the project in five steps:

Step 1: Set up webhook processing

Step 2. Install Node.js and set up the Express server

Step 3. Set up a deploy key for your repository

Step 4. Configure NGINX to route traffic

Step 5. Set up GitHub to configure the webhook

Alternative procedure: Use an AWS CloudFormation template

Notes: This project can be used to run any Ansible playbook by triggering webhooks once node app.js is running.
