A simple guide on how to install Apache on an AWS EC2 Instance

Yasir Mahamud
8 min read · Apr 8, 2023


Let’s first understand a few key terms

AWS: Amazon Web Services, a cloud computing platform with numerous products

AWS EC2: Elastic Compute Cloud — used to launch virtual computing environments (instances), host servers, and configure security

Apache: A free and open-source web server software

Git: A version control system that tracks changes in files

GitHub: A cloud-based hosting service that lets you manage Git repositories

Repositories: A repository contains your project files and their version history

Use Case

You are a Cloud Engineer at Retro Tech, tasked with creating an Amazon EC2 Instance on AWS to host the organization's new website. Previously, Retro Tech hosted its websites on premises, which meant purchasing very expensive infrastructure in a data center. The organization now wants to migrate to the cloud to reduce capital expenditure and increase efficiency.

Prerequisites

· AWS IAM User Account with permissions to allow for AWS EC2 access

· Linux Knowledge

· Vim text editor installed

· GitHub Account

Breakdown of the Entire Process

1. Launch an EC2 Amazon Linux t2.micro (free tier) in a public subnet of your Default VPC.

2. Create a security group that allows inbound traffic on HTTP from 0.0.0.0/0 and inbound traffic on SSH from your IP address (an equivalent AWS CLI sketch follows this list).

3. SSH into your EC2 instance and install Apache with a custom webpage using a BASH script.
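For reference, the security group in step 2 could also be created with the AWS CLI instead of the console. This is a minimal sketch against the default VPC; the group name and the [Your_IP] placeholder are illustrative.

aws ec2 create-security-group --group-name apache-web-server-sg --description "HTTP from anywhere, SSH from my IP"
#creates the security group in the default VPC

aws ec2 authorize-security-group-ingress --group-name apache-web-server-sg --protocol tcp --port 80 --cidr 0.0.0.0/0
#allows inbound HTTP from anywhere

aws ec2 authorize-security-group-ingress --group-name apache-web-server-sg --protocol tcp --port 22 --cidr [Your_IP]/32
#allows inbound SSH only from your IP address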

How to take it one step further

1. SSH into your EC2 instance and create a scripts directory.

2. Move your BASH script created earlier from its current location to the new scripts directory.

3. Create a new repository in your GitHub for your BASH Scripts.

4. Use Git commands to add your BASH scripts to the newly created GitHub repo.

Step 1: Configure and Launch an Amazon EC2 Instance

We will launch our instance by clicking the Launch Instance button and then selecting launch instance in the drop-down menu.

The name of our project will be "Apache web server".

Next, we will select Amazon Linux under Quick Start; everything else in the settings can remain at its default.

For the instance type, we will select t2.micro, which is likely already the default.

Click on the “Create new key pair” to create a new key pair, then enter your desired key pair name. Select “.pem” for private key file format.

Click on “Create key pair”, as shown below. The “.pem” file should automatically start downloading to your local system. Locate the file after the download is complete and store it in a safe directory. Later, we will use this key pair to connect to our EC2 Instance through SSH.
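If you prefer the command line, the AWS CLI can create an equivalent key pair. This is a sketch, and the key name is illustrative.

aws ec2 create-key-pair --key-name apache-web-server-key --query 'KeyMaterial' --output text > apache-web-server-key.pem
#saves the private key locally; keep this file safe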

Under network settings, we will click the blue edit button to expand the menu. We will enter a security group name and leave the remaining options as default.

Under security group rule one, we will enable the ability to SSH into the instance by changing the source type to My IP. This also adds security, since only our IP address will be able to connect. Under rule two, we will change the type from HTTPS to HTTP. This will allow us to reach our Apache web server later on.

Now we will click the orange launch instance button under the summary section and launch our EC2 instance!

Step 2: SSH into Amazon EC2 Instance

Verify EC2 is running and navigate to options to connect to instance

Navigate to your EC2 Instances and verify that the new EC2 Instance we just launched is running.

Once we’ve verified our EC2 Instance is running, navigate to the top right pane, click “Actions” and select “Connect” as seen below.

We can now move to the SSH Client tab. This will provide you with the command to SSH into the instance. You can enter this command into your preferred terminal; I will be using Git Bash.

Step 3: Run ssh command passing in key pair file to authenticate into the EC2 Instance

We need to run the ssh command in our CLI with the “-i” option to pass in the key pair file and authenticate into our EC2 Instance.

You can find your EC2 Instance's Public IPv4 address by navigating to the EC2 Instance dashboard and copying the “Public IPv4 address”, as seen below.

Run the command below to SSH into the EC2 Instance with the key pair file:

ssh -i "[key_pair_name.pem]" ec2-user@[EC2_Public_IPv4_Address]

Success!

If you did all the steps correctly, you should have similar results as seen below and now you have connected into your EC2 Instance!

We can now install Apache and perform updates so we can begin working on our web server.

sudo yum update -y
sudo yum -y install httpd

Now we are going to start our webserver!

sudo systemctl start httpd
#starts the server

sudo systemctl enable httpd
#enables the server to start automatically on boot

sudo systemctl status httpd
#checks status of server
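As an optional sanity check, you can confirm Apache responds locally; curl is typically available on Amazon Linux.

curl -I http://localhost
#prints the HTTP response headers if Apache is serving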

Step 4: Create a custom webpage using a BASH script

Now that our Apache server is up and running, we can create our webpage! To initiate this, we will start by changing our current working directory to the HTML directory.

cd /var/www/html/

pwd
#pwd prints the working directory

The next step will be creating our index file. This will be the default page of our website and will contain the text we are editing.

sudo touch index.html

Since we are not the root user and therefore don’t have the privileges to edit this file, we will update the file permissions using the chmod command.

sudo chmod -R 777 index.html

Now we will open the index.html file in vim

sudo vim index.html

Let’s go ahead and create a simple webpage. Once we are done making our edits, we will exit vim by pressing Escape and entering :wq
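As an example, the page might contain something as simple as the following (the text itself is illustrative; use whatever content you like):

<html>
  <head><title>Retro Tech</title></head>
  <body>
    <h1>Welcome to the Retro Tech web server!</h1>
  </body>
</html>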

Before we move on let’s make sure the file was successfully updated. We will use the cat command for this.

cat /var/www/html/index.html

Lastly, before we can check out our webpage, we need to restart Apache for the changes to take place.

sudo systemctl restart httpd

Let’s check out our website! We just need to enter the public IP address into the browser.
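You can also check from your terminal; this assumes curl is installed on your local machine.

curl http://[EC2_Public_IPv4_Address]
#should print the HTML of your custom webpage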

We did it!
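For reference, the installation and webpage steps above can be rolled into a single BASH script; this is the kind of script the next sections move into a directory and push to GitHub. This is a minimal sketch, and the file name install_apache.sh and the page text are illustrative.

#!/bin/bash
#install_apache.sh - installs Apache and publishes a simple custom webpage

sudo yum update -y
sudo yum -y install httpd
#applies updates and installs the Apache web server

sudo systemctl start httpd
sudo systemctl enable httpd
#starts Apache and enables it on boot

echo "<html><body><h1>Welcome to the Retro Tech web server!</h1></body></html>" | sudo tee /var/www/html/index.html > /dev/null
#writes a simple custom webpage

sudo systemctl restart httpd
#restarts Apache so the new page is served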

How to take it to the next level!

Step One: Create a scripts directory

To accomplish this, we will use the mkdir command.

sudo mkdir scripts

Step Two: Move your BASH script created earlier to the new scripts directory

This is an easy step; we will just use the mv command.

mv index.html scripts

Step Three: Create a new repository in your GitHub for your BASH Scripts

We’re going to jump over to GitHub and create a new repository.

Step Four: Use Git commands to add your BASH scripts to the newly created GitHub repo

Now for the fun part! Let’s assume Git is not installed on our instance. We will install it using the yum command.

sudo yum install git

Next, configure your GitHub username and email.

git config --global user.name YourUserName
#YourUserName is your username

git config --global user.email YourEmail@provider.com
#YourEmail is your associated GitHub email address
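You can confirm the settings took effect with:

git config --global --list
#shows the configured user.name and user.email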

Now that Git is installed, we can now clone our repository from GitHub. To do this we will need the URL of the repository.

With the URL acquired we can now clone the repository using the following command.

sudo git clone <url>
#url being the URL of your repository

Now we will move the index.html file from the scripts directory into our BashScripts repository directory using the mv command.

sudo mv scripts/index.html BashScripts
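A quick way to confirm Git sees the new file is to check the repository status from inside the cloned directory:

cd BashScripts
sudo git status
#index.html should appear under "Untracked files"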

With the file moved over, we can now add it. The add command stages the index.html file in our local repository.

sudo git add index.html

Now time to commit it! The commit will save the changes to the local repository along with a brief message describing what we changed.

sudo git commit -m "adding index html file"

Now that it has been committed, we will push the changes. The push command will upload the local repository content to our GitHub repository.

This will require a personal access token from your GitHub account. To generate one, go to your GitHub account > Settings > Developer settings > Personal access tokens > Generate new token.
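With the token generated, we can run the push. This sketch assumes the default branch is named main (it may be master, depending on how the repository was created); when prompted for a password, paste the personal access token.

sudo git push origin main
#enter your GitHub username, then the personal access token as the password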

The changes have been successfully pushed through! We can now check our GitHub to see the repository updated.

Congratulations! We have completed all of our objectives!
