AWS VPC: Create your own private Cloud and test its security.

Chalbi Mohamed Amine
18 min read · Oct 20, 2019


Cloud computing is one of the many buzzwords you will hear if you work in IT, alongside terms such as big data, artificial intelligence, and the Internet of Things. Cloud computing is, in my humble opinion, privileged among all of these words, as it has become a necessary tool for the success of all of these technologies and innovations. As more companies move from on-premises data centers to public clouds, and as 5G networks become a reality and push the telcos toward a fully centralized infrastructure, cloud computing is only going to grow in demand and popularity.

Let's skip the history of cloud computing and dive into what we need to know for now; that doesn't mean you shouldn't take a look at what got us to the current state of public cloud computing.

Necessity is the mother of invention

At my university, students majoring in cybersecurity and defense are tasked with deploying a local LAN architecture to simulate a vulnerable web server: execute attacks against this server, collect the different logs, index and visualize them using the ELK stack, and, as a final step, run machine learning algorithms on the logs to try to detect attack patterns. Let's take a look at what a basic architecture should look like:

Proposed architecture for our project

So a student would think of two possible solutions to implement such a topology:

  • VM-based solution: We run each component as a virtual machine on top of a hypervisor such as VirtualBox or VMware. This solution requires a powerful machine: if we give each VM 2 GB of RAM and one logical processor, then we would need 10 GB of RAM and 5 logical processors just for the virtual machines in the architecture above, not counting the host OS. Besides, the working environment would be messy, with multiple people trying to work off the same machine.
  • Real machines: Here each student uses his or her own machine as part of the architecture. One PC runs the firewall while another works as the web server. This solves the performance problem, but connectivity problems that didn't exist in the VM solution begin to appear: you need to make sure that the internal servers (the green zone) are connected to a LAN in the required configuration, and that the traffic between the attacker and the servers passes through the firewall. My friends struggled with this, so I suggested a simple solution: connect the attacker to the firewall over a wired Ethernet interface and create a WLAN to connect the rest of the servers to the internal interface of the firewall. As they tried this, they realized that a Wi-Fi hotspot wouldn't work. This is predictable, due to client isolation in modern access points: each client can only talk to the router, not to the other connected clients. At this stage I suggested going with a VPN; in this case the LAN would be tunneled over the Internet, creating a secure IPsec connection between the different machines to simulate a LAN. I should mention that the students didn't have access to a physical switch at the time.

So we went with solution number 2. However, the fact that the students use Windows laptops while running Linux systems such as pfSense (the firewall solution) means they still rely on virtual machines. This, coupled with the VPN connection, creates further problems in passing the connection from the host OS to the guest OS (using NAT or PAT would solve this conflict).

It was at this moment that I realized we could actually do all of this with much more ease inside a virtual cloud! If you ever find yourself wrestling with networking, virtual machines, and firewalls, chances are a switch to the cloud will make your life much easier.

So let’s say we want to implement our architecture inside of the cloud, what would change?

The old hardware is replaced by EC2 instances each running a different operating system.

For example, our firewall EC2 instance will be running pfSense, an open-source firewall/router. It can be installed on a dedicated physical machine or, as in our case, run as a virtual machine.

The web server EC2 instance is running Ubuntu, inside which we will run a vulnerable web application on top of the Apache HTTP Server.

The ELK EC2 instance is running the ELK stack.

And finally our IDS. IDS stands for intrusion detection system, a system responsible for monitoring the network traffic inside your enterprise with the objective of detecting cyber-attacks or intrusions. Usually an IDS works hand in hand with an IPS (intrusion prevention system), which is responsible for taking the necessary actions to block the detected threats (both systems can be fused together to form an IDPS). We will talk later about our choice of IDS.

Now let’s get our hands dirty!

VPC creation

Creating a VPC is quite a simple task. Let's log in to our AWS console.

From the top bar click on the “Services” button, here you will find a wide range of services to choose from. Scroll down till you find the “Networking & Content Delivery” section and choose “VPC”.

You should see the following page:

Now let's click on "Launch VPC Wizard".

We get a new form where we fill in some information about our new VPC. The most important field is the IPv4 CIDR block. CIDR stands for Classless Inter-Domain Routing, a concept introduced to make better use of the IPv4 address range. A CIDR block is made of two parts:

  • Network address: This is the address that identifies our entire VPC; it's the common prefix of all the IPv4 addresses we are going to use. Since our VPC is a private network, we are going to be good boys & girls and follow the best practices set by the IETF in RFC 1918 (you can access the original memo here). To quote RFC 1918:
The Internet Assigned Numbers Authority (IANA) has reserved the
following three blocks of the IP address space for private internets:

10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

So basically we should use an address from one of these blocks in our VPC! AWS suggests a 10.x address, so I chose that one.

  • Network mask: Let's say we choose 10.1.0.0 as the CIDR block for our VPC. This address can be written in binary as a 32-bit number, and the network mask tells us how many of those bits are fixed and how many we can modify. A /16 mask means the first 16 bits are fixed and you can modify the last 16 bits of the CIDR address to create new valid IPv4 addresses within your network, while a /8 mask means you can modify the last 24 bits! If we do a little math, we can see that the number of valid IPv4 addresses is 2 raised to the power of the number of bits we can modify. So a /16 mask offers 65,536 possible addresses (the usable number is actually a bit lower, but only by a few addresses).

So we end up with 10.1.0.0/16 as our CIDR block.
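
If you want to double-check this arithmetic, Python's standard ipaddress module can do it for you. This snippet is only an illustration of the CIDR math above, not part of the AWS setup:

```python
import ipaddress

# The VPC CIDR block we chose in the wizard.
vpc_block = ipaddress.ip_network("10.1.0.0/16")

print(vpc_block.network_address)   # 10.1.0.0  (the fixed network part)
print(vpc_block.netmask)           # 255.255.0.0
print(vpc_block.num_addresses)     # 65536 = 2**16 addresses in the block

# The public subnet we will carve out of the VPC in the next step.
public_subnet = ipaddress.ip_network("10.1.1.0/24")
print(public_subnet.num_addresses)          # 256
print(public_subnet.subnet_of(vpc_block))   # True: the /24 fits inside the /16
```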

The next parameter we need to set is the public subnet's IPv4 CIDR. A subnet is a way of dividing our VPC into different parts, which is quite useful if you want to separate some machines or services from others. The process is the same as for the VPC, but the subnet's CIDR must fall inside the VPC's range: we pick one of our valid addresses and assign it to the subnet. Our VPC is 10.1.0.0/16, which means we can play with the last 16 bits, so let's choose 10.1.1.0 as our subnet address and give it a prefix that is longer than the VPC's /16 (since the subnet lives inside the VPC, it can't be bigger!). I decided on a /24 mask, which gives us 256 addresses.

Finally, choose whatever name suits you for the VPC and hit next.

If all goes well you should see the following message:

Let’s go back to the homepage of AWS VPC. Click on “VPC” to see the available VPCs.

You should see our newly created VPC here:

That's the main part of the VPC setup. We will still need to configure a few other things, but we will leave that for later as we move along.
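
If you prefer scripting over clicking through the wizard, here is a minimal sketch of the same setup using boto3, the AWS SDK for Python. It is an illustration rather than an exact replica of the wizard; the region and the "lab-vpc" name tag are assumptions of mine.

```python
import boto3

# Assumed region; use whichever region you are working in.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create the VPC with the 10.1.0.0/16 CIDR block we settled on.
vpc_id = ec2.create_vpc(CidrBlock="10.1.0.0/16")["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id],
                Tags=[{"Key": "Name", "Value": "lab-vpc"}])  # hypothetical name

# 2. Create the public subnet (10.1.1.0/24) inside that VPC.
subnet_id = ec2.create_subnet(VpcId=vpc_id,
                              CidrBlock="10.1.1.0/24")["Subnet"]["SubnetId"]

print(vpc_id, subnet_id)
```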

Webserver

Now let's run our web server. For this, we will use an Ubuntu server running on EC2, on top of which we are going to install our (deliberately vulnerable) web application. So let's start!

Open the AWS console, go to the "Compute" section, and click on EC2:

On the next page click on the “Launch Instance” button.

Now AWS will ask you to choose an AMI for your new EC2 instance. AMI stands for Amazon Machine Image, which is basically an artifact that contains the operating system files, packages, and everything else needed to run the system. There is a wide list of AMIs to choose from; some are free while others are paid, and you can even create your own AMIs in the future.

We will go with an Ubuntu machine; basically, any Linux machine would do.

Make sure you choose a 64-bit(x86) AMI since the ARM AMI will only run on EC2 machines powered by AWS’s own ARM-based Graviton processors (The A1 family of EC2).

Now it's time to choose the type of EC2 instance that we will use to run our server. This is similar to shopping around for a physical machine: there are multiple types and categories of EC2 machines as well as multiple generations, each characterized by its own performance and price. The three main parameters to look at are:

  • CPU: The number of virtual cores you will have. Some machines are burstable, meaning you can run the CPU at full load only for a limited period each hour, while most machines let you fully utilize their CPU performance as you wish. Obviously the cost varies from one model to another, as do the use cases; for workloads that are sporadic and infrequent, it makes sense to use a T2 or T3 instance. You can, and should, read more about the different EC2 instance types here.
  • RAM: This is the amount of memory available to your machine. You would be amazed by how much memory can be assigned to a single EC2 instance, as some go up to 4 terabytes of RAM! This is what makes AWS powerful: you are only limited by your creativity.
  • Networking: This parameter determines how fast the EC2 instance's connection to the internet is. It ranges from low, moderate, and high up to 100 Gigabit/s.

Now for our web server let's choose something simple. I will go with a small machine with 1 GB of RAM and 1 vCPU, because I am planning on running some attacks that are more effective against less powerful machines.

The EC2 instance type that we need is t2.micro, so select it and hit next.

In the next step we configure our instance: choose the VPC we created earlier, select the public subnet, leave the rest as is, and hit next.

Now we will configure the storage of our machine. We need to specify two essential parameters:

  • The size: You need to set the amount of storage allocated to your machine. Remember, you will be paying for this storage, so don't go crazy! Assign a reasonable amount depending on what you expect to need. The cool thing is that you can always expand the storage of your EC2 instance at a later stage, so don't worry about running out of space.
  • The type of storage medium: AWS offers two main categories of EBS storage: SSD-backed storage for workloads that depend on IOPS (input/output operations per second) and HDD-backed storage for throughput-intensive workloads, where performance depends primarily on MB/s.

Let's leave the default type, which is an SSD, allocate 30 GB to our machine, and move to the next step.

The next step is adding tags to our instance. This is optional, but a really good practice to get used to; it will save you time further down the line and help you get things done faster. So let's add a simple tag, a Name tag, which will help us easily identify our instance when we are dealing with multiple ones. Click on the "Add new tag" button and give it a key and a value; I chose "Name" as the key and "webserver" as the value.
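
For completeness, here is roughly what the same launch looks like through boto3. The AMI ID is a placeholder (you would look up the current 64-bit x86 Ubuntu AMI for your region), and the subnet ID is the public subnet from the VPC we created earlier.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: current Ubuntu 64-bit (x86) AMI for your region
    InstanceType="t2.micro",           # 1 vCPU, 1 GB of RAM
    MinCount=1,
    MaxCount=1,
    KeyName="webserver",               # the key pair we create at launch time (see below)
    SubnetId="subnet-xxxxxxxx",        # placeholder: the public subnet of our VPC
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sda1",     # root volume on Ubuntu AMIs
        "Ebs": {"VolumeSize": 30, "VolumeType": "gp2"},  # 30 GB general-purpose SSD
    }],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "webserver"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```

In the console, the security group from the next step is attached at launch time as well; with boto3 you would pass its ID through the SecurityGroupIds parameter.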

The next step is a bit more delicate than the others: we are going to configure the security groups associated with our EC2 machine. Security groups are like a firewall, but at the instance level rather than the network level. They let us define which ports and services can be reached on our EC2 instance, and which external IPs are allowed to reach them. When setting up a security group we need to define rules for both inbound and outbound traffic. One major idea that is very useful when dealing with cloud security, or any kind of information system security, is that of least-privilege permissions. In essence, each user or entity should be given only the minimal set of permissions needed to perform its intended role correctly. It's acceptable for an actor to lack a permission it needs, because you can easily add it to make things work; but give an entity extra permissions and things can go south very rapidly.

So what permissions does our instance need? Well, let's start with outbound traffic: we can let our machine send requests to any destination it wishes, because we are more worried about traffic coming from the outside than traffic leaving our network (this is very sloppy, and you should be more careful when working with production servers). For our inbound traffic, we need two services:

  • HTTP: Our server is a web server, so we need to allow distant machines to send it HTTP requests over port 80 using the TCP protocol.
  • SSH: Secure Shell is a cryptographic network protocol that lets a user access distant machines and control them remotely. We need it to connect to our machine and set up our web application. SSH uses TCP and runs on port 22.

When you open the security group configuration page you will see that AWS has already added an SSH rule. We do, however, need to fill in the source field, which tells AWS which IPs are allowed to connect using SSH. You can specify a single IP, a range of IPs using CIDR notation, or tell AWS to use your current IP address as the source. Let's choose the latter option, as we are the only ones who should be able to SSH into this machine.

Now let's add a new rule for HTTP. This is pretty straightforward: click on "Add rule", click on "Type", and select "HTTP" from the drop-down menu.

If we were configuring a real-world web server, we would allow access to the HTTP service from all IPs. Here, however, we are running a vulnerable web application, and trust me, you don't want to find out the hard way that hackers are constantly scanning the internet for whatever breaches they can find. So we will also limit HTTP access to our current IP only.
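
For reference, the same two inbound rules expressed with boto3 might look like the sketch below; the group name, description, and the 203.0.113.10 address are placeholders (use your own public IP).

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

my_ip = "203.0.113.10/32"   # placeholder: your current public IP in CIDR notation

sg_id = ec2.create_security_group(
    GroupName="webserver-sg",                       # hypothetical name
    Description="SSH and HTTP from my IP only",
    VpcId="vpc-xxxxxxxx",                           # placeholder: our VPC's ID
)["GroupId"]

# Least privilege: only SSH (22) and HTTP (80), and only from our own IP.
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": my_ip}]},
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": my_ip}]},
    ],
)
```

Outbound traffic is left at the default (allow all), matching what we decided above.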

The end configuration should look like this:

Now we are finally finished! AWS will display a final review before launching our instance:

If everything seems in order then hit launch.

Now AWS will ask you to choose an already existing key or create a new one. This key is actually a key pair generated by AWS: they keep the public key and give you the private key to download and save. The private key is used by SSH to connect to the distant EC2 machine, so if you lose it you lose your access; keep it in a safe place! Let's choose to create a new key pair and give it the name "webserver". Click on download to save the private key.

You should have a “webserver.pem” file. Now you can hit the “Launch Instances” button.
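
If you ever need to recreate such a key pair outside the console, boto3 offers the same mechanism; here is a small sketch (the key name simply mirrors the one we just created):

```python
import os
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# AWS keeps the public key and returns the private key exactly once.
key = ec2.create_key_pair(KeyName="webserver")

with open("webserver.pem", "w") as f:
    f.write(key["KeyMaterial"])

# SSH clients refuse private keys that are readable by other users.
os.chmod("webserver.pem", 0o400)
```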

Success! We have launched our web server, and we can now go back to our EC2 console and check on our instance.

We have one running instance, so that's a good sign. Let's click on "Running instances" to see further details.

We see that our instance is running. If you look closely, you can see that our machine has a private IP of 10.1.1.91. That's a valid IP inside the subnet we created, so AWS assigned it automatically.

Now let's start the configuration of our machine. We need to connect to the instance via SSH; for that, you will need to download and set up PuTTY if you are using a Windows machine like me, otherwise you can just use the ssh command on Linux (you can take a look here for more info).

Start by downloading Putty SSH client from here and install it on your local machine.

Now that we have PuTTY installed, we need to convert the private key we downloaded from AWS to a format that PuTTY can understand. Luckily, installing PuTTY also installs PuTTYgen, a tool that lets you convert keys between different formats.

So let’s fire up Puttygen and load the key that we downloaded earlier.

Make sure you tell Puttygen to show all file types.

Once you have loaded the key the following window will show up:

So let's follow these instructions. Click on "Save private key" and PuTTYgen will ask if you wish to proceed without a passphrase. A passphrase is an additional layer of security that you can add on top of your key: even if someone gets hold of the key file, it is useless without the passphrase. So go ahead, add a passphrase of your choosing, and remember it. Save the file and remember its location; we will need it in the next step.

Now open PuTTY. There are a lot of options we can play with, but the most basic one is the host name or IP address: here we tell PuTTY the IP address of the machine we are trying to connect to. To get that address we need to go back to the EC2 console and copy it.

At this point, you should notice that our machine's IP address, 10.1.1.91, is a private address, only reachable from within the same network (the public subnet of our VPC), so it is of no use to us from the outside. Our machine needs a way to communicate with the internet.

AWS has a solution in place called the Internet Gateway (you can find out more about it here). Basically, an internet gateway is a redundant and highly available VPC component that allows communication between instances in your VPC and the internet. It's not a single physical box that could become a single point of failure, but an abstraction over a redundant architecture. So let's go ahead and add an IGW to our VPC.

The good surprise is that if you create your VPC using the first or second option in the VPC wizard, AWS automatically adds an IGW to the VPC.

The next step is to make sure that the route table of our public subnet has a route to the internet gateway. A route table is a list of destination addresses and where to forward packets headed to them. To see the route table of our subnet, go to the VPC dashboard, select "Subnets", and from there select the public subnet associated with our VPC.

As you can see, our route table has two entries:

  • Destination 10.1.0.0/16: Target local: This is the route for local traffic inside of our VPC that should remain local.
  • Destination 0.0.0.0/0: Target igw-030057e4227707537: 0.0.0.0/0 means all traffic, since this CIDR matches every possible IPv4 address; as such, all traffic that isn't destined for the local VPC will be forwarded to the internet gateway.
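
If you ever had to wire this up yourself (for example, in a VPC created without the wizard), the boto3 equivalent would look roughly like this; the IDs are placeholders carried over from the earlier sketches:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
vpc_id, subnet_id = "vpc-xxxxxxxx", "subnet-xxxxxxxx"  # placeholders from earlier

# Create an internet gateway and attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Send all non-local traffic (0.0.0.0/0) to the internet gateway,
# then associate the route table with our public subnet.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```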

So our route table is in order. The next step is to make sure our EC2 instance has a public IPv4 address associated with it. Let's go back to the VPC console and choose "Elastic IPs".

Click on “Allocate new address”

In scope choose VPC and select Amazon pool for the address pool and hit Allocate.

Now you will be given a new IPv4 address that we can use.

Let's go back to the Elastic IPs dashboard; you will see that the new address has been added. Right-click it and choose "Associate address".

For resource type choose "Instance", select our web server instance, select its private IP, and click on "Associate".

Success! Now let's go back to our EC2 instances dashboard to verify that the IP address has been correctly associated with our instance.

Perfect! Everything is working as intended.
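
The same allocation and association can also be scripted; here is a hedged boto3 sketch (the instance ID is a placeholder for our web server's ID):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Allocate a new Elastic IP from Amazon's pool, scoped to the VPC.
allocation = ec2.allocate_address(Domain="vpc")
print(allocation["PublicIp"])

# Associate it with our web server instance (placeholder ID).
ec2.associate_address(AllocationId=allocation["AllocationId"],
                      InstanceId="i-xxxxxxxxxxxxxxxxx")
```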

Copy that IP and go back to Putty to start our SSH session.

In the IP address field insert the following: ubuntu@ip-address

Now let's select our key file: in the "Connection" category expand "SSH", select "Auth", and click on "Browse" to pick the key we saved with PuTTYgen.

Now click on "Open". Voilà! We are now connected to our remote web server using SSH.
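
If you are on Linux or macOS, or simply prefer scripting, here is a small sketch of the same connection using the paramiko Python library and the original .pem key; the Elastic IP below is a placeholder.

```python
import paramiko

ELASTIC_IP = "203.0.113.25"      # placeholder: the Elastic IP of our instance

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a lab, not for production
client.connect(hostname=ELASTIC_IP, username="ubuntu",
               key_filename="webserver.pem")                  # the key downloaded from AWS

stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```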

It's time to install our vulnerable web application. I chose DVWA, short for Damn Vulnerable Web Application; you can download it from here and follow the installation instructions in the following YouTube video.

The setup process isn't a short or easy one, but if you stick with the video and look up the errors on Google, you will get things running in the end. If you find yourself stuck, feel free to ask for help.

Once everything is running, just type the IP address of our EC2 instance into your browser, add "/dvwa" at the end, and you will be greeted with the interface:

Our web app is now online!

We are done with a big part of the work and now we will move on to the setup and configuration of the rest of our infrastructure in the next parts of this series.

If you have made it this far, I salute your passion and effort, and I hope you enjoyed reading this post. If you have any suggestions, feel free to write them down.



Chalbi Mohamed Amine

An ex-medical student turned computer science & engineering student with a passion for all things complicated and weird!