How I saved $9000 in hosting on AWS — how to set up Laravel on AWS
This article will tell you how we at Harbours.io managed to save $9000 in monthly costs on Amazon Web Services. It is not meant as a tutorial but as a detailed guide on what to look out for, which obstacles we faced and how we overcame them.
I might do a tutorial on the entire CI/CD pipeline on YouTube in the future. But I think this article is more valuable for learning what to look out for on AWS, because it is a continuously evolving platform.
Background
At Harbours.io, we have opted for a microservices infrastructure running the latest version of Laravel. Each of these microservices has its own repository in our company’s GitHub organisation.
We have a couple of frontend applications, such as a support page, a landing page and the web admin dashboard. These are packed into a monorepo in the same GitHub organisation. All these applications use Next.js.
We picked Laravel and Next.js because of their proven stability and large communities. We have a lot of experience with these technologies and believe we won’t have to switch stacks for a couple of years. We don’t foresee enormous data throughput or time-sensitive data in the near future. If that need ever arises, we would probably build a separate microservice in a faster language like Go.
Where should we deploy?
We started setting up the infrastructure in late December 2021. We first had to decide how we would deploy our services. We already had a bit of experience managing servers on-premise, but we wanted to explore cloud solutions. One reason for this was that we didn’t have to invest in expensive servers and connections when we were just starting up. Another reason was that we didn’t have a building to store our servers safely.
We opted for AWS because it has become one of the largest and most flexible platforms for hosting services. Its pricing is very transparent, and we could use the free tier for a couple of months while we developed our minimum viable product.
In hindsight, I’m still very pleased with AWS, and I wouldn’t pick another platform if I had the chance to do it over. The amount of services we have deployed and the compute time we have used in the free tier is very generous. That said, the documentation could be better in some places, and support for frontend applications (AWS Amplify) still lags behind similar services such as Vercel.
If you’re also deciding on your cloud platform, make sure to check out the startup program that most providers offer. Amazon, Azure and Google Cloud all offer similar programs for startups where you can get a certain amount of credits for free to get you started.
Deployment one
We had zero knowledge about AWS or any other cloud platform. But during our comparison of AWS, Azure and Google Cloud, we came across something called EC2. The official description for EC2 is “Secure and resizable compute capacity for virtually any workload”. That still didn’t tell us much, but after a bit of searching around, we found out that EC2 instances are essentially virtual machines.
From our experience working on servers stored on-premise, we started to deploy our application the same way we were used to by logging into the EC2 instance and uploading our code.
Step one was creating a free EC2 instance. Depending on the region, there are different free instances. We selected the region eu-west-2, which is the region for London, and thus our free instance is a t2.micro. The free tier covers either a t2.micro or a t3.micro. The specs of this server are one vCPU, 1GiB of RAM and “low network performance”. Other specs can be found at https://aws.amazon.com/ec2/instance-types/. The specs are nothing spectacular, but at this early stage of our development that doesn’t matter, because there’s no traffic and there are no production clients yet. We can always upgrade the server later through AWS.
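If you prefer the command line over the console, spinning up such an instance is a single AWS CLI call. Here’s a minimal sketch; the AMI ID and key pair name are placeholders you would replace with your own:
# Minimal sketch: launch a free-tier instance (AMI ID and key pair
# name are placeholders; key pairs are covered two paragraphs below)
aws ec2 run-instances \
  --region eu-west-2 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name harbours-deploy \
  --count 1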
Once you create your instance, you get an overview of all your instances. When you click on your newly created instance, you get a dashboard with all the information about the VM.
The first weird questions I encountered were "How do I connect to this server?!" and "How do I upload my application?". Well, on AWS, it works a bit differently. Suppose you want to connect via SSH (which I recommend). You first have to add your public SSH key to the Key Pairs store in AWS, which you can find in the side menu on the EC2 page.
You would think to just set up something like GitHub Actions for CI/CD and run everything off that server, right? Well, you can do that, but I wouldn’t recommend it for production use. When an EC2 instance gets deleted or terminated, you lose all its data, including the storage on the server itself and its configuration. This can be very dangerous if you don’t have a backup routine or when you have to manage a lot of data like documents or profile images.
It became very obvious to us that we were missing the essence of AWS: EC2 instances are meant to be treated as disposable machines that are created and destroyed dynamically. How do we achieve such a thing?
Deployment two
Then we heard of Elastic Beanstalk, which is meant for deploying applications through a scalable interface. The official description puts it more eloquently: “AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS”.
This immediately felt like the solution we needed. We wanted to deploy services developed in PHP, and we needed a scalable infrastructure. After looking on the internet for documentation, I followed this YouTube tutorial to deploy our first service: https://www.youtube.com/watch?v=KtpiF3SUCkA.
I’m not going to write out in detail what that tutorial covers. Instead, I will take a few things that I’ve learned from it and improve them. A couple of things in the YouTube video are wrong by now, because AWS has updated or removed features that were previously accessible. We will use these improved sections and apply them in our new environment.
Having said that, this tutorial was a very useful resource for learning how AWS works. If you want to follow the YouTube tutorial, go ahead. But keep in mind that there are some errors in it.
During the second deployment attempt, we deployed everything through Elastic Beanstalk. Everything worked perfectly: the migrations ran, the daily backups backed everything up at 4 AM, and everything was load balanced. The problem was the underlying architecture that Elastic Beanstalk uses. By default, EB creates a new EC2 instance and a new Application Load Balancer for every application you set up. You get ~720 hours of free usage for EC2 instances and 720 hours of free usage for the load balancer. Note that the free tier doesn’t stack! If you run multiple EC2 instances, you still only get 720 hours of free usage in total; the rest you have to pay for.
Our mistake was that we set up 3 Elastic Beanstalk services. Elastic Beanstalk created 3 EC2 instances and 3 Application Load Balancers, which don’t fall in the free tier. That by itself is not so bad; it is expected that we should pay for the services we use. But we wanted to optimise our costs, which didn’t work with that strategy. To make matters worse, once the load balancer kicked into action and another EC2 instance was spun up, we got a notification alerting us that our monthly forecasted amount would be $8910.47.
At around midnight, our domain got hit by a very common vulnerability scanner that bombarded our servers with around 60 requests per minute. That tripped the auto-scaling behind our Elastic Load Balancer, and another EC2 instance was spun up. AWS took the price of this action and projected that cost linearly over the whole month, which is why we got the notification. In reality, we only had to pay around $30, because the extra instance only ran for a couple of minutes before everything scaled down again. But I’ll tell you, getting that notification is a quick way to get out of bed in the morning.
Deployment 3
Needless to say, we wanted another method of deploying our services. After a very long search, we found a better strategy. This strategy uses Docker and can utilise the full capacity of an EC2 instance. The AWS service we use is called the Amazon Elastic Container Service. Finding information about this setup is a bit obscured by AWS, because they push a service called Fargate, which automates and manages everything we are doing manually. That is really great, but it is costly. There is also no free tier for Fargate, which would have meant paying for a year for capacity that wasn’t actually used.
We ended up setting everything up in ECS (Elastic Container Service) using the EC2 launch type. In this service, you define tasks. These tasks create services that run on your preferred launch type. This sounds very confusing, but it becomes clearer once you work with it.
A task defines what service you would like to run. For example, say you want a translation server that your microservices can call upon when they need something translated, and this translation server requires a Redis cache for commonly queried translation strings. When you define a task in ECS, you specify the resources and the containers. In the container section, you would define the image for your translation server and the image for the Redis server.
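To make that concrete, here is a minimal sketch of such a task definition registered through the AWS CLI. All names, memory sizes and the ECR image URL are hypothetical placeholders; the hostPort of 0 requests the dynamic port mapping discussed further below:
# Hypothetical task definition: translation server plus its Redis cache
cat > translations-task.json <<'EOF'
{
  "family": "translations",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "translation-server",
      "image": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/translations:latest",
      "memory": 256,
      "essential": true,
      "links": ["redis"],
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    },
    {
      "name": "redis",
      "image": "redis:alpine",
      "memory": 128,
      "essential": false
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://translations-task.json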
Then you have to create a service in an ECS cluster. A cluster is comparable to a cluster in Kubernetes. We called our cluster Harbours.io (the name of our company), and we have a service called translations-production that uses the translation task. In the service, we can also specify when it needs to scale up or down and which load balancers can route traffic to it.
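Creating that service from the CLI could then look something like this sketch; the target group ARN is a placeholder (target groups are covered in the load balancer section below):
# Placeholder ARN; see the Route 53 and load balancers section below
TARGET_GROUP_ARN="arn:aws:elasticloadbalancing:eu-west-2:123456789012:targetgroup/translations/abc123"
aws ecs create-service \
  --cluster Harbours.io \
  --service-name translations-production \
  --task-definition translations \
  --desired-count 1 \
  --launch-type EC2 \
  --load-balancers "targetGroupArn=$TARGET_GROUP_ARN,containerName=translation-server,containerPort=80"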
Once the service starts, each container gets a random host port assigned, which is then automatically mapped by the Elastic Load Balancer’s target group so it is accessible via port 80 (or any port you desire). That’s pretty much all there is to it! Our services are now accessible over the internet, and almost everything is still inside the free tier. Our current forecasted amount is around $0.50, which goes to Route 53 (DNS).
I believe this strategy is here to stay because everything is deployed as Docker containers. This gives us the flexibility to switch cloud providers: if AWS doesn’t do the trick for us anymore, we can move our codebase to Azure or Google Cloud and redeploy our services with little effort. After that, we just have to invest a bit of time to get our fully automated CI/CD back.
Tips & tricks on AWS
These tips don’t only apply to ECS but also to other services on AWS. I’ve stumbled across many weird bugs in AWS and the outdated tutorials that come along with an evolving platform. That’s why I want to summarise what I’ve learned that may be helpful to other developers in the future.
Accessing EC2 instances
One of the first things I wanted to do when I launched an EC2 instance was connect to it via SSH. Sadly, this took me around 5 hours to figure out. First off, you need to create your SSH key pair in the “Key Pairs” section before creating your EC2 instance: in EC2 -> Network & Security -> Key Pairs, add your public SSH key. When you create your instance, you can then select that key pair. If you use Elastic Beanstalk, you can add your key pair even while the environment already exists; you just have to add it under Configuration -> Security -> EC2 key pair.
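From the command line, importing an existing public key and connecting looks like this sketch; the key name, key path and hostname are placeholders, and the login user depends on the AMI (for example, ec2-user on Amazon Linux):
# Upload an existing public key to the Key Pairs store
aws ec2 import-key-pair \
  --key-name harbours-deploy \
  --public-key-material fileb://~/.ssh/id_ed25519.pub

# Connect with the matching private key (user depends on the AMI)
ssh -i ~/.ssh/id_ed25519 ec2-user@ec2-12-34-56-78.eu-west-2.compute.amazonaws.com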
CodePipeline and GitHub connection
Whilst building our CodePipeline for our core server, I created a GitHub connection that had access to our core server repository. When I created the CodePipeline for our translation server, I created another GitHub connection with access to our translation server repository. After creating the translation server pipeline, our core server pipeline lost connection to GitHub.
It turns out you can only have one connection to your GitHub repositories (these are separated per organisation and per user, of course). Make sure to name your connection accordingly, for example harbours-io-github-connection. This one connection has access to all the repositories that need to be deployed and is used by all the pipelines.
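For reference, such a connection can also be created from the CLI. Note that it starts out in a pending state; you still have to complete the GitHub authorisation in the AWS console before a pipeline can use it:
aws codestar-connections create-connection \
  --provider-type GitHub \
  --connection-name harbours-io-github-connection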
The Free Tier
Make sure to check the free tier offers and their expiration dates. Some services only have a one-month or one-year free tier. Others have 720 hours of free usage that reset every month. If you have 720 hours of free usage, for example for EC2, the hours of multiple EC2 instances add up, but the free tier doesn’t. Running 3 EC2 instances results in 2160 hours per month, of which only 720 are free; the remaining 1440 hours are billed.
Note that creating multiple Elastic Beanstalk environments will create multiple load balancers and multiple EC2 instances. Therefore, I strongly recommend running ECS with the EC2 launch type.
Route 53 and load balancers
We have several microservices under the harbours.io domain, for example a.harbours.io, b.harbours.io and c.harbours.io. All these services run on the ECS cluster and are hosted on a single EC2 instance for the time being. Through dynamic Docker port mapping, we can route the different subdomains to the correct container. For this, you point all the domains in Route 53 to one Application Load Balancer and configure a couple of target groups: a-target-group, b-target-group and c-target-group.
Then, in the load balancer’s listener configuration, you add rules for a.yourcompany.com, b.yourcompany.com and c.yourcompany.com and point them to their corresponding target groups.
After that, you can create your services in ECS (note that you cannot edit the load balancer settings of an existing service), select the load balancer and select the corresponding target group.
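A sketch of one such target group and host-based listener rule with the AWS CLI; the VPC ID, listener ARN, target group ARN and rule priority are placeholders:
# Create the target group for service "a" (VPC ID is a placeholder)
aws elbv2 create-target-group \
  --name a-target-group \
  --protocol HTTP --port 80 \
  --target-type instance \
  --vpc-id vpc-0123456789abcdef0

# Forward a.harbours.io to that target group on the shared listener
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions Field=host-header,Values=a.harbours.io \
  --actions Type=forward,TargetGroupArn="$A_TARGET_GROUP_ARN"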
SSL and TLS
When creating your certificate in AWS Certificate Manager, think ahead about coming subdomains. You can add a wildcard subdomain like *.harbours.io. This will cover a.harbours.io, b.harbours.io and so on, but it won’t cover third-level subdomains like a.a.harbours.io or, a more real-world example, api.demo.harbours.io. To cover third-level subdomains, you cannot simply use *.*.harbours.io; you have to specify them like *.demo.harbours.io.
You cannot edit a certificate after it is created. When you want to add a new subdomain, you must create another certificate and add that to the load balancer. This adds complexity, which can be avoided in most cases by thinking a bit ahead.
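Requesting a certificate that covers the second- and third-level wildcards up front could look like this, following the examples above:
aws acm request-certificate \
  --domain-name harbours.io \
  --subject-alternative-names "*.harbours.io" "*.demo.harbours.io" \
  --validation-method DNS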
Automated Backups
You essentially have two options for automated backups in AWS: either you do it yourself, or you use the built-in backup features of AWS. We decided to implement it ourselves because the pricing is relatively steep compared to their other services on the platform. AWS handles backups through snapshots that are stored daily for a specified amount of time. The snapshots are, of course, snapshots of the whole underlying EC2 instance. I had my doubts about the cost of restoring a snapshot and of parking the snapshots on the platform.
We chose Spatie Backup to automate our backups. You can set up notifications that alert you when something goes wrong with a backup or when a backup was successfully created. You can also configure a detailed yet straightforward backup strategy.
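To give an idea of the package, these are the main artisan commands Spatie Backup ships with; they can be run manually or via the Laravel scheduler:
php artisan backup:run       # create a new backup of files and database
php artisan backup:clean     # prune old backups according to your strategy
php artisan backup:list     # show an overview of all backups per destination
php artisan backup:monitor   # check backup health and fire notifications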
Cronjobs
This isn’t an AWS tip but more of a Docker tip. We run a PHP Alpine image for the smallest possible image builds (we have to pay per GB stored on AWS, and when working with multiple images, every MB counts). I couldn’t find a command that would activate the Laravel cron jobs, so I’ll provide it. Inside the Dockerfile, you can enter these lines:
RUN echo "* * * * * php /var/www/html/artisan schedule:run" >> /etc/crontabs/root
ENTRYPOINT ["/bin/sh", "-c", "/var/www/html/docker/startup.sh"]
The startup.sh script contains the following lines of code:
crond -f &
cd /var/www/html && php artisan serve --host=0.0.0.0 --port=80
This runs the crontab and our Laravel server when the container spins up.
Migrations
There is still a lot of debate around when and where you should run migrations. We decided to run them in our Dockerfile, because our CodePipeline in AWS builds our image and stores it in the Elastic Container Registry. This means the Docker build is a one-time job per deployment version.
The only problem with this is that when an image build or another part of the pipeline fails, the migrations will already have run. This can cause the existing services to become unstable or unusable because of the database changes. To reduce this risk, the migration command sits at the very bottom of our Dockerfile, but only time will tell if this strategy was the right choice.
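Sketched out, the tail end of such a Dockerfile might look like this, assuming the database credentials are available at build time (see the environment variables section below):
# ...earlier build steps: dependencies, assets, configuration...
# Run migrations as the very last build step, so any earlier failure
# aborts the build before the database is touched. --force suppresses
# the confirmation prompt in production.
RUN php artisan migrate --force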
Another method I’ve seen is to run the migrations as part of the container’s startup script. The issue I see with that is that it attempts to migrate the database every time a new container starts up, and I’d rather not have that unwanted traffic going to the database. That’s why we decided to run the migrations during the pipeline phase.
Environment variables
During our previous deployments on Elastic Beanstalk, we entered the variables manually in each environment. This became very messy very quickly because many services looked alike (such as production vs staging environments). Also, every time we edited environment variables, the Elastic Beanstalk environment had to refresh, and it would be missing some files that were uploaded during commands executed in the .ebextensions folder.
When we switched to ECS, we uploaded our environment files to S3 buckets to which only two people have access. During the build phase of our CodePipeline, the environment files and OAuth secrets get copied from the S3 buckets into the server code. This connection is secure by default: the copying is done in CodeBuild, which is connected to our S3 bucket via our VPC inside AWS.
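In the build phase, this boils down to a copy command along these lines; the bucket name and paths are hypothetical:
# Pull the environment file for this service out of the restricted bucket
aws s3 cp s3://harbours-io-secrets/core-server/.env.production .env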
I think this strategy is more beneficial because editing our environment files has become very easy whilst upholding solid security on the S3 bucket.
AWS Activate
Most cloud service providers have a program focused on getting startups going on their platform by providing free credits if you are eligible. On AWS, this is $1,000 for tiny startups or $100,000 for larger startups that already have external funding. I know Azure and Google Cloud provide similar programs, so be sure to check those out as well if you’re looking into those platforms.
General tips
If you can, invest the time to learn Docker. Using Docker, you can get the most out of your servers, and when every server costs you money, that becomes really valuable.
Look into log aggregation tools such as CloudWatch. These services collect your logs and display them in a central control panel, so you don’t have to log into every server to check the logs.
Have sound error notification systems. We have a Lambda function set up for every pipeline that notifies us through Discord when a deployment fails. We also have automated systems inside our Laravel codebase that notify us of 500 internal server errors, with valuable metadata that helps us track down bugs. Even the backup system notifies us daily about successful or failed backups.
Don’t think of anything EC2-related as persistent. When an EC2 instance gets terminated, whether manually or by automated systems such as Elastic Beanstalk, all its data is lost.
Conclusion
AWS is an excellent platform for hosting your services. If you keep an eye on your monthly costs and where the money goes, you can find ways to minimise them. Right now, during our development stage, we only pay around $0.50 per month, and we even have free startup funding provided by AWS.
One downside to the AWS platform is its enormous size and the sheer number of services. I’ve even seen cases where their services cannibalise each other. There are a lot of tutorials on YouTube, but I haven’t seen any good, complete CI/CD tutorials for ECS (and believe me, I’ve looked hard for them).
Once everything is working, it works very well. You have many options to create every kind of server or function you need to operate, monitor or even notify your services.
I’m thinking about doing a full tutorial on the strategy we implemented using ECS in the free tier with automatic migrations, continuous deployment, automatic backups, emailing, CloudWatch, SSL… So please let me know if you’re interested!