How to scale Laravel on AWS with very low server costs

Shpëtim Islami · Published in Laravel advanced · 4 min read · Mar 6, 2020

I would like to start by explaining why you would want to do this; the following sums it up:

  • We want to pay less in server costs
  • We want to be prepared to serve a huge number of requests automatically, without downtime
  • We want a centralized place to store our access and error logs
  • More specifically, since we are talking about Laravel, we want our queue workers to handle more jobs at once.

For what a scalable application is and how to write one, check out this article: rules to write scalable application.

Interested so far? OK then let’s start!

To go deeper into the goals mentioned above, I would like to explain how we can achieve them.

Q: How can we pay less and serve a lot of visitors at the same time?

A: The trick here is not the AWS prices per se but the way we will build our application. Luckily for us, we pay only for what we use, which means that if we have one server running for the whole month and another server running for 2 hours on "Black Friday", we pay for just those 2 hours when we needed the extra server. The servers themselves don't have to be big monster machines with lots of CPU and RAM! The t2.micro instance type, with just 1 vCPU and 1 GB of RAM, will do just fine for most Laravel projects.

Q: Anything to keep in mind when building scalable applications?

A: Yes. The first thing is to avoid saving files in the local file system, especially user uploads, because the file lands on whichever server the user happens to be connected to at that moment. If you have more than one server running at the same time, the next request might be served by another server, and the user will not be able to see what they uploaded!
To avoid this, you can upload files to AWS S3 or use AWS EFS. I would go with the first option, because setting up EFS is not that easy and it is quite slow, but again, it depends on the use case.
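For example, here is a minimal sketch of the S3 approach using Laravel's Storage facade. The route and the "avatar" field are placeholders of mine, and it assumes an "s3" disk is configured in config/filesystems.php with your bucket and credentials:

```php
// routes/web.php — minimal upload sketch; assumes an "s3" disk is
// configured in config/filesystems.php.
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;
use Illuminate\Support\Facades\Storage;

Route::post('/upload', function (Request $request) {
    // Store the file on S3 instead of the local disk, so every
    // instance behind the load balancer sees the same file.
    $path = $request->file('avatar')->store('avatars', 's3');

    // Return a URL that works no matter which server handles
    // the user's next request.
    return Storage::disk('s3')->url($path);
});
```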

Let's get technical.

The first step is to create an AWS account and a Bitbucket account, if you haven't already!

There are going to be two environments, one for web requests and one for queue jobs, so that we do not process long-running queue jobs in the same place and have our server reject or delay requests because its resources are tied up:

Beanstalk web application

  • Create new Beanstalk application
  • Create new environment, choose Web server environment
  • Choose PHP Platform
  • Choose Sample application (for now)
  • Click on the button Configure more options (bottom right)
  • Choose High availability (using Spot and On-Demand instances)
  • Click Modify under the Software section, and check the box to Enable Log Streaming. This streams the access and error logs to CloudWatch, where you can later group and query them, making our life easier when the application is in production.

That is all for the sample application; click Create environment.
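One extra step for when you later replace the sample with Laravel (my assumption, not part of the sample setup): the Beanstalk PHP platform serves from the project root by default, so you will want an .ebextensions file that points the document root at Laravel's public/ directory:

```yaml
# .ebextensions/01-document-root.config
# Point the Beanstalk PHP platform at Laravel's public/ folder,
# so index.php is served instead of the project root.
option_settings:
  aws:elasticbeanstalk:container:php:phpini:
    document_root: /public
```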

Beanstalk worker application

  • Create new Beanstalk application
  • Create new environment, choose Worker environment
  • Choose PHP Platform
  • Choose Sample application (for now)
  • Click on the button Configure more options (bottom right)
  • Choose High availability (using Spot and On-Demand instances)
  • Click Modify under the Software section, and check the box to Enable Log Streaming. This streams the access and error logs to CloudWatch, where you can later group and query them, making our life easier when the application is in production.

That is all for the sample worker; click Create environment.
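A note on the Laravel side of this split, which is my own sketch rather than part of the steps above: the web environment can push jobs to the SQS queue that Beanstalk auto-creates for the worker environment, using Laravel's built-in sqs queue driver; on the worker itself, Beanstalk's daemon POSTs each queue message to your app over HTTP, which a package such as dusterio/laravel-aws-worker can translate into normal Laravel job handling. The connection below ships with Laravel; the prefix and queue name are placeholders:

```php
// config/queue.php (excerpt) — Laravel's stock SQS connection.
// Set QUEUE_CONNECTION=sqs in the web environment so dispatched
// jobs land in the worker environment's queue.
'sqs' => [
    'driver' => 'sqs',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'prefix' => env('SQS_PREFIX', 'https://sqs.eu-west-1.amazonaws.com/your-account-id'),
    'queue'  => env('SQS_QUEUE', 'your-worker-queue'),
    'region' => env('AWS_DEFAULT_REGION', 'eu-west-1'),
],
```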

RDS for database

I would recommend that you never manage the database on your own! It will give you a lot of headaches (depending on the size of the team).
So just go and create an RDS instance, which is managed by AWS, and you will not have to worry about availability, best practices, configuration, backups, monitoring, etc.

Follow the official documentation from AWS: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Tutorials.WebServerDB.CreateDBInstance.html
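Once the instance is up, point Laravel at it. A minimal sketch using Laravel's stock MySQL connection: the endpoint below is a placeholder, and in practice you would set the real values as environment properties under the Beanstalk environment's Configuration → Software page, so you never commit credentials:

```php
// config/database.php (excerpt) — Laravel's stock MySQL connection.
// The host default is a placeholder RDS endpoint; DB_HOST, DB_DATABASE,
// DB_USERNAME and DB_PASSWORD should come from the Beanstalk
// environment properties, not from a committed .env file.
'mysql' => [
    'driver'   => 'mysql',
    'host'     => env('DB_HOST', 'mydb.xxxxxxxxxx.eu-west-1.rds.amazonaws.com'),
    'port'     => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'laravel'),
    'username' => env('DB_USERNAME', 'admin'),
    'password' => env('DB_PASSWORD', ''),
],
```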

If you have followed all the steps above, we should by now have a sample application that can scale at low cost, with monitoring and centralized logs.
Now we want to integrate our existing or new Laravel application into it.
To make life easy, let's use Bitbucket Pipelines; the reason for choosing Bitbucket is that they offer more free build time. With Pipelines you get continuous integration and continuous delivery: you push your changes, the pipeline can run your tests, and then it deploys to production (the Beanstalk applications we created above).

Bitbucket pipeline for CI/CD

Create a file named bitbucket-pipelines.yml in the root of your Laravel application and use the sample below:
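This is a minimal sketch, assuming the atlassian/aws-elasticbeanstalk-deploy pipe; the region, application, environment, and bucket names are placeholders to replace with your own, and the AWS keys are expected as secured repository variables in Bitbucket:

```yaml
# bitbucket-pipelines.yml — minimal sketch; replace the placeholder
# names and set AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY as secured
# repository variables in Bitbucket.
image: php:7.3-cli

pipelines:
  branches:
    master:
      - step:
          name: Build and test
          caches:
            - composer
          script:
            - apt-get update && apt-get install -y git unzip zip
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer install --no-interaction --prefer-dist
            - vendor/bin/phpunit
            # Package the application for Beanstalk.
            - zip -r application.zip . -x "node_modules/*" ".git/*"
          artifacts:
            - application.zip
      - step:
          name: Deploy to web environment
          script:
            - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: "eu-west-1"
                APPLICATION_NAME: "my-laravel-app"
                ENVIRONMENT_NAME: "my-laravel-web"
                ZIP_FILE: "application.zip"
                S3_BUCKET: "my-laravel-deployments"
      - step:
          name: Deploy to worker environment
          script:
            - pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: "eu-west-1"
                APPLICATION_NAME: "my-laravel-app"
                ENVIRONMENT_NAME: "my-laravel-worker"
                ZIP_FILE: "application.zip"
                S3_BUCKET: "my-laravel-deployments"
```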

After replacing the placeholders in the sample YAML above with your own AWS settings, commit the file into your project and push it to Bitbucket.
This will trigger a pipeline, and when it is finished, it will deploy the same code to both your web and worker environments.

Shpëtim Islami

In love with web development stacks such as PHP & Python. Intensively using Laravel and MongoDB. Dad, footballer and reader. Pet project: https://bolter.app