Hosting a Laravel Application on AWS Lambda (Full Guide)

Ever since I heard about AWS Lambda I have been fascinated by the idea of running Laravel on it. I knew it was possible based on other projects and blog posts, but I had to try it for myself.

I posted my success on Twitter and received requests to publish a blog post. While I am not the first person to publish on the topic, I am hoping this is the most in-depth guide explaining how to set up Lambda, VPC, API Gateway, S3 and a cache to get Laravel working.

About Lambda

Lambda is an AWS service branded as Function-as-a-Service (FaaS), because we all need more aaS in our lives. It lets you run code as a function in response to events from other AWS services, for example an S3 file upload. Most interesting to me, you can listen to API Gateway events, enabling you to build a completely event-driven web service that runs one function call per HTTP request. No server needed!

How to Guide

I will try to break the process down as much as I can for someone who does not have much experience with AWS, covering as much detail as possible.

Just a note: make sure you are in the AWS region you want your code to run in. I am going to base mine in eu-west-2 (London); you will need to swap to us-east-1 for certificates.

1) Setting up your First Lambda

Navigate your way to the AWS Lambda service and create a new function; you want to author from scratch.

Name : laravel
Runtime: Node.js 6.10
Role: Create a Custom role

Once you click create a custom role you will be taken to the IAM management console.
IAM Role : Create a new IAM Role
Role Name : lambda_laravel_role

Press Allow

The window will close and take you back to the Lambda create form. Choose your newly created role.

Create Lambda Function

Once created you will see an inline code editor pre-filled with a simple default handler.

You’re now free to test the function to get familiar with how it works. When you click Test, AWS will ask you to provide a test event; to save having to change this in the future, use the API Gateway AWS Proxy template, give your event a name, and press Create. Then you can press the Test button. You should see some output on screen; you have just run your first Lambda function!
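For reference, the API Gateway AWS Proxy template produces an event shaped roughly like this (heavily trimmed, and the values are illustrative):

```javascript
// A cut-down API Gateway proxy event: the request method, path, headers
// and body arrive as plain JSON fields on the event object.
const testEvent = {
    resource: '/{proxy+}',
    path: '/path/to/resource',
    httpMethod: 'POST',
    headers: { 'Accept': 'text/html' },
    queryStringParameters: { foo: 'bar' },
    body: '{"test":"body"}',
    isBase64Encoded: false
};
```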

We are now ready to try something more advanced.


2) PHP CGI Binary

Lambda currently does not support PHP out of the box, however it does let you run arbitrary executables, paving the way to run PHP CGI. You just need to include a PHP binary as part of your function; this binary needs to be compiled against Amazon’s own Linux AMI.

The easiest way to compile the PHP CGI runtime for AWS is in a Docker container, then copy the binary out of the container.

If you want to skip this step, I have uploaded binaries for 7.2 and 7.1 to GitHub.

If you would like to compile your own (recommended), I’ve put together a build script.
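The gist of such a build is roughly this; treat it as a sketch rather than my exact script, since the base image tag, package list, PHP version and configure flags are all assumptions you should adjust:

```shell
# Compile php-cgi inside an Amazon Linux container, then copy it out.
docker run --name php-build amazonlinux /bin/bash -c ' \
    yum install -y gcc gcc-c++ make autoconf libxml2-devel openssl-devel tar gzip && \
    curl -sL https://www.php.net/distributions/php-7.2.0.tar.gz | tar -xz && \
    cd php-7.2.0 && \
    ./configure --enable-cgi --with-openssl --enable-mbstring && \
    make -j4'

# Copy the resulting binary out of the stopped container.
docker cp php-build:/php-7.2.0/sapi/cgi/php-cgi ./php-cgi
```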

Once finished you will have a php-cgi binary in your directory which you can copy for later. You will be unable to run it on your computer, as it was compiled to run on an Amazon Linux AMI.

Now we can glue this together with a Node.js function that spawns the PHP binary and returns its output. We need to expand the handler JavaScript and build a zip archive containing that code as well as our PHP binary.

To see this code, clone the repo and check out version1. Don’t forget to run npm install and add a PHP binary.

git clone
git checkout -b version1 origin/version1

Once you’re ready you can build your zip. I’ve excluded .git and .idea to make sure we don’t archive stuff we don’t need; this will make a zip in the parent directory. Note the "* .*": this is because we also need to include the hidden .env file. It’s also possible to provide your environment config directly in the Lambda console.

zip -r ../laravel.zip * .* -x "*.git*" -x "*.idea*"

Upload your zip and change the handler to handler.handler.

The handler is the module-name.export value in your function. For example, "index.handler" would call exports.handler in index.js.

3) AWS API Gateway

We need to set up the basics of the API Gateway; this is what hooks HTTP requests up to a Lambda.

Go to the API Gateway console and create a new API. You can call it whatever you want and decide if you want it to be regional or edge-optimised.

Once created you will have an empty API; you need to set up resources and methods. First create a method and select ANY. To set it up, select Lambda Function, set Use Lambda Proxy integration to true and find your Lambda. Save and grant permission.

Next, from the Actions button choose Deploy API, then set your stage name to prod. You will be given a URL. You will not see anything to start with, as our Lambda is not responding to HTTP requests correctly. We will fix that next.

4) Laravel

Now, with a basic understanding of Lambda, we need to hook it all together to get a real project running.

You can clone version2 if you want to skip the coding part:

git checkout -b version2 origin/version2

Once we have installed Laravel, we need to change our handler to send real HTTP requests and return HTTP responses in a way both API Gateway and Laravel understand.

A Lambda set up as a proxy needs to return a JSON object with a status code, headers and the body.

With this in mind, we change the handler to send the request to PHP CGI, process the response and parse it into that JSON structure.
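The shape API Gateway expects back from a proxy-integrated Lambda looks like this:

```javascript
// What API Gateway expects from a proxy-integrated Lambda:
// it maps these fields onto the real HTTP response.
const response = {
    statusCode: 200,
    headers: {
        'Content-Type': 'text/html; charset=utf-8'
    },
    body: '<h1>Hello from Laravel</h1>',
    isBase64Encoded: false
};
// In the handler you would finish with: callback(null, response);
```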

Upload and visit your stage URL. You should see that Laravel is alive, but not happy:

The stream or file “/var/task/storage/logs/laravel.log” could not be opened: failed to open stream: Read-only file system

A valuable lesson to be had: the file system your Lambda is extracted into and runs from is read-only. However, you do have access to 512 MB of writable space in /tmp. We will use this for views, cache and anything else we can’t use remote storage for. For the logs, however, we will set Laravel to use the errorlog driver.
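Depending on your Laravel version, that is either the log option in config/app.php or, on 5.6+, a single .env entry (the version here is my assumption):

```shell
# .env: route Laravel's logs through PHP's error_log (Laravel 5.6+ channel name)
LOG_CHANNEL=errorlog
```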


Our handler picks up anything written to stderr and console logs it. This gets picked up by Amazon and put in your CloudWatch logs.


The next issue we need to solve is sessions. Ideally we would use a permanent cache service to safely store sessions between page loads, as we can’t guarantee we will get the same ‘container’ running our function, but for now we will tell the session, cache and filesystem drivers to use /tmp. The easiest way I have found is to modify bootstrap/app.php.

You tell PHP to build the directories that are needed and then tell Laravel to change the storage path to /tmp/laravel. You do this before the application gets a chance to fully boot. I am tempted to make a new entry point file to avoid having this change apply to both local and production.
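A sketch of the bootstrap/app.php change, assuming a default Laravel layout (the directory list and path details are my choices rather than the repo’s exact code):

```php
// bootstrap/app.php, before the application is created.
// /tmp is the only writable path inside Lambda, so build the storage
// tree there and point Laravel at it.
$storagePath = '/tmp/laravel';
foreach (['app', 'framework/cache', 'framework/sessions', 'framework/views'] as $dir) {
    if (!is_dir($storagePath.'/'.$dir)) {
        mkdir($storagePath.'/'.$dir, 0755, true);
    }
}

$app = new Illuminate\Foundation\Application(
    realpath(__DIR__.'/../')
);

// Override the default storage path before the app fully boots.
$app->useStoragePath($storagePath);
```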

Again, upload to Amazon, and this time you should be greeted with a friendly, familiar page.

5) API Gateway Revisited

We only set up the root endpoint “/” before; we need to set up a wildcard for all other resources and methods. We also need to handle favicon.ico, robots.txt and anything else served from your root location.

For assets I will show two methods: using API Gateway to proxy to S3, or pointing your assets directly at CloudFront + S3.

Proxy routes

Go back into the API Gateway and find your API. From the Actions menu create a new child resource, tick the proxy resource option, and leave everything else up to Amazon.

As before, set up the link to your Lambda function. Once created, re-deploy your API. We are going to add some routes to our Laravel application so we can test; we will run php artisan make:auth as a good way to get everything ready for the next step.

At this point, having /prod in your testing URL is going to become a pain, as it will break all your routes, so we need to create a custom domain name. In the API Gateway side panel you can create Custom Domain Names. However, you need an ACM (SSL) certificate first; you must request this in US East (N. Virginia). I am not sure why, but if you forget, you can import it there. AWS certificates are free.

AWS Certificates

From the ACM certificate console (N. Virginia), request a certificate. Enter your domain names; for this I am going to use a wildcard, as we will need to make a separate subdomain for assets. You can add other names too, which is more useful for root domains with and without www. In the next step you need to validate that you control the domain; you will need to use Route 53 for DNS if you want to alias domain names together.

On step 4, make sure you look at the validation step. Using Route 53, you can just press the Create record in Route 53 button and let Amazon do this for you.

Depending on validation you may need to wait a little while for the certificate to be issued; for me this step was instant.

AWS API Gateway Custom Domain

Back in the AWS API Gateway, return to Custom Domain Names and press Create Custom Domain Name. Enter the domain name you want, choose Edge Optimised (unless you only want regional), and choose your created ACM certificate. Then set up the Base Path Mapping:

Destination: API Gateway name (Laravel)
Stage: prod (or whatever you called the main stage on your API)

Once you press Save, you are in for a bit of a wait while CloudFront initialises.

Once finished you can copy your target domain and go back into Route 53. Find your hosted zone and add a record set (an alias pointing at the target domain).

Once saved, and DNS has propagated, you can visit your new domain. Routes will be working, as it is serving from the root domain.

6) RDS, Cache and VPCs

At this point we have a functioning home page, but we don’t have a database connection or a cache server set up for sessions. While RDS can be connected to from outside of AWS, the cache and other key services are going to need to run in a VPC.

Go to the VPC console and create a new VPC using the wizard, choosing VPC with a Single Public Subnet. Give it a name; I’ve called mine demo-laravel-app. Next, in Subnets in the VPC console, add another subnet in a different availability zone.

Next we need to allow internet access; this will be useful for connecting to RDS later. VPC management > Route Tables > [Routes] > add a route with destination 0.0.0.0/0 and pick your internet gateway from the target dropdown.

We can now attach AWS services to this VPC. We will start with the Lambda; go to your IAM console, as we need to attach the basic-with-VPC permission. Find your Lambda role and attach the AWSLambdaVPCAccessExecutionRole policy, then in the Lambda console set your function’s network to the new VPC and its subnets.

You’re now running your Lambda inside a VPC. You may find a tiny performance hit.


RDS is an amazing service for those of us who don’t want database management sitting on our shoulders. I personally recommend Aurora, as it’s the most managed solution; however, in this example I am going to use MySQL to get the free tier.

Give the database a name, and set up a master user and password. In the advanced settings choose your VPC; we are also going to make it publicly accessible. You can also set the name of a database to be made on creation. Once you press create, you are going to need to wait a little while for it to provision.

Once created you are going to need to grant access. This is easily done by viewing the details of your instance and clicking the security group listed. You may find your IP address is already listed in the inbound connections. From the Inbound tab: Edit the rules > Add Rule > MySQL, TCP, 3306, My IP > Save.

Once created you should get a public endpoint which you can use your favourite MySQL client to connect to. Change your .env file to point to your new connection. You can then run php artisan migrate from your dev box against the RDS server. Rezip your code and upload to Lambda. We can now create an account and log in.
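The .env change is just the standard database block pointed at the RDS endpoint (the hostname and credentials below are placeholders):

```shell
# .env: point Laravel at the RDS instance (placeholder values)
DB_CONNECTION=mysql
DB_HOST=mydb.xxxxxxxxxxxx.eu-west-2.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=master
DB_PASSWORD=secret
```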


The login is an issue though: currently we have sessions set to use the file driver, and this is only working by chance because AWS is reusing the same function instance. After a cold boot, or if our load increased, you would get logged out. You could use DynamoDB as a session driver; however, I am going to make a Redis host.

Head to the ElastiCache dashboard and create yourself a Redis instance. Make sure you choose your VPC and select the subnets you want it to run in, as well as the security group. Wait for it to create so you can copy the details into your .env file.
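Again in .env, switch the session and cache drivers over to Redis (the endpoint below is a placeholder):

```shell
# .env: store sessions and cache in ElastiCache Redis (placeholder endpoint)
SESSION_DRIVER=redis
CACHE_DRIVER=redis
REDIS_HOST=mycache.xxxxxx.0001.euw2.cache.amazonaws.com
REDIS_PORT=6379
```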

Don’t forget you will need to install Predis:

composer require predis/predis

Zip, deploy and test again. This time you should stay logged in, even after a redeployment.

7) Assets

You may have noticed that assets are not loading. We have two issues. First, Laravel is not picking up that we are using HTTPS; this is our fault for not passing HTTPS through to PHP CGI, and adding an HTTPS entry to the env object in our handler solves that issue. Next, we need to upload our assets to somewhere they can be accessed. We will use S3 and proxy the API Gateway to the assets. This is not the most ideal setup, but it is the easiest; the recommended way would be to set up CloudFront and use that as a CDN, but getting that to work would be another guide.
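The HTTPS fix mentioned above amounts to one extra key in the env object the handler hands to php-cgi (the surrounding variable names are assumptions):

```javascript
// Illustrative event fields feeding the CGI environment.
const event = { httpMethod: 'GET', path: '/' };

const env = {
    REQUEST_METHOD: event.httpMethod,
    REQUEST_URI: event.path,
    HTTPS: 'on'   // standard CGI convention: tells PHP the request was secure
};
```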

To get started, move your assets to a folder in your project called assets. This way we can forward all traffic from that resource to S3. Open the AWS S3 console and make a new bucket in the same region, and grant read access to the whole bucket.

Then you can upload your assets; I’ve kept mine in an assets folder. Once uploaded you should be able to browse to your assets and view them live.

The interesting part of this URL is the first section; we will need this in the API Gateway.

From your root resource make a new resource and call it assets. From the new assets resource make a proxy child resource.

We are configuring the API Gateway to proxy all traffic for the assets resource to S3. Deploy the API again, and in your Laravel application fix your asset paths if you need to:

Change:
<link href="{{ asset('css/app.css') }}" rel="stylesheet">
<script src="{{ asset('js/app.js') }}"></script>
to:
<link href="{{ asset('assets/css/app.css') }}" rel="stylesheet">
<script src="{{ asset('assets/js/app.js') }}"></script>

Zip, deploy and test.

Finally, we are finished: we have a 100% serverless site hosted on AWS. No servers to worry about.

What’s Missing

There are a few elements currently missing. I have not discussed cron, but I don’t think that will be too hard to achieve, using CloudWatch to schedule a Lambda every minute. You will need to make sure your command does not take more than 300 seconds to run; if it does, you will need to break the command into jobs and batch them, which brings me onto my next point: jobs. It would also be worth making another function that uses the PHP CLI binary, as there would be no need for HTTP processing.

I have not 100% made my mind up about how to process background jobs. Laravel mostly depends on putting jobs into a pool and then continuously polling for work. This is not ideal, as we don’t have any long-running scripts; we need to be told about jobs. We could use SNS to fire a Lambda when there is a job, but we would then lose the ability to queue. This could be solved by also using CloudWatch to check for delayed jobs, but with a minimum frequency of one minute it’s not ideal. I am going to have a think about how to build this and do a follow-up article.

Cold boot times are also an issue: when a Lambda first runs it can take a few seconds to execute, due to the way AWS manages containerising your code. Subsequent executions are a lot faster. You could get round this by scheduling HTTP requests to keep it hot; however, this may not work at scale, so more testing is needed.


As long as you write your code following the 12-factor app principles, I can’t see any issue using Lambdas. I can’t remember the last time I coded an application to rely on the state of the machine hosting the code: files hosted on S3, sessions stored in ElastiCache, etc. Laravel supports all of this out of the box with config.

My next move is to improve the tooling, as deployment is still very manual, then get queue processing working, and then test running a live project on Lambdas. After that I am going to explore other options.

If you want to learn more or follow this progress, please leave your email :)