Saving S3 Bucket Quotas from Serverless Framework

juwita
Published in HARA Engineering
Jul 3, 2020 · 3 min read

Have you ever hit a deployment error when trying to create a new AWS Lambda service with the Serverless Framework?

This problem can occur because of the AWS S3 bucket quota. Once you hit the limit, the usual recommendation is to contact AWS Support to raise it. But what if we could simply be more efficient with our S3 bucket usage?

Understanding why this happens
AWS Lambda deployment with the Serverless Framework automatically uses AWS S3 (Simple Storage Service) to archive the packaged function code. A new S3 bucket is created for every new service deployment, which fills up the account's S3 bucket quota quickly. AWS limits every account to 100 buckets by default; once that limit is reached, we can request an increase up to a maximum of 1,000 buckets.

Multiple buckets created by serverless deployments (source: https://www.serverless.com/framework/docs/providers/aws/guide/deploying/)

The problem arises when multiple teams work on the same AWS account/environment. Amazon S3 has many uses: apart from archiving Serverless Framework artifacts, it can host static websites, store backup logs, back up files and databases, and serve public files. Other teams might use S3 for different needs. If we do not organize our account's S3 bucket usage efficiently, before we know it we will receive the dreaded deployment error from a serverless deployment:

An error occurred: ServerlessDeploymentBucket - You have attempted to create more buckets than allowed

To avoid this error, we can reduce the number of buckets used for Serverless Framework artifacts. That frees up our bucket quota for other uses!

What to do then?
While looking into the serverless.yml documentation, I found an interesting solution to this problem. The deploymentBucket option lets us define which bucket is used to store the function code, so we can create a single bucket dedicated to our serverless code artifacts.

Let's use an example: a bucket named my-sls-bucket-artifact, configured in the serverless.yml below.
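Here is a minimal sketch of what that serverless.yml could look like (assuming Serverless Framework v1/v2 syntax; the runtime, region, and handler are placeholders, and the my-sls-bucket-artifact bucket should already exist in the account before deploying):

# serverless.yml (minimal sketch)
service: hello-service                # becomes [your-service-name] in the artifact key

provider:
  name: aws
  runtime: nodejs12.x                 # placeholder runtime
  stage: dev                          # becomes [stage] in the artifact key
  region: ap-southeast-1              # placeholder region
  deploymentBucket:
    name: my-sls-bucket-artifact      # shared bucket for all serverless artifacts

functions:
  hello:
    handler: handler.hello            # placeholder function

With this in place, every deployment of the service reuses the same bucket instead of creating a new ServerlessDeploymentBucket.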

On deployment, the framework uploads a zip file containing the packaged code, together with the compiled CloudFormation template file. Both are stored in the S3 bucket under the /serverless/[your-service-name]/[stage]/[timestamp] key prefix:

  • [your-service-name]: taken from the service key defined at the top of serverless.yml
  • [stage]: the deployment stage defined for the service
  • [timestamp]: (automatic) the time of the deployment

And this is the result! ⭐️

my-sls-bucket-artifact/
└── serverless/
    ├── service-name-1/
    │   └── dev/
    │       ├── 1234567890123-2020-05-18T08:06:21.466Z/
    │       │   ├── service-name-1.zip
    │       │   └── compiled-cloudformation-template.json
    │       └── 4567890123456-2020-05-20T10:23:28.406Z/
    │           ├── service-name-1.zip
    │           └── compiled-cloudformation-template.json
    └── hello-service/
        └── dev/
            └── 2345678901234-2020-05-20T03:07:29.466Z/
                ├── hello-service.zip
                └── compiled-cloudformation-template.json

We can see that the hello-service deployment artifacts have been uploaded to the my-sls-bucket-artifact bucket under the serverless/hello-service/dev/2345678901234-2020-05-20T03:07:29.466Z key.

By implementing the deploymentBucket key, our code artifacts can be stored in just one bucket. Imagine having 50 serverless services: with this option, we save 49 buckets that can be used for other needs.

So, say bye-bye to the ServerlessDeploymentBucket error! 👋🏻

Learn more about HARA
