How to Host Multiple Serverless APIs Under the Same Domain

Brock Lanoza · Published in Type Faster
5 min read · Mar 30, 2018

Did you ever wish that people would just shut up about serverless? It’s rare that one makes it out of a tech conference these days without hearing endless utterances of the term “serverless”. Annoying as this may be, the concept of serverless itself is incredibly inviting. Think about it: developers get to write code and build applications without dealing with all of the underlying infrastructure that more often than not becomes difficult to maintain and scale. It’s a pretty wonderful thing.

Serverless APIs: A blessing and a curse

If you’ve tried The Serverless Framework out, you’ve seen how easy it is to set up and deploy a live API by leveraging cloud services such as AWS Lambda. At the company I work for, Redshift Digital, we’ve used Serverless on a few client projects that needed a slim, simple backend to serve some content to the frontend application. In that situation, it worked very well and allowed us to move much more quickly than would have been possible without such technology.

Unfortunately, our honeymoon with Serverless came to an abrupt end. While working on a recent project, I was merrily watching our API deploy when the following happened:

Serverless: Uploading CloudFormation file to S3...
The CloudFormation template is invalid: Template format error: Number of resources, 212, is greater than maximum allowed, 200

Where did this come from? As it turns out, this was an issue with Amazon and not Serverless. Under the hood, when creating services deployed to AWS, Serverless compiles a CloudFormation script from the code you wrote and uploads it to the cloud. CloudFormation can be thought of as the control center for your AWS account. Anything that you can do through the console in AWS can be scripted through CloudFormation and deployed in an automated fashion. Sadly, AWS currently imposes a hard limit of 200 resources when deploying a CloudFormation script.
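You can check how close a deployed stack sits to that limit with the AWS CLI. A quick sketch (the stack name is a placeholder; Serverless typically names stacks `<service>-<stage>`):

```shell
# Count the resources in a deployed CloudFormation stack.
# Replace my-api-development with your stack's actual name.
aws cloudformation describe-stack-resources \
  --stack-name my-api-development \
  --query 'length(StackResources)'
```

If the number printed is creeping toward 200, you are headed for the same error.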

After digging a bit deeper into what was actually getting deployed with each Serverless package, it was easy to see how we were exceeding this limit, even with only about 30 endpoints in our API.

The extra baggage of a Serverless function

You don’t need an Nvidia GPU to determine that 30 < 200. So how were we hitting the CloudFormation limit? The reason can be summed up like this:

30 functions != 30 resources

Let’s assume you created a Serverless API with 30 functions that acted as endpoints. For every function in your API, you also get the following resources:

  • Version (of your Lambda function)
  • Log Group (on CloudWatch)
  • Permissions (Allowing API Gateway to call your Lambda function)
  • Resource path (of the API Gateway endpoint)
  • Method (of the API Gateway endpoint)

Counting the Lambda function itself, that’s 6 resources per function, in addition to the few global resources shared by the entire API. Although some APIs can get by with such a limit in place, plenty of APIs out there simply require more than 30 endpoints to do their job. How can we get around this limit and still use a Serverless API to meet our needs?
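Roughly speaking, each function expands into CloudFormation entries like the following (the logical names here are illustrative; the exact shape depends on your Serverless version):

```yaml
Resources:
  GetUserLambdaFunction:             { Type: AWS::Lambda::Function }
  GetUserLambdaVersion:              { Type: AWS::Lambda::Version }
  GetUserLogGroup:                   { Type: AWS::Logs::LogGroup }
  GetUserLambdaPermissionApiGateway: { Type: AWS::Lambda::Permission }
  ApiGatewayResourceUsers:           { Type: AWS::ApiGateway::Resource }
  ApiGatewayMethodUsersGet:          { Type: AWS::ApiGateway::Method }
```

Multiply six entries by thirty functions and the generated template blows past 200 resources with room to spare.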

Microservice architecture

To combat this resource limit, our team decided the best course of action was to split our API into separate microservices. For each resource in our REST API (users, organizations, etc.), we created a new Serverless service with its own serverless.yml file. By doing this, we realized a few benefits:

  • Our code was separated in a way that made it easier to maintain
  • We avoided the hard limit imposed by CloudFormation (hooray!)
  • Deployments were faster since we were only deploying a few of the API functions at a time, rather than all of them at once
Sample directory structure
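Each service then carries its own configuration. A minimal sketch of one service’s serverless.yml (the runtime, handler names, and region are assumptions, not our exact setup):

```yaml
# users/serverless.yml
service: users

provider:
  name: aws
  runtime: nodejs8.10
  stage: development
  region: us-west-1

functions:
  list:
    handler: handler.list    # GET /users
    events:
      - http:
          path: users
          method: get
  get:
    handler: handler.get     # GET /users/{id}
    events:
      - http:
          path: users/{id}
          method: get
```

Each service deploys as its own CloudFormation stack, so each one gets its own 200-resource budget.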

Alas, this was not a complete solution. At this point, we were left with a number of API hosts generated by AWS (one for each service). Ya know, these things:

https://sougly1337.execute-api.us-west-1.amazonaws.com/development/

Luckily, it’s possible to alias each of these as base paths of a single API using m̶a̶g̶i̶c̶ a custom domain on API Gateway.

Setting up a custom domain on API Gateway

To set up a custom domain, go into your AWS console and navigate to the aptly-named “Custom Domain Names” section of API Gateway. Click “Create a Custom Domain Name” and you will be asked for a domain name and an endpoint configuration:

Assuming you have the domain that you want to use handy, you can type it in here and click the appropriate radio button under “Endpoint Configuration”. Generally speaking, the option you select will be determined by the purpose of your API.

  • End-user facing APIs will use Edge Optimized since this will take advantage of the underlying CloudFront distribution to quickly get data to your users in whatever region they may reside.
  • If your API happens to talk exclusively to internal resources, such as an EC2 instance, then you would want to use Regional since this optimizes the API for the single region it has to communicate with.
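The same setup can be scripted with the AWS CLI instead of clicking through the console. A sketch for an edge-optimized domain (the domain and certificate ARN are placeholders; an edge-optimized domain requires an ACM certificate in us-east-1):

```shell
aws apigateway create-domain-name \
  --domain-name api.mediumarticlesarecool.com \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/example \
  --endpoint-configuration types=EDGE
```

For a regional domain, swap `types=EDGE` for `types=REGIONAL` and use `--regional-certificate-arn` with a certificate in the API’s own region.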

Base path mappings are friends, not foes

Now that we’ve done that, we’re ready to bring it all together and map the microservices to this domain. By adding a base path and choosing the correct microservice to point to (destination), you make it possible to access your individual Serverless services all through a single API endpoint.

In this case, mediumarticlesarecool.com/users would call the previously mentioned users service. Be aware that creating a custom domain can take up to 45 minutes to finish, since it has to spin up a CloudFront distribution.
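Each mapping can also be created from the CLI. The REST API id is the random prefix of the generated execute-api URL (all values below are placeholders taken from the examples above):

```shell
# Map /users on the custom domain to the users service's stage
aws apigateway create-base-path-mapping \
  --domain-name mediumarticlesarecool.com \
  --base-path users \
  --rest-api-id sougly1337 \
  --stage development
```

Repeat with a different `--base-path` and `--rest-api-id` for each microservice (organizations, and so on).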

Don’t worry, organizations is plural

Conclusion

If you’ve stuck around for this long, you’ve hopefully learned a useful way to implement a large REST API using Serverless functions all whilst avoiding the dreaded CloudFormation resource limit error. Next time, I’ll be writing a piece explaining how we used CircleCI to automate the deployment of our microservices.

Disclaimer about using Serverless APIs in production:

It should be noted that while this worked for our particular situation (our serverless API was acting as a middleware layer between the frontend and other backend services), it may not work for you. As great as serverless can be, there are some pain points that make it more of a hassle than it’s worth in certain situations. For example, dealing with connection pooling is notoriously difficult since Lambda functions are stateless. Make sure you understand your application’s specific needs before going all-in on serverless.

Brock Lanoza creates product solutions at Redshift Digital. When not pondering the finer points of human-computer interaction, he can be found shamelessly spam-posting pictures of Persian cats to the internet.
