It’s simple with AWS: a step-by-step serverless fan-out architecture guide

Rafael Rodriguez
8 min read · Jul 6, 2020


Hi everyone! This is my first post ever, and the start of a series on AWS, where I will show how simple it can be to build scalable and highly available architectures.

Scalable and highly available services

If you have some experience building scalable systems, you have most probably used or heard about queues, consumers, event buses, and resource orchestration… 😵 😵 😵 Oh, that sounds complicated, and it can be! Maybe you will need to manage your own Kafka or RabbitMQ cluster, and perhaps someone will say: “we need a Kubernetes cluster!”. Well, although they all work just fine, they might not be the best solution available.

If you’re looking for a quick and straightforward solution that is ready to scale, think twice before implementing your Kubernetes cluster and check out serverless approaches.

In this guide, I will build a simple yet robust fan-out architecture using only AWS services.

Prerequisites

For this tutorial, you will need:

  • an AWS account
  • basic knowledge of AWS
  • basic concepts of the fan-out pattern

Fan-out

Fan-out is a messaging pattern where we spread messages to multiple destinations.

Why do I need to use fan-out?

  • If you have a use case where availability and scalability are requirements
  • If you want to decouple and make your architecture fault-tolerant
  • If your downstream services can and should work independently (microservices)

A “real” world use case

Let’s think about a simple Orders module of a marketplace application, where a user submits an order and a bunch of things happen in our backend while they wait for a success response.

So, we can say that we:

  • Process orders
  • Notify users
  • Prepare orders for delivery
  • Generate order reports
  • Verify compliance in certain countries

A monolith could efficiently execute all of these steps and might be just what you need. But if you need to scale (say, moving to new markets), how complex would it be, and how would you do it? Horizontally? Vertically? Do you need orchestration to scale? Kubernetes? What if one of the steps fails? Can you reprocess it? How do you code that?

These are just a few questions, and it can become much more complicated, so how can we avoid it?

Fortunately, AWS has many different services to simplify our lives. It takes only a few clicks to build a highly available, scalable, and fault-tolerant fan-out architecture that will remove the burden of managing our infrastructure.

A highly available and fully decoupled fan-out architecture that works with everything

Following the previous use case, we can combine a set of AWS serverless services that will run independently and will scale to thousands of messages per second.

Here we POST our orders to API Gateway; API Gateway sends them to an SNS topic (the orders topic), which takes care of distributing the event to our different queues; after that, you can process the messages independently in various services.
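To make the entry point concrete, here is a small Python sketch that builds (but does not send) such a POST request with the standard library; the invoke URL, shop ID, and payload are made-up placeholders, not values from this tutorial:

```python
import json
import urllib.request

# Hypothetical invoke URL; replace with your own API Gateway stage URL.
API_URL = "https://abc123.execute-api.eu-west-1.amazonaws.com/prod"

def build_order_request(market, shop_id, order):
    """Build (but do not send) the POST request that submits an order."""
    url = f"{API_URL}/market/{market}/shops/{shop_id}/orders"
    body = json.dumps(order).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = build_order_request("eu", "shop-42", {"item": "book", "qty": 1})
# urllib.request.urlopen(req) would actually send it.
```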

I’ve done something similar before, but using a Lambda in between (see the Terraform sample linked at the end of this post).

One of the cool things about API Gateway is that you can integrate it directly with AWS services, and with VTL (the Velocity Template Language), you can do all types of mappings and integrations. So, go ahead and play with it.

So many pieces; it doesn’t look that easy.

Yes, AWS can be scary, but the most natural way of learning it is by playing around, so log in to your AWS account and follow this tutorial:
(alternatively, clone the terraform project linked at the end of this post)

AWS API Gateway

It is a serverless service for building HTTP, REST, and WebSocket APIs. Create a new REST API and import the following OpenAPI definition:

openapi: "3.0.1"
info:
  title: "Fan Out Sample"
  description: Sample API with direct integration to SNS
  version: "0.0.1"
paths:
  /market/{market}/shops/{shop_id}/orders:
    post:
      parameters:
        - in: path
          name: market
          description: The market where the order should be processed
          required: true
          schema:
            type: string
            enum: [eu, us, ru]
        - in: path
          name: shop_id
          description: The ID of the shop for which the order should be processed
          required: true
          schema:
            type: string
      responses:
        200:
          description: "200 response"
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Empty"
      x-amazon-apigateway-request-validator: "full"
components:
  schemas:
    Empty:
      title: "Empty Schema"
      type: "object"
x-amazon-apigateway-request-validators:
  full:
    validateRequestParameters: true
    validateRequestBody: true

Importing the spec above creates a basic API without any integration yet.
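As a side note, the same integration we will configure in the console below can also be described directly in the spec, via API Gateway’s x-amazon-apigateway-integration extension on the POST operation. A sketch, where the region, account ID, role name, and topic ARN are placeholders you would replace with your own:

```yaml
x-amazon-apigateway-integration:
  type: "aws"
  httpMethod: "POST"
  # Region-qualified SNS service integration URI
  uri: "arn:aws:apigateway:eu-west-1:sns:path//"
  # Role that API Gateway assumes to publish to the topic
  credentials: "arn:aws:iam::123456789012:role/api-gateway-sns-publish"
  requestParameters:
    integration.request.header.Content-Type: "'application/x-www-form-urlencoded'"
  requestTemplates:
    application/json: "Action=Publish&TopicArn=$util.urlEncode('YOUR_TOPIC_ARN')&Message=$util.urlEncode($input.body)"
  responses:
    default:
      statusCode: "200"
```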

Now let’s create the SNS topic.

SNS is a highly available serverless pub/sub messaging service.

  • Open the SNS service in the console
  • Create a topic and name it: orders_topic
  • Copy the ARN reference

Let’s add an IAM role to allow the API Gateway to publish in our SNS topic.

The IAM role is what our API Gateway will assume to be able to send messages to SNS.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sns:Publish"
      ],
      "Effect": "Allow",
      "Resource": [
        "ARN_OF_YOUR_TOPIC"
      ]
    }
  ]
}
  • Create a new IAM policy with the JSON above and copy the policy name
  • Create a new AWS role and attach the new policy to it
  • Copy the role ARN
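When creating the role, make sure its trust relationship allows API Gateway to assume it; a minimal trust policy would look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```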

Configure the AWS API Gateway service integration request.

Here you define where the messages sent to your API are going to be forwarded: an AWS service, a Lambda, a VPC endpoint, or another HTTP API.

  • Go back to the API Gateway console
  • Select your API and click on the POST method
  • Click on Integration Request
API Gateway Method details
  • Enter the region of your topic
  • Enter AWS Service: SNS
  • Enter the HTTP Method: POST
  • Enter PATH Override: /
  • Enter Execution Role: The ARN of your new role
  • Enter Content Handling: Passthrough
API Gateway Configuration

Configure the integration request mapping template

A request mapping template lets you override a request before sending it to the downstream service.

  • Click on Integration Request
  • Scroll down to Mapping Templates
  • Select the Never option for request body passthrough
  • Click Add mapping template
  • Type application/json and save
  • Add the following code (replace YOUR_TOPIC_ARN) and save
Action=Publish&TopicArn=$util.urlEncode('YOUR_TOPIC_ARN')&Message=$util.urlEncode($input.body)&MessageAttributes.entry.1.Name=shop&MessageAttributes.entry.1.Value.DataType=String&MessageAttributes.entry.1.Value.StringValue=$input.params('shop_id')&MessageAttributes.entry.2.Name=market&MessageAttributes.entry.2.Value.DataType=String&MessageAttributes.entry.2.Value.StringValue=$input.params('market')
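The template above is just a form-encoded call to the SNS Publish API. If it helps to see it outside VTL, here is a small Python sketch that builds the same kind of body with the standard library; the topic ARN, shop, market, and order values are illustrative, since in the real integration they come from the incoming request:

```python
from urllib.parse import urlencode, quote

# Illustrative values; the mapping template pulls these from the actual request.
topic_arn = "arn:aws:sns:eu-west-1:123456789012:orders_topic"
order_body = '{"item": "book", "qty": 1}'

# The same key/value pairs the VTL template emits, form-encoded for SNS.
params = {
    "Action": "Publish",
    "TopicArn": topic_arn,
    "Message": order_body,
    "MessageAttributes.entry.1.Name": "shop",
    "MessageAttributes.entry.1.Value.DataType": "String",
    "MessageAttributes.entry.1.Value.StringValue": "shop-42",
    "MessageAttributes.entry.2.Name": "market",
    "MessageAttributes.entry.2.Value.DataType": "String",
    "MessageAttributes.entry.2.Value.StringValue": "eu",
}
encoded = urlencode(params, quote_via=quote)  # mirrors $util.urlEncode
```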
API Gateway Integration Request Mapping

Add an HTTP header to your integration request.

Under HTTP Headers, add a Content-Type header with the value 'application/x-www-form-urlencoded', so that SNS accepts the form-encoded body produced by the mapping template.

API Gateway Integration Request Header

At this point, the API Gateway is ready to send messages to the newly created SNS topic, and you already have a highly available fan-out service. But we still need to spread the messages, so let’s create some queues.

Creating SQS Queues

SQS is a serverless queuing service that enables you to scale and decouple microservices.

  • Open the SQS console
  • Create a standard queue for each downstream service (for example: processing, notifications, delivery, reports, compliance)
  • Copy each queue’s ARN

SQS Details
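One detail to watch: SNS can only deliver to a queue whose access policy allows it. If you create the subscription from the SNS console, attach (or verify) a statement like this sketch on each queue, replacing the placeholder ARNs with your own:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "sns.amazonaws.com"
      },
      "Action": "sqs:SendMessage",
      "Resource": "ARN_OF_YOUR_QUEUE",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "ARN_OF_YOUR_TOPIC"
        }
      }
    }
  ]
}
```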

Subscribe your Queue

Subscriptions are how you tell the SNS topic which endpoints are interested in the messages sent to it.

  • Open SNS Console
  • Select your topic
  • Click Create subscription
  • Select the protocol SQS
  • Enter your SQS Queue ARN
SNS Subscription

Create as many queues as you need, and it’s done!

Now you can start sending messages to your API Gateway, and you will be able to poll messages from your SQS queues.
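Note that when a message arrives through an SNS subscription (with raw message delivery disabled, which is the default), the SQS body is an SNS JSON envelope, not your original order. A small Python sketch of unwrapping it, using an illustrative payload:

```python
import json

# An illustrative SQS message body as delivered by SNS; the ARN and
# order contents are made up for the example.
sqs_body = json.dumps({
    "Type": "Notification",
    "TopicArn": "arn:aws:sns:eu-west-1:123456789012:orders_topic",
    "Message": '{"item": "book", "qty": 1}',
    "MessageAttributes": {
        "market": {"Type": "String", "Value": "eu"},
        "shop": {"Type": "String", "Value": "shop-42"},
    },
})

def unwrap(body):
    """Extract the original order and its attributes from the SNS envelope."""
    envelope = json.loads(body)
    order = json.loads(envelope["Message"])  # the payload we POSTed
    attributes = {name: attr["Value"]
                  for name, attr in envelope.get("MessageAttributes", {}).items()}
    return order, attributes

order, attributes = unwrap(sqs_body)
```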

Try it out

You can now open the API Gateway Console and test your API

AWS API Gateway Test Console

Now poll your messages from the SQS console.

SQS Message Detail

Extra: Message Attributes

If you explore the API Gateway / SNS integration we’ve done, you will notice the use of message attributes. Message attributes are an excellent SNS feature that lets you apply simple filtering and spread only the messages that matter to each subscriber.

Try cloning and provisioning the sample, then test your API setting the market to “ru” or “us” and observe the messages in your SQS queues; you will notice that the compliance queue only receives events for the “ru” market.
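Filtering is configured per subscription: in the SNS console, edit the compliance queue’s subscription and set a filter policy on the message attributes. For the behavior described above, it could be as simple as:

```json
{
  "market": ["ru"]
}
```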

That looks cool, but I’m still not convinced.

You’re probably asking yourself (and I hope you are) things like: “How does that scale?”, “What about the costs?”, “Is there anything I should take into account?”.

The good thing about doing serverless with AWS is that AWS takes care of everything, meaning the services scale automatically for you. But hey, your API will not be able to handle infinite requests per second; the services do have limits. This simple architecture will handle up to 10,000 transactions per second (depending on the region), and that is a lot.

Check the AWS documentation for the limits related to API Gateway, SNS, and SQS.

Another advantage of serverless architectures is that you don’t pay for what you don’t use. So forget about unused infra; you don’t need to take care of those large machines that sit idle 2–3 hours per day. AWS takes care of it for you, and you only pay for what you use, making it a perfect choice for early-stage startups that don’t yet know their exact loads.

And well, we didn’t write a single line of code; with just a few clicks, we were able to create a scalable, pay-per-use fan-out architecture, and that is pretty awesome.

Be careful: everything looks great, and it is, but the cost can scale pretty quickly, especially for SNS, so check the pricing pages to avoid any unexpected charges. Also, set some AWS Budgets for your account so you’re notified when you exceed them.

Next Steps

In this guide, we built the basics to connect AWS API Gateway with an SNS topic and SQS queues, achieving a basic fan-out architecture that scales. But we’re still missing a few things for a production-ready solution, so have a look at:

API Gateway features like:

SQS features like:

Alternatively, avoid the point-and-click and clone the Terraform source code with Lambda integration here.


Rafael Rodriguez

Serverless enthusiast | AWS Certified Solutions Architect Professional | Head of Engineering @ Medloop | Marathoner