It’s simple with AWS: A step-by-step serverless fan-out architecture guide

Hi everyone, this is my first post ever, and here I will start a series of posts related to AWS, where I will show how simple it can be to build scalable and highly available architectures.

Scalable and highly available services

If you’re looking for a quick and straightforward solution that is ready to scale, think twice before rolling out your own Kubernetes cluster and check out serverless approaches.

In this guide, I will build a simple, yet robust fan-out architecture, using only AWS Services.

Prerequisites

  • an AWS account
  • basic knowledge of AWS
  • basic concepts of the fan-out pattern

Fan-out

Why do I need to use fan-out?

  • If you have a use case where availability and scalability are requirements
  • If you want to decouple and make your architecture fault-tolerant
  • If your downstream services can and should work independently (microservices)

A “real” world use case

Imagine an order system where, for every incoming order, we need to:

  • Process orders
  • Notify users
  • Prepare orders for delivery
  • Generate order reports
  • Verify compliance in certain countries

A monolith could efficiently execute all of these steps and might be just what you need. But if you need to scale (say, moving to new markets), how complex would it be, and how would you do it? Horizontally? Vertically? Do you need orchestration to scale? Kubernetes? What if one of the steps fails? Can you reprocess it? How do you code that?

These are just a few questions, and it can become much more complicated, so how can we avoid it?

Fortunately, AWS has many different services to simplify our lives. It takes only a few clicks to build a highly available, scalable, and fault-tolerant fan-out architecture that will remove the burden of managing our infrastructure.

A highly available and fully decoupled fan-out architecture that works with everything

Following the previous use case, we can combine a set of AWS serverless services that will run independently and will scale to thousands of messages per second.

Here we POST our orders to API Gateway; API Gateway forwards them to an SNS topic (the orders topic), which distributes each event to our different queues; from there, each service processes the messages independently.
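Conceptually, the flow is nothing more than the fan-out pattern: publish once, deliver a copy to every subscriber. A toy in-memory sketch of the idea in Python (no AWS involved; all names are illustrative):

```python
class Topic:
    """Toy stand-in for an SNS topic: fans each message out to all subscribers."""

    def __init__(self):
        self.queues = []

    def subscribe(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        # Every subscribed queue receives its own copy of the message.
        for queue in self.queues:
            queue.append(message)


orders_topic = Topic()
processing, notifications, reports = [], [], []
for queue in (processing, notifications, reports):
    orders_topic.subscribe(queue)

orders_topic.publish({"order_id": 1, "market": "eu"})

# Each downstream service now holds its own copy and can fail,
# retry, or scale independently of the others.
print([len(q) for q in (processing, notifications, reports)])  # [1, 1, 1]
```

In the AWS version, SNS plays the role of the topic and the queues are durable SQS queues, so a slow or failing consumer never blocks the others.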

I’ve done something similar before, but using a Lambda function instead of a direct integration.

With so many pieces, it doesn’t look that easy.

Yes, AWS can be scary, but the most natural way of learning it is playing around, so log in to your AWS account and follow this tutorial:
(alternatively, clone the terraform project linked at the end of this post)

AWS API Gateway

openapi: "3.0.1"
info:
  title: "Fan Out Sample"
  description: Sample API with direct integration to SNS
  version: "0.0.1"
paths:
  /market/{market}/shops/{shop_id}/orders:
    post:
      parameters:
        - in: path
          name: market
          description: The market where the order should be processed
          required: true
          schema:
            type: string
            enum: [eu, us, ru]
        - in: path
          name: shop_id
          description: The shop ID for which the order should be processed
          required: true
          schema:
            type: string
      responses:
        "200":
          description: "200 response"
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Empty"
      x-amazon-apigateway-request-validator: "full"
components:
  schemas:
    Empty:
      title: "Empty Schema"
      type: "object"
x-amazon-apigateway-request-validators:
  full:
    validateRequestParameters: true
    validateRequestBody: true

Importing the OpenAPI definition above should create a basic API without any integration.

Now let’s create the SNS topic.

  • Open the SNS service in the console
  • Create a topic and name it: orders_topic
  • Copy the ARN reference

Let’s add an IAM role to allow the API Gateway to publish in our SNS topic.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "sns:Publish"
      ],
      "Effect": "Allow",
      "Resource": [
        "ARN_OF_YOUR_TOPIC"
      ]
    }
  ]
}
  • Copy the policy name
  • Create a new AWS Role and Attach the new policy to it
  • Copy the role ARN

Configure the AWS API Gateway service integration request.

  • Go back to the API Gateway console
  • Select your API and click on the POST method
  • Click on Integration Request
API Gateway Method details
  • Enter the region of your topic
  • Enter AWS Service: SNS
  • Enter the HTTP Method: POST
  • Enter PATH Override: /
  • Enter Execution Role: The ARN of your new role
  • Enter Content Handling: Passthrough
API Gateway Configuration

Configure the integration request mapping template

  • Click on the integration request
  • Scroll down to Mapping Templates
  • Select the “Never” passthrough option
  • Click Add mapping template
  • Write application/json and save
  • Add the following code (replace YOUR_TOPIC_ARN) and save
Action=Publish&TopicArn=$util.urlEncode('YOUR_TOPIC_ARN')&Message=$util.urlEncode($input.body)&MessageAttributes.entry.1.Name=shop&MessageAttributes.entry.1.Value.DataType=String&MessageAttributes.entry.1.Value.StringValue=$input.params('shop_id')&MessageAttributes.entry.2.Name=market&MessageAttributes.entry.2.Value.DataType=String&MessageAttributes.entry.2.Value.StringValue=$input.params('market')
API Gateway Integration Request Mapping
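The template above simply builds the form-encoded body of an SNS Publish request, with the two path parameters attached as message attributes. To see what API Gateway actually sends to SNS, here is the equivalent in plain Python (the ARN and values are illustrative):

```python
from urllib.parse import urlencode


def build_publish_body(topic_arn, message, shop_id, market):
    """Build the form-encoded body that the VTL mapping template produces."""
    params = {
        "Action": "Publish",
        "TopicArn": topic_arn,
        "Message": message,
        # SNS message attributes use the indexed entry.N naming scheme.
        "MessageAttributes.entry.1.Name": "shop",
        "MessageAttributes.entry.1.Value.DataType": "String",
        "MessageAttributes.entry.1.Value.StringValue": shop_id,
        "MessageAttributes.entry.2.Name": "market",
        "MessageAttributes.entry.2.Value.DataType": "String",
        "MessageAttributes.entry.2.Value.StringValue": market,
    }
    return urlencode(params)


body = build_publish_body(
    "arn:aws:sns:eu-west-1:123456789012:orders_topic",  # hypothetical ARN
    '{"item": "book"}',
    "shop-42",
    "eu",
)
print(body.startswith("Action=Publish"))  # True
```

The `$util.urlEncode` calls in the template do exactly what `urlencode` does here: escape each value so it is safe inside the form-encoded body.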

Add an HTTP header to your integration request: Content-Type with the value application/x-www-form-urlencoded, so SNS accepts the form-encoded body.

API Gateway Integration Request Header

At this point, the API Gateway is ready to send messages to the newly created SNS topic, and you already have a highly available fan-out service. But you still need to spread the messages, so let’s create some queues.

Creating SQS Queues

  • Open the SQS service in the console
  • Create a standard queue for each downstream service
  • Copy each queue’s ARN
SQS Details

Subscribe your Queue

  • Open SNS Console
  • Select your topic
  • Click create subscriptions
  • Select the protocol SQS
  • Enter your SQS Queue ARN
SNS Subscription
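One common gotcha: the subscription alone is not always enough, because the queue’s access policy must also allow the topic to deliver to it. If messages don’t arrive, check that the queue has a policy along these lines (the ARN placeholders are yours to fill in):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "sns.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "ARN_OF_YOUR_QUEUE",
      "Condition": {
        "ArnEquals": { "aws:SourceArn": "ARN_OF_YOUR_TOPIC" }
      }
    }
  ]
}
```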

Create as many queues as you need, and it’s done!

Now you can start sending messages to your API Gateway, and you will be able to poll messages from your SQS queues.

Try it out

AWS API Gateway Test Console

Now poll your messages from the SQS console.

SQS Message Detail
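Note that messages delivered through SNS arrive in the queue wrapped in an SNS envelope: the SQS body is a JSON document whose Message field holds your original payload. A small sketch of unwrapping it (the envelope below is simplified; real SNS notifications carry more fields):

```python
import json


def unwrap_sns(sqs_body: str) -> dict:
    """Extract the original payload and attributes from an SNS-wrapped SQS body."""
    envelope = json.loads(sqs_body)
    return {
        "payload": json.loads(envelope["Message"]),
        "attributes": {
            name: attr["Value"]
            for name, attr in envelope.get("MessageAttributes", {}).items()
        },
    }


# Simplified example of what SNS delivers to SQS.
raw = json.dumps({
    "Type": "Notification",
    "Message": json.dumps({"item": "book", "quantity": 2}),
    "MessageAttributes": {
        "market": {"Type": "String", "Value": "eu"},
        "shop": {"Type": "String", "Value": "shop-42"},
    },
})

order = unwrap_sns(raw)
print(order["payload"]["item"], order["attributes"]["market"])  # book eu
```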

Extra: Message Attributes

Try cloning and provisioning the sample, then test your API with the market set to “ru” or “us” and observe the messages in your SQS queues; you will notice that the compliance queue only receives events for the market “ru”.
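The compliance queue’s behavior comes from an SNS subscription filter policy, which matches on message attributes before delivery. A simplified sketch of how such a policy is evaluated (real SNS filter policies support richer matching than shown here):

```python
def matches_filter(policy: dict, attributes: dict) -> bool:
    """Return True if the message attributes satisfy a simple exact-match filter policy."""
    return all(
        attributes.get(key) in allowed_values
        for key, allowed_values in policy.items()
    )


# A hypothetical filter policy for the compliance queue's subscription.
compliance_filter = {"market": ["ru"]}

print(matches_filter(compliance_filter, {"market": "ru", "shop": "shop-1"}))  # True
print(matches_filter(compliance_filter, {"market": "eu", "shop": "shop-1"}))  # False
```

Because filtering happens inside SNS, the non-matching messages never reach the queue at all, so the compliance service doesn’t have to discard irrelevant events itself.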

That looks cool, but I’m still not convinced.

The good thing about doing serverless with AWS is that AWS takes care of everything, scaling automatically for you. But your API will not be able to handle infinite requests per second; there are limits. This simple architecture will handle up to 10,000 transactions per second (depending on the region), and that is a lot.

Check the following links for limits related to API Gateway, SNS, and SQS.

Another advantage of serverless architectures: you don’t pay for what you don’t use. Forget about unused infra; you don’t need to take care of large machines that sit idle 2–3 hours per day. AWS takes care of it for you, and you only pay for what you use, making it a perfect choice for early-stage startups that don’t know their loads yet.

And well, we didn’t write a single line of code; with just a few clicks, we built a scalable fan-out architecture where you pay only for what you use, and that is pretty awesome.

Be careful: everything looks great, and it is, but costs can scale pretty quickly, especially for SNS. Use this link to avoid any unexpected charges, and set some AWS Budgets for your account so you are notified when you exceed them.

Next Steps

API Gateway features like usage plans and throttling

SQS features like dead-letter queues and long polling

Alternatively, skip the point-and-click setup and clone the Terraform source code with Lambda integration here.

--

Rafael Rodriguez

Serverless enthusiast | AWS Certified Solutions Architect Professional | Head of Engineering @ Medloop | Marathoner