sosw — AWS Lambda Orchestration Part 1: Twitter API Tutorial

Nikolay Grishchenko

This article marks the first in a series that will explore the recommendations for “uncommon” use-cases of serverless applications with AWS Lambda. But what makes these use-cases different from regular Lambda tutorials? The use of asynchronous invocations of functions.

Before we dive any deeper, it is important to get crystal clear on the two options you have to invoke Lambda functions. The AWS Lambda developer guide identifies these options as: synchronous and asynchronous.


The synchronous option assumes that the “invoker” keeps an open connection to the Lambda API and waits for some form of response from the function execution. The response could be empty, but the invoker still has to wait for it. This can kill the serverless concept for numerous use-cases, which brings us to the second option.



The asynchronous invocation type is designed for cases where synchronous doesn’t fit. With asynchronous invocation the invoker simply makes sure that the API has received the request for a function execution, but doesn’t wait for the outcome. You can imagine that your function’s return value is redirected to `/dev/null`, and if you have to save any artefacts, the function should take care of this itself… But this is not the only thing the function becomes responsible for when invoked asynchronously.

First of all, functions fail. Even super redundant spaceship control software fails. Your “hello world” functions will also fail. So, why don’t spaceships crash land left and right due to software failures? Well… sometimes they do, but usually the errors are caught by multiple layers of error handling mechanisms. For Lambda functions, the first layer is provided by AWS “out of the box”: the retry mechanism. If an asynchronously invoked function fails for any reason, AWS will retry it automatically two more times using an exponential back-off pattern. If all three attempts fail, you can configure the events to be sent to a Dead Letter Queue (DLQ), which can be either a queue (SQS) or a pub/sub topic (SNS).
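To make this behaviour concrete, here is a minimal Python sketch that mimics the retry-then-DLQ logic in-process. The real mechanism lives inside AWS; the tiny delays and the `on_failure` hook are illustrative stand-ins for the back-off schedule (roughly one minute, then two) and the DLQ.

```python
import time


def invoke_with_retries(func, payload, on_failure=None,
                        attempts=3, base_delay=0.01):
    """Call `func(payload)`, retrying on failure with exponential
    back-off. After the final failed attempt, hand the payload to
    `on_failure` -- the stand-in for a Dead Letter Queue."""
    for attempt in range(attempts):
        try:
            return func(payload)
        except Exception:
            if attempt == attempts - 1:
                if on_failure:
                    on_failure(payload)  # e.g. sqs.send_message(...)
                raise
            # AWS waits minutes between retries; we sleep much less here.
            time.sleep(base_delay * 2 ** attempt)
```
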

And don’t forget, function invocations throttle. Although for “hello world” this is not usually the main problem, there is still a limit on concurrent Lambda executions. Under the hood, Lambda runs in containers of a kind. A single container handles only one request at a time; AWS reuses your existing containers and spawns new ones automatically, using a very short internal queue for invocation requests, but there is a limit configurable per Lambda function. Depending on the region, there are account-level limits of 500 to 3000 concurrent executions. If your function is connected to a VPC, the Amazon VPC network interface limit can also prevent it from scaling (this problem is supposed to be solved very soon with Remote NAT).

There is also an execution time limit that can make your life significantly harder when creating serverless applications. The maximum execution time is now 15 minutes, and you can set a smaller limit per function. If your function is still running when this time credit runs out, there is no healthy way to shut it down from outside: the execution process is simply killed, and “good bye” invocation. The container stays alive and may be reused for other invocations. The retry mechanism for timeouts works the same way as for function errors, and the retry may even land in the same container after some back-off time.

Oh hell, why would I even bother coming to this serverless world when there are so many limitations?! That is for the FaaS evangelists to answer, but I can try to help you solve some of these problems. You might think that if your Lambda has to take care of all of this itself, especially when its response normally goes to `/dev/null`, things get difficult. Well… This is where orchestration comes in.


The main orchestration mechanism provided by AWS itself is called Step Functions. It works great when your workflow includes multiple different functions (and other AWS services). Step Functions use the “state machine” architectural pattern. You can easily design your workflows, describe function invocations, pass artefacts, etc. Think of Step Functions as a great coordinator of workflows, but not an orchestrator of functions: it covers some of the difficulties we saw earlier, but the execution time limit, for example, is still on the table. There are several articles on Medium describing use-cases for Step Functions. There is also a great hello world example in the AWS documentation.

So how can we overcome the time limit? The most obvious solution is to chunk the data you need to process into smaller pieces and invoke Lambda multiple times. This way, each Lambda execution processes its own isolated part of the data. Developers often apply this pattern by streaming the data into a pipeline and processing batches of events from the stream. AWS Kinesis provides great integrations with AWS Lambda, and since this is a managed solution, you don’t have to worry much about incidents such as lost or corrupted data. AWS takes care of chunking, stream throughput control, provisioning Lambda containers, and the retry mechanism. Hey, problem solved.
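As an illustration, a minimal Python Worker consuming such a Kinesis batch could look like the sketch below. The record layout follows the standard Kinesis event format; the `text` field and the hashtag counting are made-up details for this tutorial’s domain.

```python
import base64
import json


def handler(event, context=None):
    """Count hashtag occurrences in one batch of Kinesis records.
    A real Worker would persist the counts (e.g. to DynamoDB)
    instead of just returning them."""
    counts = {}
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for word in payload.get("text", "").split():
            if word.startswith("#"):
                counts[word] = counts.get(word, 0) + 1
    return counts
```

Because the event source mapping hands each invocation one bounded batch, every execution comfortably fits inside the time limit.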

But what do you do if you need to actively pull some data from an abstract data source, for example an external API or a database? Say we don’t have any “events” at this moment. This may be just a scheduled job that describes the parameters to pull, but knows nothing about the data or events we shall receive from the pull operation. The tool we are going to use to solve such tasks is a complex of AWS Lambda functions for orchestrating other Lambda functions.

sosw — AWS Lambda Orchestration

sosw is a Python project; its Workers are language independent, but for this tutorial we will assume that they are also written in Python.

You can read much more about sosw in the official documentation.

Now, let’s imagine that you have spent 10 minutes of your time already setting up the sosw environment in your AWS account. The Installation Guidelines provide step-by-step instructions on how to deploy the environment that fits in the AWS Free Tier. The following AWS resources are used: IAM, Lambda, DynamoDB, S3, CloudWatch Events.

As an example, we are now going to pull the data on popularity of several different topics in the hashtags of tweets. Imagine that we have already classified some popular hashtags to specific groups. Now we want to pull the data on usage in the last 2 days and save it to DynamoDB for future analysis. Our job might look like this:

"topics": {
    "cars": {
        "words": ["toyota", "mazda", "racing", "automobile", "car"]
    },
    "food": {
        "words": ["recipe", "cooking", "food", "meal", "tasty"]
    },
    "shows": {
        "words": ["cinema", "movie", "concert", "show", "musical"]
    }
}

Now let’s say we want to read the data separately for each of the two previous days. No problem, add the chunking parameter:

"period": "last_2_days"

Pulling the data for every keyword may take several minutes, so in this case we add another two parameters:

"isolate_words": true, "isolate_days": true

And done! This payload becomes our job. It means that we can chunk it per specific word or day and create an independent task for our Worker Lambda from each chunk.
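To see why these flags make the job chunkable, here is a simplified Python sketch of the idea. This is not the actual sosw Scheduler code; the task shape and the parsing of `period` are illustrative.

```python
import itertools
from datetime import date, timedelta


def chunk_job(job, today=None):
    """Split a job with isolate_words/isolate_days set into one
    independent task per (word, day) pair."""
    today = today or date.today()
    # "last_2_days" -> the 2 previous days.
    days = int(job.get("period", "last_2_days").split("_")[1])
    dates = [str(today - timedelta(days=i + 1)) for i in range(days)]
    tasks = []
    for topic, cfg in job["topics"].items():
        for word, day in itertools.product(cfg["words"], dates):
            tasks.append({"topic": topic, "word": word, "date": day})
    return tasks
```

With the three topics above (15 words in total) and a 2-day period, this yields 30 independent tasks, each small enough to fit comfortably into one Lambda execution.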

Now the time has come to create the actual Lambda, but first you will have to register your own Twitter API credentials. Submitting the application takes 2–3 minutes, but you have to wait for approval (mine took ~8 hours; I listed “educational reasons” for the developer account).

You can find the Lambda code in the repository. You may have already cloned it together with the examples when setting up the sosw Essentials: examples/workers/tutorial_pull_tweeter_hashtags

If you know how to create | package | deploy Lambdas — you can do this using whichever method you prefer: SAM, web-interface, aws-cli, etc. This example has CloudFormation templates with all required resources, you simply have to replace 000000000 with your AWS Account ID.

For those of you who prefer to follow the step-by-step guidelines, we shall execute them from a minimum EC2 instance running Amazon Linux 2 in order to make sure we use the same environment and tutorial commands. This can be the same machine you used during Installation of sosw Essentials.

Step 1

Prepare the CloudFormation template and create the resources: a DynamoDB table, an IAM Role for the function, and the Lambda function itself. You can update and deploy the CloudFormation stack this way:

# Get your AccountId from EC2 metadata.
# Assuming you run this on EC2.
ACCOUNT=`curl -s http://169.254.169.254/latest/meta-data/identity-credentials/ec2/info | grep AccountId | awk -F "\"" '{print $4}'`

# Your bucket. You should have it from the Essentials Installation.
BUCKETNAME=sosw-s3-$ACCOUNT    # replace if you named your bucket differently

FUNCTION=sosw_tutorial_pull_tweeter_hashtags
FUNCTIONDASHED=`echo $FUNCTION | sed s/_/-/g`

cd /var/app/sosw/examples/workers/$FUNCTION

# Install sosw package locally. The only dependency is boto3,
# but we have it in the Lambda container already, so we skip it.
pip3 install -r requirements.txt --no-dependencies --target .

# Make a source package.
zip -qr /tmp/$FUNCTION.zip *

# Upload the file to S3, so that AWS Lambda will use it from there.
aws s3 cp /tmp/$FUNCTION.zip s3://$BUCKETNAME/sosw/packages/

# Package and Deploy CloudFormation stack for the Function.
aws cloudformation package --template-file $FUNCTIONDASHED.yaml \
    --output-template-file /tmp/deployment-output.yaml \
    --s3-bucket $BUCKETNAME

# The stack creates an IAM Role, so IAM capabilities are required.
aws cloudformation deploy \
    --template-file /tmp/deployment-output.yaml \
    --stack-name $FUNCTIONDASHED \
    --capabilities CAPABILITY_NAMED_IAM

The function runs a very simple scenario: we make paginated API calls to fetch the data, count the tweets and put the result to the DB. CloudFormation has already created the table for the results.
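The pagination boils down to walking the search results page by page until a short page signals the end. Here is a simplified sketch, with `search` as a stand-in for the actual Twitter API client call; the real code lives in the repository.

```python
def count_tweets(search, word, since, until, page_size=100):
    """Count tweets matching `word` between `since` and `until` by
    paginating backwards through the search results. `search` is any
    callable returning a list of tweet dicts, newest first."""
    total, max_id = 0, None
    while True:
        page = search(word, since, until, max_id=max_id, count=page_size)
        if not page:
            break
        total += len(page)
        # Next page: everything strictly older than the oldest tweet seen.
        max_id = min(t["id"] for t in page) - 1
        if len(page) < page_size:
            break
    return total
```

The `max_id` trick mirrors how the Twitter search API is typically paged: tweet IDs grow over time, so asking for IDs below the oldest one seen moves the window back in history.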

Step 2

In order for sosw to work with your Lambda we shall register it as a “Labourer” in the sosw_configs. The default and fastest way to patch configs is to use the DynamoDB config table. We created one during the initial setup of sosw. There are plenty of settings we can configure for the Orchestrator on how to treat this Labourer (function). You can also configure custom metrics from CloudWatch that the Labourer will respect and you can even make specific rules on how different Labourers should behave taking these metrics into account. But let’s explore that in a more advanced tutorial.

Feed the basic sample configurations we have in the package to the DynamoDB: examples/workers/tutorial_pull_tweeter_hashtags/config

For this tutorial we have a script that recursively updates the configs. It injects the contents of labourer.json into the existing configs of all the Essentials. It will also create a config (with placeholders for your credentials) for the Worker Lambda itself out of self.json

cd /var/app/sosw/examples
python3 tutorial_pull_tweeter_hashtags

Now we provide the Twitter API credentials. You should also save them to the DynamoDB config table: either manually, or simply paste them into the configuration of our specific Lambda: examples/workers/tutorial_pull_tweeter_hashtags/config/self.json

and run the script again:

cd /var/app/sosw/examples
python3 tutorial_pull_tweeter_hashtags

In case you don’t have the credentials yet, you can still go on to the next step. The tutorial function will simply skip calling the Twitter API and exit safely.


Step 3

Great! We are ready to run the tutorial.

To start, you simply call the sosw_scheduler Essential with the job that we constructed earlier. The payload for the Scheduler must also have the labourer_id, which is the name of the Worker function itself. The sample payload is saved in the task.json file in the repository.

cd /var/app/sosw/examples/workers
PAYLOAD=`cat sosw_tutorial_pull_tweeter_hashtags/config/task.json`
aws lambda invoke --function-name sosw_scheduler \
    --payload "$PAYLOAD" /tmp/output.txt && cat /tmp/output.txt

In this example the call is synchronous, but this is not a requirement. As a result we see a list of tasks created in the DynamoDB sosw_tasks table. Each task has a part of the big job following the rules we have set for the chunking mechanism.

What happens to these tasks next? The Orchestrator runs every minute and starts invoking the Worker Lambda with the tasks from this queue. We have configured it to allow a maximum of 3 concurrent invocations for this Labourer, so it will take some time. You can adjust this speed in the config table in the sosw_orchestrator_config row.

Because the Orchestrator only runs once per minute, “free concurrency” really means: max_allowed - still_running_previously_invoked

For short-running tasks like these we could have spawned many more executions, but your batches outside of the tutorial could run much longer.
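The rule above can be sketched in a couple of lines. This is a simplification: the real sosw Orchestrator takes more settings into account, such as custom CloudWatch metrics per Labourer.

```python
def free_concurrency(max_allowed, still_running):
    """How many new Worker invocations the Orchestrator may start on
    this run, given tasks invoked earlier that are still running."""
    return max(max_allowed - still_running, 0)
```

With max_allowed = 3, the Orchestrator invokes up to 3 Workers on its first run, and on each following run only tops the pool back up to 3.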

After several minutes, when all the tasks are processed, you can check that the tasks have moved from sosw_tasks to sosw_closed_tasks. The results table of our Worker Lambda now holds the data we were pulling.

From our work we can see that people were tweeting more about concerts than movies and musicals combined!

What we have just built is a scalable and resilient serverless solution for a rather uncommon AWS Lambda use-case. At my current company, Better Impression, we have over 100,000 daily asynchronous invocations of ~50 different Lambda Workers orchestrated by this mechanism. No matter how simple it may seem, there are many other possibilities for this technology to explore.

In upcoming instalments of this series, we will work on examples with more advanced features of the sosw package, but for now check out this link for more information on sosw and contribute to sosw in the GitHub repository.
