How to configure cron to run an AWS Lambda using Serverless?

Bartosz Mikulski
Fandom Engineering
Jun 1, 2020

In this article, we are going to deploy an AWS Lambda function using Serverless, specify its dependencies as a Lambda Layer, and configure a cron trigger to run the function periodically.

Our example function is going to ping a REST endpoint once a day using the Python requests library. It is not a sophisticated use case, but it allows us to demonstrate how to add dependency libraries to the Lambda function. We could use that code to trigger an action in a browser-based online game or to check whether a website is available.

How to Install Serverless and Plugins

Before we start, we need to install and configure the Serverless tool. There are a few installation options. In this tutorial, we are using NPM to install Serverless:

npm install -g serverless
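
To confirm that the installation succeeded, you can print the installed version (the exact version number depends on when you run the command):

serverless --version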

When the installation finishes, we must configure the CLI by running the following command:

serverless
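
If you prefer a non-interactive setup, the CLI can also store AWS credentials for you; the key and secret values below are placeholders:

serverless config credentials --provider aws --key AKIA_EXAMPLE --secret EXAMPLE_SECRET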

Now, it is time to install the serverless-python-requirements plugin. This plugin lets us specify function dependencies in the requirements.txt file:

sls plugin install -n serverless-python-requirements

Serverless Configuration File

To generate the template of a new project, we must call the serverless create command and specify the template name, which identifies the provider and the programming language we want to use.

In our case, we use the template called aws-python3.

serverless create \
  --template aws-python3 \
  --name request-test \
  --path request-test

This command creates the project configuration file in the request-test directory (inside the current working directory). Let’s open the serverless.yml file. We are going to replace its content with:

service: request-test

provider:
  name: aws
  runtime: python3.8
  stage: 'prod'
  region: 'eu-central-1'
  deploymentBucket:
    name: 'test-lambda'

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux
    layer: true

What happens here? First, we specify the provider and the runtime. We are going to deploy the function on AWS and use Python 3.8 to write the code.

Serverless lets us distinguish between production and test environments by using the stage parameter (it can be any string). Our function runs in production, hence “prod.”
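
The stage from the configuration file can also be overridden at deploy time with a command-line flag, for example:

sls deploy --stage dev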

After that, we must choose the AWS region in which the function runs and the S3 bucket in which Serverless will store the function code.

To allow specifying dependencies in the requirements.txt file, we must enable the serverless-python-requirements plugin and instruct it to create a new AWS Layer that contains the installed libraries.
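
In our case, the requirements.txt file placed next to serverless.yml needs only one entry (pin a specific version if you want reproducible builds):

# requirements.txt
requests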

Function Code

Finally, we can implement our function. As promised, the function is going to be simple. All we need to call a REST API in Python is code like this:

import requests


def call_the_api(event, context):
    response = requests.get('https://api.github.com')
    return {
        'statusCode': response.status_code,
        'body': ''
    }

Our Lambda function propagates the status code of the response, but we are not going to do anything with the response body. We return the status code only to indicate whether the call failed or finished successfully.
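
Before wiring the handler into Serverless, we can sanity-check it with a plain Python call (assuming the requests library is installed locally; the handler ignores its event and context arguments, so None works for both):

# quick local smoke test of the handler
print(call_the_api(None, None))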

Function Configuration

We are going to assume that we have stored the function code in the example_func.py file. In the last configuration step, we have to add the function to the serverless.yml file:

functions:
  example_func_call_the_api:
    handler: example_func.call_the_api
    memorySize: 128
    layers:
      - {Ref: PythonRequirementsLambdaLayer}
    events:
      - schedule: cron(15 6 * * ? *)

First, we specify the identifier of the function. The name does not matter, but it is best to use something that tells you where the function is defined and what it is supposed to do. Because of that, we merged the file name and the function name to form the identifier.

Next, we must configure the function handler. It tells AWS which function it should call to execute this Lambda. We have defined the call_the_api function in the example_func.py file, so the module name is example_func and the function name: call_the_api.

After that, we specify the memory available during the function execution. We don’t need vast amounts of memory, so 128 MB should be enough. Remember that the AWS Lambda cost depends on the amount of reserved memory (not the memory actually used!) and the function execution time.
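
As a rough illustration of that billing model: at 128 MB (0.125 GB), a run that takes 2 seconds is billed as 0.125 GB × 2 s = 0.25 GB-seconds, no matter how much of that memory the code actually touched.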

The layers parameter configures the AWS Layers of the function. In this step, we are going to use the layer created by the serverless-python-requirements plugin. Note that we can reuse the same layer in multiple functions.
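
For example, a second, purely hypothetical function defined in the same project could attach the identical layer:

functions:
  example_func_call_the_api:
    # ... configuration shown above ...
  another_func:
    handler: example_func.another_handler  # hypothetical second handler
    layers:
      - {Ref: PythonRequirementsLambdaLayer}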

In the end, we have to define the trigger that causes the function execution. We want to use the cron scheduler. Because of that, we put schedule: cron(15 6 * * ? *) in the file.
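
As a side note, if the exact time of day does not matter, the schedule event also accepts AWS rate expressions:

events:
  - schedule: rate(1 day)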

Cron expressions in AWS Lambda

For those of you who are familiar with cron notation, the “15 6 * * ? *” expression may look weird. A standard cron expression has only five fields and does not use the question mark.

In AWS cron expressions, we have six fields: minutes, hours, day of the month, month, day of the week, and year. In addition to that, either the day-of-the-month or the day-of-the-week field must be a question mark. The question mark stands for “no specific value,” so the function runs on every day of the month or every day of the week, respectively.

In our example, we set the cron expression to “15 6 * * ? *” which means every day at 06:15 UTC.
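
A few more AWS cron expressions, for comparison:

cron(0 12 * * ? *)        # every day at 12:00 UTC
cron(0 18 ? * MON-FRI *)  # Monday through Friday at 18:00 UTC
cron(0/15 * * * ? *)      # every 15 minutes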

Testing and deployment

To test the AWS Lambda locally, we have to package it first:

serverless package

After that, the packaged function is available in the .serverless directory. We can run it locally inside a Lambda-like Docker container using the “serverless invoke local” command:

serverless invoke local --docker --skip-package --function example_func_call_the_api
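
If the endpoint responds, the local invocation should print something close to the following (the exact status code depends on the API):

{
    "statusCode": 200,
    "body": ""
}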

When we finish development and testing, it is time to deploy the code to production. For that, we run the following command:

sls deploy
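
After the deployment succeeds, we can invoke the function in AWS and tail its logs to confirm that the schedule fires:

sls invoke --function example_func_call_the_api
sls logs --function example_func_call_the_api --tail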
