Published in HackerNoon.com

Alex Casalboni

Apr 2, 2019 · 13 min read

How to FaaS like a pro: 12 less common ways to invoke your serverless functions on AWS [Part 1]

Yes, this is you at the end of this article, contemplating new possibilities! [Photo by Joshua Earle on Unsplash]
  1. Amazon Cognito User Pools — Users management & custom workflows
  2. AWS Config — Event-driven configuration checks
  3. Amazon Kinesis Data Firehose — Data ingestion & validation
  4. AWS CloudFormation — IaC, Macros & custom transforms

A bit of history first

When AWS Lambda became generally available on April 9th, 2015, it was the first Function-as-a-Service out there, and there were only a few ways to trigger your functions besides direct invocation: Amazon S3, Amazon Kinesis, and Amazon SNS. Three months later we got Amazon API Gateway support, which opened up a whole new world for the web and REST-compatible clients.

1. Amazon Cognito User Pools (custom workflows)

Cognito User Pools allow you to add authentication and user management to your applications. With AWS Lambda, you can customize your User Pool workflows by triggering your functions during Cognito’s operations. These are the available triggers:

  • Pre Sign-up — triggered just before Cognito signs up a new user (or admin) and allows you to perform custom validation to accept/deny it
  • Post Confirmation — triggered after a new user (or admin) signs up and allows you to send custom messages or to add custom logic
  • Pre Authentication — triggered when a user attempts to sign in and allows custom validation to accept/deny it
  • Post Authentication — triggered after signing in a user and allows you to add custom logic after authentication
  • Custom Authentication — triggered to define, create, and verify custom challenges when you use the custom authentication flow
  • Pre Token Generation — triggered before every token generation (for example, at sign-in, during new-password challenges, or on token refresh) and allows you to customize identity token claims
  • Migrate User — triggered when a user does not exist in the user pool at the time of sign-in with a password or in the forgot-password flow
  • Custom Message — triggered before sending an email, a phone verification message, or an MFA code and allows you to customize the message
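As a concrete illustration, here is a minimal sketch of a Pre Sign-up trigger; the trusted domain and the auto-confirmation policy are hypothetical examples, while the event/response shapes are the ones Cognito passes to the trigger:

```python
# Hypothetical policy: auto-confirm users signing up with a trusted email domain.
TRUSTED_DOMAIN = '@example.com'

def lambda_handler(event, context):
    email = event['request']['userAttributes'].get('email', '')
    if email.endswith(TRUSTED_DOMAIN):
        # Skip the verification step for users from the trusted domain
        event['response']['autoConfirmUser'] = True
    # Cognito expects the (possibly modified) event object back
    return event
```

You could also raise an exception here to deny the sign-up outright, which Cognito reports back to the client as an error.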

2. AWS Config (event-driven configuration checks)

AWS Config allows you to keep track of how the configurations of your AWS resources change over time. It’s particularly useful for recording historical values, and it also allows you to compare historical configurations with desired configurations. For example, you could use AWS Config to make sure all the EC2 instances launched in your account are t2.micro. You implement such checks as custom rules backed by Lambda functions, and each rule’s scope can be based on:

  • Tags (for example, resources with an environment or project-specific tag)
  • Resource Type (for example, only AWS::EC2::Instance)
  • Resource Type + Identifier (for example, a specific EC2 Instance ARN)
  • All changes
Your Lambda function will receive an event with the following fields:
  • invokingEvent represents the configuration change that triggered this Lambda invocation; it contains a field named messageType which tells you whether the current payload is related to a periodic scheduled invocation (ScheduledNotification), a regular configuration change (ConfigurationItemChangeNotification), or a change whose content was too large to be included in the Lambda event payload (OversizedConfigurationItemChangeNotification); in the second case, invokingEvent will also contain a field named configurationItem with the current configuration, while in the other two cases we will need to fetch the current configuration via the AWS Config History API
  • ruleParameters is the set of key/value pairs that you optionally define when you create a custom rule; they represent the (un)desired status of your configurations (for example, desiredInstanceType=t2.small ) and you can use its values however you want; let’s say this is a smart way to parametrize your Lambda function code and reuse it with multiple rules
  • resultToken is the token we will use to notify AWS Config about the config evaluation results (see the three possible outcomes below)
  • eventLeftScope tells you whether the AWS resource to be evaluated has been removed from the rule’s scope, in which case we will just skip the evaluation
Once the evaluation is done, we use the resultToken to report one of three possible outcomes back to AWS Config:
  • COMPLIANT if the current configuration is OK
  • NON_COMPLIANT if the current configuration is NOT OK
  • NOT_APPLICABLE if this configuration change can be ignored
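Putting it all together, here is a minimal sketch of a custom rule in Python that checks EC2 instance types; desiredInstanceType is a hypothetical rule parameter, and the compliance logic is factored out into a pure function so it can be tested without AWS credentials:

```python
import json

DEFAULT_TYPE = 't2.micro'

def evaluate_compliance(configuration_item, desired_type):
    """Pure compliance check, testable without AWS credentials."""
    if configuration_item['resourceType'] != 'AWS::EC2::Instance':
        return 'NOT_APPLICABLE'
    if configuration_item['configuration'].get('instanceType') == desired_type:
        return 'COMPLIANT'
    return 'NON_COMPLIANT'

def lambda_handler(event, context):
    import boto3  # imported lazily so evaluate_compliance stays testable offline
    invoking_event = json.loads(event['invokingEvent'])
    rule_parameters = json.loads(event.get('ruleParameters', '{}'))
    desired_type = rule_parameters.get('desiredInstanceType', DEFAULT_TYPE)

    # Assuming a regular ConfigurationItemChangeNotification, which embeds the
    # current configuration; scheduled/oversized payloads would require
    # fetching it via the AWS Config History API instead.
    item = invoking_event['configurationItem']
    compliance = ('NOT_APPLICABLE' if event.get('eventLeftScope')
                  else evaluate_compliance(item, desired_type))

    boto3.client('config').put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': item['resourceType'],
            'ComplianceResourceId': item['resourceId'],
            'ComplianceType': compliance,
            'OrderingTimestamp': item['configurationItemCaptureTime'],
        }],
        ResultToken=event['resultToken'],
    )
```

Since the desired instance type comes from ruleParameters, the very same function can back multiple rules with different parameters.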

3. Amazon Kinesis Data Firehose (data validation)

Kinesis Data Firehose allows you to ingest streaming data into standard destinations for analytics purposes such as Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk.

(Optionally, you might have API Gateway or CloudFront in front of Kinesis Firehose for RESTful data ingestion)
By connecting a Lambda function to your Firehose delivery stream, you can process and validate each incoming record before it reaches its destination; for each record, your function must return one of three result values:
  • Ok if the record was successfully processed/validated
  • Dropped if the record doesn’t need to be stored (Firehose will just skip it)
  • ProcessingFailed if the record is not valid or something went wrong during its processing/manipulation
  • Both incoming and outgoing records must be base64-encoded
  • I am assuming the incoming records are in JSON format, but you may as well ingest CSV data or even your own custom format; just make sure you (de)serialize records properly, as Kinesis Firehose always expects to work with plain strings
  • Make sure you append a trailing \n character to each encoded record so that Kinesis Firehose will serialize one JSON object per line in the delivery destination (this is required for Amazon S3 and Athena to work correctly)
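A minimal transformation function along these lines might look as follows; the user_id check is just a hypothetical validation rule, while the record/response shapes are the ones Firehose expects:

```python
import base64
import json

def lambda_handler(event, context):
    output = []
    for record in event['records']:
        try:
            payload = json.loads(base64.b64decode(record['data']))
            # Hypothetical validation: every record must carry a user_id
            if not payload.get('user_id'):
                raise ValueError('missing user_id')
            # Re-serialize with a trailing newline: one JSON object per line
            data = base64.b64encode(
                (json.dumps(payload) + '\n').encode('utf-8')
            ).decode('utf-8')
            output.append({'recordId': record['recordId'],
                           'result': 'Ok',
                           'data': data})
        except ValueError:
            # Invalid JSON or failed validation: hand the record back untouched
            output.append({'recordId': record['recordId'],
                           'result': 'ProcessingFailed',
                           'data': record['data']})
    return {'records': output}
```

Records you simply don't want delivered would get result Dropped instead, and Firehose would skip them silently.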

4. AWS CloudFormation (Macros)

We have already seen many CloudFormation templates so far in this article. That’s how you define your applications and resources in a JSON or YAML template, and CloudFormation allows you to deploy the same stack to multiple AWS accounts, regions, or environments such as dev and prod. With CloudFormation Macros, you can have your own Lambda functions process templates before deployment, which takes three steps:

  1. Create a Lambda function that will process the raw template
  2. Define a resource of type AWS::CloudFormation::Macro (resource reference here), map it to the Lambda function above, and deploy the stack
  3. Use the Macro in a CloudFormation template

How to implement a CloudFormation Macro

Let’s now focus on the implementation details of the Lambda function performing the template processing. Your function will receive an event with the following fields:

  • region is the region in which the macro resides
  • accountID is the account ID of the account invoking this function
  • fragment is the portion of the template available for processing (could be the whole template or only a sub-section of it) in JSON format, including siblings
  • params is available only if you are processing a sub-section of the template and it contains the custom parameters provided by the target stack (not evaluated)
  • templateParameterValues contains the template parameters of the target stack (already evaluated)
  • requestId is the ID of the current function invocation (used only to match the response)
Your function is expected to return a response with three fields:
  • requestId must match the request ID provided as input
  • status should be set to the string "success" (anything else will be treated as a processing failure)
  • fragment is the processed template, including siblings
There are four typical scenarios for a macro:
  1. Your function processes some resources and customizes their properties (without adding or removing other resources)
  2. Your function extends the input fragment by creating new resources
  3. Your function replaces some of the resources — potentially your own custom types — with other real CloudFormation resources (note: this is what AWS SAM does too!)
  4. Your function does not alter the input fragment, but intentionally fails if something is wrong or missing (for example, if encryption is disabled or if granted permissions are too open)
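Here is a minimal sketch of such a processing function; the DeletionPolicy transform is a hypothetical example of scenario 1 (customizing properties without adding or removing resources):

```python
def lambda_handler(event, context):
    fragment = event['fragment']
    # Hypothetical transform: give every resource in the fragment a
    # Retain DeletionPolicy, unless one is already set
    for resource in fragment.get('Resources', {}).values():
        resource.setdefault('DeletionPolicy', 'Retain')
    return {
        'requestId': event['requestId'],  # must echo the input request ID
        'status': 'success',              # anything else is treated as failure
        'fragment': fragment,             # the processed template
    }
```

Note that the fragment arrives already parsed into a plain dictionary, so the transform is ordinary data manipulation, not string processing.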

How to deploy a CloudFormation Macro

Once you’ve implemented the processing function, you can use it to deploy a new macro.
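For example, assuming the processing function is defined elsewhere in the same template as a resource named MacroFunction (all names here are hypothetical), the macro resource might look like this:

```yaml
Resources:
  MyMacro:
    Type: AWS::CloudFormation::Macro
    Properties:
      Name: MyMacro
      FunctionName: !GetAtt MacroFunction.Arn
```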

How to use a CloudFormation Macro

Using a macro is the most likely scenario for most developers.
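For example, assuming a macro named MyMacro has already been deployed in your account, you apply it to a whole template with the top-level Transform directive (the bucket resource is just a placeholder):

```yaml
Transform: MyMacro  # hypothetical macro name
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
```

During stack creation, CloudFormation invokes the macro’s Lambda function and deploys the fragment it returns instead of the raw template.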

Conclusions

That’s all for Part 1 :)