Add monitoring to your Amplify app using the Amplify framework

Rui Pedrosa
Published in The Startup
Dec 12, 2019 · 9 min read

In case you’re using the AWS Amplify framework, you may want to proactively monitor your app, especially if you’re building something that will be used by others. By proactively monitoring, I mean getting notified when a function fails to execute, takes too long to run, or logs errors, having an end-to-end view of requests, and logging client errors: exactly the steps covered below.

As with most tasks in the software industry, there are multiple ways of getting it done. You could implement an Error Processor that notifies you when an error is logged, or go further and build a Log Analytics Solution. However, since you’re already using the AWS Amplify framework and CloudFormation templates, I just want to share a quicker and simpler way to get started, the one we used at manifestsoftware.io in our partnership with SRG Software, which:

  1. is aligned with the Amplify framework architecture;
  2. can be fully deployed with the Amplify CLI just by running the amplify push command, so it works consistently and reliably across all existing (and new) environments you may have;

Step 1: Get notified when your function fails to execute due to errors in the function code (response code 4XX)

We’ll:

  1. create our own “monitoring” custom category/CloudFormation stack that can be used by any other Amplify category by taking advantage of the dependsOn block in the backend-config.json file. The “monitoring” custom category/CloudFormation stack will create an SNS topic resource and a subscription. You can later update the template to, for example, create an SMS subscription or publish a message to Slack;
  2. set up a CloudWatch alarm that fires if AWS fails to invoke our function, using the SNS topic to notify us when that happens;

So let's start:

  1. Modify amplify/backend/backend-config.json in your project by creating a custom category/CloudFormation stack (a sketch follows the naming notes below):

It is preferable that you:
- name the category following camelCase format (we call it “monitoring”);
- name the resource following PascalCase format (we call it “Topics”).
Variables output by this stack can be used in other stacks by following the [categoryName][resourceName][variableName] convention (in this case, something like monitoringTopics[variableName]).
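A minimal backend-config.json entry for this custom category might look like the following (a sketch: the service value is illustrative, and the exact fields can vary between Amplify CLI versions; the key part is the awscloudformation provider plugin):

"monitoring": {
  "Topics": {
    "service": "SNS",
    "providerPlugin": "awscloudformation"
  }
}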

2. Under the amplify/backend folder, create a folder structure like the following:
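With the “monitoring” category and “Topics” resource names from step 1, that structure would be roughly:

amplify/
  backend/
    monitoring/
      Topics/
        parameters.json
        template.json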

3. “template.json is a cloudformation template, and parameters.json is a json file of parameters that will be passed to the cloudformation template. Additionally, the env parameter will be passed in to your cloudformation templates dynamically by the CLI”. In this case, we want to get notified by email using an SNS topic, so template.json and parameters.json can be as simple as this:

parameters.json
template.json

We’re creating an SNS topic called “alarms” and an “alarmsSubscriptionEmailProd” (or “alarmsSubscriptionEmailTest”) subscription;
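Concretely, a minimal version of those two files might look something like this (a sketch: the email addresses are placeholders, the logical resource names are illustrative, and the condition assumes your production environment is named “prod”):

parameters.json:

{
  "alarmsSubscriptionEmailProd": "alerts@example.com",
  "alarmsSubscriptionEmailTest": "dev-alerts@example.com"
}

template.json:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "SNS topic and email subscription used by CloudWatch alarms",
  "Parameters": {
    "env": { "Type": "String" },
    "alarmsSubscriptionEmailProd": { "Type": "String" },
    "alarmsSubscriptionEmailTest": { "Type": "String" }
  },
  "Conditions": {
    "IsProd": { "Fn::Equals": [ { "Ref": "env" }, "prod" ] },
    "IsNotProd": { "Fn::Not": [ { "Condition": "IsProd" } ] }
  },
  "Resources": {
    "alarms": {
      "Type": "AWS::SNS::Topic",
      "Properties": {
        "TopicName": { "Fn::Join": [ "-", [ "alarms", { "Ref": "env" } ] ] }
      }
    },
    "AlarmsEmailSubscriptionProd": {
      "Type": "AWS::SNS::Subscription",
      "Condition": "IsProd",
      "Properties": {
        "Protocol": "email",
        "TopicArn": { "Ref": "alarms" },
        "Endpoint": { "Ref": "alarmsSubscriptionEmailProd" }
      }
    },
    "AlarmsEmailSubscriptionTest": {
      "Type": "AWS::SNS::Subscription",
      "Condition": "IsNotProd",
      "Properties": {
        "Protocol": "email",
        "TopicArn": { "Ref": "alarms" },
        "Endpoint": { "Ref": "alarmsSubscriptionEmailTest" }
      }
    }
  },
  "Outputs": {
    "AlarmsTopicArn": { "Value": { "Ref": "alarms" } }
  }
}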

4. To make a function dependent on the monitoring category and have access to the “AlarmsTopicArn” output variable, it is as simple as adding a dependsOn block in the backend-config.json file:
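For example, the function entry in backend-config.json might look like this (a sketch; MyFunction is the placeholder function name used throughout this post):

"function": {
  "MyFunction": {
    "service": "Lambda",
    "providerPlugin": "awscloudformation",
    "build": true,
    "dependsOn": [
      {
        "category": "monitoring",
        "resourceName": "Topics",
        "attributes": [ "AlarmsTopicArn" ]
      }
    ]
  }
}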

As I mentioned before, the Amplify CLI passes the “AlarmsTopicArn” output variable of the monitoring category to our MyFunction in the form of a [category][resource][output-variable] parameter (in this case, monitoringTopicsAlarmsTopicArn);

5. And finally, create an alarm for when AWS fails to invoke your “MyFunction”:

MyFunction-cloudformation-template.json (just InvocationErrorAlarm)
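The alarm resource in MyFunction-cloudformation-template.json could be sketched roughly as follows, assuming the template declares a monitoringTopicsAlarmsTopicArn string parameter (filled in by the CLI thanks to the dependsOn block) and keeps the default LambdaFunction logical ID:

"InvocationErrorAlarm": {
  "Type": "AWS::CloudWatch::Alarm",
  "Properties": {
    "AlarmName": { "Fn::Join": [ "-", [ { "Ref": "LambdaFunction" }, "invocation-errors" ] ] },
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [
      { "Name": "FunctionName", "Value": { "Ref": "LambdaFunction" } }
    ],
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "TreatMissingData": "notBreaching",
    "AlarmActions": [ { "Ref": "monitoringTopicsAlarmsTopicArn" } ]
  }
}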

The alarm is very sensitive, as it fires on the first failure. We found this particularly helpful for an MVP launch, when the risk of errors is higher, but feel free to tweak it ;)

6. If you now run amplify push, you can see your “InvocationErrorAlarm” in CloudWatch alarms (“medium” is my environment name ;):

MyFunction-medium-invocation-errors (Pending confirmation)

Don’t forget to check the inbox of the email address you set in “alarmsSubscriptionEmailProd” (or “alarmsSubscriptionEmailTest”) and confirm the subscription:

MyFunction-medium-invocation-errors (Confirmed)

Congratulations! All set! Simple, no? Time to test (:

7. Just make AWS fail to invoke your function by updating your index.js to:

exports.handler = function (event, context) { //eslint-disable-line
  throw new Error("unexpected");
};

and voilà!
Your function is “in alarm”:

and you got an email like this:

Step 2: Get notified when your function takes too long to run

In step 1, we set up an alarm based on the “Errors” metric of the “AWS/Lambda” namespace. The AWS Lambda CloudWatch metrics also include a “Duration” metric, so getting notified when your function takes too long to run is as simple as creating a new alarm in the MyFunction CloudFormation template ;) :

MyFunction-cloudformation-template.json (just DurationAlarm)
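Again as a sketch, assuming a maxDurationInMs number parameter declared in the function's template and set in its parameters.json, the alarm might look like this:

"DurationAlarm": {
  "Type": "AWS::CloudWatch::Alarm",
  "Properties": {
    "AlarmName": { "Fn::Join": [ "-", [ { "Ref": "LambdaFunction" }, "duration" ] ] },
    "Namespace": "AWS/Lambda",
    "MetricName": "Duration",
    "Dimensions": [
      { "Name": "FunctionName", "Value": { "Ref": "LambdaFunction" } }
    ],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": { "Ref": "maxDurationInMs" },
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "notBreaching",
    "AlarmActions": [ { "Ref": "monitoringTopicsAlarmsTopicArn" } ]
  }
}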

Once again, this is a very sensitive alarm, as I'm setting maxDurationInMs in parameters.json to 1s (1000 ms). Let's see if it works by making our index.js take at least 2s to run:

exports.handler = function (event, context) { //eslint-disable-line
  console.log("Waiting for 2 seconds...");
  var millisecondsToWait = 2000;
  setTimeout(function () {
    console.log(`value1 = ${event.key1}`); // Called 2 seconds after the first console.log
    context.done(null, 'Success!'); // SUCCESS with message
  }, millisecondsToWait);
};

and voilà! You got an email! 👏

Step 3: Get notified when your function executes but logs errors

Until now, we’ve been setting up CloudWatch alarms based on AWS Lambda metrics. To get notified when a function logs an error using console.error("Your error message");, we need to set up an alarm that looks at CloudWatch Logs for a pattern that contains the “ERROR” text.
So, let's start by creating an:

  1. AWS::Logs::MetricFilter that “describes how CloudWatch Logs extracts information from logs and transforms it into Amazon CloudWatch metrics”:
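A sketch of the two resources in MyFunction-cloudformation-template.json might look like this (the custom metric namespace and name are illustrative, and the 14-day retention is just an example):

"LogGroup": {
  "Type": "AWS::Logs::LogGroup",
  "Properties": {
    "LogGroupName": { "Fn::Join": [ "", [ "/aws/lambda/", { "Ref": "LambdaFunction" } ] ] },
    "RetentionInDays": 14
  }
},
"ErrorMetricFilter": {
  "Type": "AWS::Logs::MetricFilter",
  "Properties": {
    "LogGroupName": { "Ref": "LogGroup" },
    "FilterPattern": "Error",
    "MetricTransformations": [
      {
        "MetricNamespace": "MyFunctionLogs",
        "MetricName": "ErrorCount",
        "MetricValue": "1",
        "DefaultValue": 0
      }
    ]
  }
}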

As you can see, we’re looking for a "FilterPattern": "Error" in the LogGroup resource. Having a LogGroup resource is not mandatory, as AWS automatically creates a log group for your function named /aws/lambda/function-name, but I prefer having the LogGroup defined in the CloudFormation template so I can set a default RetentionInDays and keep my AWS region as clean as possible ;)

2. One more AWS::CloudWatch::Alarm ;)
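This one watches the custom metric produced by the filter above instead of the AWS/Lambda namespace (again a sketch, reusing the illustrative MyFunctionLogs/ErrorCount names):

"LoggedErrorsAlarm": {
  "Type": "AWS::CloudWatch::Alarm",
  "Properties": {
    "AlarmName": { "Fn::Join": [ "-", [ { "Ref": "LambdaFunction" }, "logged-errors" ] ] },
    "Namespace": "MyFunctionLogs",
    "MetricName": "ErrorCount",
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "TreatMissingData": "notBreaching",
    "AlarmActions": [ { "Ref": "monitoringTopicsAlarmsTopicArn" } ]
  }
}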

3. Be sure to delete any existing CloudWatch log group for your function (/aws/lambda/MyFunction-{YOUR_ENV}) before you run amplify push, otherwise the stack cannot create the LogGroup resource because it already exists. After amplify push, you can see that there is a metric & alarm set up on the LogGroup resource:

LogGroup /aws/lambda/MyFunction-medium: AWS::Logs::MetricFilter
LogGroup /aws/lambda/MyFunction-medium: AWS::Logs::MetricFilter & Alarm

4. Let's see if it works by logging an error message even if the function succeeds:

exports.handler = function (event, context) { //eslint-disable-line
  console.error('Oh no, I just failed to send a welcome email on a Cognito PostConfirmation trigger but I don\'t want registration to fail so I\'m calling context.done with a success message');
  context.done(null, 'Success!'); // SUCCESS with message
};

and voilà! You got an email!

Step 4: End-to-end view of requests (including response times)

According to AWS:

With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components.

So, the question is, how hard is it?

Starting to track our MyFunction in X-Ray is as simple as:
1. Install the X-Ray SDK:

npm install aws-xray-sdk-core --save

2. Make sure that you have X-Ray enabled for your Lambda function in the CloudFormation template:

"LambdaFunction":{
"Type":"AWS::Lambda::Function",
"DependsOn": [ "AmplifyResourcesPolicy" ], ...

"Properties": {
...
"TracingConfig":{
"Mode":"Active"
}
...
}
},
..."AmplifyResourcesPolicy":{
"DependsOn":[
"LambdaExecutionRole"
],
"Type":"AWS::IAM::Policy",
"Properties":{
"PolicyName":"amplify-lambda-execution-policy",
"Roles":[
{
"Ref":"LambdaExecutionRole"
}
],
"PolicyDocument":{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"xray:PutTraceSegments",
"xray:PutTelemetryRecords",
"xray:GetSamplingRules",
"xray:GetSamplingTargets",
"xray:GetSamplingStatisticSummaries"
],
"Resource":"*"
}
]
}
}
},
...

and voilà! In X-Ray you now have a trace map (with average response times):

MyFunction-medium trace map

3. With a serverless backend in AWS, it is very likely that you'll want to use the aws-sdk to call other AWS services. If that is the case, be sure that the aws-sdk also uses X-Ray:

const AWSXRay = require('./node_modules/aws-xray-sdk-core');
// const aws = AWSXRay.captureAWS(require('./node_modules/aws-sdk'));

exports.handler = function (event, context) { //eslint-disable-line
  context.done(null, 'Success!'); // SUCCESS with message
};

If your function is being called through API Gateway, be aware that you can also enable X-Ray in your API, and you'll get a nice service map like this one out of the box:

X-Ray example service map

Pretty cool, no? :)

Note: If you get a “Failed to get the current sub/segment from the context” error when calling your function locally (amplify invoke function), be sure to set the AWS_XRAY_CONTEXT_MISSING environment variable to LOG_ERROR.

Step 5: Log client errors

Once again, there are multiple ways to log client app errors: typically you make an HTTP request to a server when an error happens, or you use a library/service that does that for you. However, suppose you just want a simple way to log and query client errors, at least to start with. Knowing that the Amplify framework has support for an analytics category on all platforms (iOS, Android, Web & React Native) that allows recording a custom event with attributes, why not use that functionality to send, for example, JavaScript errors? We can even use the AWS console (Pinpoint Analytics Events) right away to visualize errors that may be happening.

If you're building a web app, your Analytics API call can be something like this:

Analytics.record({
  name: 'js_error',
  // Attribute values must be strings
  attributes: {
    code: err.code,
    message: err.message,
    url: err.url
  }
});

If you start having those calls everywhere in your web app, you can keep your code more loosely coupled by taking advantage of the Hub module to notify when an error happens and implementing a listener that takes care of recording the custom event:
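A minimal sketch of that idea, assuming the newer Hub API from aws-amplify and a custom “errors” channel (channel and event names are illustrative):

import { Analytics, Hub } from 'aws-amplify';

// Registered once, e.g. in your app entry point: every error dispatched on
// the "errors" channel becomes a Pinpoint "js_error" event.
Hub.listen('errors', ({ payload }) => {
  const err = payload.data || {};
  Analytics.record({
    name: 'js_error',
    // Attribute values must be strings
    attributes: {
      code: String(err.code || 'unknown'),
      message: String(err.message || ''),
      url: window.location.href
    }
  });
});

// Anywhere in the app, dispatch instead of calling Analytics directly, e.g.:
// fetchSomething().catch((err) => Hub.dispatch('errors', { event: 'jsError', data: err }));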

This way, if you want to replace Pinpoint with a more powerful client error logging library, you hopefully only need to change code in a single place, i.e., your listener callback function.
You can also override the window.onerror function to catch errors that end up in the window.
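For example (a sketch, reusing the Hub import and the hypothetical “errors” channel from the snippet above):

// Route uncaught errors that bubble up to the window through the same channel.
window.onerror = function (message, url, line, column, error) {
  Hub.dispatch('errors', { event: 'jsError', data: error || { message: message, url: url } });
};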

Last but not least, don't forget to check for errors in the AWS console ;)

To sum up

Monitoring (and logging) is essential, especially if you're concerned about the availability and supportability of your software (two of the most important quality attributes (QAs) in software).
That being said, this is not intended to be a full list of what you should do around monitoring (and logging), nor an attempt to replace an application monitoring tool, but rather a quick and simple way of setting up a basic monitoring infrastructure using the Amplify framework. As happened to us at manifestsoftware.io in our partnership with SRG Software, it may help you quickly get a product to market using the Amplify framework and get notified when something goes wrong; and when something does go wrong, you may even fix it before the user notices ;) As the product matures, you can apply operations good practices to enhance your monitoring strategy, take advantage of more powerful application monitoring tools, etc.

Feedback is great, and I would love to hear what you think about it 😄

Special thanks to my colleagues Vladimir Celica & Axel Stenson for bringing this to life :)
