Structured Logging using AWS Lambda Powertools for TypeScript

Winnie Ho
AMA Technology Blog
May 15, 2023

In addition to distributed tracing, logging is another critical component that supports observability and helps us prepare for operational excellence. Logging data, events, and errors in a consistent and machine-readable format enables easy integration with monitoring tools, thus improving the ability to identify trends and address performance issues in our applications.


The Logger utility of the AWS Lambda Powertools for TypeScript provides a convenient way to record key details of each Lambda invocation as JSON in CloudWatch. In this post, we will use a simple Lambda function with an SQS queue event source to show how we can quickly gain insights into the operation of a Lambda function and debug our application by using structured logging. The source code is in this GitHub repo.

We will use the AWS CLI to send a batch of messages to an SQS queue, which a Lambda function will then process.

Installation and Setup

We will install Powertools using the official public Lambda Layer, which keeps the deployment package small and speeds up deployment of our CDK application. In the stack, we reference the Powertools Layer and exclude the Layer modules from being bundled with the Lambda code:

// logger/lib/lambda-powertools-typescript-logger-stack.ts
// - The latest layer version was 11 at the time of writing

import * as cdk from 'aws-cdk-lib';
import { LayerVersion } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

// Reference the official public Powertools Lambda Layer by its ARN
const powertoolsLayer = LayerVersion.fromLayerVersionArn(
  this,
  'powertools-layer',
  `arn:aws:lambda:${
    cdk.Stack.of(this).region
  }:094274105915:layer:AWSLambdaPowertoolsTypeScript:11`
);

const lambdaPowertoolsMiddyFunction = new NodejsFunction(
  this,
  'lambda-powertools-middy-logger',
  {
    bundling: {
      // Exclude the Layer-provided modules from the bundled Lambda code
      externalModules: ['@aws-lambda-powertools/logger', '@middy/core'],
    },
    // packagesLayer is a separate layer defined elsewhere in the stack
    layers: [packagesLayer, powertoolsLayer],

    // other Lambda configurations...
  }
);

// SQS and event source...
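
The queue and event source pieces elided above live in the same stack file in the repo. A minimal sketch of that wiring, using illustrative construct IDs and batch size rather than the exact values from the repo, could look like this:

import { Queue } from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

// A standard SQS queue whose messages trigger the Lambda function
const queue = new Queue(this, 'logger-demo-queue');

// Register the queue as an event source for the function
lambdaPowertoolsMiddyFunction.addEventSource(
  new SqsEventSource(queue, { batchSize: 10 })
);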

Next, initialize the Logger instance outside the Lambda handler so that subsequent invocations can reuse it. Then we can log our first message:

// logger/src/lambdas/powertoolsMiddy/index.ts

import { Logger } from '@aws-lambda-powertools/logger';
import type { SQSEvent } from 'aws-lambda';

// Instantiate Logger once, outside the handler, so warm invocations reuse it
const logger = new Logger();

export const handler = async ({ Records }: SQSEvent): Promise<string> => {
  logger.info('Logger utility reporting for duty.');

  return `✅ ${Records.length} SQS records processed.`;
};

To trigger the Lambda function, use the AWS CLI to send a batch of messages to the SQS queue. This is the resulting JSON log entry in CloudWatch. The X-Ray trace ID is included whether or not tracing is enabled:

Logger includes a set of standard keys with the CloudWatch log.

Handling Common Lambda Logging Scenarios

Capturing Lambda Context Information

To keep track of cold starts and other context details such as memory size, we can use the injectLambdaContext Middy middleware:

// logger/src/lambdas/powertoolsMiddy/index.ts

import { Logger, injectLambdaContext } from '@aws-lambda-powertools/logger';
import middy from '@middy/core';
import type { SQSEvent } from 'aws-lambda';

const logger = new Logger();

const lambdaHandler = async ({ Records }: SQSEvent): Promise<string> => {
  logger.info('Logger utility reporting for duty.');

  return `✅ ${Records.length} SQS records processed.`;
};

// Wrap the handler with Middy and register the Logger middleware
export const handler = middy(lambdaHandler).use(injectLambdaContext(logger));

Logger injects extra context information into the CloudWatch log when enabled.

Logging Incoming Requests/Events and Responses

We often need to record the request and the success or failure response of an API call. In our case, we want to log the incoming SQS event to our Lambda. With Lambda context logging enabled as shown previously, we can log the SQS event with a simple setting in the Middy middleware:

// logger/src/lambdas/powertoolsMiddy/index.ts

import { Logger, injectLambdaContext } from '@aws-lambda-powertools/logger';
import middy from '@middy/core';
import type { SQSEvent } from 'aws-lambda';

const logger = new Logger();

const lambdaHandler = async ({ Records }: SQSEvent): Promise<string> => {
  logger.info('Logger utility reporting for duty.');

  return `✅ ${Records.length} SQS records processed.`;
};

export const handler = middy(lambdaHandler).use(
  injectLambdaContext(logger, { logEvent: true })
);

The { logEvent: true } configuration tells Logger to capture the SQS event in CloudWatch:

Logger can record the incoming event of a Lambda invocation. The event varies depending on the type of integration, like the SQS event shown here.

Note: To avoid logging sensitive information, filter out items such as credentials, financial data, and personally identifiable information. You can also use CloudWatch Logs data protection to audit and mask any accidentally logged sensitive data.

To record the successful or failed result of a Lambda invocation, we can log the response before the function returns:

// logger/src/lambdas/powertoolsMiddy/index.ts

const lambdaHandler = async ({ Records }: SQSEvent): Promise<string> => {
  let result = '';
  try {
    logger.debug('This is a debugging message which only shows in staging.');
    logger.info('Logger utility reporting for duty.');
    logger.warn('This is a warning.');
    logger.critical('Something critical happened.');

    // Uncomment next line to skip processing SQS messages and throw an error
    // throw new Error('Sample message thrown');

    // logSqsMessages is a helper defined elsewhere in the repo
    logSqsMessages(Records);

    result = `✅ ${Records.length} SQS records processed.`;
  } catch (error) {
    if (error instanceof Error) {
      logger.error(`${error.message} with a custom key`, { customKey: error });
    }
    result = '❌ No SQS record processed.';
  } finally {
    logger.info(result);
    return result;
  }
};
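
The logSqsMessages helper comes from the repo. A minimal sketch of such a helper, assuming it lives in the same module as the handler, reuses the module-level logger, and simply logs each record body, might look like this:

import type { SQSRecord } from 'aws-lambda';

const logSqsMessages = (records: SQSRecord[]): void => {
  records.forEach((record) => {
    // Attach the message body as an extra key on the log entry
    logger.info('Processing SQS message', { messageBody: record.body });
  });
};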

Each Logger method writes messages at the corresponding level to CloudWatch. For example, this is what a logger.error entry looks like in CloudWatch, with the thrown error message, the exact error location, and the stack trace:

Notice how we use a custom key when we pass the error object to logger.error. We can also pass the error object as the second parameter. In that case, the error will be logged with the default key name error.
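
For example, passing the error directly as the second argument, without wrapping it in an object, logs it under the default error key:

// Logged under the default "error" key instead of a custom key
if (error instanceof Error) {
  logger.error('Failed to process SQS records', error);
}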

Debugging Business Logic in Different Environments

While we often investigate issues in a staging or test environment by logging debugging messages, we do not want these messages to sneak into production. The Logger utility allows us to set how verbose the logging should be by using the Lambda environment variable LOG_LEVEL or the logLevel parameter when we instantiate Logger inside a Lambda function. Combined with the proper CDK stack prop, it is easy to ensure no unnecessary logging happens in production:

// logger/lib/lambda-powertools-typescript-logger-stack.ts

// env comes from the CDK stack props and identifies the deployment environment
const lambdaPowertoolsMiddyFunction = new NodejsFunction(
  this,
  'lambda-powertools-middy-logger',
  {
    environment: {
      LOG_LEVEL: env === 'staging' ? 'DEBUG' : 'WARN',
      // other variables...
    },
    // other Lambda settings...
  }
);

The Logger methods have different severities, from the lowest in debug to the highest in critical. Specifying LOG_LEVEL makes Logger print only items at the same or higher severity than the set level. So the previous snippet tells Logger to print everything in staging, but only items logged by warn, error, and critical in other environments.
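
The same threshold can also be set in code with the logLevel option when instantiating Logger:

// Equivalent to setting the LOG_LEVEL environment variable to WARN
const logger = new Logger({ logLevel: 'WARN' });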

Enforcing a Logging Standard

Organizations often establish a logging standard to ensure activities across all applications and services from different teams are recorded consistently. We can customize the log output by defining a custom log formatter and passing it to Logger when we initialize the instance:

// logger/src/lambdas/powertoolsMiddy/index.ts

import { CustomErrorFormatter } from './CustomLogFormatter';

const logger = new Logger({
  logFormatter: new CustomErrorFormatter(),
});

// logger/src/lambdas/powertoolsMiddy/CustomLogFormatter.ts

import { LogFormatter } from '@aws-lambda-powertools/logger';
// Type-only imports; the exact export path may vary with the Powertools version
import {
  LogAttributesWithContext,
  UnformattedAttributes,
} from '@aws-lambda-powertools/logger/lib/types';

// Map each log level to an icon for easier scanning in CloudWatch
enum logLevelIcon {
  DEBUG = '🐛',
  INFO = 'ℹ️',
  WARN = '⚠️',
  ERROR = '🚨',
  CRITICAL = '🔥',
}

export class CustomErrorFormatter extends LogFormatter {
  public formatAttributes(
    attributes: UnformattedAttributes
  ): LogAttributesWithContext {
    const uppercaseLogLevel = attributes.logLevel
      .toString()
      .toUpperCase() as keyof typeof logLevelIcon;

    return {
      awsRegion: attributes.awsRegion,
      timestamp: this.formatTimestamp(attributes.timestamp),
      serviceName: attributes.serviceName,
      logLevel: attributes.logLevel,
      lambda: {
        awsRequestId: attributes.lambdaContext!.awsRequestId,
        xrayTraceId: attributes.xRayTraceId!,
        name: attributes.lambdaContext!.functionName,
        coldStart: attributes.lambdaContext!.coldStart,
      },
      message: `${logLevelIcon[uppercaseLogLevel]} ${uppercaseLogLevel}: ${attributes.message}`,
    };
  }
}

This is what a logger.warn entry looks like in CloudWatch with the custom formatter applied:

Use a custom log formatter to customize how your log entries show up in CloudWatch.

Analyzing Log Data and Investigating Problems

With structured logging in place, we can quickly parse, filter, and analyze log data in CloudWatch Logs Insights, or in third-party log management tools if we ship logs from CloudWatch to an external application. For example, all the fields we define in our custom log formatter appear as autocomplete suggestions in the Logs Insights query editor:

Custom fields show up in CloudWatch Logs Insights.

Summary and What’s Next

This post demonstrates how the Logger utility in AWS Lambda Powertools for TypeScript gives us a convenient way to implement structured logging, which is fundamental to providing insights into the health of our applications. In the last post of the series, we will explore another aspect of operational excellence: metrics.

Winnie Ho is a Senior Software Developer @ Alberta Motor Association who has an insatiable desire to learn, read and write about all the things AWS and the web.
