Detailed Serverless Monitoring Using a Completely Automated Approach

Sarjeel Yusuf
8 min read · Dec 6, 2018


As a developer, one of the most irritating things I face is that my attempts to implement good coding practices often render my code broken and filled with bugs. It is quite the tragicomedy of code development, all in the effort to comply with the ecosystem of DevOps. The problem usually occurs when I try to configure tools I am not familiar with, only for my code base to inevitably start throwing errors or behaving unexpectedly. I thus end up barraging senior developers with Slack messages, and Google with endless searches, in order to unravel the correct way of configuring these tools. Obviously, if one follows the tooling docs to the letter, it should be simple, right? Wrong! More often than not, the documentation does not address the specific use case you are trying to implement. Typically, developers suffer through lots of easily avoidable errors on the journey to becoming an expert in configuring and implementing that particular tool.

One example of a difficult tooling scenario is implementing a typical serverless monitoring tool. In order to collect even basic monitoring data, you often need to perform an excessive amount of in-code configuration. Usually, you are required to ‘wrap’ your code with the monitoring tool for monitoring to actually occur. Wrapping is the most popular way to integrate monitoring tools and collect basic data from your functions. However, this approach means you need to wrap numerous functions in order to successfully monitor an entire application.
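To make that integration burden concrete, here is a minimal sketch of what manual wrapping typically looks like in Node.js. The monitoring SDK and its `createWrapper` helper are purely illustrative, not any particular vendor's API; the point is that this boilerplate has to be repeated in every handler you want to monitor.

```javascript
// Hypothetical monitoring SDK; the package and function names are illustrative.
const monitoring = require("some-monitoring-sdk");

// Every handler needs to be wrapped like this before it reports any data.
const wrap = monitoring.createWrapper({ apiKey: process.env.MONITORING_API_KEY });

exports.handler = wrap(async (event, context) => {
  // ...the actual business logic of the function...
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
});
```

Multiply that across dozens or hundreds of handlers and the chances of a typo or a forgotten function grow quickly.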

Furthermore, in addition to the integration burden, configuring these tools can also prove tedious. It's too much to expect a single office stooge, say an unpaid intern, to wrap hundreds of functions and to do so successfully every time. No, there needs to be a simpler, less risky way of implementing monitoring for serverless applications.

Our team at Thundra realizes that this manual, labor-intensive approach simply isn't efficient, so we have tried hard to design a monitoring tool that is easy to use. Thundra offers its own serverless plug-in that provides automated wrapping to set up basic monitoring, along with automated instrumentation for more detailed monitoring. You don't need to change your code base just to monitor your code. Finally, adding monitoring to your Lambda functions is actually easier than writing your Lambda functions. This is the way it should be!

Our previous blog showed you how our automated instrumentation (for detailed monitoring data) works, especially for Node.js. Now, let’s take a look at how automated wrapping is implemented. For now, we support automated wrapping for Node.js and Python functions.

Automated Wrapping using the Thundra Plug-in

Automated wrapping is one of the latest developments in Thundra, aimed at making it easy to wrap Lambda functions and provide instant monitoring. Developed in the form of a serverless plug-in, Thundra's automated wrapper allows developers to simply specify their Lambda functions in a serverless.yml file and leave the wrapping to the Thundra serverless plug-in.

How does it work? Thundra reads the handler you specify in the serverless.yml file and creates an additional abstraction level that contains the Thundra wrapper. That new abstraction level then calls your specified handler.
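As a rough sketch, the serverless.yml ends up looking something like the following. The plug-in name and the `custom.thundra` key follow the Thundra docs at the time of writing; treat the service, function, and handler names as illustrative.

```yaml
service: user-service

provider:
  name: aws
  runtime: nodejs8.10

plugins:
  - serverless-plugin-thundra   # adds the automated wrapping step to deployment

custom:
  thundra:
    apiKey: ${env:THUNDRA_API_KEY}   # your Thundra API key

functions:
  getUser:
    # You keep pointing at your own handler; at deploy time the plug-in
    # re-points the deployed function at its generated wrapper, which in
    # turn calls this handler.
    handler: handlers/user.get
```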

Let’s say we have a Node.js Lambda function that interacts with a `user` repository, and in doing so makes use of external services such as DynamoDB and performs HTTP requests. By using Thundra’s automatic wrapper, I can simply add the serverless plugin to my serverless.yml and perform automatic wrapping as I deploy the function. When I view my deployed function in the AWS Lambda Console, I can verify from the Handler field that automatic wrapping has been performed.

AWS Lambda console
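For reference, the example function in this walkthrough looks roughly like the sketch below: a handler that delegates to a `user` repository module, which in turn talks to DynamoDB and an external HTTP endpoint. All names here are illustrative.

```javascript
// handlers/user.js (illustrative)
const AWS = require("aws-sdk");
const https = require("https");

const dynamo = new AWS.DynamoDB.DocumentClient();

// A tiny "repository" layer; this is the part we will instrument in detail later.
const userRepository = {
  async getById(id) {
    const result = await dynamo
      .get({ TableName: "users", Key: { id } })
      .promise();
    return result.Item;
  },
};

// Illustrative external HTTP call made during the invocation.
function fetchProfile(id) {
  return new Promise((resolve, reject) => {
    https
      .get(`https://example.com/profiles/${id}`, (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      })
      .on("error", reject);
  });
}

exports.get = async (event) => {
  const user = await userRepository.getById(event.pathParameters.id);
  const profile = await fetchProfile(user.id);
  return { statusCode: 200, body: JSON.stringify({ user, profile }) };
};
```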

Upon invoking my function, I can immediately see invocation details in my Thundra Web Console. Thundra automatically enables collection of external services trace data, so I can also see how my example function here is working with DynamoDB and HTTP calls:

Thundra traces after automated wrapping

More details can be found in the Thundra Docs along with specific instructions on how to automate the wrapping of Lambda functions.

As a result of automated wrapping, we can now wrap several functions all at once with a single command! No longer does the office stooge have to spend his valuable time on the complex and arduous task of wrapping one function after another (and potentially making just as many mistakes) in order to integrate serverless monitoring. Instead, with Thundra, anyone can now integrate monitoring into their code base with a simple serverless plug-in.

At the end of the day, we developers can use Thundra to perform more efficient and error-free wrapping of our Lambda functions.

What’s next? Wrapping your functions gives you basic monitoring information. But in order to really see deep into your serverless application, you must also conduct instrumentation. Adding more detailed monitoring often presents the same problem: developers manually add new code to monitor their code, potentially introducing new bugs along the way. However, Thundra also offers a simple, automated approach to instrumenting your code. Let’s show you how it’s done by configuring simple environment variables in your AWS Lambda console.

Adding Detailed Tracing — Automated Instrumentation by Configuring Environment Variables

Even though Thundra’s automatic wrapper solves the many woes of integrating monitoring tools, getting additional information from your Lambda environment can be challenging. Commonly, the developer adding monitoring needs to manually instrument their functions with code modifications and redeploy the functions whenever the configuration changes.

However, Thundra offers an automated approach to instrumentation that only requires configuring environment variables. It is easy to do and, most importantly, can be done without any modification of your code base or the need for redeployment.

After using the Thundra plug-in to automatically wrap my functions, I see basic monitoring information as in the screenshot below:

Thundra trace before instrumentation

The basic monitoring data I get from wrapping includes metrics, request and response values, and enriched trace data. I managed to procure all of this without having to open my code files even once.

But perhaps I am trying to locate a bug in my code and need more detail than just which external services my function is calling. I can further instrument my function to monitor how the repository layer behaves throughout the invocation, again without any changes to my Lambda function.

I do this by making use of the `thundra_agent_lambda_trace_instrument_traceableConfig` environment variable. By defining the files and functions I would like to monitor, as in the image below, I can instrument my Lambda functions so that the repository layer is traced.

Instrumentation with environment variables
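As a sketch, assuming the repository code lives in a module called `user.repository`, the variable might be set to something like the value below. The exact pattern syntax for matching files and functions is described in the Thundra Docs; this value is only illustrative.

```
thundra_agent_lambda_trace_instrument_traceableConfig = user.repository.*
```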

Now if I invoke my function, I get detailed traces showing which functions within the invocation make database calls, as seen in the image below:

This is great! But, it’s still not a complete picture of what is happening during the invocation. Let’s discuss how to extract even more information — still using the same, simple approach.

Going Deeper — Enriching Span Data by Expanding Environment Variable Configurations

For instance, perhaps I want to enable Thundra to trace my function arguments, any errors that may arise, and return values. With the current configuration, none of this information shows up, as you can see in the screenshot below:

Trace chart with trace arguments disabled

We can add additional information by simply setting property values, as shown in the image below:

Advanced instrumentation with environment variables
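Continuing the illustrative example from above, enabling these properties amounts to appending flags to the same traceable configuration, along the lines of the value below. The property names here follow the Thundra Docs at the time of writing; double-check the exact syntax there.

```
thundra_agent_lambda_trace_instrument_traceableConfig = user.repository.*[traceArgs=true,traceReturnValue=true,traceError=true]
```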

After setting these property values, my trace chart contains the arguments, errors, and return values. This information can be analyzed by clicking on a specific span, which displays its trace information in the `Arguments & Return` tab below the chart.

Thundra trace charts with trace arguments enabled

Finally, I have a good level of detailed information about what’s going on in my Lambda environment. I have high-level basic monitoring information that tells me what invocations are being executed and what external services they interact with. I also have detailed tracing information that tells me what database calls are being made, including the request and return values. This helps me pinpoint exactly where problems are occurring in the Lambda code execution and then go fix them.

The best part? I managed to achieve all of this without making any changes to my actual code base, only adding a single line in the AWS Lambda console! Why should I write code simply to monitor the code I already wrote? With Thundra there’s really no reason to! Moreover, environment variables give us many options for the data we collect. As one example, we could choose to disable the trace spans generated by interactions with external services.

Often, the best solution is truly the simplest. Now, with Thundra’s automated wrapping capability, you can completely avoid code changes in order to get even the most detailed monitoring information from your Lambda functions. Monitoring your Lambda functions is now as simple as deploying your Lambda functions. Configuring your Lambda functions is as easy as writing a single line of config to an environment variable. This simple, yet powerful, approach saves a huge amount of time and reduces the risk of messing up your precious Lambda code to nearly zero.

Want to try it for yourself? Signing up and using Thundra is totally free, so there’s no risk in trying it. Check out the Thundra Docs to automatically wrap and instrument your Lambda functions, and let us know what you think!

Originally published at blog.thundra.io.



Sarjeel Yusuf

An engineer turned product manager, passionate about cloud computing and everything DevOps. Product Manager @Atlassian building DevOps capabilities.