
An adventure with Vercel and Datadog

At FLOWN, we were looking for a good tool to give us better visibility over our application logs. Because, you know, it’s great when things go well, but you’d better have some decent logs when they go wrong. We wanted a setup that is easy to query and can alert us when something fails. After trying a few tools, we decided to set this up with Datadog. Our main product is a Next.js app hosted on Vercel. This post details our adventure with the two tools.

Draining the logs from Vercel:

Vercel has an integration, https://vercel.com/integrations/datadog-logs, for draining the logs into a Datadog instance. You can literally picture a drain directing all the logs into Datadog, just like a sink.

Note that this only applies to the logs produced by Vercel; any other information would have to be monitored manually until a new integration is developed.

What are we logging?

Datadog can be used to monitor invocations of Vercel Serverless Functions and any other relevant output that Vercel produces.

Vercel Serverless functions emit two types of logs:

  1. Request logs — emitted whenever an HTTP request is made to a function. They are used to track which functions are called most often and how well they perform. These include the following info:
    - HTTP response code
    - Region
    - Request duration
    - Function name
    - Memory used
    - Invoked path URL
  2. Application logs — anything written to console.log. These help collect additional data for testing and debugging purposes (see the sketch after this list).
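
As a minimal sketch (the route name and payload are hypothetical, not from our codebase), here is the kind of API route whose console output ends up as an application log:

```typescript
// pages/api/health.ts (hypothetical route).
// Anything written to the console here shows up in Vercel as an
// application log, alongside the request log Vercel emits for the
// invocation itself.
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  console.log("health check requested", { path: req.url });
  res.status(200).json({ ok: true });
}
```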

Formatting logs:

At FLOWN, we use Pino as the log formatter, and it needs a bit of extra work to integrate cleanly with Datadog. By default, [pino](<https://www.npmjs.com/package/pino>) uses integers to represent log levels. The table below summarises pino's default levels:

  Level: trace  debug  info  warn  error  fatal
  Value: 10     20     30    40    50     60

When draining these logs into Datadog, we noticed that all of them were being (erroneously) marked as INFO. For instance, the level in the log below is 50, which should be an ERROR, but it appears as INFO.
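
The original screenshot isn't reproduced here, but a representative pino line (the message and hostname are made up for illustration) looks something like this:

{"level":50,"time":1650000000000,"pid":8,"hostname":"vercel","msg":"Failed to create checkout session"}

Datadog does not know that 50 means error, so it falls back to filing the log under INFO.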

This is no good! Without correct levels, we can’t create alerts for these logs with a high signal-to-noise ratio.

To change this, we need to tell pino to format the log levels as strings, e.g. error, warn, info. The pino config looks as follows:
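
The exact snippet from the original post isn't reproduced here, but a minimal sketch of that configuration (assuming a recent pino version where the formatters option is available) looks like this:

```typescript
// logger.ts: shared pino instance (a sketch, not our exact config).
import pino from "pino";

export const logger = pino({
  formatters: {
    // Emit the level as its string label ("error", "warn", ...) instead
    // of pino's default numeric value, so Datadog can map it to a status.
    level: (label) => ({ level: label }),
  },
});
```

With this in place, logger.error("something broke") produces a line containing "level":"error" rather than "level":50, which is what the Datadog pipeline below keys off.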

We then need to dive into Pipelines, which can be accessed under the Configuration tab in Datadog's Logs section. Pipelines are made up of Processors or other pipelines. To create a new pipeline, press the Add a new pipeline button and pick a meaningful name. You then need to create two Processors:

  1. Grok Parser to convert the data in the log messages into fields on the event object, from which attributes are later inferred. Pressing Parse my logs will set everything up for you (make sure Vercel has already fed some logs into Datadog). An example rule follows this list.
  2. Add another Processor, the Status remapper.
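
For JSON log lines like the ones pino now emits, the Grok Parser's job is simply to turn the message into event attributes. If Parse my logs doesn't generate a rule for you, something along these lines should do it (an assumption based on Datadog's json grok filter, not a copy of our exact pipeline; the rule name is made up):

parse_pino_logs %{data::json}

The Status remapper then just needs to point at the attribute that now carries the string level (level, in pino's case), so that Datadog uses it as the log status.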

Your logs should now be classified correctly according to their levels, and you can filter them properly by status.

The original solution was found here: https://docs.datadoghq.com/logs/faq/why-do-my-logs-show-up-with-an-info-status-even-for-warnings-or-errors/

Monitors and Alerting:

Aside from having visibility over the logs and errors in our system, we wanted to be alerted of any issues so that we can respond as quickly as possible. In Logs you can define specific views and alert only when something new lands in them. The following query represents our view of errors in production:

@proxy.host:flown.com status:error

Note: By default, Vercel’s host property is tied to the Vercel deployment URL, which changes with every deployment. To track your desired host consistently, use the @proxy.host attribute instead. As you may have guessed, our website is hosted at https://flown.com.

We can then set up a Monitor to notify us when something is drained into this view. Under Monitors > New Monitor you can add a query for any of your log views and then notify the relevant users or channels (e.g. Slack).

Bonus: Integrate with Slack:

Integrating with Slack lets alerts reach developers faster should there be an error in the system. The setup differs depending on where your Datadog instance is hosted. US customers can just use the Slack app and follow the instructions here. For EU customers, which was my case, the process is slightly different.

Note: I was unable to find the Datadog (legacy) Slack app, so I just used the Incoming Webhooks integration and tied it to a channel in Slack. Be aware that this is a legacy solution.

Under Slack's Manage section, find the Incoming Webhooks integration and add it to your channel. You are then given a webhook URL; this is what you will add to Datadog.

In Datadog, under Integrations, choose Slack and connect it to the webhook. This allows your Monitor to notify a Slack channel, and you should then start receiving alerts on the channel you chose. Happy monitoring!
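
As a hedged example (the channel handle below is made up), the monitor's notification message can mention the integration-backed channel directly using Datadog's template variables:

{{#is_alert}} Errors detected in production {{/is_alert}} @slack-flown-alerts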

That’s all folks! There was not a lot of information on this topic, so we thought we would share our setup externally. We hope it was helpful to someone. Feel free to comment if you find any issues and we’ll try to resolve them.

Resources:

- Vercel Datadog Logs integration: https://vercel.com/integrations/datadog-logs
- pino: https://www.npmjs.com/package/pino
- Datadog FAQ on logs showing up as INFO: https://docs.datadoghq.com/logs/faq/why-do-my-logs-show-up-with-an-info-status-even-for-warnings-or-errors/

FLOWN is a deep-work-as-a-service company. We give you the tools and spaces to help you work and refuel better. This publication is maintained by the tech team.

