An adventure with Vercel and Datadog

At FLOWN, we were looking for a good tool to give us better visibility over our application logs. Because, you know, it’s great when things go well, but you’d better have some decent logs when they go wrong. We wanted a setup that is easy to query and can alert us when something fails. After trying a few tools, we decided to set this up with Datadog. Our main product is a Next.js app hosted on Vercel. This post details our adventure with the two tools.
Draining the logs from Vercel:
Vercel has an integration https://vercel.com/integrations/datadog-logs for draining the logs into a Datadog instance. You can literally picture a drain that directs all the logs to Datadog, just like any sink.
Note that this only applies to the logs produced by Vercel; other information would either have to be monitored manually or wait until a suitable integration is developed.
What are we logging?
Datadog can be used to monitor invocations of Vercel Serverless Functions and any other relevant output that Vercel produces.
Vercel Serverless Functions emit two types of logs:
- Request logs — emitted whenever an HTTP request is made to a function. They are used to track which functions are called most often and how well they perform. These include the following info:
  - HTTP response code
  - Region
  - Request duration
  - Function name
  - Memory used
  - Invoked path URL
- Application logs — any output to `console.log` (see the example below). These help collect additional data for testing and debugging purposes.
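As a hypothetical example (the route and its log line are made up), a Serverless Function only needs a plain `console.log` for its output to show up as an application log:

```ts
// pages/api/health.ts — a hypothetical Next.js API route. Anything it writes
// to the console is drained to Datadog as an application log, alongside the
// request log for the invocation.
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  console.log("health check requested", { path: req.url });
  res.status(200).json({ ok: true });
}
```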
Formatting logs:
At FLOWN, we use Pino as the log formatter, and it needs a bit of extra work to integrate perfectly with Datadog. By default, [pino](https://www.npmjs.com/package/pino) uses integers to represent log levels. The table below summarises `pino`'s log levels:
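| Level | Numeric value |
| --- | --- |
| trace | 10 |
| debug | 20 |
| info | 30 |
| warn | 40 |
| error | 50 |
| fatal | 60 |

(These are pino’s documented default levels.)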

With this default setup, we noticed that all logs were being (erroneously) marked as INFO. For instance, the `level` in the log below is 50, which should be an ERROR, but it appears as INFO.
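For illustration, an entry along these lines (not our real output) comes through with the wrong status:

```json
{ "level": 50, "time": 1674648600000, "msg": "Failed to create booking", "err": { "message": "upstream timeout" } }
```

Datadog does not know that pino’s 50 means error, so the entry falls back to the default INFO status.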

This is no good! We can’t create alerts for these logs that will come through with a high signal-to-noise ratio.
To change this, we need to tell `pino` to format the log levels as strings, e.g. `error`, `warn`, `info`. The relevant part of the pino config looks roughly as follows:
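```ts
import pino from "pino";

// Sketch based on pino's documented `formatters.level` hook: emit the level
// as its string label ("info", "warn", "error", ...) instead of a number.
export const logger = pino({
  formatters: {
    level: (label) => ({ level: label }),
  },
});
```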

We then need to dive into `Pipelines`, which can be accessed under the `Configuration` tab in Datadog's `Logs` section. Pipelines are made up of `Processors` or other pipelines. To create a new pipeline, press the `Add a new pipeline` button and pick a meaningful name. You then need to create two `Processors`:
1. Add a `Grok Parser` to convert the data in the log messages into fields on the event object, from which attributes are later inferred. Pressing `Parse my logs` will set up everything for you (make sure Vercel has already fed some logs into Datadog).

2. Add another Processor, the `Status remapper`.
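For reference, ours ended up roughly like this (the rule name is arbitrary, and the attribute name depends on your logger; with pino it is `level`):
- Grok Parser — a single rule along the lines of `parse_pino_json %{data::json}`, which parses the JSON message into event attributes.
- Status remapper — reads the official log status from the `level` attribute.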

Your logs should now be classified correctly according to their levels, and you can filter them properly by status.
The original solution was found here: https://docs.datadoghq.com/logs/faq/why-do-my-logs-show-up-with-an-info-status-even-for-warnings-or-errors/
Monitors and Alerting:
Aside from having visibility over the logs and errors in our system, we wanted to be alerted of any issues so that we can respond as quickly as possible. In `Logs` you can define specific views and only alert when something is added there. The following query represents our view of errors in production:
@proxy.host:flown.com status:error
Note: By default, Vercel’s host property is tied to the Vercel deployment URL, which changes with every deployment. To consistently track your desired host, use the `@proxy.host` attribute instead. As you may have guessed, our website is hosted at https://flown.com.
We can then set up a Monitor to notify us when something is drained into this view. Under `Monitors` > `New Monitor` you can add a query for any of your log views and then alert the relevant users/channels (e.g. Slack).
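For illustration, the monitor message could look something like the following once the Slack integration from the next section is connected (`flown-alerts` is a made-up channel name; `{{value}}` is Datadog’s template variable for the value that triggered the alert):

```
Production errors detected on flown.com ({{value}} matching logs).
@slack-flown-alerts
```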

Bonus: Integrate with Slack:
Integrating with Slack lets alerts reach developers faster, should there be an error in the system. The setup differs depending on where your Datadog instance is hosted. For US customers, you can just use the Slack app and follow the instructions here. For EU customers, which was my case, the process is slightly different.
Note: I was unable to find the Datadog (legacy) app, so I used Slack’s Incoming Webhooks integration instead and tied it to a channel. Be aware that this is itself a legacy solution.
Under the manage section in Slack, find the Incoming Webhooks integration and add it to your channel. You are then given a `Webhook URL`; this is what you will add to Datadog.
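A quick way to sanity-check the webhook before wiring it into Datadog (the URL below is a placeholder; Slack’s Incoming Webhooks accept a JSON `text` payload):

```ts
// check-webhook.ts — run with tsx/ts-node on Node 18+ (global fetch).
const webhookUrl = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"; // placeholder

const res = await fetch(webhookUrl, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: "Test message from our Datadog setup" }),
});

console.log("Slack responded with", res.status); // 200 means the hook works
```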

In Datadog, under `Integrations`, choose Slack and connect to the webhook; this will allow you to notify a Slack channel with your Monitor. You should then start receiving alerts on your chosen Slack channel. Happy monitoring!
That’s all folks! There was not a lot of info on this topic, so we thought we’d share our experience externally. We hope it was helpful to someone. Feel free to comment if you find any issues and we’ll try to resolve them.