Terraform Functionbeat

How to ship AWS Lambda logs with Functionbeat in a Terraform context

Pascal Euhus
FAUN — Developer Community 🐾

--


Anybody thinking about a solution for centralized logging and monitoring of their applications will quickly come across the Elastic Stack. With its sophisticated Kibana UI, the powerful Elasticsearch backend, and the Beats framework, its ecosystem of log and metric collectors, it offers a good all-in-one solution for a wide range of applications.

In the following, we will look at how logs from AWS Lambdas can easily get into the Elastic Stack. There is a simple off-the-shelf solution for this: Functionbeat. This Beat, which is itself an AWS Lambda, comes with an installer and uses CloudFormation to create the required AWS resources, such as IAM and security groups.
The biggest drawback, however, shows up if you do not want to stick to CloudFormation but would rather integrate Functionbeat into your existing IaC stack.
Fortunately, the corresponding Terraform module offers a solution. The module is a wrapper around the installer and takes care of the Functionbeat configuration and the required AWS resources.

It is assumed that a VPC and at least one subnet already exist where Functionbeat is to be deployed; a sketch of matching data source lookups follows below.
If not, a complete, more comprehensive Terraform example can be found in the official GitHub repo.
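
The module configuration shown later in this post references data.aws_vpc.vpc and data.aws_subnets.private. A minimal sketch of such lookups could look as follows; the tag values are placeholders for however your existing VPC and private subnets are labelled:

data "aws_vpc" "vpc" {
  tags = {
    Name = "my-vpc" # placeholder tag of the existing VPC
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.vpc.id]
  }
  tags = {
    Tier = "private" # placeholder tag identifying the private subnets
  }
}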

What we are going to deploy

The following concentrates on how to deploy Functionbeat as an AWS Lambda via Terraform, with optional direct attachment of CloudWatch log groups. As shown in the overview below, Functionbeat supports more inputs than just CloudWatch Logs. Even though the Functionbeat Terraform module has no built-in support for these additional triggers, it can be used as a foundation for attaching them the standard Terraform way, using the module's output of the actual Functionbeat ARN (see the sketch below the overview).

Overview of Functionbeat inputs (image by Elastic)
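
To give an idea of what this looks like, here is a hedged sketch of attaching an additional trigger, in this case an SQS queue, to the Functionbeat Lambda the standard Terraform way, once the module from the next section is in place. The output name lambda_arn and the queue resource are assumptions; check the module's outputs for the actual name.

# Attach a hypothetical SQS queue as an additional Functionbeat trigger
resource "aws_lambda_event_source_mapping" "functionbeat_sqs" {
  event_source_arn = aws_sqs_queue.logs.arn          # hypothetical queue defined elsewhere
  function_name    = module.functionbeat.lambda_arn  # assumed module output
}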

Integrate the Functionbeat module in Terraform

  1. Create a security group for Functionbeat
resource "aws_security_group" "functionbeat_securitygroup" {
name = "Functionbeat"
vpc_id = <REDACTED>

egress {
from_port = 443
protocol = "tcp"
to_port = 443
description = "HTTPS"
cidr_blocks = ["0.0.0.0/0"]
}
}

2. Integrate the Functionbeat module

module "functionbeat" {
source = "git::ssh://
git@github.com:PacoVK/terraform-aws-functionbeat.git"

application_name = "crazy-test-application" # (1)
functionbeat_version = "7.17.1" # (2)
lambda_config = {
name = "my-kibana-exporter" # (3)

vpc_config = {
vpc_id = data.aws_vpc.vpc.id
subnet_ids = data.aws_subnets.private.ids
security_group_ids = [ aws_security_group.functionbeat_securitygroup.id
]
}

output_elasticsearch = { # (4)
hosts : ["https://your-endpoint:443"]
protocol : "https"
username : "elastic"
password : "mysupersecret"
}
}
}

1) application_name => value added to every log entry in Kibana as a tag, useful for filtering
2) functionbeat_version => specifies which Functionbeat version to deploy
3) name => name of the deployed Functionbeat Lambda
4) output_elasticsearch => any valid Functionbeat YAML config for output.elasticsearch, written in HCL syntax

To further configure Functionbeat, you can use fb_extra_configuration to pass any valid option as an HCL construct into the module. To keep the transformation from YAML to HCL simple, it is recommended to use an online converter like YAML to HCL.
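
As a rough illustration (the exact shape of the object, nested keys versus dotted keys, should be checked against the module's documentation), raising the Functionbeat log level via fb_extra_configuration could look like this:

module "functionbeat" {
  # ...arguments from above...

  # HCL equivalent of the YAML snippet
  #   logging:
  #     level: debug
  fb_extra_configuration = {
    logging = {
      level = "debug"
    }
  }
}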

If you are hosting the Elastic Stack on Elastic Cloud and expect to ship a large volume of logs, I recommend using the PrivateLink feature to keep the traffic within AWS. This saves costs because the traffic won't leave the AWS backbone, so you won't be charged for egress traffic. Of course, you will be charged for the PrivateLink resources themselves.
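
A rough sketch of the AWS side of such a setup is shown below; the service name is a placeholder for the Elastic Cloud endpoint service in your region, and the hosts entry in output_elasticsearch then has to point at the endpoint as described in Elastic's PrivateLink documentation:

# Interface endpoint towards the Elastic Cloud endpoint service (placeholder service name)
resource "aws_vpc_endpoint" "elastic_cloud" {
  vpc_id            = data.aws_vpc.vpc.id
  service_name      = "com.amazonaws.vpce.<region>.vpce-svc-xxxxxxxxxxxxxxxxx" # placeholder
  vpc_endpoint_type = "Interface"
  subnet_ids        = data.aws_subnets.private.ids
  # plus a security group that allows inbound HTTPS from Functionbeat
}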

Once everything is in place, you can run the following to deploy Functionbeat:

terraform get && terraform apply

You won't see any logs in Kibana yet, since no log group subscriptions have been defined. There are several options to set them up.

Pure Terraform — subscribe a CloudWatch log group

To use the module's built-in CloudWatch subscription capability, pass the corresponding CloudWatch log group name to the Functionbeat module via the loggroup_name property, as sketched below.
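
A minimal sketch, assuming loggroup_name is a top-level module argument and using a hypothetical log group name:

module "functionbeat" {
  # ...arguments from above...

  # hypothetical log group of the Lambda whose logs should be shipped
  loggroup_name = "/aws/lambda/crazy-test-application-function"
}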

Integrate Lambdas deployed via Serverless Framework

If you use the Serverless Framework for your application Lambdas, this module offers an interface for that as well.
The Functionbeat ARN is written to SSM by default, so you can reference the parameter from within your Serverless specs.
Install the plugin serverless-plugin-log-subscription into your Serverless stack, which makes it a breeze to attach the corresponding CloudWatch log groups.

  1. Use the Functionbeat module, install the Lambda, and ensure lambda_write_arn_to_ssm is set to true (which is the default).
module "functionbeat" {
  # ...arguments from above...
  # lambda_write_arn_to_ssm defaults to true, so the ARN is exposed via SSM

  lambda_config = {
    name = "my-kibana-log-shipper"
  }
}

2. To attach the logs of all your Lambdas in your Serverless application, add the following plugin config to your serverless.yml

plugins:
  - serverless-plugin-log-subscription

custom:
  logSubscription:
    enabled: true
    destinationArn: '${ssm:my-kibana-log-shipper_arn}'

Apart from that, the Serverless plugin can also operate on a per-function level (please head over to the official docs).

Conclusion

We saw how to make use of Functionbeat in a Terraform context and how easy it is to integrate Functionbeat with existing Serverless-based applications. Of course, you can also use any other infrastructure-as-code tool by consuming the target Lambda ARN exposed via SSM.

Thank you for reading. You can reach me via:

Resources



--


Software-Engineer and DevOps-Enthusiast, AWS Solutions Architect Professional, GCP Professional Cloud Architect