Streamlining MongoDB Deployment on AWS ECS with Terraform

Jens Båvenmark
Published in AWS Specialists
23 min read · Mar 26, 2024

This post is for those looking to integrate MongoDB into their infrastructure without relying on managed services like AWS’s DocumentDB and prefer a hands-on approach. I assume you have some background in Terraform, AWS, and Linux (for testing).

In this article, we will dive into deploying a MongoDB container on Amazon ECS (Elastic Container Service) using Terraform. This approach provides a straightforward, efficient deployment strategy: it leverages the official MongoDB image, enhances security with environment variables, and ensures data persistence with Amazon EFS (Elastic File System). It is ideal for applications that need a simple, single-instance MongoDB setup, avoiding the complexity and management overhead of larger cluster configurations.

I’ll be guiding you through the necessary steps and configurations to get your database environment set up in a scalable and secure way. In addition to establishing a foundational infrastructure for running MongoDB on AWS ECS using Terraform, we’ll also deploy an EC2 instance. This instance will serve as a test environment to verify that our MongoDB database is functioning correctly and that we can successfully write to it.

Accessing the Example Code

All the Terraform configurations and example codes discussed in this guide are available for direct access and reference. The complete set of files and detailed instructions can be found in our GitHub repository at https://github.com/JBVK/awsecsmongodb.

I encourage you to clone or download the repository to follow along with the examples, try out the configurations on your own, and modify them as needed for your specific use case. This resource is designed to provide a hands-on understanding of deploying MongoDB on AWS ECS using Terraform, making the process as straightforward and efficient as possible.

AWS architecture diagram for the ECS setup

Resources Overview

In this guide, we will deploy and configure the following resources essential for running MongoDB on AWS ECS using Terraform:

  • VPC (Virtual Private Cloud): Creates a secluded network space in AWS to facilitate secure and scalable deployment of resources.
  • Subnets: Segments the VPC into multiple subnets across different availability zones to support high availability and fault tolerance for our MongoDB container.
  • Internet Gateway (IGW): Enables internet access for our VPC, allowing communication between our ECS container and external networks.
  • Route Tables: Directs traffic within the VPC and to external destinations, ensuring that our MongoDB container can be reached and can communicate as needed.
  • Security Groups (SG): Acts as a virtual firewall for our ECS instances and the MongoDB container, controlling inbound and outbound traffic according to specified rules.
  • IAM Policies and Roles: Defines permissions for AWS services and resources, ensuring secure access controls are in place for ECS tasks, EFS, and other necessary services.
  • SSM Parameter Store: Stores the password MongoDB will use for the admin user when it is deployed.
  • ECS Cluster: Hosts our container tasks, providing the underlying infrastructure for our MongoDB deployment on ECS.
  • ECS Task Definition: Describes the MongoDB container, including its Docker image, CPU and memory requirements, and environmental variables for secure operation.
  • ECS Task: Represents a single running instance of the MongoDB container within our ECS cluster based on the specifications outlined in the task definition.
  • EFS (Elastic File System): Provides a scalable, elastic, cloud-native file storage for persistence, ensuring that our MongoDB data remains intact across container restarts and deployments.
  • Mount Targets: Enables EC2 instances and ECS tasks within our VPC to access the EFS file system, ensuring that our MongoDB container can persistently store data.
  • EC2 Instance: Acts as a client to test the functionality of our MongoDB deployment, verifying connectivity, data persistence, and performance.

This setup serves as an initial configuration for deploying MongoDB on AWS ECS. It focuses on the foundational aspects rather than optimizing for performance or adhering to advanced security best practices. However, we will include some practical tips for enhancing the security of your MongoDB deployment.

Before we dive into the specifics of each AWS resource for our MongoDB deployment, let’s establish the foundation of our Terraform configuration. This involves setting up the main.tf file, which will specify the Terraform settings, required providers, and the AWS provider configuration. This is a crucial step to ensure that our infrastructure as code is well-organized and that Terraform communicates correctly with the AWS API.

Terraform Configuration and AWS Provider Setup

In your main.tf file, start by defining the Terraform block and the required providers. Here, we specify that we're using the AWS provider from HashiCorp and lock in a specific version to ensure consistency in our deployments. While the versions mentioned are not strictly required to match, they are used and tested for this lab. It’s always a good idea to consult the Terraform and AWS provider documentation for compatibility and features when using different versions.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.40.0"
    }
  }
  required_version = ">= 1.6"
}

This configuration sets the groundwork for Terraform, ensuring that we’re using a compatible version of both Terraform itself and the AWS provider.

Next, configure the AWS provider with your desired region. In this example, we’re using eu-central-1 (Frankfurt), but you should adjust this to match the region most appropriate for your deployment.

provider "aws" {
region = "eu-central-1"
}

Finally, we include a data source to fetch the available availability zones in the specified region. This information is crucial for setting up our subnets in different zones to ensure high availability.

data "aws_availability_zones" "available" {}

This initial setup in the main.tf file is essential for a smooth and structured deployment process. It not only specifies the necessary version requirements and configurations for Terraform and AWS but also prepares the environment to fetch dynamic data needed for further configurations, like the availability zones for subnet creation.

Managing Sensitive Data

MongoDB requires a password for the admin user upon initialization to ensure it is secure. This password is provided to the container via an environment variable, MONGO_INITDB_ROOT_PASSWORD. There are several methods to handle this:

  • The simplest yet most insecure method is to directly specify the variable’s value in the Terraform code:
environment = [
  {
    name  = "MONGO_INITDB_ROOT_PASSWORD"
    value = "mongolabpassword"
  }
]
  • Another approach involves creating an AWS SSM (AWS Systems Manager) Parameter Store parameter or an AWS Secrets Manager secret with Terraform and using the Terraform Random provider to generate a password that is set as the value of the parameter/secret (a minimal sketch of this approach follows this list). However, this solution stores the password in plaintext in the Terraform state file. While keeping your state file secure is always advisable, storing any password in plaintext is generally not best practice.
  • A recommended practice, and the one I’ll use here, is to create an AWS SSM Parameter Store parameter with Terraform and assign it a placeholder value. Then, manually update the parameter’s value in the AWS Console. This method ensures the password cannot be found in the Terraform state file while still leveraging Terraform’s capability to link resources in the code without the need to manually find ARNs.
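For reference, a minimal sketch of the second approach above might look like the following, assuming the hashicorp/random provider (the resource labels here are illustrative, not part of the lab code). Keep in mind the caveat: the generated value still ends up in the Terraform state.

# Illustrative only: generates a password and stores it in SSM.
# Note: the generated value is kept in plaintext in the Terraform state file.
resource "random_password" "mongodb_admin" {
  length  = 24
  special = false
}

resource "aws_ssm_parameter" "mongodb_generated_password" {
  name  = "/mongodb/MONGO_INITDB_ROOT_PASSWORD"
  type  = "SecureString"
  value = random_password.mongodb_admin.result
}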

Parameter Store

First, we’ll create an AWS SSM Parameter Store parameter. We’ll then craft an IAM Policy to allow our ECS task to read and decrypt the parameter. In a production environment, you might opt for a customer-managed KMS key, but for this blog post, we’ll utilize the default AWS-managed KMS key for SSM (which is automatically used when no specific KMS key is specified).

resource "aws_ssm_parameter" "mongodb_secret_password" {
name = "/mongodb/MONGO_INITDB_ROOT_PASSWORD"
type = "SecureString"
value = "Dummy"

lifecycle {
ignore_changes = [value]
}
}

I’ve included a lifecycle rule in the resource to ignore changes to its value. This is essential since we’ll manually update the value, and without this rule, reapplying our Terraform code would overwrite the value with “Dummy” again.

Next, we’ll define the IAM Policy that enables ECS access to the parameter. We won’t attach this policy immediately, as we haven’t yet created the role that will use this policy.

resource "aws_iam_policy" "ssm_parameter_access" {
name = "ssm_parameter_access"
description = "Allow ECS tasks to access SSM parameters"

policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Action = [
"ssm:GetParameter",
"ssm:GetParameters",
"ssm:GetParametersByPath",
"kms:Decrypt"
]
Resource = aws_ssm_parameter.mongodb_secret_password.arn
}
]
})
}

With the Terraform configuration ready, apply the code to deploy the SSM Parameter (terraform init and terraform apply).

After applying the Terraform script, you’ll need to update the parameter with a real password in the AWS SSM Parameter Store.

  1. Log into the AWS Console and navigate to AWS Systems Manager > Parameter Store.
  2. Select your parameter and click on ‘Edit’.
  3. In the Value field, enter your new password.
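If you prefer the CLI over the Console, the same update can be done with aws ssm put-parameter (the value below is just a placeholder for your real password):

aws ssm put-parameter \
  --name "/mongodb/MONGO_INITDB_ROOT_PASSWORD" \
  --type SecureString \
  --value 'YourRealPasswordHere' \
  --overwrite \
  --region eu-central-1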

Once the parameter has been updated, we’re ready to proceed with the remaining Terraform configurations.

AWS SSM Parameter Store parameter in Console


VPC

After creating our SSM Parameter Store parameter, the next step is configuring the Virtual Private Cloud (VPC) where our MongoDB deployment will reside. This VPC serves as the foundational network environment for our application, encapsulating the MongoDB container and related AWS resources. By setting up a VPC, we establish a secure, isolated network space on AWS, which is a critical first step in deploying any cloud-based application. Let's proceed with defining the VPC and its components.

VPC Configuration

First, we define the Virtual Private Cloud (VPC) within which all our resources will reside. This VPC provides an isolated network environment for our MongoDB deployment.

resource "aws_vpc" "mongolab_vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
}

This VPC is configured with the CIDR block 10.0.0.0/16, enabling both DNS support and DNS hostnames to facilitate name resolution within the network.

Subnets Creation

Next, we create subnets within our VPC. These subnets allow us to organize resources into separate network segments across different availability zones for higher availability.

resource "aws_subnet" "subnets" {
count = 3
vpc_id = aws_vpc.mongolab_vpc.id
cidr_block = "10.0.${count.index}.0/24"
availability_zone = element(data.aws_availability_zones.available.names, count.index)
map_public_ip_on_launch = true
}

Here, we create three subnets, each with its own CIDR block within the 10.0.0.0/16 range. These subnets are spread across different availability zones and are configured to assign public IP addresses to instances by default.

Internet Gateway

An Internet Gateway (IGW) is essential for allowing communication between resources in your VPC and the internet.

resource "aws_internet_gateway" "mongolab_igw" {
vpc_id = aws_vpc.mongolab_vpc.id
}

This IGW is attached to our VPC, enabling internet access for resources within our network.

Route Table and Associations

To control the routing of traffic within and outside the VPC, we define a route table and associate it with our subnets.

resource "aws_route_table" "mongolab_route_table" {
vpc_id = aws_vpc.mongolab_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.mongolab_igw.id
}
tags = {
Name = "mongolab_route_table"
}
}

This route table includes a default route (0.0.0.0/0) pointing to the internet gateway, enabling outbound internet access for resources in the associated subnets.

resource "aws_route_table_association" "mongolab_route_association" {
count = length(aws_subnet.subnets.*.id)
subnet_id = aws_subnet.subnets[count.index].id
route_table_id = aws_route_table.mongolab_route_table.id
}

Each subnet is associated with the route table, ensuring that the routing policies are applied across all subnets.

IAM

Next, we’ll tackle the creation of IAM roles essential for our MongoDB container’s execution and operation within AWS ECS. These roles ensure that the ECS tasks have the necessary permissions to run and interact with other AWS services.

ECS Task Execution Role

This IAM role is pivotal for allowing ECS tasks to communicate with AWS services that are required to run containers. It provides permissions that ECS tasks need for execution, such as pulling images from ECR and writing logs to CloudWatch.

resource "aws_iam_role" "ecs_mongo_task_execution_role" {
name = "ecs_mongo_task_execution_role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
Effect = "Allow"
Sid = ""
},
]
})
}

We will then attach the AWS-managed policy AmazonECSTaskExecutionRolePolicy to the task execution role. This policy provides the permissions ECS needs to launch and operate tasks on our behalf: specifically, it allows tasks to retrieve container images from the Amazon Elastic Container Registry (ECR) and to publish container logs to Amazon CloudWatch. By incorporating this policy, we ensure that our MongoDB containers have the essential capabilities to operate within the AWS environment, such as logging and image retrieval. (Access to EFS is handled separately, via the task role we attach a policy to later.)

resource "aws_iam_role_policy_attachment" "ecs_mongo_task_execution_role_policy" {
role = aws_iam_role.ecs_mongo_task_execution_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

We will also attach the IAM Policy we created for our AWS SSM Parameter Store parameter to the task execution role so ECS can access the secure parameter when deploying the MongoDB container.

resource "aws_iam_role_policy_attachment" "ecs_ssm_parameter_access" {
role = aws_iam_role.ecs_mongo_task_execution_role.name
policy_arn = aws_iam_policy.ssm_parameter_access.arn
}

ECS Task Role

While the execution role is about the ECS service itself interacting with AWS services, the task role is more about giving the tasks (containers) themselves permissions to access AWS resources. This is crucial if your MongoDB needs to interact with other services like EFS for storage.

resource "aws_iam_role" "ecs_mongo_task_role" {
name = "ecs_mongo_task_role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Principal = {
Service = "ecs-tasks.amazonaws.com"
}
Effect = "Allow"
Sid = ""
},
]
})
}

At this point, we will not attach any policies to the role; that will be done after we have created the EFS.

These IAM roles are essential building blocks for security and access management in your MongoDB ECS deployment. The ecs_mongo_task_execution_role ensures that ECS can execute tasks on your behalf, handling essential operations like pulling container images. The ecs_mongo_task_role, on the other hand, grants your MongoDB tasks the necessary permissions to access and interact with AWS services, according to the specific needs of your application.

Security Groups

Next, we’ll establish the three necessary security groups for our setup: one for the ECS service, one for the Elastic File System (EFS), and one for the test EC2 instance. These security groups act as virtual firewalls, controlling inbound and outbound traffic to ensure secure communication and operation of our MongoDB deployment and supporting resources.

Security Group for the Test EC2 Instance

This security group is configured to allow SSH access to our EC2 instance, enabling us to connect via SSH for testing purposes against the MongoDB container.

resource "aws_security_group" "ec2_sg" {
name = "ec2_sg"
description = "Security group for EC2 instance"
vpc_id = aws_vpc.mongolab_vpc.id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}

Security Group for ECS MongoDB Tasks

Designed specifically for MongoDB tasks running within ECS, this group restricts inbound traffic to MongoDB’s default port (27017) from the EC2 instance security group.

resource "aws_security_group" "mongo_ecs_tasks_sg" {
name = "mongo-ecs-tasks-sg"
description = "Security group for ECS MongoDB tasks"
vpc_id = aws_vpc.mongolab_vpc.id
ingress {
from_port = 27017
to_port = 27017
protocol = "tcp"
security_groups = [aws_security_group.ec2_sg.id]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "mongolab-ecs-tasks-sg"
}
}

Security Group for EFS

This security group ensures that only traffic on the NFS port (2049) from within our VPC can reach the EFS, securing our file storage used by MongoDB for persistence.

resource "aws_security_group" "efs_sg" {
name = "efs-mongolab-sg"
description = "Security group for EFS"
vpc_id = aws_vpc.mongolab_vpc.id
ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = [aws_vpc.mongolab_vpc.cidr_block]
}
}

It’s important to note that allowing access from all the VPC CIDR blocks, as done in the security group for EFS, might not align with best practices for security, especially in a production environment. Ideally, we would structure our network to include separate subnets dedicated to the MongoDB container, spread across multiple availability zones for enhanced reliability and availability. Access to the EFS would then be restricted solely to these specific subnets, significantly narrowing the scope of allowed traffic and enhancing the overall security posture of our deployment.
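As a rough sketch of that tighter setup, the EFS security group could allow NFS only from the MongoDB ECS tasks' security group instead of the whole VPC CIDR. This is an illustration of the idea, not part of the lab code:

resource "aws_security_group" "efs_sg" {
  name        = "efs-mongolab-sg"
  description = "Security group for EFS"
  vpc_id      = aws_vpc.mongolab_vpc.id

  ingress {
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
    # Only the MongoDB ECS tasks may reach the NFS port
    security_groups = [aws_security_group.mongo_ecs_tasks_sg.id]
  }
}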

These security groups are tailored to their respective components, ensuring that each segment of our MongoDB deployment is protected according to its specific requirements.

EFS File System Creation

Now, let’s delve into setting up Elastic File System (EFS) for our MongoDB container, ensuring persistent storage across ECS task restarts or redeployments. This step is crucial for maintaining data durability and availability, especially in dynamic container environments.

EFS File System

First, we create the EFS file system. This file system will be encrypted to ensure data security, a critical feature for any production-grade application.

resource "aws_efs_file_system" "mongolab_file_system" {
creation_token = "mongoefs"
encrypted = true
tags = {
Name = "mongoefs"
}
}

By setting encrypted to true, we ensure that our data at rest is protected. The creation_token is a unique identifier for the file system.

While the configuration above ensures that our EFS file system is encrypted, it uses the default AWS-managed keys for encryption. For production systems, it’s advisable to consider the use of a customer-managed KMS key for encryption. This approach offers additional benefits, including enhanced control over the encryption keys and the ability to audit key usage. Utilizing a customer-managed KMS key allows for a more granular security posture, ensuring that access to the encryption keys is tightly controlled and monitored, aligning with best practices for securing sensitive data in production environments.
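If you do go the customer-managed route, a minimal sketch could look like this (the key resource and its settings are illustrative, not part of the lab code):

# Illustrative customer-managed KMS key for EFS encryption
resource "aws_kms_key" "efs_key" {
  description             = "Customer-managed key for MongoDB EFS encryption"
  deletion_window_in_days = 7
  enable_key_rotation     = true
}

resource "aws_efs_file_system" "mongolab_file_system" {
  creation_token = "mongoefs"
  encrypted      = true
  kms_key_id     = aws_kms_key.efs_key.arn

  tags = {
    Name = "mongoefs"
  }
}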

EFS Mount Targets

Next, we configure mount targets for the EFS in our subnets. This allows EC2 instances and ECS tasks within our VPC to mount the file system.

resource "aws_efs_mount_target" "efs_mount_target" {
count = length(aws_subnet.subnets.*.id)
file_system_id = aws_efs_file_system.mongolab_file_system.id
subnet_id = aws_subnet.subnets[count.index].id
security_groups = [aws_security_group.efs_sg.id]
}

Here, we create a mount target in each subnet associated with our MongoDB deployment, securing access through the dedicated EFS security group.

IAM Policy for ECS Access to EFS

To ensure our ECS tasks can interact with the EFS, we need to define an IAM policy granting necessary permissions.

resource "aws_iam_policy" "ecs_efs_access_policy" {
name = "ecs_efs_access_policy"
description = "Allow ECS tasks to access EFS"

policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"elasticfilesystem:ClientMount",
"elasticfilesystem:ClientWrite",
"elasticfilesystem:DescribeFileSystems",
"elasticfilesystem:DescribeMountTargets"
]
Resource = aws_efs_file_system.mongolab_file_system.arn
Effect = "Allow"
},
]
})
}

This policy includes permissions for mounting the EFS, writing data, and querying file system details, ensuring our ECS tasks can fully utilize the EFS for storage needs.

Attaching IAM Policy to ECS Task Role

Finally, we attach the newly created IAM policy to the ECS task role. This step authorizes our MongoDB tasks to access and use the EFS according to the defined permissions.

resource "aws_iam_role_policy_attachment" "ecr_efs_access_policy_attachment" {
role = aws_iam_role.ecs_mongo_task_role.name
policy_arn = aws_iam_policy.ecs_efs_access_policy.arn
}

With these configurations, our MongoDB ECS tasks will be equipped to utilize EFS for persistent storage, enhancing our database’s resilience and reliability within the AWS cloud environment.

ECS

Now, let’s proceed with configuring the ECS for our MongoDB setup. We’ll start by creating a CloudWatch log group for logging purposes.

Setting Up CloudWatch Logging

resource "aws_cloudwatch_log_group" "ecs_logs" {
name = "/ecs/mongolab"
retention_in_days = 30
}

This log group will capture logs from our MongoDB container, facilitating monitoring and debugging with a retention period set to 30 days.

Deploying the ECS Cluster

resource "aws_ecs_cluster" "mongolab_cluster" {
name = "mongolab-cluster"
}

An ECS Cluster acts as the backbone for running containers on AWS ECS. This cluster will host our MongoDB container instances, allowing us to manage them as a unified service. By creating a dedicated cluster for MongoDB, we ensure that our database environment is isolated and manageable.
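If you want richer container-level metrics, you could optionally enable CloudWatch Container Insights on the cluster. This is not part of the lab configuration, just a possible extension:

resource "aws_ecs_cluster" "mongolab_cluster" {
  name = "mongolab-cluster"

  # Optional: per-task/service CPU, memory, network and disk metrics
  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}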

Defining the ECS Task

The ECS Task Definition specifies how the MongoDB container should run. It includes configurations like the Docker image to use (mongo:7), resource allocation (CPU and memory), and the network mode. Importantly, it defines a volume for data persistence backed by EFS, ensuring that our MongoDB data is durable and persists across container restarts and deployments.

We will use Fargate as the launch type, instead of hosting it on EC2, so we don’t have to worry about maintaining the servers.

Given its critical role in deploying and configuring MongoDB on ECS, I’ll delve into each part of the Terraform code for the ECS Task Definition. At a high level, this resource can be dissected into two main categories: the configuration for running the container, which includes settings like compute resources and network mode, and the configuration of the container itself, which encompasses everything from the MongoDB image to environment variables and logging. Let’s start by examining the resource as a whole:

resource "aws_ecs_task_definition" "mongo_task_definition" {
family = "mongolab-mongodb"
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = "256"
memory = "512"
execution_role_arn = aws_iam_role.ecs_mongo_task_execution_role.arn
task_role_arn = aws_iam_role.ecs_mongo_task_role.arn

container_definitions = jsonencode([
{
name = "mongo",
image = "mongo:7",
cpu = 256,
memory = 512,
essential = true,
portMappings = [
{
protocol = "tcp"
containerPort = 27017
hostPort = 27017
}
]
mountPoints = [
{
sourceVolume = "mongoEfsVolume"
containerPath = "/data/db"
readOnly = false
},
],
environment = [
{
name = "MONGO_INITDB_ROOT_USERNAME"
value = "mongolabadmin"
},
{
name = "MONGO_INITDB_DATABASE"
value = "mongolab"
}
],
secrets = [
{
name = "MONGO_INITDB_ROOT_PASSWORD"
valueFrom = aws_ssm_parameter.mongodb_secret_password.name
}
],
healthcheck = {
command = ["CMD-SHELL", "echo 'db.runCommand(\\\"ping\\\").ok' | mongosh mongodb://localhost:27017/test"]
interval = 30
timeout = 15
retries = 3
startPeriod = 15
}
logConfiguration = {
logDriver = "awslogs"
options = {
awslogs-group = aws_cloudwatch_log_group.ecs_logs.name
awslogs-region = "eu-central-1"
awslogs-stream-prefix = "mongodb"
}
}
}
])

The initial section of the aws_ecs_task_definition resource sets the stage for how the MongoDB container will operate within ECS, specifically under the FARGATE launch type. This part is crucial as it determines the execution environment and resources allocated to your container. Let's break down these configurations:

resource "aws_ecs_task_definition" "mongo_task_definition" {
family = "mongolab-mongodb"
requires_compatibilities = ["FARGATE"]
network_mode = "awsvpc"
cpu = "256"
memory = "512"
execution_role_arn = aws_iam_role.ecs_mongo_task_execution_role.arn
task_role_arn = aws_iam_role.ecs_mongo_task_role.arn
  • family: A unique name for your task definition. Here, it’s set to “mongolab-mongodb,” which helps in identifying the task within the ECS service.
  • requires_compatibilities: Specifies the launch type required by the task. In this case, “FARGATE” indicates that the task is designed to run on AWS Fargate, which abstracts server and cluster management away from the user.
  • network_mode: Set to “awsvpc,” allowing the task to have its own network interface, a private IP address, and security groups. This mode is essential for tasks that require a high degree of network isolation.
  • cpu and memory: Define the computing resources available to the container. Here, the task is configured to use 256 CPU units and 512 MB of memory. These settings should be adjusted based on the workload and performance requirements of your MongoDB database.
  • execution_role_arn and task_role_arn: These parameters link to the IAM roles that we created before to allow our ECS task to access AWS resources. The execution role specifically enables ECS to perform system operations necessary for task management, including pulling Docker images from ECR and managing logs in CloudWatch. The task role is crucial for granting your MongoDB service permissions to directly interact with AWS resources. This includes accessing EFS for persistent data storage and making application-specific AWS API calls.

The second section of the aws_ecs_task_definition resource contains container_definitions, a critical piece that outlines the specifics of how your MongoDB container should run.

container_definitions = jsonencode([
  {
    name      = "mongo",
    image     = "mongo:7",
    cpu       = 256,
    memory    = 512,
    essential = true,
    portMappings = [
      {
        protocol      = "tcp"
        containerPort = 27017
        hostPort      = 27017
      }
    ]
    mountPoints = [
      {
        sourceVolume  = "mongoEfsVolume"
        containerPath = "/data/db"
        readOnly      = false
      },
    ],
    environment = [
      {
        name  = "MONGO_INITDB_ROOT_USERNAME"
        value = "mongolabadmin"
      },
      {
        name  = "MONGO_INITDB_DATABASE"
        value = "mongolab"
      }
    ],
    secrets = [
      {
        name      = "MONGO_INITDB_ROOT_PASSWORD"
        valueFrom = aws_ssm_parameter.mongodb_secret_password.name
      }
    ],
    healthcheck = {
      command     = ["CMD-SHELL", "echo 'db.runCommand(\\\"ping\\\").ok' | mongosh mongodb://localhost:27017/test"]
      interval    = 30
      timeout     = 15
      retries     = 3
      startPeriod = 15
    }
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        awslogs-group         = aws_cloudwatch_log_group.ecs_logs.name
        awslogs-region        = "eu-central-1"
        awslogs-stream-prefix = "mongodb"
      }
    }
  }
])

  • name: Identifies the container within the task definition. Here, it’s simply “mongo.”
  • image: Specifies the Docker image to use. Using mongo:7 ensures you are running a specific stable version of MongoDB.
  • cpu and memory: Allocate the computing resources for the container. These settings, 256 CPU units and 512 MB of memory, should be adjusted based on your MongoDB workload requirements.
  • essential: Marks the container as essential, meaning that if this container fails, ECS will stop the entire task.
  • portMappings: Maps the container port to the host port. MongoDB’s default port, 27017, is used here, allowing connections to the database.
  • mountPoints: Configures the volume mount for the container. This setup points to a volume named “mongoEfsVolume”, which will be defined below in the task definition to use Amazon EFS. This ensures data persistence across container restarts and deployments.
  • environment: Sets the environment variables necessary for initializing the MongoDB instance with a root username and default database. These values are crucial for securing your MongoDB instance. If these variables are not defined, MongoDB will be deployed insecurely and can be accessed without a username and password.
  • secrets: Sets the environment variables necessary for initializing the MongoDB instance with a root password. The difference between secrets and environment is that variables sent with environment can be seen in clear text in the task definition and logs. Variables in secrets are retrieved from AWS SSM Parameter Store or AWS Secrets Manager and the value is not accessible in the task definition or logs.
  • healthcheck: Defines how ECS checks the health of the MongoDB container. The command used here invokes MongoDB’s internal ping command to ensure the database is responsive. Adjust the intervals and timeouts as necessary for your environment.
  • logConfiguration: Specifies the logging driver and configuration. Here, it’s set to use awslogs to send logs to CloudWatch, utilizing the log group defined earlier. This setup is vital for monitoring and troubleshooting.

The third section of the ECS task definition is the volume block, where you define the external storage volumes your container will use. In this case, we're setting up a volume backed by Amazon EFS for MongoDB, ensuring data persistence beyond the lifecycle of individual container instances.

volume {
  name = "mongoEfsVolume"

  efs_volume_configuration {
    file_system_id     = aws_efs_file_system.mongolab_file_system.id
    transit_encryption = "ENABLED"
    authorization_config {
      iam = "ENABLED"
    }
  }
}

  • name: This is the identifier for the volume within the task definition. Here, it’s named “mongoEfsVolume”, which is the name referenced in the mountPoints section of the container definition.
  • efs_volume_configuration: This section specifies the details of the EFS volume to be attached to the container.
  • file_system_id: The ID of the EFS file system you want to mount. It is referenced from the EFS resource you defined earlier (aws_efs_file_system.mongolab_file_system.id), ensuring the task mounts the correct EFS filesystem.
  • transit_encryption: Set to “ENABLED” to encrypt the data in transit between your ECS container and the EFS file system. This is crucial for maintaining data security, especially if your ECS instances are distributed across multiple availability zones.
  • authorization_config: This block controls how the ECS task accesses the EFS volume.
  • iam: Setting this to “ENABLED” means that access to the EFS file system is governed by IAM policies.

With the task definition done, we will continue on to establishing service discovery to ensure your MongoDB container is easily reachable from other AWS resources using a DNS name. Service discovery simplifies communication between your services by providing a stable endpoint.

Setting Up Service Discovery for MongoDB

Creating a Private DNS Namespace

The first step in configuring service discovery is creating a private DNS namespace. This namespace allows services within it to be discovered by name within the specified VPC. Here, we’re creating a namespace named mongolab.local:

resource "aws_service_discovery_private_dns_namespace" "mongolab_monitoring" {
name = "mongolab.local"
vpc = aws_vpc.mongolab_vpc.id
}

Registering the MongoDB Service

With the namespace in place, you can now register your MongoDB service within this namespace. This registration enables other services in the VPC to resolve and communicate with your MongoDB instance using DNS:

resource "aws_service_discovery_service" "mongo_discovery_service" {
name = "mongodb"
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.mongolab_monitoring.id
dns_records {
ttl = 10
type = "A"
}
}
health_check_custom_config {
failure_threshold = 1
}
}

Key Components

  • name: The name of the service discovery service, mongodb, making it identifiable within the namespace.
  • dns_config: Specifies how the service will be discovered via DNS. The namespace_id ties this service to the mongolab.local namespace. DNS records are configured to have a type "A" with a TTL (time to live) of 10 seconds, ensuring that DNS queries are answered with IP addresses of the running MongoDB service instances.
  • health_check_custom_config: While ECS services typically use health checks defined in the task definition, service discovery allows for custom health check configurations. Here, we specify a failure_threshold of 1, indicating how many consecutive health check failures are tolerated before the service is considered unhealthy.

Integrating service discovery in this manner facilitates seamless connectivity within your AWS environment, making your MongoDB service easily accessible to other applications and services via a friendly DNS name.
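For example, an application in the same VPC could connect with a standard MongoDB connection string built on this DNS name. The database name and authSource=admin below reflect the setup in this guide (the root user is created in the admin database); replace <password> with your real password:

mongodb://mongolabadmin:<password>@mongodb.mongolab.local:27017/mongolab?authSource=admin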

We will now continue to the last part of the ECS setup, the ECS Service.

Deploying MongoDB as an ECS Service

This section of your Terraform configuration creates an ECS service for your MongoDB task, which orchestrates the deployment, scaling, and management of your containers.

resource "aws_ecs_service" "mongo_service" {
name = "mongolab-mongodb-service"
cluster = aws_ecs_cluster.mongolab_cluster.id
task_definition = aws_ecs_task_definition.mongo_task_definition.id
desired_count = 1
launch_type = "FARGATE"

network_configuration {
subnets = aws_subnet.subnets[*].id
security_groups = [aws_security_group.mongo_ecs_tasks_sg.id]
assign_public_ip = true
}

service_registries {
registry_arn = aws_service_discovery_service.mongo_discovery_service.arn
}

}
  • name: The name of your ECS service, mongolab-mongodb-service, uniquely identifies this service within the ECS cluster.
  • cluster: Specifies the ECS cluster (mongolab_cluster) where your service will run. This ties your MongoDB service to the previously defined ECS cluster.
  • task_definition: Points to the mongo_task_definition you defined earlier. This tells ECS which task definition to use when launching new instances of your container.
  • desired_count: Sets the number of desired instances of your MongoDB container. For this setup, we are using a single instance.
  • launch_type: FARGATE indicates that your service will run on AWS Fargate, which abstracts away the server management aspect, allowing you to focus on designing and building your applications.
  • network_configuration: Configures the networking for your service. It specifies the subnets where your containers will be launched, the security groups applied to your containers, and that a public IP should be assigned. The public IP is required so that the service can pull the image from Docker Hub.
  • service_registries: This allows your service to be registered with a specified service registry and point to the service discovery created before.

With all the ECS parts done, we can continue to the last part of this guide: the EC2 instance used for testing.

Deploying EC2 for testing MongoDB

Since the EC2 instance is only for testing purposes, I will not go into detail about its settings. It is configured so that mongosh can be run against the MongoDB deployment.

Key Pair for EC2 Instance Access

First, we create an AWS Key Pair, which will allow us to SSH into the EC2 instance securely. The public key is sourced from a file on your local system.

resource "aws_key_pair" "ec2_keypair" {
key_name = "mongolabkey"
public_key = file("~/.ssh/mongolab.pub")
}
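If you don't already have a key pair at that path, you can generate one locally first; the ed25519 key type here is just a suggestion:

ssh-keygen -t ed25519 -f ~/.ssh/mongolab -C "mongolab"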

We’re using an Amazon Linux 2023 AMI, a t2.micro instance type for cost efficiency, and situating the instance within a specific subnet of our VPC. The assigned security group ensures network access control, allowing SSH access for our testing purposes.

resource "aws_instance" "mongolab_ec2_instance" {
ami = "ami-01be94ae58414ab2e"
instance_type = "t2.micro"
subnet_id = aws_subnet.subnets[0].id
key_name = aws_key_pair.ec2_keypair.key_name
security_groups = [aws_security_group.ec2_sg.name]
user_data = <<-EOF
#!/bin/bash
# Add the MongoDB repository
echo '[mongodb-org-7.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/7.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-7.0.asc' | sudo tee /etc/yum.repos.d/mongodb-org-7.0.repo
# Update your system
sudo yum update -y
# Install MongoDB
sudo yum install -y mongodb-org
# Start the MongoDB service
sudo systemctl start mongod
# Enable MongoDB to start on boot
sudo systemctl enable mongod
# MongoDB Shell installation for testing
sudo yum install -y mongodb-mongosh
EOF
}

In the user_data script, we automate the setup to add the MongoDB repository, install MongoDB along with the MongoDB Shell (mongosh), and ensure the MongoDB service is started and enabled to run on boot.

Testing MongoDB

Now that the environment is deployed, we can test that MongoDB is running as expected.
First, let's verify that the ECS task is running and healthy, using either the CLI or the Console.

CLI

To check the status of your ECS service, you can use the AWS CLI with the following command:

aws ecs describe-services --cluster mongolab-cluster --services mongolab-mongodb-service --region eu-central-1 --output json

In the response, confirm that the service’s status is "ACTIVE" and that desiredCount matches runningCount, indicating that your MongoDB service is running as expected.

{
  "services": [
    {
      "serviceArn": "arn:aws:ecs:eu-central-1:384015117626:service/mongolab-cluster/mongolab-mongodb-service",
      "serviceName": "mongolab-mongodb-service",
      "clusterArn": "arn:aws:ecs:eu-central-1:384015117626:cluster/mongolab-cluster",
      "loadBalancers": [],
      "serviceRegistries": [
        {
          "registryArn": "arn:aws:servicediscovery:eu-central-1:384015117626:service/srv-cufualoujzfd5y5n"
        }
      ],
      "status": "ACTIVE",
      "desiredCount": 1,
      "runningCount": 1,
      "pendingCount": 0,
      ......
}
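If you only want the fields that matter here, a JMESPath --query filter can trim the output; the selection of fields below is just an example:

aws ecs describe-services --cluster mongolab-cluster --services mongolab-mongodb-service \
  --region eu-central-1 \
  --query 'services[0].{status: status, desired: desiredCount, running: runningCount}' \
  --output table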

Console

Alternatively, you can verify the service status through the AWS Management Console:

  1. Log in to the AWS Console.
  2. Navigate to ECS > Clusters > mongolab-cluster > Services > mongolab-mongodb-service > Tasks.
  3. Check that the “Last status” is “RUNNING” and “Health status” is “HEALTHY”.

Console view of ECS Task

With the ECS service confirmed to be running and healthy, the next step is to test MongoDB connectivity directly.

EC2 Instance Connection

SSH into your EC2 instance, which has been set up with the necessary tools and access for testing:

ssh -i ~/.ssh/mongolab ec2-user@<your-ec2-instance-public-dns>

Connect to MongoDB Using Service Discovery DNS Name

Once logged in, use mongosh to connect to your MongoDB database using the Service Discovery DNS name:

mongosh mongodb.mongolab.local --username mongolabadmin

When prompted, enter the password. If everything is configured correctly, you should successfully log into your MongoDB instance.
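To confirm that writes work as well, not just authentication, you can insert and read back a test document from mongosh; the collection name here is just an example:

use mongolab
db.smoketest.insertOne({ source: "ec2-test", createdAt: new Date() })
db.smoketest.find()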

Pitfalls

One issue you might encounter is applying the entire Terraform code at once without first setting a real password in the SSM Parameter Store parameter. Doing so will initialize MongoDB with the placeholder password “Dummy”, and subsequent changes to the parameter’s value will not affect the database. The MONGO_INITDB environment variables are only used by MongoDB during the initial database setup. After the container's first deployment, the database keeps its initial settings even if you deploy new configurations for the container, because the data persists in EFS, maintaining the state from the initial setup.

If you want to change the password you will need to do it from the MongoDB shell.
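A minimal way to do that from mongosh, assuming you are connected as the admin user (the new password below is a placeholder):

use admin
db.changeUserPassword("mongolabadmin", "YourNewStrongPassword")

If you do change it, remember to update the SSM parameter as well so the stored value stays in sync.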

Conclusion

By following the steps outlined in this guide, you can set up an efficient, single-instance MongoDB environment that is ready for testing and can be hardened further for production use.

For further exploration, check out AWS’s official documentation on ECS and MongoDB’s documentation.

Thank you for taking the time to read this guide. Don’t forget to follow for more AWS content.

Jens Båvenmark
AWS Specialists

DevOps "trained" CloudOps Engineer with focus on AWS.