Update IP-Address in Route53 on ECS Fargate redeployments

Marc Logemann
Published in AWS Factory
Oct 12, 2018

We provide demo systems of our software, which we deploy on AWS Elastic Container Service (ECS) via Fargate. Unfortunately, it is not yet possible to attach a predefined Elastic Network Interface (ENI) with a fixed IP to a stack, which we could then reference in Route53. So we created a small cron-triggered Lambda that does the job for us by periodically checking ECS and updating the Route53 zone records.

The problem with a public IP address in ECS Fargate

If you have ever created an ECS stack with Fargate (instead of old-fashioned EC2), you will have noticed that AWS assigns a new public IP address each time you start the task. For this to happen, you simply need to supply the attribute

assign_public_ip: ENABLED

in the ecs-params.yml, which is picked up by the ecs-cli compose service up command. The advantages of Fargate itself are not the topic of this post; let's just say there are plenty of them, but a persistent public IP across redeployments is not one.
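For context, here is a minimal sketch of an ecs-params.yml for a Fargate task with a public IP. The subnet ID, security group ID and task sizes are placeholders; adjust them to your own VPC and workload.

version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-00000000"        # placeholder: a subnet in your VPC
      security_groups:
        - "sg-00000000"            # placeholder: your security group
      assign_public_ip: ENABLED    # the attribute in question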

Our goal was to bootstrap demo systems of our products quickly and on demand. In addition, we wanted to shut the systems down during non-working hours and redeploy the software on a weekly basis. Getting a new IP every time was a no-go for us, since we did not want to change the DNS records by hand after every redeployment.

The idea to solve the DNS update problem

Unfortunately, ECS cannot invoke a Lambda on events like “task bootstrapping completed”. You might know such helpful triggers from S3, like “ObjectCreated”, where you can easily attach a Lambda to an event. Our solution was to write a Lambda that is not triggered by an ECS event but by a cron schedule at short intervals.

What does the Lambda do?

Basically, the Lambda does four things:

  • Get the list of tasks of a running cluster
  • Get the ENI of the task
  • Get the public IP address of the ENI
  • Update the Route53 DNS record for a domain

The current implementation has some limitations, but it can easily be extended. We assume that there is only one active task at a time for a given cluster and that its ENI has a public IP address. The short sketch below shows which AWS SDK call covers each of the four steps; the full implementation follows later.
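As a rough outline (a sketch only, without error handling and with a single running task assumed), the four steps map onto these AWS SDK calls:

// Sketch: one pass over a single cluster entry.
let AWS = require('aws-sdk');
let ecs = new AWS.ECS();
let ec2 = new AWS.EC2();
let route53 = new AWS.Route53();

async function updateDnsForCluster(cluster, domain, zoneid) {
    // 1. get the list of tasks in the running cluster
    let tasks = await ecs.listTasks({ cluster: cluster }).promise();

    // 2. get the ENI of the (single) task
    let described = await ecs.describeTasks({ cluster: cluster, tasks: [tasks.taskArns[0]] }).promise();
    let eniId = described.tasks[0].attachments[0].details
        .find(d => d.name === "networkInterfaceId").value;

    // 3. get the public IP address of the ENI
    let enis = await ec2.describeNetworkInterfaces({ NetworkInterfaceIds: [eniId] }).promise();
    let publicIp = enis.NetworkInterfaces[0].PrivateIpAddresses[0].Association.PublicIp;

    // 4. upsert the Route53 A record for the domain
    await route53.changeResourceRecordSets({
        HostedZoneId: zoneid,
        ChangeBatch: {
            Changes: [{
                Action: "UPSERT",
                ResourceRecordSet: {
                    Name: domain, Type: "A", TTL: 300,
                    ResourceRecords: [{ Value: publicIp }]
                }
            }]
        }
    }).promise();
}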

How to configure the Lambda

We use the Serverless Framework (https://serverless.com) to deploy our Lambdas, so I will post a snippet from the serverless.yml configuration file. If you don't use this framework, you can still easily see which AWS resources this configuration creates behind the scenes.

...
demoroute53:
  memorySize: 128
  environment:
    entryparam: "[{\"cluster\": \"mycluster\",\"domain\": \"demo.myproduct.com\",\"zoneid\": \"Z1Q33876530G25\"}]"
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ecs:ListTasks"
        - "ecs:DescribeTasks"
        - "ec2:DescribeNetworkInterfaces"
        - "route53:ChangeResourceRecordSets"
      Resource: "*"
  handler: demoroute53.handler
  description: changes route53 record set for demo domains
  timeout: 10 # optional, in seconds, default is 6
  events:
    - schedule: cron(*/5 * ? * MON-FRI *)
...

entryparam is the environment variable we use to pass configuration options to the Lambda.

[{
  "cluster": "mycluster",
  "domain": "demo.mydomain.com",
  "zoneid": "Z1Q33876530G25"
}]

As you can see, it is a JSON array of objects, which means you can easily provide more than one entry to process. The cluster is the name of the ECS cluster you want to work on. The domain is the subdomain you want to change in Route53, which is essentially a DNS record in the hosted zone referenced by zoneid.

Furthermore, we see the cron trigger and the IAM permissions the script needs to work on the related AWS resources. If you wonder why I placed them in the function block of the Serverless Framework config file: there is a plugin (serverless-iam-roles-per-function) that allows exactly that.
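As a brief sketch, the plugin is installed via npm and declared at the top level of serverless.yml:

# installed via: npm install --save-dev serverless-iam-roles-per-function
plugins:
  - serverless-iam-roles-per-function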

The JavaScript Lambda code

The implementation is done in JavaScript with the AWS SDK. Since Medium is not the best choice when it comes to code visualization, I also created a gist to copy and paste from.

https://gist.github.com/logemann/62efc6db2133da19711cb3e93cc07063

Basically, it does the things I mentioned in the section above. It is also not the most fault-tolerant script on earth, since I left out try/catch blocks, but I did not want to over-engineer such a small worker script. I assume you will modify it to your needs anyway.

let AWS = require('aws-sdk');
let ec2 = new AWS.EC2();
let ecs = new AWS.ECS();
let route53 = new AWS.Route53();

exports.handler = function (event, context) {
    console.log("********************************************");
    console.log(context.functionName);
    console.log("event: ", JSON.stringify(event));
    console.log("entryparam:", process.env.entryparam);
    console.log("********************************************");

    const result = async () => {
        // process every cluster/domain/zone entry from the config
        let configArray = JSON.parse(process.env.entryparam);
        for (let i = 0; i < configArray.length; i++) {
            console.log("--- Processing Cluster: " + configArray[i].cluster + " ---");
            let publicIp = await getPublicIpForCluster(configArray[i].cluster);
            console.log("Updating record '" + configArray[i].domain
                + "' (" + configArray[i].zoneid + ") with Public IP: " + publicIp);
            modifyDnsRecord(configArray[i].cluster, configArray[i].domain, publicIp, configArray[i].zoneid);
            console.log("--- End of Processing Cluster ---");
        }
    };

    result().then(() => {
        context.done(null, event);
    }).catch(reason => {
        console.log("processing failed: ", reason);
        context.done(new Error('failed!'), event);
    });
};

function modifyDnsRecord(clusterName, domain, publicIp, hostedZoneId) {
    let param = {
        ChangeBatch: {
            "Comment": "Auto generated Record for ECS Fargate Instance " + clusterName,
            "Changes": [
                {
                    // UPSERT either creates the A record or updates it with the new IP
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": domain,
                        "Type": "A",
                        "TTL": 300,
                        "ResourceRecords": [
                            {
                                "Value": publicIp
                            }
                        ]
                    }
                }
            ]
        },
        HostedZoneId: hostedZoneId
    };

    route53.changeResourceRecordSets(param, function (err, data) {
        if (err) {
            console.log(err, err.stack);
        }
    });
}

async function getPublicIpForCluster(clusterName) {
    // list the tasks of the cluster (we assume a single running task)
    let data = await ecs.listTasks({
        cluster: clusterName
    }).promise();
    let taskId = data.taskArns[0].split("/")[1];

    // get the task details
    data = await ecs.describeTasks({
        cluster: clusterName,
        tasks: [
            taskId
        ]
    }).promise();
    let eniId = "";

    // extract the "Elastic Network Interface" (ENI) id from the task attachment
    let detailsArray = data.tasks[0].attachments[0].details;
    for (let i = 0; i < detailsArray.length; i++) {
        if (detailsArray[i].name === "networkInterfaceId") {
            eniId = detailsArray[i].value;
            break;
        }
    }

    // get the public IP of the extracted ENI
    data = await ec2.describeNetworkInterfaces({
        NetworkInterfaceIds: [
            eniId
        ]
    }).promise();

    return data.NetworkInterfaces[0].PrivateIpAddresses[0].Association.PublicIp;
}
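For a quick local smoke test, you can set the environment variable and call the exported handler directly. This is a sketch that assumes the file above is saved as demoroute53.js and that valid AWS credentials are available in your environment; the config values are placeholders.

// run-local.js — hypothetical local test harness, not part of the deployed Lambda
process.env.entryparam = JSON.stringify([{
    cluster: "mycluster",
    domain: "demo.mydomain.com",
    zoneid: "Z1Q33876530G25"
}]);

const lambda = require('./demoroute53');

// minimal fake context object; only the fields the handler uses are provided
lambda.handler({}, {
    functionName: "demoroute53-local",
    done: (err, event) => console.log("finished:", err ? err : "ok")
});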

Final thoughts

As soon as AWS decides to support customer-provided ENIs for ECS Fargate stacks, this script will be more or less obsolete. But one advantage will remain: even if you could attach a separate ENI, you would be billed for it while your ECS stack is down, because you would be blocking an IP address from the Amazon AWS range. Our approach works pretty much like a DynDNS setup, where a client periodically checks the current IP and updates a DNS server somewhere.

If you want to know more about well-architected applications on AWS, feel free to head over to https://okaycloud.de for more information.
