Launching a New Region in AWS

Duygu Esen Dağlı
Delivery Hero Tech Hub
3 min read · Jun 10, 2019

Last sprint, we (the Global Joker team, JAWS) launched a second AWS region in N. Virginia (US East); our first region is in Frankfurt (Europe). We recently announced that Jaas (Joker as a Service) now runs in two regions, US East and Europe. In this post, we would like to share Joker's architecture and our experiences throughout the process of launching the new region in AWS.

About Jaas

Joker is a global service, developed by Yemeksepeti, that offers time-limited discounts to users. We opened our platform's APIs to the companies operating in 12 countries within Delivery Hero Holding so that they can also provide real-time Joker discounts to their customers. Most of these companies use different technology stacks (databases, programming languages and so on), so we chose FaaS (Functions as a Service) as an integration method that is easier to implement and cheaper to maintain. We designed a serverless architecture and built it on Amazon's serverless compute services, which run our code without us having to manage servers.

Architecture

  • Joker’s architecture consists of Amazon S3, Amazon API Gateway, AWS Lambda, Amazon Relational Database Service (RDS), Amazon SES, Amazon CloudWatch, and Amazon CloudFront
Joker’s Architecture

Migrating Existing AWS Resources to a New Region

As Delivery Hero expanded and the Joker service rolled out globally, we needed to deploy our resources to other AWS regions to address possible network latency. We decided to launch resources in the North Virginia region, which is an optimal location for our existing and upcoming countries.

There were exceptional cases that convinced us not to migrate all of the resources to the new region. For instance, we use an S3 bucket to statically host our admin panel, and migrating that resource is pointless since Amazon CloudFront already caches its contents at the edge location closest to the client.

The following services were manually created and configured.

Amazon VPC: Amazon Virtual Private Cloud (Amazon VPC) enables us to launch AWS resources in a secure, isolated network. We set up the subnets and route tables that make our Lambda functions reachable and allow Lambdas in a private subnet to connect to the internet through a NAT gateway.
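To make the routing part concrete, here is a minimal boto3 sketch of wiring a private subnet to a NAT gateway; all resource IDs are placeholders, not our real resources.

```python
# Minimal sketch: route a private subnet's outbound traffic through a NAT gateway.
# vpc-..., nat-... and subnet-... IDs below are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Route table for the private subnet that hosts the Lambda network interfaces.
route_table = ec2.create_route_table(VpcId="vpc-0123456789abcdef0")
rt_id = route_table["RouteTable"]["RouteTableId"]

# Send all outbound traffic through the NAT gateway in a public subnet,
# so Lambdas in the private subnet can still reach the internet.
ec2.create_route(
    RouteTableId=rt_id,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0123456789abcdef0",
)

# Associate the route table with the private subnet.
ec2.associate_route_table(RouteTableId=rt_id, SubnetId="subnet-0123456789abcdef0")
```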

Amazon Relational Database Service (RDS): Our database instances are configured as publicly inaccessible, which means no one can query the database from outside the local network unless their external IP is whitelisted by the security groups we configured. We chose Amazon Aurora with PostgreSQL compatibility. We launched three instances in our VPC: two multi-AZ instances for production and a single-AZ instance for testing. We defined the endpoint names of our RDS instances so that each stage maps to the appropriate instance.
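The sketch below shows how such an Aurora PostgreSQL setup could be provisioned with boto3; the identifiers, subnet group, security group and instance class are assumptions for illustration, not our actual configuration.

```python
# Hedged sketch: an Aurora PostgreSQL cluster that is not publicly accessible.
# All names and IDs are placeholders; secrets should come from a secret store.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Cluster definition: Aurora with PostgreSQL compatibility, inside our VPC.
rds.create_db_cluster(
    DBClusterIdentifier="joker-prod-cluster",
    Engine="aurora-postgresql",
    MasterUsername="joker_admin",
    MasterUserPassword="use-a-secrets-manager-instead",
    DBSubnetGroupName="joker-private-subnets",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
)

# Instance inside the cluster; PubliclyAccessible=False keeps it off the internet,
# so only traffic allowed by the security group can reach it.
rds.create_db_instance(
    DBInstanceIdentifier="joker-prod-instance-1",
    DBClusterIdentifier="joker-prod-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r5.large",
    PubliclyAccessible=False,
)
```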

Amazon API Gateway: We created API Gateways so that our Lambda functions are accessible via an endpoint. CORS options were configured and the required custom headers were added to the allowed-headers list. We enabled API Gateway logs in order to be able to monitor issues such as gateway timeouts.
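As one way this can look from the Lambda side, here is a small sketch of a proxy-integration handler returning CORS headers; the allowed origin and custom header names are illustrative assumptions, not our actual list.

```python
# Sketch: CORS handling in a Lambda proxy integration behind API Gateway.
# The header names and allowed origin below are examples only.
import json

ALLOWED_HEADERS = "Content-Type,Authorization,X-Api-Key,X-Country-Code"

def handler(event, context):
    headers = {
        "Access-Control-Allow-Origin": "*",
        "Access-Control-Allow-Headers": ALLOWED_HEADERS,
        "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
    }
    # Browsers send an OPTIONS preflight before requests with custom headers.
    if event.get("httpMethod") == "OPTIONS":
        return {"statusCode": 204, "headers": headers, "body": ""}
    return {"statusCode": 200, "headers": headers, "body": json.dumps({"ok": True})}
```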

AWS Lambda: We created Lambdas both for our core API application and for scheduled tasks. We prefer to use Lambda aliases: the test alias points to the $LATEST version configuration, and the prod alias points to the version we publish from $LATEST. The required subnets and security groups were attached to the Lambdas so they can access the internet and the databases in the local network.
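A boto3 sketch of that alias scheme is below; the function name "joker-core-api" is a placeholder, and the prod alias is assumed to exist already (it would be created once with create_alias on the first deploy).

```python
# Sketch: "test" tracks $LATEST, "prod" tracks an explicitly published version.
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# The test alias always points at the unpublished $LATEST configuration.
lam.create_alias(FunctionName="joker-core-api", Name="test", FunctionVersion="$LATEST")

# Releasing to prod: snapshot $LATEST as an immutable version, then move the alias.
version = lam.publish_version(FunctionName="joker-core-api")["Version"]
lam.update_alias(FunctionName="joker-core-api", Name="prod", FunctionVersion=version)
```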

Amazon CloudWatch: As we moved our serverless architecture and database system to the North Virginia region, we enabled CloudWatch logging and re-created all the necessary metrics and alarms in order to monitor the newly created region against unexpected situations.
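As an example of the kind of alarm we re-created, here is a boto3 sketch of an alarm on Lambda errors; the function name, threshold and SNS topic ARN are illustrative placeholders.

```python
# Sketch: alarm when a Lambda function reports errors in the new region.
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_alarm(
    AlarmName="joker-core-api-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "joker-core-api"}],
    Statistic="Sum",
    Period=300,                      # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:joker-alerts"],
)
```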

Why did we do this process manually?

Even though there are AWS services such as AWS CloudFormation and AWS SAM that could facilitate and automate this task, we preferred a manual migration. Since the existing region contained obsolete configurations and unused legacy resources, creating everything from scratch let the team build a clean, relevant region down to every detail.

Global Joker Team, JAWS
