URL Redirection in Serverless

Ilana Polonsky
Melio’s R&D blog

--

In this blog post I’m going to introduce a use case we had at Melio, in which we needed to create a redirection from an old website with custom URL paths to a new website with a different path structure. We’ll look at the serverless components that make up the solution, and finish with implementation details so you can go ahead and build it right away.

But first of all, what does it mean to create a redirection?
When a user requests a certain URL, the request travels across multiple DNS servers, each holding records that point to the next layer of naming: for example, a “.com” server points to the DNS server holding the example.com record. The last DNS server in the chain returns the IP address of the actual web server.

The simplest form of redirection is to point the record at a different IP address; this way, all requests are forwarded as-is to the new IP. But what if we want to customize the request before forwarding it to the new IP? Let’s take a look at the use case we had…

Challenging use case

Melio is customer facing and uses an external tool for its service desk. Part of this tool is creating articles, such as popular Q&As about our product or technical support guides.
This product is a hosted service and not in our domain, so we created the subdomain ‘help-desk.mycompany.com’ in our domain and used a DNS record to point it to the external website. It’s also important to mention that this website has a unique URL path for all our articles: `https://old-desk.com/en/articles/<ARTICLE-NUMBER>`

After some time using the old desk tool, we decided to move to another service desk for several reasons.
This new service desk is also a hosted service, with a different URL path for articles: “https://new-desk.com/hc/en-us/articles/<NEW-NUMBER>”.

Disclaimer: We can’t change this URL path or control the number associated with each article; we can only change the DNS record to point to the new website. The downside is that the URL paths don’t match on both ends, so any customer who saved a link to an original technical guide would be redirected to a nonexistent URL path, and we would lose credibility.

Page ranking: moving a website from one URL to another can affect its page ranking (SEO), so we need to include 301 status codes in our redirection.

301 is an HTTP response status code meaning “Moved Permanently”. The response also includes a Location header that indicates the new location of the page, and search engines pass the ranking score from the old URL to the new one.

So how do we work around these challenges?

Doing it serverless

Using AWS (or any other cloud provider) is already the first step into serverless. No more buying servers, storing them in big cold rooms, and taking a trip just to replace a broken disk.

In our case, we’re going to use AWS services that let us focus on the configuration and functionality of the solution, and not on the provisioning and management of it.

Serverless services scale automatically and don’t require us to actively provision new resources. For example, in Route 53 you only need to create a hosted zone and a DNS record; all the network traffic behind the scenes is managed by AWS. So you don’t need to worry about managing DNS servers or scaling them up and down. Fun fact: AWS is committed to making Route 53 100% available (Route 53 SLA).

Component overview

Before jumping into the implementation details, I’d like to explain the components and services that we are going to use:

  • Route 53 is a scalable and reliable DNS service that offers a simple way to route traffic. It is also used for domain registration and management of subdomains.
  • CloudFront is a content delivery network that helps serve user requests. A CloudFront distribution defines the content to be delivered, and CloudFront caches that content at edge locations close to the user. This is effective in reducing latency and in reducing the load on your origin server.
  • Lambda@Edge is a feature of CloudFront that allows you to run custom code in response to CloudFront events, in the form of AWS Lambda functions.
  • CloudFormation lets you model, provision and manage AWS and third-party resources, by treating infrastructure as code.

Solution overview

The following diagram shows the path a request goes through in our architecture, starting from the user request leaving the computer.

When a request arrives at our managed subdomain “help-desk.mycompany.com” in Route 53, the DNS record resolves the name to a CloudFront distribution address (an alias record pointing to the CloudFront service).
For each request to the CloudFront distribution there are two options: either the content is cached at the edge location and delivered directly to the user, or CloudFront needs to retrieve it from the origin we set up.

When setting the origin, and the behavior for this origin, we use the Function associations feature. When data is retrieved from the origin, the Lambda function is invoked and the result is stored in the cache, so the next request is answered automatically and the Lambda function isn’t invoked again (lowering the cost).

The Lambda function is responsible for parsing the incoming request and returning a response to the user based on what was asked. The response will contain the 301 status code, with a Location header for the new URL.

Implementation Details

Let’s take a look at how to configure each of the services introduced:

Lambda function

Create a Lambda function, and use the Node.js 14.x runtime.

In our case, we used a function that redirects to a different URL path.

The function uses a dictionary to convert the old article number to the new one, and passes it to the new URL path. Any other URL requests are redirected to the new home page.

You can customize the returned URL value for your specific use case.

You can use the “Test” feature with the “CloudFront Access Request In Response” event template, and test different request scenarios by changing the “uri” value.

For the Lambda function permissions, follow this guide. After deploying the Lambda function, create a version and save its ARN.

CloudFront distribution

When creating a new distribution, use an HTTPS-only protocol, and under “Function associations”, for the “Viewer request” event, enter the Lambda ARN. Add the alternate domain name (CNAME) “help-desk.mycompany.com”, and make sure you add an SSL certificate for it (Create a certificate).

* If you’re using another distribution for the old website, you’ll need to remove the CNAME from it first, and only then add it to the new one (two distributions can’t use the same CNAME).

After creating the distribution, you can test how the Lambda and the distribution are behaving, using cURL as the command-line tool to create HTTP requests against the distribution. Your distribution has its own address it can be reached at, for example “abcdef.cloudfront.net”.
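For example (the distribution address and article number below are placeholders for your own values):

```shell
# -I fetches the response headers only.
# Look for an "HTTP/2 301" status line and a Location header
# pointing at the new desk's URL path.
curl -I "https://abcdef.cloudfront.net/en/articles/111111"
```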

Route 53

After setting up and testing the distribution, all that is left is to change the DNS record to point to the new distribution — this DNS record is an alias type using CloudFront. After changing the record, everything should be synced within seconds. You can test your new configuration using your web browser or cURL; another cool CLI tool is “dig”, which queries Domain Name System records.
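For example (the subdomain below is a placeholder for your own record):

```shell
# Show what the alias record resolves to (a set of CloudFront edge IPs):
dig help-desk.mycompany.com +short

# Walk the whole delegation chain described earlier,
# from the root servers down to the final answer:
dig help-desk.mycompany.com +trace
```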

Wrapping up

As you can see, the serverless solution is super easy to create and get started with (just a few steps).
I highly recommend you start using more AWS serverless services, to make provisioning and management as easy as possible.
As an added bonus, it frees up the team’s time and resources, so they can invest more in the application side.

Visit our career website


DevOps Engineer @ Melio. Documenting and sharing my journey in the cloud ☁️