How to Host Internal Websites with AWS ALB, S3 and PrivateLink

Jens Andersson
Published in AWS Specialists
8 min read · Oct 31, 2023

I recently stumbled on this blog post from AWS describing how to host a static internal website with ALB, S3, and PrivateLink.

There are a few different ways to set up something like this: static website hosting in S3, serving the site through CloudFront, a Lambda function, and so on. Each solution has its pros and cons, and with most of them, the website will be public.

What caught my interest with the ALB, S3, and PrivateLink solution is that it keeps all the traffic inside your AWS environment, and you don’t need any services beyond the ones you are most likely already using.

Let’s look at how this works and some caveats that might be a blocker if you already have a bucket for your website.

Architecture

Architecture diagram from the original AWS blog post: https://aws.amazon.com/blogs/networking-and-content-delivery/hosting-internal-https-static-websites-with-alb-s3-and-privatelink/

To get this solution working, we need to set up the following:

  • An S3 VPC endpoint (type: Interface, so ENIs are created that we can use as targets in the ALB target group)
  • An S3 bucket named the same as the FQDN (DNS name) you will use to access the website. For example, if you plan to access your website at web.testdomain.com, your bucket needs to be created with this exact name: web.testdomain.com
  • A certificate in ACM that matches your DNS name
  • An internal ALB
  • A private Route 53 hosted zone

Deploy the S3 VPC Endpoint

Go to the VPC dashboard, click Endpoints, and then Create endpoint.

Under Services, search for s3 and make sure to choose the one with Type: Interface

Select the VPC and subnets you will use. Make sure to use at least two Availability Zones for redundancy.

Select a Security Group to protect your endpoint. This Security Group must allow ports 80 and 443 from the ALB you will use.

As an extra layer of security, we can configure a Policy that limits which S3 Buckets we can access through the endpoint. For this test, I will leave it at Full access. We can now click Create endpoint to get the endpoint created!
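If you want to lock the endpoint down instead of leaving it at Full access, a policy along these lines (using the example bucket name from later in this post) would restrict the endpoint to reads from a single bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::web.jensa.mytestdomain.io",
        "arn:aws:s3:::web.jensa.mytestdomain.io/*"
      ]
    }
  ]
}
```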

Give it a few minutes, and the new endpoint should be created with the status Available. Head to the Subnets tab and make a note of the IP addresses and the VPC Endpoint ID, as we will use them later.
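If you prefer the CLI, the same endpoint could be created with something like the following sketch. The VPC, subnet, and security group IDs, as well as the region, are placeholders you would replace with your own:

```shell
# Create an Interface-type S3 endpoint (IDs and region are placeholders).
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-west-1.s3 \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 \
  --security-group-ids sg-0123456789abcdef0

# List the private IPs of the endpoint's ENIs; we will use them
# later as targets in the ALB target group.
aws ec2 describe-network-interfaces \
  --filters Name=description,Values="VPC Endpoint Interface vpce-*" \
  --query "NetworkInterfaces[].PrivateIpAddress"
```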

Create S3 bucket

As mentioned at the beginning, this is one of the caveats. You must name the bucket exactly the same as the FQDN (DNS name) you will be using. If the bucket is not named like this, you will receive an error like the following when accessing the site:

<Error>
  <Code>NoSuchBucket</Code>
  <Message>The specified bucket does not exist</Message>
</Error>

Go to the S3 dashboard and click ‘Create bucket’. Name the bucket your FQDN, and you can leave the rest of the settings as default.
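The same step from the CLI might look roughly like this; the bucket name and region are the examples used in this post, so substitute your own:

```shell
# The bucket name must exactly match the FQDN you will browse to.
# Note: in us-east-1, omit --create-bucket-configuration.
aws s3api create-bucket \
  --bucket web.jensa.mytestdomain.io \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1
```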

When the bucket has been created, click on it and head to the Permissions tab. Click Edit on the Bucket Policy. Here, we will create a policy that allows the VPC Endpoint to access the bucket. You can use the example policy below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VPCE",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::web.jensa.mytestdomain.io/*",
        "arn:aws:s3:::web.jensa.mytestdomain.io"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceVpce": "vpce-01c3d4e8ae4b8596d"
        }
      }
    }
  ]
}

Your bucket policy should now look something like this.
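If you saved the bucket policy above to a file, it could also be attached from the CLI, for example:

```shell
# Attach the bucket policy (saved locally as policy.json) to the bucket.
aws s3api put-bucket-policy \
  --bucket web.jensa.mytestdomain.io \
  --policy file://policy.json
```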

Before going to the next step, we can upload an example index.html file we will use later to test that the website works.

<!doctype html>
<html>
  <head>
    <title>Demo</title>
  </head>
  <body>
    <p>This is a test page for your internal website hosted with AWS ALB and S3!</p>
  </body>
</html>

Switch to the Objects tab, click Upload, choose your index.html file, and click Upload.
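The upload can also be done in one line from the CLI:

```shell
# Upload the test page to the bucket root.
aws s3 cp index.html s3://web.jensa.mytestdomain.io/index.html
```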

Create a certificate in ACM

Head to the AWS Certificate Manager (ACM) dashboard and click the Request button. Request a public certificate specifying the FQDN (Fully qualified domain name). Here, you can use the exact name or create a wildcard certificate (for example, *.example.com protects www.example.com, site.example.com, and images.example.com).

For this example, I will create a certificate for web.jensa.mytestdomain.io. Choose the Validation method, then click Request.

Click the refresh button if you don’t see your newly created certificate. Click on the certificate, and you will see it says Pending validation.

Click on the Create records in Route 53 button. Verify it found the domain in Route53, then click Create records.

Give it a few minutes, and you will see that the status has changed to Issued.
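The certificate request and validation lookup could also be done from the CLI, roughly as follows (the domain is this post's example and the ARN is a placeholder):

```shell
# Request a DNS-validated certificate for the site's FQDN.
aws acm request-certificate \
  --domain-name web.jensa.mytestdomain.io \
  --validation-method DNS

# Show the CNAME record ACM wants created in Route 53 for validation.
aws acm describe-certificate \
  --certificate-arn <certificate-arn> \
  --query "Certificate.DomainValidationOptions[].ResourceRecord"
```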

Create an internal ALB

Head to the EC2 dashboard, go to Load Balancers, and click Create load balancer. Choose Application Load Balancer, specify a Load balancer name, and change Scheme to Internal.

Choose VPC and subnet mappings.

Choose a Security group.

Under Listeners and routing, change the listener protocol to HTTPS.

Click Create target group. Set the target type to IP addresses and specify a target group name.

Under Health checks, change the protocol to HTTP, override the health check port to 80, and change Success codes to “200,307,405”. The extra codes are needed because S3 answers the plain-HTTP health check with a redirect (307) or method not allowed (405) rather than a 200.

On the Register targets step, specify the IP addresses we got after creating the S3 VPC Endpoint. Click Include as pending below. Then click Create target group.
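As a CLI sketch, the target group and target registration might look like this; the VPC ID, ARN, and target IPs are placeholders for the endpoint ENI addresses you noted earlier:

```shell
# IP-type target group with the relaxed success codes S3 returns.
aws elbv2 create-target-group \
  --name s3-endpoint-tg \
  --protocol HTTPS --port 443 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip \
  --health-check-protocol HTTP \
  --health-check-port 80 \
  --matcher '{"HttpCode":"200,307,405"}'

# Register the S3 interface endpoint's ENI IPs as targets.
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=10.0.1.10 Id=10.0.2.10
```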

Go back to Listeners and routing under the ALB creation, and choose the target group you just created.

Under Secure Listener settings, choose the certificate we previously created.

You can now proceed and click Create load balancer. Give it a few minutes, and it will change State from Provisioning to Active.
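For completeness, the load balancer and its HTTPS listener could be created from the CLI roughly like this (names, subnets, and ARNs are placeholders):

```shell
# Internal ALB in at least two subnets.
aws elbv2 create-load-balancer \
  --name internal-s3-alb \
  --scheme internal \
  --type application \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# HTTPS listener using the ACM certificate, forwarding to the
# target group by default (we change the default rule in the next step).
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```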

Create ALB listener rules

Click on the newly created ALB, and on the Listeners and rules, click on 1 rule to edit the rules.

Change the default rule from Forward to target groups to Return fixed response.

We can now start adding new rules by clicking on ‘Add rule’. Name the rule, add a first condition with type Host header, and set the value to the FQDN of your site. Then create a second condition with type Path and value */

On the Actions tab, change Routing actions to Redirect to URL. Port should be set to #{port}; choose Custom host, path, query, and change Path to /#{path}index.html

Before creating the rule, set a Priority, leaving some space between rule priorities in case you need to insert a rule in between later.

Now, we need to create that last rule that will forward the traffic to the VPC Endpoints. This time, only create one condition with the type Host header and set the value to the FQDN of your site.

On the Actions tab, choose the previously created target group.

Set a priority, then create the rule.

You should now have 3 Listener rules, and your ALB is ready to send traffic to your website.
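The two non-default rules above could be expressed with the CLI roughly as follows; listener and target group ARNs are placeholders, and the priorities leave room for future rules:

```shell
# Rule 1: redirect requests for paths ending in "/" to the same
# path plus index.html, keeping the original host, port, and query.
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=host-header,Values=web.jensa.mytestdomain.io Field=path-pattern,Values='*/' \
  --actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=#{port},Host=#{host},Path=/#{path}index.html,Query=#{query},StatusCode=HTTP_302}'

# Rule 2: forward all remaining requests for this host to the
# target group pointing at the S3 interface endpoint.
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 20 \
  --conditions Field=host-header,Values=web.jensa.mytestdomain.io \
  --actions Type=forward,TargetGroupArn=<target-group-arn>
```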

Create DNS record

A last step before testing our website is to create a DNS record in our private hosted zone in Route 53.

Go to the Route 53 dashboard, choose Hosted zones, select your private hosted zone, and click Create record.

Set the record name, change the Record type to CNAME, and under Value, use the DNS name of the internal ALB you previously created.

Verify that the record was created.
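From the CLI, the record could be created with a change batch like this sketch; the zone ID and the ALB DNS name are placeholders:

```shell
# Upsert a CNAME in the private hosted zone pointing at the internal ALB.
aws route53 change-resource-record-sets \
  --hosted-zone-id <private-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "web.jensa.mytestdomain.io",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "<internal-alb-dns-name>"}]
      }
    }]
  }'
```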

Test and validate

All the necessary resources have now been created and configured, and you should be able to reach your new website.
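A quick way to verify, from an instance inside the VPC (or over a VPN connection that resolves the private zone):

```shell
# Requesting the root should redirect to /index.html per the listener
# rule, which the ALB then forwards to S3 via the interface endpoint.
curl -vL https://web.jensa.mytestdomain.io/
```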

Conclusion

For me, this was a new way of hosting a website; I had typically used static website hosting in S3 or CloudFront. Since no additional AWS services are needed, and the VPC endpoint and ALB use private IP addresses, all traffic stays private and secure inside your VPC. You can also easily reach the site using AWS Client VPN with split-tunnel routing.

When working with website hosting in S3, I would just highlight the risk of what is usually called S3 bucket takeover.

In short, if a DNS record points at a bucket that has not been created, visitors will get a 404 “NoSuchBucket” error. An attacker could then create a bucket with that exact name, turn on static website hosting, and maliciously take control of your website.

A more in-depth explanation can be found here: https://socradar.io/aws-s3-bucket-takeover-vulnerability-risks-consequences-and-detection/

So make sure to have your buckets created before creating DNS records and exposing them, and delete old DNS records that are not in use!
