AWS S3 Cross Region Replication: A Deep Dive

Oğuzhan Hızıroğlu
10 min read · Oct 8, 2023


Amazon Web Services (AWS) is a frontrunner in cloud computing, and one of its most popular services is Amazon Simple Storage Service (S3). With S3, users can store and retrieve any amount of data at any time. But what if you want your data stored in multiple geographical locations for disaster recovery, compliance, or latency reduction? That’s where S3 Cross Region Replication (CRR) comes into play.

What is S3 Cross Region Replication (CRR)?

CRR is an S3 feature that automatically replicates objects from a bucket in one AWS Region to a bucket in a different Region. Replication happens at the object level and is performed asynchronously. CRR is useful in various scenarios, such as:

  1. Disaster Recovery (DR): Ensure data availability in case of regional outages or failures.
  2. Compliance: Some businesses have regulatory mandates to store data in multiple, geographically distant locations.
  3. Latency reduction: Serve content to end-users from a region closest to them by having the same data replicated across multiple regions.

Setting up Cross Region Replication

1. Versioning: Ensure that versioning is enabled on both the source and destination buckets. CRR requires versioning.

2. IAM Role: You’ll need an AWS Identity and Access Management (IAM) role that S3 can assume to replicate objects on your behalf.

3. Set Up Replication: Within the S3 console, navigate to the source bucket and choose the “Management” tab. There you’ll find the “Replication” rules where you can define which objects get replicated and specify the destination bucket and region.
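The versioning prerequisite from step 1 can also be done programmatically. Below is a minimal sketch, not from the original article: it builds the request that boto3’s `put_bucket_versioning` expects, with the actual AWS call left in comments so the snippet runs offline; the bucket names are placeholders for your own.

```python
# Sketch of step 1 (versioning prerequisite), expressed as the request body
# that boto3's put_bucket_versioning() expects. Bucket names are placeholders.

VERSIONING_CONFIG = {"Status": "Enabled"}  # CRR requires "Enabled", not "Suspended"

def versioning_request(bucket: str) -> dict:
    """Build the kwargs for s3.put_bucket_versioning()."""
    return {"Bucket": bucket, "VersioningConfiguration": VERSIONING_CONFIG}

# To actually apply it (requires boto3 and AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# for bucket in ("my-source-bucket", "my-destination-bucket"):
#     s3.put_bucket_versioning(**versioning_request(bucket))

print(versioning_request("my-source-bucket"))
```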

Features and Considerations

  1. Existing Objects: CRR does not automatically replicate objects that were already in the source bucket before the replication configuration was created; only objects created or updated afterwards are replicated. (Pre-existing objects can be copied on demand with S3 Batch Replication.)
  2. Replication Time: While CRR is designed to replicate objects quickly, the time it takes depends on the size of the object and the distance between source and destination regions.
  3. Replication Metrics: AWS S3 offers metrics and events to track the replication status, which can be particularly useful to ensure data consistency and troubleshoot any issues.
  4. Cost: There’s an associated cost for replicating data across regions. This includes storage costs in the destination region and data transfer costs.
  5. Delete Operations: By default, deletes in the source bucket are not propagated to the destination bucket, preserving your data. Delete marker replication can be enabled in the replication rule; permanent deletions of specific object versions are never replicated.
  6. Transitive Replication: While CRR allows for the replication of data between two regions, AWS S3 does not support chaining or transitive replication where data is replicated from region A to B and then from B to C.

Conclusion

S3 Cross Region Replication is a powerful feature for ensuring data durability and availability across geographical locations. Whether you’re aiming for disaster recovery, meeting compliance requirements, or optimizing for latency, CRR offers a streamlined solution. However, always be aware of the associated costs and ensure you monitor replication metrics to keep your data in sync.

Hands-on Time

Now let’s get our hands dirty. Let’s practically experience how Cross-Region Replication occurs in AWS’s S3 service.

Part 1 — Create Source Bucket with Static Website

  • Let’s go to AWS’s S3 service and create a bucket named “source.replica.oguzhan”. Attention! Bucket must have the following features:
Region                      : us east-1 (N.Virginia)
Block all public access : UNCHECKED (PUBLIC)
Versioning : ***ENABLE***
Tagging : 0 Tags
Default encryption : Server-side encryption with Amazon S3 managed keys (SSE-S3)
Object lock : Disabled

Note: Please, do not forget to select “US East (N.Virginia)” as Region.

Step-1
Step-2
Step-3
Step-4
Step-5
Step-6

At this point, we have created the bucket named “source.replica.oguzhan” to have the desired features. Let’s continue.

  • Click the S3 bucket `source.replica.oguzhan` and upload following files. (At this point we will upload a simple HTML code and a cat image. Write the following code in an HTML file and save it. Then download the cat image below. Upload this HTML file (name it index.html) and cat.jpg files to your S3 bucket.)
<html>
<head>
<title> Cutest Cat </title>
</head>
<body>
<center><h1> My Cutest Cat Version 1 </h1><center>
<center><img src="cat.jpg" alt="Cutest Cat"</center>
</body>
</html>
cat.jpg
Step-7
Step-8
  • Click ‘Properties’ >> ‘Static Website Hosting’ and put checkmark to ‘Use this bucket to host a website’ and enter ‘index.html’ as default file.
Step-9
Step-10
Step-11
Step-12
  • Set the static website bucket policy as shown below (‘PERMISSIONS’ >> ‘BUCKET POLICY’) and change ‘bucket-name’ with your own bucket.
Step-13
Step-14
  • Use following policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::don't forget to change me/*"
}
]
}
Step-15
Step-16
  • Open static website URL in browser and show it is working.
Step-17
Step-18
Step-19

Part 2 — Create Destination Bucket with Static Website

  • Let’s go to AWS’s S3 service and create a bucket named “destination.cross.region.replica.oguzhan”. Attention! Bucket must have the following features:
Region                    : us-east-2 (**Ohio)
Allow all public access : UNCHECKED (PUBLIC)
Versioning : ***ENABLED***
Tagging : 0 Tags
Default encryption : Server-side encryption with Amazon S3 managed keys (SSE-S3)
Object lock : Disabled

Note: Please, do not forget to select “US East (Ohio)” as Region.

Step-20
Step-21
Step-22
Step-23
Step-24
Step-25
Step-26
  • Click ‘Properties’ >> ‘Static Website Hosting’ and put checkmark to ‘Use this bucket to host a website’ and enter ‘index.html’ as default file.
Step-27
Step-28
Step-29
Step-30
Step-31
  • Set the static website bucket policy as shown below (‘PERMISSIONS’ >> ‘BUCKET POLICY’) and change ‘bucket-name’ with your own bucket.
Step-32
Step-33
  • Use following policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::don't forget to change me/*"
}
]
}
Step-34
Step-35
Step-36

Part 3 — Creating IAM Role for Bucket Replication

  • Go to ‘IAM’ Service on AWS management console. Click ‘Policies’ on the left-hand menu.
Step-37
  • Select ‘Create Policy’.
Step-38
  • Select ‘JSON’ option and paste the policy seen below.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"s3:Get*",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::source.replica.oguzhan(change me)",
"arn:aws:s3:::source.replica.oguzhan(change me)/*"
]
},
{
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete",
"s3:ReplicateTags",
"s3:GetObjectVersionTagging"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::destination.cross.region.replica.oguzhan(change me)/*"
}
]
}
Step-39
Step-40
Step-41
  • Enter the followings as policy name and description.
Name            : yourname.cross.replication.iam.policy

Description : yourname.cross.replication.iam.policy
Step-42
  • Click ‘Create Policy’.
Step-43
  • Go to ‘Roles’ on the left hand menu and click ‘Create Role’.
Step-44
Step-45
Type of Trusted Entity      : AWS Service
Use Case : S3
Step-46
  • Click ‘Next’.
Step-47
  • Enter ‘yourname.cross.replication.iam.policy’ in filter policies box and select the policy.
  • Click ‘Next’.
Step-48
  • Enter the followings as role name and description.
Role Name           : yourname.cross.replication.iam.role
Role Description : yourname.cross.replication.iam.role
Step-49
  • Click ‘Create Role’.
Step-50
Step-51

Part 4 — Configuring (Entire) Bucket Replica

Part 4.1 — Configuring Bucket

  • Go to S3 bucket on AWS Console.
  • Select ‘source.replica.oguzhan’ bucket.
Step-52
  • Select ‘Management’ >> ‘Replication Rules’ >> ‘Create Replication Rule
1. Replication Rule Name            : MyReplicationRule
2. Status : Enable
3. Source Bucket :
- Choose a rule scope as "Apply to all objects in the bucket"
4. Destination :
- Choose a bucket in this account
- Bucket name : destination.cross.region.replica.yourname
5. IAM Role : yourname.cross.replication.iam.role
6. Encryption : Unchecked
7. Destination Storage Class : Unchecked
8. Additional Replication Options : Leave it as is
Step-53
Step-54
Step-55
Step-56
Step-57
Step-58
Step-59

Part 4.2 — Testing

Step-60
Step-61
  • Go to VS Code and change the line in ‘index.html’ as;
        <center><h1> My Cutest Cat Version 1 </h1><center>
| | |
| | |
V V V
<center><h1> My Cutest Cat Version 2 </h1><center>
<html>
<head>
<title> Cutest Cat </title>
</head>
<body>
<center><h1> My Cutest Cat Version 1 </h1><center>
<center><img src="cat.jpg" alt="Cutest Cat"</center>
</body>
</html>
  • New Version of HTML:
<html>
<head>
<title> Cutest Cat </title>
</head>
<body>
<center><h1> My Cutest Cat Version 2 </h1><center>
<center><img src="cat.jpg" alt="Cutest Cat"</center>
</body>
</html>
  • Go to ‘source.replica.oguzhan’ bucket and upload ‘index.html’ and ‘cat.jpg’ again.
Step-62
Step-63
  • Go to ‘destination.cross.region.replica.oguzhan’ bucket, copy ‘Endpoint’ and paste to browser.
Step-64
Step-65
Step-66
Step-67
  • Show the website is replicated from source bucket.
Step-68
Step-69

Conclusion (Hands-on)

In this hands-on exercise, an in-depth implementation of AWS S3’s Cross-Region Replication was carried out. Objects introduced to the source bucket in the North Virginia (us-east-1) region were systematically replicated to the destination bucket in the Ohio (us-east-2) region. This replication process effectively mirrored the latest version of content from North Virginia, allowing for an updated representation of the static website in Ohio. Leveraging the features of S3’s Cross-Region Replication, a dependable and consistent content delivery mechanism was established across disparate regions.

Oğuzhan Selçuk HIZIROĞLU

AWS Golden Jacket Winner | AWS Champion Authorized Instructor

--

--

Oğuzhan Hızıroğlu

AWS Ambassador | AWS Golden Jacket (13 X AWS) | AWS AAI Community Difference Maker Award Winner | Champion AWS Authorized Instructor (AAI) | SysOps A.