Building a Multi-Region Unreal Engine Dedicated Server Fleet in AWS with Terraform and Packer

Luke Brady
12 min read · Sep 2, 2020

Today I will walk you through the process of building, deploying, and managing a multi-region Unreal Engine dedicated server fleet in AWS. You can use this tutorial as a reference when you and your team decide to build your own set of services for deploying and managing a large-scale game server fleet in AWS. All of the code for this tutorial is available here: https://github.com/UnrealOps/ue4-fleet-management, and be sure to check out my other blog posts on working with Unreal Engine in AWS.

WARNING: In this tutorial, I will be using my AWS account's default VPC. This is not recommended for production, so treat this setup as a reference, not a production guide.

What You Will Learn

  1. How to use Packer to build an Unreal Engine dedicated server AMI and copy it to multiple AWS regions. In this tutorial, we will be deploying to us-east-1, us-west-2, eu-west-1, eu-central-1, and ap-southeast-1.
  2. How to deploy auto scaling groups to multiple regions with identical AMIs to host game sessions geographically close to your players.
  3. How to use Terraform to manage your game server infrastructure.

By the end of this tutorial you will have successfully deployed a global and highly available Unreal Engine game server fleet in 5 regions and 15 availability zones.

What You Will Need

  1. Tutorial source code — https://github.com/UnrealOps/ue4-fleet-management
  2. An AWS Account
  3. Unreal Engine source code — https://www.unrealengine.com/en-US/ue4-on-github
  4. Visual Studio 2019 Community Edition — https://visualstudio.microsoft.com/vs/
  5. Terraform — https://www.terraform.io/downloads.html
  6. Terragrunt — https://terragrunt.gruntwork.io/docs/getting-started/install/
  7. Packer — https://www.packer.io/downloads
  8. Git — https://git-scm.com/downloads

Helpful Tips

Before we begin, there are a couple of configuration values within Unreal Engine that I would like to bring to your attention: DDoS detection and server tick rate.

Because we will be deploying game servers around the globe that are open to the internet (this will depend on the type of game and the security requirements of the dedicated server fleet), I thought it would be useful to mention that Unreal Engine has built-in DDoS detection that can alleviate the effect of a DDoS attack on a single server instance. You can enable DDoS detection by navigating to Engine/UE4/Config/BaseEngine.ini in the Visual Studio solution explorer, searching for DDoSDetection, and setting the boolean bDDoSDetection to true.

Setting bDDoSDetection to true enables DDoS detection in our dedicated server, improving the overall player experience when a DDoS attack occurs.

I also wanted to point out the NetServerMaxTickRate setting located just above the DDoSDetection settings in the BaseEngine.ini file. This setting tells the dedicated server how many updates, or ticks, it should send to the game client per second. For example, a game like Valorant sends 128 updates per second to each game client, so the value here would be 128. If you are creating a social experience like Fortnite Party Royale, you may only need a max tick rate of 20 because there is not as much data to replicate. This also lets you save on server costs, because you can allocate the proper number of CPUs based on the replication and performance requirements of your game.

NetServerMaxTickRate sets the max tick rate of the server. Tick rate is the number of times per second the server sends an update to the client.
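
For reference, both settings live near each other in BaseEngine.ini. The excerpt below shows roughly what to look for; the exact section name and surrounding keys can vary between engine versions, so search for DDoSDetection rather than copying this verbatim:

[/Script/OnlineSubsystemUtils.IpNetDriver]
; Number of server updates (ticks) sent to each client per second.
NetServerMaxTickRate=30
; Enable the engine's built-in DDoS detection for the dedicated server.
bDDoSDetection=true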

Uploading the Dedicated Server to S3

Before we create the AMI that will run our dedicated server, we will need to build the dedicated server, create a tar archive, and upload the dedicated server binary into an S3 bucket. The S3 bucket will be used by Packer to securely copy our dedicated server binary into the running Packer builder instance before creating the AMI. For this tutorial, I created a level with multiple assets to simulate a real game. In previous tutorials I have used the default assets that come included with the engine, but for this tutorial I thought it would be interesting to use assets from the Unreal Marketplace.

Here are the links to all of the assets used within this level.

The level I will be using during this tutorial.

Now that we have a level, we need to build and package the Unreal Engine dedicated server for Linux. If you have never built an Unreal Engine dedicated server, please refer to this tutorial where I walk you step-by-step through the process: https://medium.com/swlh/building-and-hosting-an-unreal-engine-dedicated-server-with-aws-and-docker-75317780c567.

To package the dedicated server in the Unreal Engine editor, click File -> Package Project -> Linux -> Linux. You will then be prompted to supply an output directory. I always package my server binaries under the Binaries/Linux folder within the project’s root directory.

Packaging the Linux dedicated server.

After the Linux server build has completed, it is time to tar the server binary and upload it to S3. We will use Terraform to create the S3 buckets, and to follow along, you will need to clone the source code for this tutorial here: https://github.com/UnrealOps/ue4-fleet-management. Once the source code is cloned, there are a few modifications to make before we create the S3 buckets. Open the terraform/buckets/terraform.tfvars file and add the AWS profile that you will be using to authenticate to AWS. You will also need to add the region you would like your S3 buckets to be deployed to, as well as the names of both the Terraform state S3 bucket and the dedicated server S3 bucket.
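
For reference, a filled-in terraform.tfvars might look something like the example below. The exact variable names are defined in the module's variables.tf, so treat these names and values as placeholders rather than the repository's exact configuration.

# Example terraform/buckets/terraform.tfvars -- names and values are illustrative.
profile                 = "my-aws-profile"          # AWS CLI profile used to authenticate
region                  = "us-east-1"               # region where both buckets are created
terraform_state_bucket  = "my-ue4-terraform-state"  # bucket that will hold Terraform state
dedicated_server_bucket = "my-ue4-dedicated-server" # bucket that will hold the server tarball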

After adding values to the four variables in the terraform.tfvars file, it is time to create the S3 buckets. To create the S3 buckets with Terraform, run the following commands from within the terraform/buckets folder.

terraform init
terraform plan -out plan
terraform apply plan

When you run terraform plan, you will see that 5 resources will be created. Applying the saved plan does not prompt for confirmation; Terraform will immediately create two S3 buckets, a KMS key to encrypt your S3 buckets, and an S3 bucket policy that restricts public access to the buckets.
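
If you are curious what the module is doing under the hood, the sketch below shows the general shape of one of the buckets. The resource and variable names are assumptions on my part, so check terraform/buckets for the exact definitions (for example, the repository uses a bucket policy for the public access restriction, while the sketch uses a public access block for brevity).

# Rough sketch of the buckets module -- illustrative only, not the repository's exact code.
resource "aws_kms_key" "bucket_key" {
  description = "Encrypts the Terraform state and dedicated server buckets"
}

resource "aws_s3_bucket" "dedicated_server" {
  bucket = var.dedicated_server_bucket

  # Encrypt every object with the KMS key created above.
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.bucket_key.arn
      }
    }
  }
}

# The repository restricts public access with a bucket policy; a public access
# block (shown here) achieves a similar result.
resource "aws_s3_bucket_public_access_block" "dedicated_server" {
  bucket                  = aws_s3_bucket.dedicated_server.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}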

Although the S3 buckets have been created, we still have local Terraform state. We can use the Terraform state bucket we just created to house the Terraform state for our AWS infrastructure. To move the Terraform state to the state bucket, we will need to modify the terraform.tf file located within the terraform/buckets folder. Add the following code to the terraform.tf file.

terraform {
  backend "s3" {
    bucket  = "<your_terraform_state_bucket_name>"
    key     = "terraform-state/buckets/statefile"
    profile = "<your_aws_profile>"
    region  = "<your_region>"
  }
}
My terraform.tf configuration.

All we need to do is re-initialize Terraform and it will seamlessly handle copying our Terraform state to S3.

terraform init
Enter “yes” when you are prompted by Terraform to copy the existing local state to the S3 bucket.

Now that our Terraform state has been moved to S3, we can upload the dedicated server to the S3 bucket. Before we upload the dedicated server, run the following commands from your terminal to tar and gzip the server binary.

cd %UNREAL_ENGINE_PROJECT_ROOT%/Binaries/
tar -czf dedicated-server.tar.gz Linux/
My project’s Binaries folder.
Tarring the Linux directory.

After compressing your server binary, upload it to S3 with the following command.

aws s3 cp ./dedicated-server.tar.gz s3://<dedicated_server_bucket>/

With the dedicated server binary uploaded to S3, it is time to build the dedicated server AMI.

Building the Dedicated Server AMI

The next step is to create an IAM instance profile that our Packer builder can use to access the AWS services required to successfully build the AMI. I have supplied an IAM instance profile in the code base for this tutorial. To create it, cd into terraform/iam and edit the terraform.tf and terraform.tfvars files: you will need to add the region of your S3 bucket, the S3 bucket's name, and the AWS profile you are using to authenticate. After adding your values to the files, you can run the following commands to create the IAM role, policy, and instance profile required to build the AMI.

# Change the current working directory to terraform/iam.
cd terraform/iam
# Initialize Terraform and create the IAM role and instance profile.
terraform init
terraform plan -out plan
terraform apply plan
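
For context, the Packer builder mainly needs permission to read the dedicated server object out of S3 (plus kms:Decrypt if the bucket is encrypted with the KMS key from earlier). Below is a minimal sketch of a role, inline policy, and instance profile; the resource names and exact policy statements are assumptions, so treat terraform/iam as the source of truth.

# Illustrative sketch only -- see terraform/iam for the real definitions.
resource "aws_iam_role" "packer_builder" {
  name = "packer-builder"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "s3_read" {
  name = "dedicated-server-s3-read"
  role = aws_iam_role.packer_builder.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::${var.dedicated_server_bucket}",
        "arn:aws:s3:::${var.dedicated_server_bucket}/*",
      ]
    }]
  })
}

resource "aws_iam_instance_profile" "packer_builder" {
  name = "packer-builder"
  role = aws_iam_role.packer_builder.name
}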

After creating the IAM instance profile, it is finally time to use Packer to build the dedicated server AMI. Take a look at the packer/variables/dedicated-server-ami-vars.json file. You will need to update the dedicated_server_bucket variable and the profile variable; all of the other variables can be left alone. The provision script assumes that the base image is Ubuntu, so you will run into problems if you change the source AMI. You will also want to edit the dedicated-server.service file located under scripts/services/dedicated-server.service. This file lets systemd manage the Unreal Engine server process. Below is my configuration, but you will need to change the name of your server's shell script so that systemd can successfully start your game server binary.

[Unit]
Description=Unreal Engine Dedicated Server
StartLimitAction=reboot

[Service]
# Launch the packaged dedicated server's startup script as the unreal user.
ExecStart=/usr/bin/dedicated-server/Linux/LinuxServer/<Your_Server>.sh
Nice=10
# Restart the server one second after the process exits for any reason.
Restart=always
RestartSec=1
User=unreal

[Install]
WantedBy=multi-user.target

To create the AMI, use the following Packer command:

packer build -var-file=packer/variables/dedicated-server-ami-vars.json packer/templates/dedicated-server-ami.json

This command will take the variable file and use it to create an AMI with the values you supplied. This process will take some time as Packer will handle copying the AMI to all supplied regions. After the AMI has been created, we can start to deploy our infrastructure with Terraform.

Your output will be similar, but with different AMI IDs.

Deploying the Server Fleet with Terragrunt

For this section of the tutorial, I will be using Terragrunt to deploy the multi-region server fleet. Terragrunt allows us to share the same remote state configuration across every region without copying and pasting it into each module. You will need to edit the terragrunt.hcl file located here: terraform/fleet-infrastructure/terragrunt.hcl.

remote_state {
  backend = "s3"
  config = {
    bucket  = "<terraform_state_bucket>"
    profile = "<aws_profile_name>"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = "<region>"
    encrypt = true
  }
}
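
Each region folder then only needs a small terragrunt.hcl that pulls in this parent configuration. If the repository follows the standard Terragrunt layout, that file is already present in each fleet directory and looks roughly like this:

# terraform/fleet-infrastructure/fleet-us-east-1/terragrunt.hcl
include {
  path = find_in_parent_folders()
}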

We will start the deployment in us-east-1. To deploy the us-east-1 server fleet, cd into terraform/fleet-infrastructure/fleet-us-east-1. If you would like to change the min, max, and desired capacity values of the autoscaling group, feel free to change the default values in the terraform/fleet-infrastructure/fleet-us-east-1/terraform.tfvars file. The default values provided with the code base can be seen below.

region = "us-east-1"
profile = "<your_profile>"
# Fleet parameters
fleet_capacity = 3
fleet_max_size = 5
fleet_min_size = 1
instance_type = "c5.large"
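
Under the hood, the fleet module feeds these values into an auto scaling group that launches the Packer-built AMI across three availability zones. The sketch below is only meant to illustrate that shape; the resource names, the way the AMI ID is passed in, and the networking details are assumptions, so refer to the module in the repository for the real configuration.

# Simplified sketch of the fleet module -- illustrative only.
resource "aws_launch_configuration" "fleet" {
  name_prefix   = "ue4-fleet-"
  image_id      = var.ami_id        # the dedicated server AMI built by Packer
  instance_type = var.instance_type # c5.large by default

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "fleet" {
  name                 = "ue4-fleet-${var.region}"
  launch_configuration = aws_launch_configuration.fleet.name
  min_size             = var.fleet_min_size
  max_size             = var.fleet_max_size
  desired_capacity     = var.fleet_capacity

  # Spread instances across the a, b, and c availability zones of the region.
  availability_zones = ["${var.region}a", "${var.region}b", "${var.region}c"]
}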

Run the following commands to deploy to us-east-1 with Terragrunt.

terragrunt init
terragrunt plan -out plan
terragrunt apply plan

After the deployment has completed, open up EC2 in the AWS console and go to us-east-1. Click Instances and you should see all of your dedicated server instances starting to spin up.

My dedicated server fleet in us-east-1.

Once your instances are up and running, it is time to connect the game client to one of the servers within the fleet. Just for fun, pick a random instance within the fleet. It should not matter which server we connect to because each instance is running an identical version of the game server code.

For this example, I will try and connect to the first two instances within the list below.

I will test with 34.235.159.139 and 54.224.181.168.

To connect to one of your instances, open up your project in the Unreal Engine editor. Double click your EntryLevel and select Blueprints -> Open Level Blueprint.

Now create a connection between the Event Begin Play node and a new Open Level node. You can now change the value of the Level Name parameter in the Open Level node to the public IP address of your instance running in AWS.

Change the value of Level Name to your instance’s public IP address and port 7777.

After entering the IP address, click Compile in the upper left hand corner of the Blueprint editor. With the level’s Blueprint compiled, let’s check to see if we can successfully connect to the server by starting the game. Exit the Blueprint editor and click Play. If everything was set up correctly, the client will connect to the server and the server will tell the client to travel to the default level.

The client successfully connected to the server!

After connecting, you can run around your level just like you would from your local machine during development. Now let’s test out another instance in the fleet. All we have to do for this step is change the IP address in the level Blueprint.

After updating the IP address in the Level Name parameter, recompile the Blueprint.

Compile the Blueprint, click Play, and see if you connect to the instance. You should have the same result as the first instance.

Running around my level after connecting to the instance.

We can now move on and deploy the rest of our server fleets in ap-southeast-1, eu-central-1, eu-west-1, and us-west-2. To deploy these, cd into each region’s directory and run the same terragrunt commands that we ran for us-east-1.

Note: If you are using AWS free tier, you may receive the following error when deploying to multiple regions: “Your request for accessing resources in this region is being validated”. If this occurs, please wait until the automated request is approved and redeploy.

# Change the directory to each region.
cd terraform/fleet-infrastructure/fleet-<region>/
# Run terragrunt to deploy to each region.
terragrunt init
terragrunt plan -out plan
terragrunt apply plan

Once you have successfully deployed each fleet, you should see three instances in each of the five regions. You will also notice that each instance is in a different availability zone. This is because the auto scaling group is configured to deploy across availability zones a, b, and c within each region, preventing a full service outage if a single availability zone goes down.

Congratulations, we just deployed 15 instances across 5 regions and 15 availability zones. We can now test the connection from the game client to the game servers within each region.

I tested ap-southeast-1 first. I was able to connect to the instance, but it took a few seconds because my client was exchanging data with a server in Singapore.

Connecting to an instance in ap-southeast-1.
I successfully connected, but it took a few seconds.

I also tested instances in both eu-west-1 and eu-central-1. I was able to connect to both and the load times were noticeably faster than when I connected to ap-southeast-1.

Connecting to an instance in eu-west-1.
Connected to an instance in eu-west-1.

Depending on your geographic location, you will notice that some connections are faster than others. This demonstrates the importance of keeping game servers close to your players to reduce latency and improve the overall player experience.

Conclusion

Thanks for taking the time to work through this tutorial. I hope you learned a lot of new material and had fun along the way. You can destroy all of the resources created in this tutorial by running terragrunt destroy in each of the fleet directories and terraform destroy in the iam and buckets directories listed below:

terraform/fleet-infrastructure/fleet-ap-southeast-1
terraform/fleet-infrastructure/fleet-eu-central-1
terraform/fleet-infrastructure/fleet-eu-west-1
terraform/fleet-infrastructure/fleet-us-east-1
terraform/fleet-infrastructure/fleet-us-west-2
terraform/iam/
terraform/buckets

Thanks again and have a wonderful day!

Luke Brady

A blog fueled by code, coffee, and a healthy dose of video games. Follow me on Twitter: @LukeBrady105