Compromising a vulnerable GCP environment: an INE-Labs GCPGoat walkthrough, Part 1
Hello Everyone,
As we start the new year, it’s important to remember the ever-evolving nature of cybersecurity and the need to constantly stay up-to-date on the latest threats and best practices.
What is this article all about?
Today we’ll go over GCPGoat: A Damn Vulnerable GCP Infrastructure by INE-Labs, which challenges us to compromise the application and gain admin access to the GCP project where the application is hosted.
https://github.com/ine-labs/GCPGoat
What will we learn from this?
After we walk through the challenge and gain access to the environment, we will summarise how the attacks could have been mitigated and which best practices for securing a GCP environment could have been implemented.
GCPGoat provides a variety of challenges, including application attacks and cloud misconfiguration, which will result in admin privileges for the Web-app and privileged access to the GCP project. We will focus solely on breaking down the steps that will lead to access to the GCP project where the application is hosted.
If you are in need of instructions on how to set up the GCPGoat application on your personal GCP account, you can find them at the end of this post; if you are just looking for a walkthrough, then let’s get started.
If the installation was successful, a Cloud Function URL for the web app is printed at the end, as shown below.
Let’s take a look at the web app and the portal. The initial challenge is to get past an incorrectly configured bucket (Dev storage). To start, let’s crawl all the blogs on the portal and collect the calls being made. Capturing the traffic while browsing the portal shows multiple calls to the Google Storage API for object access on a bucket.
The captured data above shows a bucket whose name starts with prod; let’s see if we can list all of the bucket’s objects.
Listing all objects in the bucket is denied, but individual objects can still be accessed, indicating that the bucket uses fine-grained ACLs.
After some reading and assistance, I found the Storage API documentation for testing bucket permissions here. It lets you determine whether a given permission is granted on the bucket, using either an unauthenticated or an authenticated query.
We’re going to make an unauthenticated API call to check which of the bucket’s IAM permissions (ref) are mapped to “allUsers”.
curl \
'https://storage.googleapis.com/storage/v1/b/prod-blogapp-4b96f7070b93b339/iam/testPermissions?permissions=storage.objects.create&permissions=storage.objects.delete&permissions=storage.objects.get&permissions=storage.objects.getIamPolicy&permissions=storage.objects.list&permissions=storage.objects.setIamPolicy&permissions=storage.objects.update&permissions=storage.buckets.delete&permissions=storage.buckets.get&permissions=storage.buckets.getIamPolicy&permissions=storage.buckets.setIamPolicy&permissions=storage.buckets.update'
{
"kind": "storage#testIamPermissionsResponse",
"permissions": [
"storage.objects.get"
]
}
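Rather than typing the long query string by hand, it can be assembled from a permission list. A minimal sketch follows; the bucket name is the one from this lab, and the permission list is trimmed for brevity:

```shell
#!/bin/sh
# Build the testPermissions URL from a list of permissions instead of
# typing the query string manually.
BUCKET="prod-blogapp-4b96f7070b93b339"
PERMS="storage.objects.get storage.objects.list storage.buckets.get storage.buckets.setIamPolicy"
QS=""
for p in $PERMS; do
  QS="${QS}&permissions=${p}"
done
QS="${QS#&}"   # drop the leading '&'
URL="https://storage.googleapis.com/storage/v1/b/${BUCKET}/iam/testPermissions?${QS}"
echo "$URL"
# curl -s "$URL"   # run in the lab to get the live response
```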
Thus, we only have “storage.objects.get” on the bucket “prod-blogapp-4b96f7070b93b339”, which doesn’t help with further exploitation; instead, let’s find out whether a dev bucket exists by simply changing the word “prod” to “dev” in the bucket name above.
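A quick sketch of the rename and the follow-up probe (bucket names are from this lab instance):

```shell
#!/bin/sh
# Derive the candidate dev bucket name by swapping the environment prefix.
PROD_BUCKET="prod-blogapp-4b96f7070b93b339"
DEV_BUCKET="dev${PROD_BUCKET#prod}"
echo "$DEV_BUCKET"   # dev-blogapp-4b96f7070b93b339
# Probe it with the same unauthenticated call (run in the lab):
# curl -s "https://storage.googleapis.com/storage/v1/b/${DEV_BUCKET}/iam/testPermissions?permissions=storage.buckets.setIamPolicy"
```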
Yes, the dev bucket exists; we can run the same unauthenticated testIamPermissions call against it and obtain the bucket’s permissions.
Although we still cannot list all of the objects in the dev bucket, there is an intriguing permission granted to “allUsers” highlighted above: storage.buckets.setIamPolicy. This means we can set the IAM policy on the bucket ourselves, so let’s do exactly that using the gsutil command shown below.
To guarantee there is no overlap with permissions or accounts already present in your local setup, it is best to make the unauthenticated calls from a fresh Docker instance of the gcloud SDK when exploiting this permission.
gsutil iam ch allUsers:objectViewer gs://dev-blogapp-4b96f7070b93b339
The above command grants the “Storage Object Viewer” role to “allUsers”, which includes “storage.objects.list”, so we can now enumerate all objects in the bucket.
Let’s list everything in the bucket and see whether there is anything interesting. In the shared directory you’ll find some interesting files: config.txt and an SSH keys directory at /shared/files/.ssh/
Let's dump them locally using the command:
gsutil cp -r gs://dev-blogapp-4b96f7070b93b339/shared/files/* /tmp/
Looking at config.txt, we can see one host with a public IP address, a username, and the path to the SSH key file. It also appears that we can connect to the instance on port 22.
But SSH failed with a permission denied error, as shown below.
The error above indicates that the SSH key file has overly permissive permissions; access to a private key should be limited to the owner and no one else. After updating the file’s permissions, retrying the SSH connection succeeded and I was able to get onto the box.
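The fix is a one-liner; here it is demonstrated on a scratch file, since the real key path depends on where you copied the bucket contents (treat the path and host details as placeholders):

```shell
#!/bin/sh
# Private keys must be readable by the owner only, or ssh refuses them.
# Demonstrated on a scratch file; in practice KEY is the private key
# copied from the bucket (the exact path comes from config.txt).
KEY="$(mktemp)"
chmod 600 "$KEY"              # rw for owner, nothing for group/other
ls -l "$KEY" | cut -c1-10     # -rw-------
# ssh -i "$KEY" <username>@<public-ip>   # retry with the real key and host
```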
After gaining access to the instance, I wasn’t able to run any sudo commands, but I could execute the gcloud CLI on the instance.
I found that the instance is bound to the default Compute Engine service account, which comes with the Editor role by default; this allowed listing the running instances in the project via ‘gcloud compute instances list’.
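For context, the default Compute Engine service account follows a fixed naming scheme, so it is easy to recognise in ‘gcloud auth list’ output. A sketch with a made-up project number:

```shell
#!/bin/sh
# The default Compute Engine service account always follows the scheme
#   <project-number>-compute@developer.gserviceaccount.com
PROJECT_NUMBER="123456789012"   # made-up example; use your project's number
SA="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
echo "$SA"   # 123456789012-compute@developer.gserviceaccount.com

# On the instance itself (not runnable outside the lab):
#   gcloud auth list --format="value(account)"   # shows the SA above
#   gcloud compute instances list                # Editor role permits this
```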
So we have another instance named ‘admin-vm’ in the same project, and we are currently on ‘developer-vm’. Let’s try to get into ‘admin-vm’.
To be continued in the next article…
Tips on application setup
Follow the installation instructions given on GitHub. The best approach is to run the steps in the Cloud Shell terminal; running them locally can hit multiple issues.
- If running locally, ensure you have Terraform and the gcloud SDK installed. Also, if you already have multiple accounts configured locally, make sure the active configuration points to the project and account ID where the application is to be hosted; confirm by running “gcloud config configurations list”.
- Once your account is set up with permissions to create a new project, follow the default installation steps mentioned here.
- Run ‘terraform plan’ before ‘terraform apply’ to confirm the changes that are going to be deployed. The list is huge, so it’s useful to capture it to a file via the command terraform plan -no-color > output.txt.
- After ‘terraform apply’ completes successfully, you will be given a Cloud Function URL for the web app, which is the vulnerable target for exploitation.
I hope that if you were looking to host the application on your own GCP project for exploration, you are all set by now. If you encounter any issues, feel free to share them in the comments below.