Misconfigured S3 Bucket - A Semi-Opened Environment

Yukesh Kumar [ 3th1c_yuk1 ]
Published in Techiepedia
6 min read · Jun 27, 2021

First of all, learn recon and build your own recon methodology. Don't just follow someone else's recon tips; if you do, there is no difference between you and them. So when you see a #bugbountytip on Twitter or elsewhere, let that tip wake up your own creativity!

Hello Ethical Hackers,
I'm Yukesh, alias 3th1c_yuk1. In this blog I'll discuss an S3 bucket misconfiguration I found in the Red Bull program. This is my first writeup, so if you find any vulnerability in it, don't exploit it ;) [grammatical errors].

A week earlier I had started concentrating on the Red Bull program again. I had already reported 50+ bugs to Red Bull, but all of them came back as Duplicate or Not Applicable, so I had almost lost hope: it's a vast scope, and many hackers continuously report bugs to that program. So I decided to think differently.

INITIAL PHASE :

I started to enumerate subdomains of redbull.com with a script that does it automatically (I will cover that in another writeup). Simultaneously I started gathering Red Bull's acquisitions with https://tools.whoisxmlapi.com/reverse-whois-search. This site works well for finding a target's acquisitions, and it can export the results as a CSV file. Next, I opened the CSV file and replaced all the unwanted characters like ',' with spaces, and then, as before, I ran my subdomain-enumeration script over the acquisitions too, in a loop. Yes, I like to loop ;)
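
The Notepad clean-up step can also be done on the command line. A rough sketch, assuming the exported CSV (here called acquisitions.csv, an illustrative name) has the domain in its first column:

```shell
# Demo CSV as exported from the reverse-whois tool (illustrative content).
printf '"redbull.com","Red Bull GmbH"\n"redbullracing.com","Red Bull GmbH"\n' > acquisitions.csv

# Keep only the first column (the domain) and strip the surrounding quotes.
cut -d ',' -f1 acquisitions.csv | tr -d '"' | tee redbull-domains.txt
```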

The command I used to loop my script :

cat redbull-domains.txt | while read HOST ; do ( bash 3th1c.sh $HOST ) ; done | tee -a redbull-domains.txt

Command breakdown :

  • cat redbull-domains.txt → reads the file and prints it line by line.
  • | → a pipe, which feeds the output of one command into the next.
  • while read HOST → reads each incoming line into the variable HOST.
  • do ( bash 3th1c.sh $HOST ) ; done → runs the script once per host; do and done delimit the body of a shell while loop.
  • tee -a → reads the standard input and writes it to both the standard output and one or more files, so the output of a program can be displayed and saved at the same time; -a appends to the file instead of overwriting it.
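
One caveat with the loop above: it appends results to the same file it is reading. A slightly safer variant (a sketch; enumerate_subdomains is a stand-in for the author's private 3th1c.sh) writes to a separate output file:

```shell
# Demo input (in practice this is your real root-domain list).
printf 'redbull.com\nredbullracing.com\n' > redbull-domains.txt

# Stand-in for the author's private 3th1c.sh; swap in your own tool.
enumerate_subdomains() { echo "www.$1"; }

# Read targets from one file and append results to a *different* file,
# so the input list is never modified while it is still being read.
while read -r HOST; do
  enumerate_subdomains "$HOST"
done < redbull-domains.txt | tee -a redbull-subdomains.txt
```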

SECOND PHASE :

So after collecting subdomains from Red Bull and its acquisitions, the next step is to probe for alive hosts with httpx from ProjectDiscovery (a tool I like to use), but this time with some extra flags. I recommend you first go through the usage and flags of a tool so that you can use it efficiently.

The Command I used :

cat redbull-domains.txt | httpx -status-code --path .s3.amazonaws.com

Flags breakdown :

  • -status-code → shows the HTTP status code of each probed alive host
  • --path → the path to append to each host before probing

Doing this, I didn't get any 200 status codes, only 404s and some 503s. So I tried a different approach: I used another command to extract only the first and second labels of each subdomain, because some S3 buckets are named with just a single word, and the name is entirely up to the developer.

The command I used :

cat redbull-domains.txt | cut -d "." -f1 | sort -u | tee redbull-buckets.txt ; cat redbull-domains.txt | cut -d "." -f2 | sort -u | tee -a redbull-buckets.txt

Command breakdown :

  • cut → cuts a section out of each line; -d "." -f1 takes the first dot-separated field
  • sort -u → sorts the lines and removes duplicates
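
To make the two cut fields concrete, here is a small demo (the domain names are illustrative):

```shell
# Demo: how the two cut fields turn subdomains into bucket-name guesses.
printf 'shop.redbull.com\nmedia.redbullracing.com\n' > demo-domains.txt

cut -d '.' -f1 demo-domains.txt | sort -u   # first labels:  media, shop
cut -d '.' -f2 demo-domains.txt | sort -u   # second labels: redbull, redbullracing
```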

FINAL PHASE :

So after doing this I had a file named redbull-buckets.txt with the first and second labels, and then I ran the same httpx command.

The Command I used :

cat redbull-buckets.txt | httpx -status-code --path .s3.amazonaws.com

But this time I got some 403 status codes, so I collected all the 403 hosts and checked them one by one by pasting them into the browser. All I got was "AccessDenied", but I didn't give up.

Sometimes S3 buckets are not accessible in the browser, but we can check them another way: aws-cli.

The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

First, you need to configure aws-cli in your terminal. Run "aws configure" and paste in an access key ID and secret access key, which you get by creating your own AWS account.
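
After "aws configure", the keys end up in ~/.aws/credentials, roughly like this (the values below are AWS's documented placeholder keys, not real credentials):

```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```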

There are many aws-cli commands, but we mostly use just these four…

Commands :

  1. For viewing the objects in a bucket : aws s3 ls s3://[bucket_name]
  2. For writing an object in a bucket : aws s3 cp [FILE] s3://[bucket_name]
  3. For downloading an object from a bucket : aws s3 cp s3://[bucket_name]/[FILE] ~/
  4. For removing an object from a bucket : aws s3 rm s3://[bucket_name]/[FILE]
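
To avoid retyping bucket names by hand, the 403 hosts can be turned back into plain bucket names for aws-cli with a small helper. A sketch (the function name and URLs are mine, not the author's), assuming the probed URLs look like https://<bucket>.s3.amazonaws.com:

```shell
# Strip the scheme and the S3 suffix from a probed URL to recover the
# bucket name, ready for "aws s3 ls s3://<name>" and friends.
bucket_from_host() {
  local host="$1"
  host="${host#http://}"
  host="${host#https://}"
  printf '%s\n' "${host%.s3.amazonaws.com}"
}

bucket_from_host 'https://assets.s3.amazonaws.com'   # prints: assets
```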

I started checking all the 403 hosts, trying to list the contents of each bucket, but every one returned the same "AccessDenied". Still, I didn't give up.

Tried to access it in the terminal, but still "Access Denied".

Then I started checking whether I could write an object to each bucket, and after some time one result grabbed my attention: upload ../[FILE] to s3://bucket_name/s3_bucket.svg

So I visited the bucket and tried to view my uploaded file, but once again it said "Access Denied", and yes, I hate those words :(

After reading the AWS documentation, I learned that if the bucket is misconfigured, we can set the ACL of an object at upload time. So I used the following command to make my uploaded file public.

Final Command :

aws s3 cp [FILE] s3://[bucket_name] --acl public-read

After running this command, I could view my uploaded file publicly ;)

I created a nice report and submitted it to Red Bull; the team validated the security issue and accepted it within four days, and you know what the reward will be ;)

IMPACT :

A bad configuration of your Amazon S3 buckets can have a big impact: data corruption, malware distribution, and even classic attacks like ransomware. An attacker can perform unauthorized actions with aws-cli, inject malicious code into a file, and upload it to your bucket, with severe consequences.

MITIGATION :

https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html

FEW TAKEAWAYS :

  1. Be Creative
  2. Build your methodology
  3. Learn basics of Linux
  4. Learn at least the basics of programming. I don't like to code, but I still know the basics.

TIMELINE :

P.S. - This is my first writeup; if you have any suggestions or doubts, you are always welcome …

TWITTER - https://twitter.com/3th1c_yuk1

LINKEDIN - https://www.linkedin.com/in/3th1cyuk1/
