Unveiling Amazon S3 bucket names
In this post I’m going to show you different ways and techniques to reveal bucket names that may be hiding behind a regular domain.
What for, you may ask? Well, if you want to make signed requests to the bucket via the AWS CLI/S3 API and step into the dark path of weak ACLs (for example), you simply need the bucket name.
Note: I’m not going to explain how to determine if a host we are currently targeting is indeed an S3 bucket or not. For this, Frans Rosén did an excellent job explaining how to identify buckets (and some ways about how to obtain bucket names, as well) in the post I linked earlier.
The first technique is a pretty well-known one: the domain could just be an alias of the S3 endpoint name. A CNAME is a DNS record that maps one domain name to another, referred to as the Canonical Name.
In order to get the bucket name, we fire up our console and run nslookup against the domain:

# nslookup images.somedomain.com
images.somedomain.com canonical name = images-storage-somedomain.s3.amazonaws.com
In this case, the S3 bucket name would be images-storage-somedomain. Easy, right?
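The extraction can also be scripted. Below is a minimal sketch (the helper name and regex are my own, not from any official tool) that pulls the bucket name out of a CNAME target like the one nslookup returned:

```python
import re

def bucket_from_cname(cname: str):
    """Extract the bucket name from an S3 CNAME target.

    Handles the common endpoint shapes, e.g.:
      bucket.s3.amazonaws.com
      bucket.s3.eu-west-1.amazonaws.com
      bucket.s3-website-us-east-1.amazonaws.com
    """
    m = re.match(r"(?P<bucket>.+?)\.s3[.-][a-z0-9.-]*amazonaws\.com\.?$", cname)
    return m.group("bucket") if m else None

print(bucket_from_cname("images-storage-somedomain.s3.amazonaws.com"))
# images-storage-somedomain
```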
In another common setup, the fully qualified domain name (FQDN) is the actual S3 bucket name. Following the same example, the FQDN would be images.somedomain.com, so we can check whether a bucket with that name exists:
Be mindful in this case: there is a small chance that we are hitting an existing S3 bucket, but a different/unrelated one.
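One way to run this check is to request the path-style endpoint and look at the HTTP status: a 404 (NoSuchBucket) means no bucket by that name, while anything else (200, 403 AccessDenied, ...) means it exists. The helper names below are my own sketch; the probe function needs network access to actually run:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def candidate_url(fqdn: str) -> str:
    # Path-style endpoint: the candidate bucket name goes in the URL path.
    return f"https://s3.amazonaws.com/{fqdn}/"

def bucket_exists(fqdn: str) -> bool:
    """Probe the endpoint (requires network). 404 => no such bucket;
    any other status (200, 403, ...) => a bucket with that name exists."""
    try:
        urlopen(Request(candidate_url(fqdn), method="HEAD"))
        return True
    except HTTPError as e:
        return e.code != 404

print(candidate_url("images.somedomain.com"))
# https://s3.amazonaws.com/images.somedomain.com/
```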
If listing objects (s3:ListBucket) is enabled for us in the S3 bucket, the name will be disclosed in the <Name>bucket_name</Name> element, as can be seen next (this example is from the flAWS Amazon CTF):
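As a sketch of parsing such a listing with the standard library (the XML below is a trimmed stand-in for a real ListBucketResult response, not the actual flAWS output):

```python
import xml.etree.ElementTree as ET

# Stand-in for a ListBucketResult response where s3:ListBucket is allowed.
listing = """<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>flaws.cloud</Name>
  <Contents><Key>index.html</Key></Contents>
</ListBucketResult>"""

# The response uses the S3 XML namespace, so we must qualify the lookup.
ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
bucket_name = ET.fromstring(listing).find("s3:Name", ns).text
print(bucket_name)  # flaws.cloud
```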
A cool trick tweeted by Daniel Matviyiv: we simply add the %C0 URL-encoded character at the end of an object URL. The bucket name will be disclosed in the <URI>/bucket_name/...</URI> XML element:
This trick has some limitations, though. For it to work, the web server/application should be internally redirecting to/referencing the S3 bucket by URL path, and the bucket name should not be in the domain name (e.g.):
If instead the web server/application redirects to/references the full S3 endpoint name (e.g.):
The trick will not work (you will likely get the same error, but the bucket name won't be disclosed in the URI XML element).
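When the trick does work, the response is an S3-style error document; a sketch of pulling the bucket name out of the URI element (the sample XML is illustrative, not a captured response):

```python
import xml.etree.ElementTree as ET

# Illustrative error response after appending %C0 to an object URL.
error = """<Error>
  <Code>InvalidURI</Code>
  <Message>Couldn't parse the specified URI.</Message>
  <URI>/images-storage-somedomain/header.jpg%c0</URI>
</Error>"""

uri = ET.fromstring(error).find("URI").text
bucket = uri.split("/")[1]  # the first path segment is the bucket name
print(bucket)  # images-storage-somedomain
```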
This is quite a nice technique: Amazon offers the possibility to download files from S3 using the BitTorrent protocol. As per their documentation:
Retrieving a .torrent file for any publicly available object is easy. Simply add a “?torrent” query string parameter at the end of the REST GET request for the object. No authentication is required. Once you have a BitTorrent client installed, downloading an object using BitTorrent download might be as easy as opening this URL in your web browser.
The interesting thing about this functionality is that the S3 bucket name is disclosed inside the .torrent file. So, in order to get it, we would need to:
- Discover a publicly available object in the S3 bucket (Brute-forcing, Google, Wayback Machine history, try to get creative here).
- Add the ?torrent query string parameter at the end of the object URL.
- Download the torrent file and parse it. For this task we can use a script like torrent_parser, which will generate a nice JSON output.
- (Optionally) Prettify it!
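The steps above can also be done without an external parser: a torrent file is just bencoded data, and a tiny decoder fits in a few lines. In the sketch below, the sample torrent's field layout is my own assumption (real S3 torrents may store the endpoint under a different key), so rather than hardcoding a key we simply scan every string for amazonaws.com:

```python
def bdecode(data: bytes, i: int = 0):
    """Tiny bencode decoder: returns (value, next_index)."""
    c = data[i:i + 1]
    if c == b"d":                                # dictionary
        d, i = {}, i + 1
        while data[i:i + 1] != b"e":
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            d[k] = v
        return d, i + 1
    if c == b"l":                                # list
        lst, i = [], i + 1
        while data[i:i + 1] != b"e":
            v, i = bdecode(data, i)
            lst.append(v)
        return lst, i + 1
    if c == b"i":                                # integer
        j = data.index(b"e", i)
        return int(data[i + 1:j]), j + 1
    j = data.index(b":", i)                      # byte string: <len>:<bytes>
    n = int(data[i:j])
    return data[j + 1:j + 1 + n], j + 1 + n

def amazonaws_strings(obj):
    """Recursively collect every byte string mentioning amazonaws.com."""
    if isinstance(obj, bytes):
        return [obj.decode()] if b"amazonaws.com" in obj else []
    if isinstance(obj, dict):
        return [s for kv in obj.items() for x in kv for s in amazonaws_strings(x)]
    if isinstance(obj, list):
        return [s for x in obj for s in amazonaws_strings(x)]
    return []

# Fabricated sample .torrent with a web-seed style URL (assumed layout).
url = b"http://s3.amazonaws.com/images-storage-somedomain/header.jpg"
sample = b"d8:url-list" + str(len(url)).encode() + b":" + url + b"e"

torrent, _ = bdecode(sample)
hits = amazonaws_strings(torrent)
print(hits[0])  # http://s3.amazonaws.com/images-storage-somedomain/header.jpg
```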
If everything else fails, stick to brute-forcing 😜. There is a nice Ruby script called lazys3 which basically generates different patterns and permutations based on a seed word (e.g. the company name), issues an HTTP request for each candidate, checks the response, and shows you every name that doesn't return a 404.
Again, be mindful, we might be engaging with a bucket that really doesn’t belong to a target in our scope.
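A sketch of the same idea, with a deliberately tiny wordlist of my own (lazys3 ships far more patterns): generate candidates from a seed word, then probe each one the same way as in the FQDN check earlier.

```python
from itertools import product

# Tiny stand-in wordlists; the real lazys3 lists are much larger.
ENVIRONMENTS = ["dev", "staging", "prod"]
SUFFIXES = ["assets", "backup", "logs"]

def candidates(seed: str):
    """Generate bucket-name permutations from a seed word."""
    names = {seed}
    for env, suffix in product(ENVIRONMENTS, SUFFIXES):
        names.update({
            f"{seed}-{suffix}",
            f"{seed}-{env}",
            f"{seed}-{env}-{suffix}",
            f"{env}-{seed}",
        })
    return sorted(names)

for name in candidates("somedomain")[:5]:
    print(name)
```

Each generated name would then be probed against the S3 endpoint (any non-404 response is worth a closer look).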
What about the region?
You also need the region? No problem, here you have a list of all possible regions for S3 buckets:
When interacting with an S3 endpoint in the form of s3.amazonaws.com/<bucket_name>/ (no region specified), we are in the us-east-1 (N. Virginia) region by default.
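A small helper (the function name and defaults are my own) that builds both the global and the region-qualified endpoint, in either path style or virtual-hosted style:

```python
def s3_endpoint(bucket, region=None, path_style=True):
    """Build an S3 REST endpoint URL. With no region, fall back to the
    global endpoint, which S3 treats as us-east-1."""
    host = "s3.amazonaws.com" if region is None else f"s3.{region}.amazonaws.com"
    if path_style:
        return f"https://{host}/{bucket}/"          # path-style request
    return f"https://{bucket}.{host}/"              # virtual-hosted style

print(s3_endpoint("images-storage-somedomain"))
# https://s3.amazonaws.com/images-storage-somedomain/
print(s3_endpoint("images-storage-somedomain", "eu-west-1", path_style=False))
# https://images-storage-somedomain.s3.eu-west-1.amazonaws.com/
```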