Unveiling Amazon S3 bucket names

localh0t
Feb 16, 2019


Introduction

In this post I’m going to show you different ways and techniques to reveal bucket names that may be hiding behind a regular domain.

What for, you may ask? Well, if you want to make signed requests to the bucket via the AWS CLI/S3 API and step into the dark path of weak ACLs (for example), you simply need the bucket name.
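
For example, once we have the name, a signed listing request with the AWS CLI just takes the bucket name (the name below is made up):

# aws s3 ls s3://images-storage-somedomain/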

Note: I’m not going to explain how to determine whether a host we are currently targeting is indeed an S3 bucket or not. Frans Rosén did an excellent job explaining how to identify buckets (as well as some ways to obtain bucket names) in his post on the subject.

CNAME

The first technique is a well-known one: the domain could simply be an alias of the S3 endpoint name. A CNAME is a special DNS record that maps one domain name to another, referred to as the Canonical Name.

In order to get the bucket name, we fire up our console and run nslookup against the domain:

# nslookup images.somedomain.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
images.somedomain.com canonical name = images-storage-somedomain.s3.amazonaws.com
<...>

In this case, the S3 bucket name would be images-storage-somedomain. Easy, right?

FQDN

In another common setup, the fully qualified domain name (FQDN) is the actual S3 bucket name. Following the same example, the FQDN would be images.somedomain.com, so we can check whether a bucket with that name exists:

images.somedomain.com.s3.amazonaws.com

Be mindful in this case: there is a small chance that we are hitting an existing S3 bucket, but a different, unrelated one.
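
A quick way to probe the candidate name is a HEAD request against the path-style endpoint (this also sidesteps the TLS certificate mismatch that dotted bucket names cause on the virtual-hosted endpoint); the domain is, again, just an example:

# curl -sI https://s3.amazonaws.com/images.somedomain.com/

A 404 (NoSuchBucket) means the bucket does not exist; a 200, 301 or 403 means it does.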

Listing Enabled

If listing objects (s3:ListBucket) is enabled for us on the S3 bucket, the name will be disclosed in the <Name>bucket_name</Name> element, as can be seen next (this example is from the flAWS Amazon CTF):

The bucket name is disclosed in the XML “Name” element.
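
For reference, a minimal sketch of what such a response looks like when requesting the bucket root (domain and bucket name are hypothetical):

# curl -s https://images.somedomain.com/
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Name>images-storage-somedomain</Name>
  <Prefix></Prefix>
  <Contents>
  ...
  </Contents>
</ListBucketResult>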

%C0 Trick

A cool trick tweeted by Daniel Matviyiv: we simply add the %C0 URL-encoded character at the end of an object URL. The bucket name will be disclosed in the <URI>/bucket_name/...</URI> XML element:

The bucket name is the string between the first 2 forward slashes (/).
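
Roughly, the request and response look like this (domain, object path, bucket name and error details are all illustrative; the exact error code may differ):

# curl -s "https://somedomain.com/images/logo.png%C0"
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>InvalidURI</Code>
  <Message>Couldn't parse the specified URI.</Message>
  <URI>/images-storage-somedomain/images/logo.png%C0</URI>
</Error>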

This trick has some limitations, though. In order for it to work, the web server/application should be internally redirecting to or referencing the S3 bucket by URL path, with the bucket name not being part of the domain name, e.g.:

s3.amazonaws.com/<bucket_name>/<some_object>

If, instead, the web server/application redirects to or references the full S3 endpoint name (e.g.):

<bucket_name>.s3.amazonaws.com/<some_object>

The trick will not work (you will likely get the same error, but the bucket name won’t be disclosed in the URI XML element).

Torrent

This is quite a nice technique: Amazon offers the possibility to download files from S3 using the BitTorrent protocol. As per their documentation:

Retrieving a .torrent file for any publicly available object is easy. Simply add a “?torrent” query string parameter at the end of the REST GET request for the object. No authentication is required. Once you have a BitTorrent client installed, downloading an object using BitTorrent download might be as easy as opening this URL in your web browser.

The interesting thing about this functionality is that the S3 bucket name is disclosed inside the .torrent file. So, in order to get it, we would need to:

  1. Discover a publicly available object in the S3 bucket (brute-forcing, Google, Wayback Machine history, try to get creative here).
  2. Add the ?torrent query string parameter at the end of the object URL.
  3. Download the torrent file and parse it (see the sketch below). For this task we can use a script like torrent_parser, which will generate a nice JSON output.
  4. (Optionally) Prettify it!

The S3 bucket name is disclosed in the “x-amz-bucket” attribute.
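
A quick alternative sketch with curl and strings (domain and object are hypothetical, and the ?torrent parameter has to actually reach S3); since the x-amz-bucket field is stored as plain text inside the bencoded file, even a grep is enough to spot the name without a full parser:

# curl -s "https://images.somedomain.com/logo.png?torrent" -o object.torrent
# strings object.torrent | grep x-amz-bucket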

Brute-forcing

If everything else fails, stick to brute-forcing 😜. There is a nice Ruby script called lazys3 which basically generates different patterns and permutations based on a seed word (e.g. the company name), issues an HTTP request for each candidate, checks the response, and shows you the ones that don’t return a 404.
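
If you just want the core idea without the Ruby dependency, a minimal shell sketch would look like this (candidates.txt is a hypothetical wordlist of generated names):

while read name; do
  # 404 means no such bucket; 200, 301 or 403 means the bucket exists
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://${name}.s3.amazonaws.com/")
  [ "$code" != "404" ] && echo "$name -> $code"
done < candidates.txt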

Again, be mindful: we might be engaging with a bucket that doesn’t actually belong to a target in our scope.

What about the region?

Do you also need the region? No problem, here is a list of all the possible regions for S3 buckets:

us-east-2
us-east-1
us-west-1
us-west-2
ap-south-1
ap-northeast-3
ap-northeast-2
ap-southeast-1
ap-southeast-2
ap-northeast-1
ca-central-1
cn-north-1
cn-northwest-1
eu-central-1
eu-west-1
eu-west-2
eu-west-3
eu-north-1
sa-east-1

When interacting with an S3 endpoint in the form <bucket_name>.s3.amazonaws.com or s3.amazonaws.com/<bucket_name>/ (no region specified), we are hitting the us-east-1 (N. Virginia) region by default.
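
In practice, the region is also disclosed in the x-amz-bucket-region response header, typically even when access is denied (bucket name and region below are made up):

# curl -sI https://images-storage-somedomain.s3.amazonaws.com/ | grep -i x-amz-bucket-region
x-amz-bucket-region: eu-west-1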
