Secure and Controlled Access to Private Files in Amazon S3 Buckets

Hamid Reza Salimian
May 7, 2023


Storing and managing files securely is a critical aspect of any application. Amazon Simple Storage Service (S3) provides robust, scalable, and secure storage for files. However, when dealing with sensitive data, it’s essential to implement controlled access to ensure that only authorized users can view these files. In this post, I’ll walk you through a solution that combines authentication, temporary URLs, caching, and cookies to provide secure and efficient access to private files in any bucket.

Step 1: Generate Presigned URLs

Amazon S3 provides a powerful feature called pre-signed URLs, which allows you to grant temporary access to private objects stored in a bucket. These URLs enable users to view or download files without requiring AWS security credentials. Instead, the pre-signed URLs are generated using your AWS access keys, and they are only valid for a specified duration. This makes them an ideal solution for providing controlled access to private files in an S3 bucket.

  1. Set up a private Amazon S3 bucket and upload a file to it, ensuring that the object is not publicly accessible.
  2. Call GetObject. In your backend, use the AWS SDK for JavaScript (or your language’s SDK) to create a function that generates pre-signed URLs for your S3 objects. The function accepts the bucket name, the object key (file name), and the desired expiration time for the URL, and signs a GetObject request with those parameters. The result is a temporary URL that is publicly accessible and valid only for the specified duration.

Here’s an example using Node.js and the AWS SDK for JavaScript:

const AWS = require('aws-sdk');

// Configure the AWS SDK with your AWS credentials and region
AWS.config.update({
  accessKeyId: 'your_access_key_id',
  secretAccessKey: 'your_secret_access_key',
  region: 'your_aws_region'
});

// Create an S3 instance
const s3 = new AWS.S3();

// Function to generate a presigned URL
function generatePresignedUrl(bucket, objectKey, expires) {
  const params = {
    Bucket: bucket,
    Key: objectKey,
    Expires: expires // Duration in seconds
  };

  return new Promise((resolve, reject) => {
    s3.getSignedUrl('getObject', params, (error, url) => {
      if (error) {
        reject(error);
      } else {
        resolve(url);
      }
    });
  });
}

// Example usage
const bucketName = 'your_bucket_name';
const fileName = 'your_file_name';
const expirationTime = 60 * 60; // 1 hour

generatePresignedUrl(bucketName, fileName, expirationTime)
  .then((url) => {
    console.log('Presigned URL:', url);
  })
  .catch((error) => {
    console.error('Error generating presigned URL:', error);
  });
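
If you’re on version 3 of the AWS SDK for JavaScript (the modular @aws-sdk/* packages), the same idea can be sketched like this. This is a minimal sketch: it assumes credentials are resolved from the environment or your usual credential provider, and the function name generatePresignedUrlV3 is just illustrative.

const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

// Credentials are picked up from the environment, shared config, or instance profile by default
const s3Client = new S3Client({ region: 'your_aws_region' });

// Generate a presigned GET URL, valid for `expires` seconds
async function generatePresignedUrlV3(bucket, objectKey, expires) {
  const command = new GetObjectCommand({ Bucket: bucket, Key: objectKey });
  return getSignedUrl(s3Client, command, { expiresIn: expires });
}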

Step 2: We Need a Fixed URL, Not a Temporary One

On-Demand URL Generation and Caching to the Rescue!

Since the pre-signed URLs generated in the previous step are temporary, we cannot store them in the database for long-term use, yet we still want a fixed URL for our users. Instead, we can create a custom route that serves files on demand and cache the generated URLs to improve efficiency. When a user requests a file, we generate (or reuse) a temporary URL and redirect the user to it. This lets us reference the custom route directly in our HTML, such as <img src="/files/image1.png">.

To implement this solution, follow these steps:

  1. Create a custom route in your Express backend, for example, a GET route to /files/:filename.
  2. Inside the route handler, first, check if a cached URL exists for the requested file. If not, call the generatePresignedUrl function to create a new pre-signed URL.
  3. Cache the generated URL with an expiration equal to the pre-signed URL’s expiration.
  4. Redirect the user to the cached or newly generated URL using a 302 redirect.

Figure: Generate Presigned URLs flow

Here’s a simple Node.js example using Express and the memory-cache package for caching:

const express = require('express');
const cache = require('memory-cache');

const { generatePresignedUrl } = require('./generatePresignedUrl'); // Import the generatePresignedUrl function from the previous example

const app = express();
const PORT = process.env.PORT || 3000;

app.get('/files/:filename', async (req, res) => {
  const fileName = req.params.filename;
  const bucketName = 'your_bucket_name';
  const expirationTime = 60 * 60; // 1 hour

  // Check if the URL is cached
  let url = cache.get(fileName);

  if (!url) {
    try {
      // Generate a new presigned URL if not cached
      url = await generatePresignedUrl(bucketName, fileName, expirationTime);

      // Cache the URL with the same expiration as the presigned URL
      cache.put(fileName, url, expirationTime * 1000); // Expiration in milliseconds
    } catch (error) {
      console.error('Error generating presigned URL:', error);
      res.status(500).send('Error generating presigned URL');
      return;
    }
  }

  // Redirect to the cached or newly generated URL
  res.redirect(302, url);
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
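
A small refinement you may want to consider: if the cache entry lives exactly as long as the presigned URL, a request served from the cache right before expiry can redirect to a URL that is about to stop working. One simple option is to cache the URL for slightly less than its lifetime, for example:

cache.put(fileName, url, (expirationTime - 60) * 1000); // expire the cache entry one minute before the URL itself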

But wait… should I make an XHR call for each file?

Step 3: Set a Cookie for Authorization

In Step 2, we created a constant route for each file, making it easier for our users to access their desired content. But now we need to ensure that these routes are secure and not accessible to the public. So how can we add authentication to our requests without resorting to XHR for every single file or exposing sensitive information in GET requests? The answer is simple: cookies!

Let’s dive into how we can leverage cookies for a secure and straightforward solution.

  1. Using cookies for authentication: Instead of embedding credentials in GET requests or making an XHR call for every file, we can set custom credentials in cookies. This way, every subsequent request will automatically include these cookies.
  2. Setting up secondary tokens: When a user logs in to the app, we can generate a secondary token specifically for accessing assets like images or videos. This token can be stored as a cookie, ensuring that every request for these assets also contains the authentication token.
  3. Creating a middleware for cookie validation: In our backend, we can implement middleware that checks the validity of the authentication cookie. If the cookie is valid, we continue the request and redirect the user to the corresponding file URL in Amazon S3 (see the sketch after this list).
  4. Keeping it safe: It’s essential to avoid using primary credentials (like the main bearer token) for setting cookies. Instead, always use secondary tokens to prevent any potential security risks.
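
Here’s a minimal sketch of what this can look like with Express. It assumes jsonwebtoken for signing the secondary token and cookie-parser for reading cookies; the cookie name (asset_token), the /login route, and the ASSET_TOKEN_SECRET variable are illustrative, so adapt them to your own auth setup.

const express = require('express');
const cookieParser = require('cookie-parser');
const jwt = require('jsonwebtoken');

const app = express();
app.use(cookieParser());

// Illustrative secret for the secondary (asset-only) token; keep it separate from your main auth secret
const ASSET_TOKEN_SECRET = process.env.ASSET_TOKEN_SECRET;

// When the user logs in, issue a secondary token and store it in a cookie
app.post('/login', (req, res) => {
  // ... your normal authentication logic goes here ...

  const assetToken = jwt.sign({ scope: 'assets' }, ASSET_TOKEN_SECRET, { expiresIn: '1h' });

  res.cookie('asset_token', assetToken, {
    httpOnly: true,   // not readable from JavaScript
    secure: true,     // only sent over HTTPS
    sameSite: 'lax',  // sent along with same-site requests such as <img> loads
    maxAge: 60 * 60 * 1000
  });

  res.sendStatus(200);
});

// Middleware that validates the asset cookie before serving files
function requireAssetToken(req, res, next) {
  const token = req.cookies.asset_token;

  if (!token) {
    return res.status(401).send('Missing asset token');
  }

  try {
    jwt.verify(token, ASSET_TOKEN_SECRET);
    next();
  } catch (error) {
    res.status(403).send('Invalid or expired asset token');
  }
}

// Protect the file route from Step 2 with the middleware
app.get('/files/:filename', requireAssetToken, async (req, res) => {
  // ... presigned URL lookup, caching, and 302 redirect from Step 2 ...
});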

By implementing this cookie-based approach, we can secure our file access routes while maintaining a seamless and efficient user experience. Plus, we can avoid the pitfalls of exposing sensitive information in GET requests or resorting to XHR calls for every file. Just remember to always use secondary tokens for added security, and you’ll be all set!
