Using AWS S3 with Node.js

Srijan Saha
Alexa Developers SRM
4 min read · Sep 16, 2020

AWS, or Amazon Web Services, is the biggest and most popular cloud computing platform out there. AWS offers many of its easy-to-use, scalable services to its users for free for one year.

One of AWS’s most popular services, Amazon S3 or Amazon Simple Storage Service, provides object storage through a web service interface. We can access the documents stored in S3 not just from the web interface that AWS provides, but also from backend frameworks like Node.js.

The first step to getting started with S3 is to create an AWS account and then select ‘S3’ under the ‘Services’ tab in the navbar. Create a new bucket with the relevant details and click ‘Next’.

Once this is done, you are ready to start building your Node.js application. Navigate to the desired directory on your system, open a code editor, and type the following command in the terminal:

npm init -y

This will initialize your Node.js application with default settings. Now you can go ahead and download the required dependencies for the project:

npm install express multer body-parser aws-sdk multer-s3 dotenv

Create the server file in your root directory, name it ‘index.js’, and start a simple Express server.

const express = require("express");
const app = express();
const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
  console.log(`Listening on port ${PORT}`);
});

You can check that your server is running on localhost port 3000 by typing the following command in the terminal:

node index.js

Now go to the AWS console and get the credentials with which you can access your bucket remotely.

Create a .env file in your root directory and paste those credentials under the following variable names:

AWS_S3_BUCKET=NAMEOFYOURBUCKET
AWS_ACCESS_KEY_ID=YOURACCESSKEYID
AWS_SECRET_ACCESS_KEY=YOURSECRETACCESSKEY
AWS_DEFAULT_REGION=REGION

Now create a file named s3UploadClient.js in the root directory. This file will hold the function with which we will upload files to S3.

const aws = require("aws-sdk");
const multer = require("multer");
const multerS3 = require("multer-s3");
require("dotenv").config();

// The S3 client holds the credentials with which we access our bucket.
const s3 = new aws.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_DEFAULT_REGION,
});

const upload = multer({
  limits: {
    fileSize: 1024 * 1024 * 5, // 5 MB per file
  },
  storage: multerS3({
    s3: s3,
    bucket: process.env.AWS_S3_BUCKET,
    contentType: multerS3.AUTO_CONTENT_TYPE,
    acl: "public-read",
    metadata: (req, file, cb) => {
      cb(null, { fieldName: file.fieldname });
    },
    // The key determines where the object is stored inside the bucket.
    key: (req, file, cb) => {
      cb(null, "files_from_node/" + Date.now().toString() + file.originalname);
    },
  }),
});

module.exports = {
  upload,
};

The object s3 that we have created in this file holds the credentials with which we can access our bucket. We then define the middleware upload, which uses multer to handle file uploads. Here we specify the key limits to cap the size of the documents uploaded, and storage, which in turn uses the multer-s3 dependency that we installed earlier to specify the bucket details. The callback defined in key specifies the location where we wish to save our file. In this case, our file will be stored in the files_from_node folder, and the name of the file will be the current epoch time concatenated with the original file name. Once this is done, we export this middleware via module.exports so it can be used in our index.js file.
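The naming scheme used in the key callback can be isolated as a small pure function, which makes it easy to see and test in isolation. This helper is hypothetical (it is not part of multer-s3); it just mirrors the concatenation used in the callback:

```javascript
// Hypothetical helper mirroring the key callback: prefix the folder, then
// concatenate the epoch timestamp and the original file name.
function buildS3Key(originalName, now = Date.now()) {
  return "files_from_node/" + now.toString() + originalName;
}

// With a fixed timestamp the result is deterministic:
// buildS3Key("photo.png", 1600000000000)
//   → "files_from_node/1600000000000photo.png"
```

Because Date.now() has millisecond resolution, two uploads of the same file name almost always get distinct keys, so earlier uploads are not overwritten.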

You can go back to your index.js file and paste the following lines of code:

const express = require("express");
const bodyParser = require("body-parser");
require("dotenv").config();
const { upload } = require("./s3UploadClient");

const app = express();
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

// Accept a single file under the field name "inputFile" and upload it to S3.
app.post("/upload", upload.array("inputFile", 1), (req, res) => {
  console.log(req.files[0]);
  console.log(req.files[0].location);
  res.status(200).json({
    response: req.files[0],
  });
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Listening on port ${PORT}`);
});

Our index file now consists of the code that we used to start a basic Express server, plus the route we will use to upload files to our bucket. We have imported the upload middleware from our s3UploadClient.js file. Notice we are also using the body-parser module to parse the data that we will get in the body of the requests.

Notice that our upload route, ‘/upload’, uses the middleware upload. We have specified that we are expecting an array of size 1 in the request. Once this request is fired, the req.files variable will consist of an array of objects holding the details of the uploaded files. Since we have specified that we will only accept 1 file, we can log the 0th index of the req.files array.
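For reference, one entry in req.files looks roughly like the object below. The values here are illustrative (the bucket name, region, and timestamp are made up), but the fields shown, in particular key and location, are the ones multer-s3 attaches to each uploaded file:

```javascript
// Illustrative shape of req.files[0] after a successful upload; every value
// below is a made-up example, not a real bucket or credential.
const exampleUploadedFile = {
  fieldname: "inputFile",
  originalname: "photo.png",
  mimetype: "image/png",
  size: 102400, // bytes
  bucket: "my-example-bucket",
  key: "files_from_node/1600000000000photo.png",
  acl: "public-read",
  location:
    "https://my-example-bucket.s3.ap-south-1.amazonaws.com/files_from_node/1600000000000photo.png",
};
```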

The second console log, i.e. req.files[0].location, will give you a URL by which you can access the uploaded file. Note that S3 stores files and gives a unique URL for each of them for ease of access. If you want this file to be available for others to see, you can go to your S3 dashboard, select this bucket, go to the ‘Permissions’ tab, and uncheck ‘Block Public Access’.
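That location URL generally follows the virtual-hosted-style pattern https://&lt;bucket&gt;.s3.&lt;region&gt;.amazonaws.com/&lt;key&gt;. As a sketch, it can be rebuilt from the bucket, region, and key with a small helper (hypothetical, shown only to illustrate the URL shape):

```javascript
// Hypothetical helper: compose a virtual-hosted-style S3 URL from its parts.
function s3ObjectUrl(bucket, region, key) {
  return `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}

// s3ObjectUrl("my-bucket", "ap-south-1", "files_from_node/a.png")
//   → "https://my-bucket.s3.ap-south-1.amazonaws.com/files_from_node/a.png"
```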

Now that you know how to upload files to AWS S3, you can code your application so that it stores this URL and the file name in a database (like MongoDB) using a similar POST request, and also create normal GET requests to fetch those URLs from the database.
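As a sketch of that idea, the upload handler could pass the file’s name and location to a small persistence layer. A real app would use a MongoDB model here; a plain in-memory array keeps the example self-contained, and both function names are hypothetical:

```javascript
// In-memory stand-in for a database collection; swap this for a MongoDB
// model (e.g. via Mongoose) in a real application.
const fileRecords = [];

// Save the uploaded file's name and S3 URL (called from the POST handler).
function saveFileRecord(name, url) {
  const record = { name, url, uploadedAt: new Date().toISOString() };
  fileRecords.push(record);
  return record;
}

// Return all stored URLs (what a GET route would respond with).
function listFileUrls() {
  return fileRecords.map((record) => record.url);
}
```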

Note: Make sure you check the free tier limits of AWS S3 beforehand.
