Patrick Shaughnessy
Jun 9, 2016 · 4 min read

Today I tackled a tough problem: file uploads with MongoDB and GridFS. The tricky part wasn't so much the file upload itself, but rather putting it all together. I found plenty of resources for individual pieces of the puzzle, so I figured it'd be good to share what I learned about the whole picture.

The Goal

I needed to allow a user to upload a PDF file and store it in MongoDB. Later I'd need to retrieve that file and display it on the client again somehow. The app was MEAN stack with Angular 1.5 and the usual suspects. This led me to outline the basic requirements:

  1. HTML file input and submission handler to send the file to the backend
  2. An Express route to receive the file upload
  3. Store the file in MongoDB
  4. Retrieve the file using the reference created when storing

Using GridFS wasn’t entirely necessary for this project as the pdf documents were probably going to be less than 16MB. But like any good developer, I was curious about it and just wanted to try it.

The Front End

On the Angular side of things, I used nervgh’s angular-file-upload, although I think I might want to check out the other option, ng-file-upload, next time as there seem to be fewer issues reported.

The HTML is pretty straightforward:

<input type="file" nv-file-select uploader="uploader" options="uploadOptions" />
<button ng-click="submit()">Submit</button>

In the controller, I instantiated a new FileUploader instance, which is provided by the angular-file-upload package. You pass it some options (which seem to be a bit finicky, but maybe I was doing it wrong). Finally, just call the upload method on the first file in the queue on submit.

.controller('myController', function (currentUser, $scope, FileUploader, $sce) {
  $scope.uploader = new FileUploader();

  var uploadURL = '/api/upload/' + currentUser._id;
  $scope.uploadOptions = {
    queueLimit: 1,
    autoUpload: true,
    url: uploadURL
  };

  $scope.submit = function () {
    if (!$scope.uploader.queue[0]) return;
    $scope.uploader.queue[0].upload();
  };
});

One weird thing to note is that the uploader instance didn't like being named anything other than "uploader", so having multiple file uploaders on the same view / controller was not possible. A workaround is using directives with isolate scopes.
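For what it's worth, here's a rough sketch of that directive workaround. The directive name, template, and attribute below are made up for illustration; the point is that each directive instance gets its own isolate scope, and therefore its own "uploader":

```javascript
// Sketch only: wrap the file input in a directive with an isolate scope,
// so each instance owns a private "uploader" and the names don't clash.
angular.module('myApp').directive('pdfUploader', function (FileUploader) {
  return {
    restrict: 'E',
    scope: { url: '@' }, // isolate scope: one per directive instance
    template: '<input type="file" nv-file-select uploader="uploader" />',
    link: function (scope) {
      scope.uploader = new FileUploader({ url: scope.url, queueLimit: 1 });
    }
  };
});
```

Then `<pdf-uploader url="/api/upload/123"></pdf-uploader>` can appear as many times as you like on one view.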

The Back End

On the backend, I sent the file to my file upload route. In Express, I used the connect-multiparty middleware to parse the file out of the request. This makes it available as req.files.

The next part can get a little heavy, but at a high level here’s what’s happening:

  1. Connect the GridFS stream service to the existing mongo driver and database instance created when you start your app
  2. Create a GridFS write stream that will handle chunking the file and storing it in MongoDB
  3. Use the native file system (“fs”) to read the file as a file stream and pipe it to the GridFS write stream
var express = require('express');
var router = express.Router();
var multiparty = require('connect-multiparty')();
var User = require('../models/User');
var fs = require('fs');
var mongoose = require('mongoose');
var Gridfs = require('gridfs-stream');

router.post('/upload/:id', multiparty, function (req, res) {
  var db = mongoose.connection.db;
  var mongoDriver = mongoose.mongo;
  var gfs = new Gridfs(db, mongoDriver);

  var writestream = gfs.createWriteStream({
    mode: 'w',
    content_type: req.files.file.mimetype,
    metadata: req.body
  });

  fs.createReadStream(req.files.file.path).pipe(writestream);

  writestream.on('close', function (file) {
    User.findById(req.params.id, function (err, user) {
      // handle error
      user.file = file._id;
      user.save(function (err, updatedUser) {
        // handle error
        res.json(200, updatedUser);
        // clean up the temp file multiparty wrote to disk
        fs.unlink(req.files.file.path, function (err) {
          // handle error
        });
      });
    });
  });
});

module.exports = router;

As you can see, connect-multiparty writes the upload to a temporary file on disk, which should be deleted with fs.unlink once it's fully piped into MongoDB.

I chose to store only the file _id on the user model here, but that’s just one option. Essentially now I had two additional collections in my database: fs.files and fs.chunks. The two work together to keep all the pieces of your files organized.
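To make those two collections concrete, here's roughly what the documents look like. The values below are made up for illustration; the field names follow the GridFS convention:

```javascript
// Hypothetical fs.files document for a 1MB PDF (values invented for
// illustration; field names per the GridFS convention).
var fileDoc = {
  _id: '507f191e810c19729de860ea', // this is the reference saved on the user
  filename: 'report.pdf',
  contentType: 'application/pdf',
  length: 1048576,                 // total file size in bytes
  chunkSize: 261120,               // default chunk size (255 KB)
  uploadDate: new Date('2016-06-09'),
  metadata: {}                     // whatever was in req.body
};

// fs.chunks holds the binary pieces; each chunk points back via files_id.
var chunkDoc = { files_id: fileDoc._id, n: 0, data: '<Binary>' };

// GridFS splits the file into ceil(length / chunkSize) chunks:
var numChunks = Math.ceil(fileDoc.length / fileDoc.chunkSize);
console.log(numChunks); // 5 chunks for this 1MB file
```

So fetching a file back means reading the fs.files document, then stitching its fs.chunks back together in order of n, which is exactly what the GridFS read stream does for you.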

Back To The Front

So what about displaying that stored file? Well, in my case, I just wanted to plop it into an iframe so here’s what that looked like.

<iframe ng-src="{{pdfContent}}" width="100%"></iframe>

In the controller, there's a bit of careful handling when fetching the file data. Namely, watch out for the following:

  • Set the responseType to 'arraybuffer' on the GET request
  • Toss your response data into an array before passing it to a new Blob
  • Use Angular's $sce service to mark the file URL as a trusted resource
function fetchImage(fileID) {
  $http
    .get('/api/download/' + fileID, { responseType: 'arraybuffer' })
    .then(function (response) {
      var file = new Blob([response.data], { type: 'application/pdf' });
      var fileURL = URL.createObjectURL(file);
      $scope.pdfContent = $sce.trustAsResourceUrl(fileURL);
    });
}

I’m not 100% clear on how that all works, but my understanding is that it basically creates a file in memory that the browser can access to display the pdf.
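Stripped of the Angular pieces, that in-memory part boils down to something like this (the bytes below are a stand-in for the arraybuffer response from the server):

```javascript
// Stand-in for the arraybuffer response: the first four bytes of a real PDF.
var bytes = new Uint8Array([0x25, 0x50, 0x44, 0x46]); // "%PDF" magic bytes

// The Blob is the in-memory "file"; createObjectURL hands the browser a
// short-lived blob: URL that an iframe can load like any other resource.
var file = new Blob([bytes.buffer], { type: 'application/pdf' });
var fileURL = URL.createObjectURL(file);

console.log(file.size, file.type); // 4 application/pdf

// When the view is destroyed, the URL can be released to free the memory:
// URL.revokeObjectURL(fileURL);
```

Nothing ever touches disk here; the blob lives in memory until the object URL is revoked or the page goes away.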

On the backend, I needed that /download route to send back the actual file contents as a stream. Check it out:

router.get('/download/:id', function (req, res) {
  var readstream = gfs.createReadStream({
    _id: req.params.id
  });
  readstream.pipe(res);
});

For brevity here, I didn’t include connecting to the mongo client and driver again. In fact, I think it’s better to do that once on initializing the app.
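Connecting once at startup could look something like this (a sketch; the 'open' handler runs when the mongoose connection is ready, and `gfs` would then be shared with the routes, e.g. via module exports):

```javascript
// Sketch: build the GridFS stream service once, when the app boots,
// instead of inside every route handler.
var mongoose = require('mongoose');
var Gridfs = require('gridfs-stream');

var gfs;
mongoose.connection.once('open', function () {
  // the db handle and driver are only available after the connection opens
  gfs = new Gridfs(mongoose.connection.db, mongoose.mongo);
});
```

Routes registered after this just use the shared `gfs` instead of constructing their own.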


And there you have it — front to back file uploads with MongoDB and GridFS. It was a bit frustrating at times, but pretty cool to watch some of the console logs on those streams. Try it out for yourself and let me know what you think!

