Webpack vs Browserify: By The Numbers
I have been searching for solid numbers on the real impact switching from Browserify to Webpack would have on my application. I found a few from Kate Hudson.
As Kent C. Dodds graciously pointed out, this is a benchmark of one specific app; your actual mileage may vary.
The TL;DR: if you are contemplating a switch from Browserify to Webpack and are apprehensive about the degradation you might see in your app, the difference between the two is minor. You should feel just fine choosing Webpack: your app will not bloat, nor will the unrolling process slow it down, and in some instances you might find that it is leaner and runs faster.
There were three main areas I was really interested in: the size of the deployed package, the total memory consumed by the unpacker, and the time the packer spent registering and preparing the modules for the app. I was happy, and surprised, to see the numbers. While we are in here, we can also pick apart what differs between the two in our debugging tools.
To reiterate, there were no external loaders used in the making of these numbers. This is a straight CommonJS to CommonJS comparison.
Let's get down to it.
Memory usage was something I was really interested in when looking at the two packers. I often write JS/web software for embedded systems, and every kilobyte matters when users push devices to their limits. I loaded up the Todo application from a clean state, allowed both versions to come to rest, and then took a heap snapshot.
I was surprised to see that Browserify required nearly 1MB more memory to unroll the same number of modules than Webpack did. The winner here is clearly Webpack; there is a lot you can cram into that 1MB of space.
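The numbers above came from Chrome DevTools heap snapshots, but you can get a rough Node-side analogue of the same comparison with `process.memoryUsage()`. A sketch (the `./bundle.js` path is a placeholder for whichever bundle you want to test):

```javascript
// Rough analogue of the heap-snapshot comparison: measure heap usage
// before and after evaluating a bundle. Run with `node --expose-gc`
// for more stable numbers; the article itself used DevTools snapshots.
function heapUsedMB() {
  if (global.gc) global.gc(); // force a collection if exposed
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

const before = heapUsedMB();
// require('./bundle.js'); // evaluate the bundle under test here
const after = heapUsedMB();
console.log(`bundle retained ~${(after - before).toFixed(2)} MB`);
```

Comparing the delta for a Browserify build against a Webpack build of the same app reproduces the shape of the measurement, if not the DevTools precision.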
It is not surprising to see that Browserify uses more memory than Webpack: when we look at the allocated objects for each version of the application, it's clear that Browserify requires more primitive objects to pack and unroll than its competition.
This is represented in the figure above, which ranks the objects from highest retained memory usage to lowest.
Knowing that Browserify requires significantly more memory for our base application, it would be interesting to see what effect unrolling with Webpack has on our device's CPU.
Ignoring the idle time (my hands are not quick enough, and my Mississippi counting seems to be off), and looking at the program run time down through the garbage collector, we can see that the times are fairly close. The self time differs by only 2–3 milliseconds.
One thing of note is that Webpack appears to lose the reference to original module naming, unfortunately turning CPU tracing into an anonymous garbage bin to sift through.
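One way to get readable names back, which I have not re-profiled myself, is Webpack's `devtool` setting: with `devtool: "eval"`, each module is wrapped in an `eval()` tagged with a `sourceURL` comment, so profilers and stack traces show the module path instead of an anonymous frame. A sketch of the relevant config:

```javascript
// webpack.config.js (sketch): devtool "eval" names each module via a
// sourceURL comment; output.pathinfo adds path comments to the bundle.
// Both are development aids and not meant for production builds.
module.exports = {
  entry: './src/main.js',
  output: {
    filename: 'bundle.js',
    pathinfo: true,
  },
  devtool: 'eval',
};
```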
It's nice to see the garbage-collection times not differing, but we wouldn't expect anything to be released here, since we're measuring only the unrolling of our code.
Size of Deployable Code
Running both packers and minifying both sets of source with UglifyJS (compression plus mangling), we see that the disparity between the two minified bundles is about 1KB. That is roughly 10% more code that could ride along with your app in the same footprint: a lot of added experience for your user at practically no cost.
Note: running this through a web server environment that gzips, the Browserify bundle is 3.62KB while the Webpack version is 3.28KB (still about a 10% difference).
Just some more numbers
We have already reached the conclusion that the numbers give Webpack an advantage, and depending on your deployment environment that advantage could be meaningful.
I collected a few more numbers from our Todo application, and I thought I would share them here as well.
The unrolling timeline
Ignoring the total time, idle time included (it's those slow hands again), I took a look at a 200ms slice of the timeline for both versions of the app. The good news is that the pie charts look almost exactly the same.
The flip side is that Webpack does give a slight advantage in getting our code to execution faster: around 1.5ms. Small, but measurable. This will most likely have negligible impact on code that is unrolled at startup, but if you have a lot of modules, the effect could compound.
The Flame Chart
I hope a lot more of us are getting into the profiling tools available to us. One of my favorites is the flame chart: it makes it easy to see where your app is spending a lot of time, and how deep the calls it is executing go.
The flame charts look nearly identical. While Webpack spends a few milliseconds longer completing the same set of tasks, we can see the minor execution speed-up in the depth of calls: in the Webpack chart the stacks on the Y axis are shorter, meaning the calls were not as deep.
I hope this overview of an application being packed, and its unrolling process analyzed, was as eye-opening for you as it was for me. Webpack can save us disk space, reduce the payload we send over the wire, unroll faster, throttle the CPU less, and consume less RAM. It doesn't come without a cost, but that cost is certainly mitigated by all the positives Webpack can bring to your project.