The Hitchhiker’s Guide to Next.js
How Next.js evolved, and why it’s one of the best React frameworks on the market
Recently, there has been a lot of hype around Next.js — a Web framework based on React. But is the hype warranted? What improvements does Next.js provide over existing tools, like the feature-rich, developer-friendly create-react-app? Or the simple, tried-and-true static site generation tool, Jekyll?
Is Next.js just going to add more bloat to your app? Is it really the future of frontend development, or is it just a fad? What problems is it really solving?
In this article, I’ll try to answer all of these questions and more.
In this article, I don’t draw a direct comparison between Next.js and its main competitor, Gatsby. These frameworks both have very similar features and have their own strengths. They are both evolving rapidly and have thriving ecosystems surrounding them, so it’s too difficult to make a comparison that will still be relevant even a month from now.
What Next.js is, and what it isn’t
Many people have heard that Next.js is basically just a “static site generator” that allows writing your code in React. Some React developers I have talked to in the past have followed the rule of thumb that if they want to create a static site, they should use a tool like Next.js or Gatsby, but if they want a dynamic site with more features, they should stick with the tried-and-true create-react-app.
But Next.js is a lot more than just a static site generator. It’s much more powerful than Jekyll, which is primarily designed to host simple, static sites. Next.js supports the same static-site generation features as Jekyll, in addition to all the features of create-react-app, and much, much more. And if you deploy Next.js on Vercel, you get an incredibly smooth, no-nonsense, “zero-config” developer experience.
In other words, static site generation is just one thing that Next.js is great at, but it is by no means its only selling point. I wouldn’t call Next.js a static site generator, but rather an automatic static optimization framework, which is opinionated about embracing React at its core.
When a Next.js app is deployed on Vercel, it is also an excellent serverless framework, ideal for building a JAMstack application.
But what do these things mean? How does “automatic static optimization” differ from static site generation? And what do I mean by “serverless?”
Lastly, should you use Vercel to deploy your Next.js app, or something else?
Once again, I’ll try to answer or at least touch on all of these questions in this article, and more.
How Next.js came to be: a brief history of Web development
Before I start raving about Next.js, let’s try to understand how Next.js evolved as a technology.
If you have been developing Web apps since the early days, you can skip this part without really missing out on anything.
In the early 1990s, when the Web was first introduced, most of the Web consisted of a bunch of HTML files stored on a bunch of computers in different places.
You’d make a request to a specific Web server using a URL. The request would ultimately get delivered to a Web server, which would then respond back with an HTML file. Finally, your browser would display the HTML document to you.
This was pretty slow in those days, mainly because computers were slower, and Internet infrastructure was not as advanced as it is today. But all in all, this was a pretty solid system. It was simple, and it worked. Life was good.
This “static file” approach was very limited, though — what about pages that need to display some data from a database, which is changing all the time? What about pages containing information that should only be shown some of the time? Do we have to create separate HTML files for every possible page that the user might see?
To enable more dynamic Web pages, technologies like PHP were introduced, which made it easy to return different HTML responses for the same URL, dependent on different conditions. This approach is called server-side rendering, because the server “renders” (generates) the HTML page on-the-fly for each request, and then sends it back to the user.
PHP was a fantastic idea. It allowed you to write some code in HTML, but then as soon as you realized that a section of the HTML you were writing had to be dynamically computed, you just switch into PHP mode and then start writing the logic that decides what belongs in the current chunk of HTML. Then, you could switch back to HTML mode, and carry on with plain old HTML.
The server-side rendering approach enabled tons of great Web page features (like email, online banking, and more), but at the cost of page load time. Every time the server needed to read some data from a database, data could be traveling hundreds or even thousands of miles across the globe.
Even if data were transmitted at the speed of light, downloading an HTML page could take several seconds, especially as more requests are added and the amount of data loaded into the page increases.
But despite these drawbacks, this approach worked well, and people were happy.
Server-side rendering was only one way that the Web became more interactive and interesting.
Using AJAX, Web pages could update themselves with fresh data without the user needing to fully reload the page. This made Web pages feel even more like desktop apps, which let you update your data and see the result almost immediately without the screen going blank and being rendered from scratch.
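A minimal AJAX update looks something like this sketch (the `/api/messages` endpoint and the `#messages` element are hypothetical): the browser fetches fresh data in the background and patches one part of the page in place, with no full reload.

```javascript
// A minimal AJAX sketch: fetch fresh data and update one part of the page
// in place, without reloading. Endpoint and element names are hypothetical.
function formatMessages(messages) {
  return messages.map((m) => `${m.from}: ${m.text}`).join('\n');
}

async function refreshMessages() {
  const res = await fetch('/api/messages'); // background request (browser code)
  const messages = await res.json();
  // Only this one element changes; the rest of the page stays as-is.
  document.querySelector('#messages').textContent = formatMessages(messages);
}
```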
Client-side rendering and SPAs
The “new world” of client-side rendering, JS frameworks, and SPAs looked something like this:
As libraries like jQuery, Ember, Angular, React, and Vue were introduced, and tools like bower made it even easier to use these libraries, it became even more convenient and trendy to do 100% of the rendering right in the Web browser.
Problems with client-side rendering
Before, with server-side rendering, the browser just requested the final HTML response directly from the server, and maybe a few scripts would load to add a few more things to the page. With client-side rendering, the browser instead receives a nearly empty HTML shell, and must then download the JavaScript bundle, execute it, and often fetch data from the server before anything meaningful appears on screen. This all takes time.
Loading time is made even worse when many scripts and other files (like images and CSS) are loaded at the same time, which is often the case for modern Web apps.
Now, you might be thinking that browsers, data centers, CPUs, disk drives, network infrastructure, caching, and RAM are all getting better and faster as time goes on — so is this extra cost even noticeable?
Well, these advancements don’t happen fast enough to keep up with the demands of modern Web apps. Worse yet, users in countries with fewer resources (where the average computer is older and less powerful, and Internet infrastructure is not upgraded as frequently), are stuck with a worse experience, with Web pages that load painfully slowly.
And there was an even more pressing issue: the mobile Web. More people were using their mobile devices to access Web pages. When smartphones and laptops first came out, they had only a fraction of the power of desktop computers, and client-rendered pages built with frameworks like Angular.js were simply too slow — the phones themselves were not powerful enough, and wireless network speeds were not as great as they are today.
In addition to being slower than server-side rendering, client-side rendering introduced another huge problem: Search Engine Optimization (SEO).
If you don’t know how SEO works, here’s a brief summary. Search engines like Google have special programs called “crawlers,” which “crawl” the Web by starting with a curated list of Web pages (e.g. wikipedia.org), then following all the clickable links from those pages, and then following the links from those pages, and so on. Eventually, they will reach every part of the “public Web.”
Historically, these crawlers did not execute JavaScript — they only read the raw HTML returned by the server. To a crawler, a client-side rendered page looked nearly empty: there was no content, just <script> tags in it that needed to be run in order to see the final page! So, client-side rendered pages were said to have “poor SEO.”
Search engines eventually adapted by executing JavaScript while crawling. However, this solution introduced many problems of its own: it is slower, costlier, and often more error-prone for search engines to simulate a Web browser when scanning your page. Google has even written a comprehensive guide on how to write your Web page to be SEO-friendly.
“Fantastic,” thought developers everywhere. “Even more problems for us to worry about when developing Web apps!”
The return of server-side rendering
Many Web developers who had gotten on board with client-side rendering ultimately took a step back and did some thinking when they realized their pages were too slow.
They realized that ultimately, they had to make the switch back to server-side rendering. But at the same time, they wanted to stay free of the perils of PHP.
Meanwhile, people did not want to ditch their Web frameworks like Angular and React. Libraries like ReactDOMServer, vue-server-renderer, and Angular Universal made it even easier to render the app on a Node.js server using the same code that the client would have used, if the page were client-side rendered instead.
At this point, life is pretty good. We’re now back to server-rendered pages which are much lighter on the client, and much smarter, in the sense that they deliver the initial HTML response to the client sooner.
So, at this point, it might seem like we’ve solved most of our problems, and we have a pretty good system going on.
In reality, though, there were a few big problems that needed addressing:
- Server-side rendering with client-side hydration was not easy to set up! Even with frameworks like Meteor and modern React tools (like create-react-app), developers had to read lots of documentation to understand how server-side rendering works, why they need it, and how to wire it up. For most people trying to get something done, it was just too much work.
- Re-rendering the same HTML on the server for every single request is wasteful — pages whose content doesn’t change between requests could just as well be built once and served as static files.
- Taking #2 a step further, even the pages which read from a database can still be represented as static pages! The app’s “skeleton” HTML can be sent straight to the client, and then the client can make AJAX requests to fetch data. So ideally, we don’t need a complicated Web server to host our pages! We just need a separate API server that the static HTML file can make its requests to.
With these optimizations, loading a Web page might look something like this:
“But why would we want all of our HTML files to be static? Won’t that mean that we will have less feature-rich apps?”
“How does this static file approach improve anything? We still have to make a request to a Web server in order to get the static file anyway, right? So why not just request the page directly from our cool Node.js server that will do our server-side rendering on the fly?”
Well, the answer to this is a bit complicated.
The first thing to understand is that when you visit a Website, your Web browser cannot do anything until it gets a response back from the Web server. As a result, this initial HTML request is the single biggest bottleneck in the entire page load sequence. Even if your page loads dozens of scripts and stylesheets and you have done tons of optimizations to make them all load super fast, none of that can happen until the initial HTML comes back!
So, we really want to make sure that the browser gets back the initial HTML response for our site as quickly as possible.
“So, what do static files have to do with the initial HTML response coming back quickly? Can’t we just run a bunch of copies of our server around the globe, close to where users are located? That way, requests to our servers would come back super fast.”
Sure, that’s one way to do it. But that would also be super expensive! Most people can’t afford to pay for so many servers to be running around the globe at all times.
Fortunately, we have CDNs, which are cheap, global networks of machines that are heavily optimized for storing static files around the world. If we could somehow make sure that all of our Web pages were just static HTML files, then we could leverage CDNs to cheaply distribute our app around the world, making it readily available to users.
Side note: In the latest Web jargon, you might hear that a file is served from “the edge” — that’s pretty much what this is referring to. You might have also heard of Google’s “AMP” pages — AMP leverages Google’s CDN to store and serve special types of HTML files (called AMP files), which place tight restrictions on the HTML in order to prevent it from doing anything that would slow down the initial page load.
So, with static HTML served from a CDN, we get the initial HTML to the user super quickly, and the page loads almost immediately if they are close enough to a CDN edge.
But it gets even better. If a user has already visited a page, then the HTML file might already be stored in their browser’s cache. If the cached HTML is already up to date, then the CDN server responds with 304 Not Modified, and the browser knows to serve up the HTML directly from disk. The CDN doesn’t even have to send the HTML file back to the browser!
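The conditional-request logic behind this can be sketched like so (a simplification — real CDNs also consider Last-Modified, Cache-Control, and more). The browser sends the ETag of its cached copy; if it still matches the server’s current ETag, no HTML body is re-sent:

```javascript
// Sketch of an HTTP conditional request: if the browser's cached ETag still
// matches, answer 304 with no body so the cached HTML is reused.
function handleConditionalRequest(currentEtag, ifNoneMatchHeader) {
  if (ifNoneMatchHeader === currentEtag) {
    return { status: 304, body: null }; // "your cached copy is still good"
  }
  // Cache is stale (or empty): send the full page plus the new ETag.
  return { status: 200, body: '<html>...</html>', etag: currentEtag };
}
```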
With caching and CDNs, a Web page’s loading sequence now looks more like this:
…but can we optimize this even more?
Yes! The initial page load sequence can be optimized using tons of ingenious techniques like prefetching, preloading, preconnecting, SWR, font and stylesheet inlining, service worker caching, and much more — but I won’t go into all of those techniques here. The main speedup I want to focus on is the one gained by serving up a static HTML page so that it can be cached by CDN servers, because this single speedup has the biggest positive impact on page load time.
So, now we know that CDN technology is the primary tool at our disposal for ensuring that the initial HTML response is returned as quickly as possible.
With this in mind, here’s our new strategy for making sure our site loads as quickly as possible:
- If a particular page on our site is 100% static (contains no dynamic content whatsoever), then we should make sure it is available on a CDN as a static HTML file. Otherwise, the user would have to make an unnecessary request to our server, which may be far away from them and will incur unnecessary costs by recomputing the same exact HTML output for every single request.
- If a Web page does have dynamic functionality, then we should try to make as much of the page static as possible, so that the static parts can load almost instantly, and then we should enhance the page by fetching the data that we need from our server.
But implementing all of this by hand is a lot of work. Is there a better way?
Next.js to the rescue: automatic static optimization!
So, I mentioned near the start of the article that Next.js’s killer feature is automatic static optimization. It implements exactly the strategy we talked about above: making as much of the site static as possible, and producing CDN-friendly static output for pages that are 100% static.
And it does all of this automatically!
Okay, that sounds amazing. But how is it possible?
It’s actually pretty simple! Next.js works like this:
- Each page (“route”) in your application is associated with a single React component, based on its file path in the pages/ directory. For example, /home is associated with the component exported from pages/home.js.
- Pages can optionally fetch data asynchronously if they need to, using a special function called getServerSideProps that gets attached to the component. The data it returns is passed to the component as props when it is rendered.
- If getServerSideProps is used, the page is marked “dynamic.” Otherwise, it’s marked “static.”
- Pages that are marked “static” are built once at compile time, and they are distributed to CDNs.
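The static/dynamic decision above can be illustrated with a toy model (this is an illustration of the rule, not Next.js’s real implementation — the page objects here are made up):

```javascript
// Toy model of Next.js's build-time decision: a page that exports
// getServerSideProps is "dynamic"; one that doesn't can be built once.
function classifyPage(pageModule) {
  return typeof pageModule.getServerSideProps === 'function' ? 'dynamic' : 'static';
}

// A purely static page: no per-request data needed.
const aboutPage = { default: () => '<h1>About us</h1>' };

// A dynamic page: props are fetched on the server for every request.
const dashboardPage = {
  default: (props) => `<h1>Hello, ${props.user}!</h1>`,
  getServerSideProps: async () => ({ props: { user: 'Ada' } }),
};
```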
To make a page static and still have dynamic functionality, all you have to do is avoid using getServerSideProps, and instead use a library like SWR that makes it easy to request data on the client side instead of the server side.
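The “stale-while-revalidate” idea behind SWR can be sketched like this (a simplification of the caching pattern, not SWR’s real React-hook API): serve cached data instantly when we have it, and refresh the cache in the background.

```javascript
// Toy sketch of stale-while-revalidate: return cached ("stale") data
// immediately if present, while refreshing the cache in the background.
const swrCache = new Map();

async function fetchWithSWR(key, fetcher) {
  const stale = swrCache.get(key);
  // Kick off revalidation regardless; update the cache when it resolves.
  const revalidation = fetcher(key).then((fresh) => {
    swrCache.set(key, fresh);
    return fresh;
  });
  // The very first request has nothing cached, so it must wait for the network.
  return stale !== undefined ? stale : revalidation;
}
```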
Incremental static generation
“But what about pages like /products/:productid? How can these types of pages be generated statically? Wouldn’t we have to generate static pages for all possible products at build time? What if we add more products?”
These are great questions, and Next.js has a great answer: incremental static generation.
The way this works is that the first time a page is requested, Next.js can make a request to your server to render the page, but also save a static copy of the response and distribute it to CDNs.
So, the first time someone views a product, it will be a bit slow, but then it will be super fast for everyone else that views that product afterwards.
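In effect, incremental static generation behaves like a render cache; here is a toy model of that behavior (a simplification — real Next.js also coordinates with CDNs and can re-generate pages after a revalidation interval). Only the first request for a product pays the rendering cost:

```javascript
// Toy model of incremental static generation: render on the first request,
// then serve the saved copy to every later visitor.
const pageCache = new Map();
let renderCount = 0;

function renderProductPage(productId) {
  renderCount += 1; // stands in for an expensive server-side render
  return `<h1>Product ${productId}</h1>`;
}

function getProductPage(productId) {
  if (!pageCache.has(productId)) {
    pageCache.set(productId, renderProductPage(productId)); // slow path: first visitor
  }
  return pageCache.get(productId); // fast path: everyone afterwards
}
```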
Check out this demo of incremental static regeneration to see it in action!
The routing system — how flexible is it?
I mentioned before that Next.js has a file-system based routing architecture, associating each route with a React component.
This architecture is somewhat controversial, and it has led some developers to shy away from Next.js entirely, believing that requiring a file for each route is an odd, inflexible design choice that doesn’t scale to larger apps. But how much merit is there to these claims?
Next.js’s page-based routing system is indeed very opinionated. But that’s because it is heavily optimized to make the most common use cases extremely fast. Specifically, Next.js performs automatic code splitting when it builds each page, so that the code for each page is only loaded when you request that page, and no sooner.
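The core of file-system routing is a simple mapping from file paths to URLs; this toy sketch captures the idea (simplified — real Next.js also handles dynamic segments like pages/products/[id].js and other extensions):

```javascript
// Toy sketch of file-system routing: each file under pages/ maps to one route.
function fileToRoute(filePath) {
  return (
    '/' +
    filePath
      .replace(/^pages\//, '')   // drop the pages/ prefix
      .replace(/\.jsx?$/, '')    // drop the .js / .jsx extension
      .replace(/\/?index$/, '')  // pages/index.js serves "/"
  );
}
```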
For those rare cases where you don’t want to deal with Next’s routing system (which will probably only happen if you are migrating a legacy app to Next.js), there is a beautifully simple escape hatch: Next.js exposes a minimal API that lets you imperatively render any page from your Node.js server. Here’s an example. But in my experience, there has been no need to use the imperative API — even that linked example could be implemented using Next.js’s file-based routing instead.
Next.js deployment: how easy is it?
Next.js sounds fantastic. But how do you launch an app built with Next.js?
One option is to use Vercel, which allows you to launch a Next.js app and get the static files super close to your users in only a few minutes.
Vercel works like this:
- You push your code to GitHub, then grant Vercel access to the repository.
- Vercel automatically builds your app and deploys it to a live domain ending with .now.sh. You can also purchase a domain through their site if you want a custom domain. Vercel automatically rebuilds and redeploys your app every time you push your code to Git.
- Vercel distributes the statically generated HTML files to its CDN network, so you don’t have to worry about how the static pages will get close to your users.
- Vercel automatically sets up a serverless cloud function for each page that uses getServerSideProps, as well as for each API route in your app. If you’d like to learn more about serverless architecture, you can read more here. If you’ve ever worked with AWS Lambda or Google Cloud Functions, you’ll quickly realize that Vercel’s approach is way easier to deal with.
For me, this experience was super smooth and saved loads of time. By contrast, I tried deploying a Next.js app on Google Cloud, and it was a hassle. I had to create a Dockerfile that builds the app and then listens on $PORT, set up a cloudbuild.yaml file to build a Docker image from the app, configure a build trigger to automatically build the app when it was pushed to GitHub, update my cloudbuild.yaml file to automatically clean up unused Docker images, and set up a billing alert to make sure I was always on the free tier.
After all that configuration, I still needed to get the CDN working, figure out whether it’s possible to get Google Cloud working with Next.js’s new incremental static generation, and figure out how to get pages and API routes set up as Firebase Cloud Functions. Overall, it doesn’t seem worth it. Services like Google Cloud and AWS are extremely flexible, but from what I have seen, they are not as optimized as Vercel is for deploying Next.js apps.
Netlify is another service that offers a similar, “zero-config” approach to launching a Next.js app (Gatsby Cloud does the same for Gatsby apps), but it’s hard to give an opinion on these services that wouldn’t be outdated a month from now.
Summary
- Next.js is a powerful React framework optimized for super fast page load times that can handle both static and dynamic sites extremely well.
- To fully leverage the power of Next.js, avoid using getInitialProps unless it is absolutely necessary, and make sure that you write your dynamic pages by fetching data on the client side using a library like SWR.
- Even for pages that accept dynamic URL parameters, like /products/1234, you can use Next.js’s incremental static generation feature to generate static pages at runtime.
- When deployed on Vercel, Next.js “just works.” Your app is automatically distributed to servers around the globe so that it loads super fast everywhere. Since Vercel automatically manages servers for you, you never have to worry that you’re paying for servers that aren’t being used, or not paying for enough servers, making your app slow.
Thanks for reading! Hopefully you enjoyed the article and have a better understanding of Next.js!