A PWA for PVD Geeks: A Case Study in Performance, pt. 2
This is the second part in a series. You can read the first installment here. The third installment is now available here.
As the Providence Geeks website project was nearing production readiness, I decided, as a weekend project, to run a Lighthouse audit on the site. (Spoiler alert: it wasn’t pretty.)
Before getting into the report itself, here’s a quick rundown on the stack being used:
- webpack — Module bundler for the frontend web applications, with a special focus on improving the developer experience around building and optimizing web applications of all asset types.
- CloudFront (AWS) — Content Delivery Network (CDN), handling HTTP requests to either S3 (static assets) or EC2 (API requests), sitting behind Amazon’s Route 53
For a more detailed breakdown, please feel free to check out the project’s Technical Architecture documentation.
So, let’s get to the scores! 💯
So yeah, that performance score is prettyyyy, prettyyyy, pretty bad. Just to make sure it wasn’t a mistake, I ran it a couple times. 0 each time. Ooph.
In reviewing all the feedback presented by Lighthouse, I wanted to go after the low-hanging fruit first, the items that would yield the best bang for the buck. As our Accessibility score was already quite high, I opted to focus my first few action items on the Performance and Best Practices categories, specifically:
- Enable gzip compression
- Inline critical CSS
- Lazy load images
- Implement some Best Practices (HTTPS / Noopener)
It should be noted that the PWA category overlaps with a lot of audits in the other categories, so even if you aren’t intending to make a PWA right now, or are saving it for later (like I am!), chances are that focusing on high scores in the other three categories will boost that score organically. So yeah, that’s pretty sweet! 👍
So yeah, that 0 ain’t no joke and the whole point of all of this is for performance, right?
Here is what the full Performance audit report looked like.
That’s a lot of red. Looking at the Network tab in Chrome, I can confirm that while the application is pretty small by most standards (code- and dependency-wise), it’s transferring 2.0MB!
So let’s look into enabling gzip compression and what it all means. Here’s the expanded view of that section of the audit.
Compressing files can greatly improve performance: smaller files take less time to transfer over the wire, and while there is a CPU cost to decompress each file, it is far outweighed by the savings in transfer time.
In technical terms, this recommendation means configuring the web server so that, when a request includes the header Accept-Encoding: gzip, it responds with a compressed version of the file (marked with the response header Content-Encoding: gzip), which the browser then decompresses after downloading it. Cool!
As CloudFront is already in front of the site, I configured it to compress these static asset files for us! Within our CloudFront distribution’s Behaviors settings, all I had to do was check the “Compress Objects Automatically” box!
If you are following along in your own AWS account, now would also be a good time to ensure HTTP/2 is enabled in the distribution settings.
Now if I take a look at the network tab in Chrome and load the application again, I can confirm that our files are now returning with the response header Content-Encoding: gzip!
The impact of this simple change was immediately felt in our staging environment. The application loaded much faster and now only transfers 992KB!! ⚡🔥
Not bad for a couple mouse clicks! I bet that will do wonders for our Performance score, right!?
Wow, just enabling gzip compression gave us a super awesome boost as our score is now at 65!
Note: if you enable gzip compression on a CDN that already has objects in its cache, you must invalidate them first so they can be fetched anew with compression applied.
Inline Critical CSS
In re-running the audit to get our new Performance score, I found Lighthouse had uncovered another significant performance issue: render blocking due to CSS!
This highlights an important distinction in performance tuning. It’s not enough just to make files small so they download quickly; the files themselves also need to be optimized so the browser can start rendering the application sooner. The latter will be an ongoing challenge for any application as its code and feature set grow.
This will be covered more in the next installment of this series as I plan to get more in depth with code splitting and lazy loading our bundles.
The recommendation here from Lighthouse was to inline critical CSS, which means loading only the CSS needed for the application’s initial render in a <style> tag in the <head> of the document (as opposed to a <link> tag), and then preloading the remaining CSS. This prevents the browser from completely blocking rendering while it fetches (via an HTTP request) and parses all of the application’s CSS, making the first render much quicker.
As webpack is our bundling tool of choice, I decided to use HtmlCriticalWebpackPlugin, which, when configured to analyze the build output of an application’s index.html, will automatically implement both of the above techniques for us (inlining and preloading). After making this change, I can see two new things in the built index.html:
- Inlined critical CSS in a <style> tag
- A <link> tag for our remaining CSS, notably with the addition of the rel="preload" attribute. Looking good! 😎
As our application uses Sass, I was seeing duplicate CSS in our output, so I also took this as an opportunity to integrate a webpack plugin called OptimizeCSSAssetsWebpackPlugin to slim down our output as much as possible.
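For reference, here is a sketch of how the two plugins might be wired into a webpack config. The paths and viewport dimensions are my own illustrative assumptions, not the project’s exact settings:

```javascript
// webpack.config.js (sketch; paths and viewport sizes are assumptions,
// not the project's exact configuration)
const path = require('path');
const HtmlCriticalWebpackPlugin = require('html-critical-webpack-plugin');
const OptimizeCSSAssetsPlugin = require('optimize-css-assets-webpack-plugin');

module.exports = {
  // ...entry, output, loaders, etc.
  plugins: [
    // Extracts the above-the-fold CSS, inlines it in a <style> tag in
    // the built index.html, and preloads the rest via rel="preload".
    new HtmlCriticalWebpackPlugin({
      base: path.resolve(__dirname, 'dist'),
      src: 'index.html',
      dest: 'index.html',
      inline: true,
      minify: true,
      extract: true,
      width: 1300,  // viewport used to decide what counts as "critical"
      height: 900,
    }),
    // De-duplicates and minifies the emitted CSS (helpful when Sass
    // partials end up imported more than once).
    new OptimizeCSSAssetsPlugin({}),
  ],
};
```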
So….. what do we have to show for all the work so far?
Well, that was a huge improvement! But while we’re here, let’s look at one more thing Lighthouse is telling us…
Lazy Loading (Offscreen) Images
Currently the website displays images in a table on load, but because of the height of the hero banner header, the user won’t see any of them until they scroll down. So why load these images before they’re needed? That’s exactly what Lighthouse thinks!
For this React application, I integrated a component to handle lazy loading these images, and even included a pretty sweet looking image fading component at the same time! 🎉
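The project pulled in an off-the-shelf component for this, but the underlying technique can be sketched with an IntersectionObserver. The function names here are hypothetical, not from the project’s code:

```javascript
// Sketch of lazy image loading (not the actual component the project
// uses). Images start out as <img data-src="real.jpg"> with no src set.
function applyRealSrc(img) {
  // Swap the real URL into place and trigger a CSS fade-in.
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;
    if (img.classList) img.classList.add('fade-in');
  }
  return img;
}

function lazyLoadImages(root = document) {
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        applyRealSrc(entry.target);
        obs.unobserve(entry.target); // each image only loads once
      }
    });
  }, { rootMargin: '200px' }); // start fetching just before it scrolls into view
  root.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```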
I was also able to reduce the size of these images by picking a more appropriately sized image from the Meetup.com API.
So, there should be a solid payoff for this effort, right? Well….
Mostly. Our network requests went down for sure, but the overall Performance score itself only went up one point. However, this did add a very nice user experience element, which is part of what this effort is all about (performance as a component of user experience), so I think it was still worth the investment at the end of the day.
Here’s the initial analysis report by Lighthouse for the Best Practices category.
We’re going to focus on these two issues right now:
HTTPS is important even if your site doesn’t deal with sensitive information directly. It encrypts the connection between the user and the server, which goes a long way toward mitigating Man In The Middle (MITM) attacks, in which a bad actor could inject malicious code into the files sent by your server.
To implement HTTPS, I had to do some configuration in AWS. First, I used AWS’s Certificate Manager service to request an SSL certificate for the pvdgeeks.org domain, using a wildcard.
Make sure you are in the N. Virginia (us-east-1) region when you request the certificate in order for it to work with CloudFront.
Once the certificate had been issued, I then attached it to the CloudFront distribution in its settings.
Make sure all your links can be loaded over HTTPS! A trick I use is to create URLs that look like this:
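I’m assuming the trick in question is the scheme-relative URL, which starts with // and inherits the protocol of the page it appears on, so the same markup works over both HTTP and HTTPS. A quick sketch of how such URLs resolve:

```javascript
// Scheme-relative URLs inherit the protocol of the page they are
// resolved against (an assumed example, not the article's original snippet).
const onSecurePage = new URL('//example.com/logo.png', 'https://pvdgeeks.org');
const onPlainPage = new URL('//example.com/logo.png', 'http://pvdgeeks.org');

console.log(onSecurePage.href); // https://example.com/logo.png
console.log(onPlainPage.href);  // http://example.com/logo.png
```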
Note: In the next article in the series, I will go into more detail on HTTPS practices as part of the PWA recommendations (e.g. ensuring all HTTP traffic redirects to HTTPS and setting up HTTPS on our WordPress instance).
noopener is a good practice to follow, in particular when links open pages on other domains, as it prevents the linked site from getting access to the referring page through window.opener. It can also bring some additional performance benefits.
To implement it, I simply had to find all <a> tags in the application that had target="_blank" and add the attribute rel="noopener noreferrer". Easy as that!
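If you’d rather do this programmatically than by hand, a small helper can merge the tokens into any existing rel value. This is a hypothetical sketch, not code from the project:

```javascript
// Hypothetical helper: merge noopener/noreferrer into an existing rel
// value without clobbering other tokens (e.g. nofollow).
function hardenRel(rel) {
  const tokens = new Set((rel || '').split(/\s+/).filter(Boolean));
  tokens.add('noopener');
  tokens.add('noreferrer');
  return [...tokens].join(' ');
}

// Apply it to every externally-targeted link in the document.
function hardenExternalLinks(doc) {
  doc.querySelectorAll('a[target="_blank"]').forEach((a) => {
    a.setAttribute('rel', hardenRel(a.getAttribute('rel')));
  });
}
```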
The Big Reveal
So now that we’ve gone through some of these actions, where do we stand now?
And our payload size is quite small and network waterfall quite reasonable.
Note: Our score went down (from 77 earlier in this article) during the writing of this because new performance audits were added to Lighthouse.
All in all, these changes were very straightforward and in total took about two days to implement. There are numerous topics still to cover, in particular:
- Code Splitting and Lazy Loading, to defer loading pages that aren’t needed just to start the application
- Improving the Critical Rendering Path of the application to get it loading in under 3s (using lazy loading, code splitting, and bundle optimizations)
- Automating the auditing of the application so it stays up to date as Google pushes new Lighthouse auditing metrics (as evidenced by our score changing 6 points over the couple of months spent writing this piece)
So keep your eyes open for the next installment in the series, coming soon! In the meantime, check out this great case study by the master himself, Addy Osmani.
Feel free to tweet me your Lighthouse score, and I’d be happy to help if you have any questions!