Analysis of Google Lighthouse Report
Google Lighthouse is an open-source tool for auditing the quality of web applications. It checks performance, accessibility, and search engine optimization, measures how well a website conforms to Google’s guidelines for progressive web apps, and calculates scores based on these criteria.
The Google Lighthouse report consists of five audit categories:
- a performance score,
- an accessibility score,
- a best-practices score,
- an SEO score, and
- a PWA score.
The performance audit checks how fast your website loads. It measures how well your site performs in terms of rendering, JavaScript execution, and network latency. The accessibility audit checks whether your site is usable by people with disabilities who rely on a screen reader or other assistive technology. The best-practices audit checks how well your site follows modern web development recommendations, such as serving pages over HTTPS and avoiding deprecated APIs, while the PWA audit evaluates progressive web app features such as service workers.
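If you want to generate these audits yourself, Lighthouse also ships as a Node module alongside the Chrome DevTools panel and CLI. Below is a minimal sketch using the lighthouse and chrome-launcher npm packages; the URL is a placeholder, and option details may vary slightly between package versions.

```ts
// Minimal sketch: generating a Lighthouse report from Node.
// Assumes the `lighthouse` and `chrome-launcher` npm packages are installed;
// the URL and category list below are placeholders.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url: string): Promise<void> {
  // Lighthouse drives a real Chrome instance over its debugging port.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      output: 'json',
      onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo'],
    });
    // Category scores come back on a 0..1 scale.
    for (const [id, category] of Object.entries(result?.lhr.categories ?? {})) {
      console.log(`${id}: ${Math.round((category.score ?? 0) * 100)}`);
    }
  } finally {
    await chrome.kill();
  }
}

audit('https://example.com');
```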
In this blog, I’ve covered the most common Lighthouse audits.
Lighthouse performance scoring:
Your Lighthouse Performance score is based on metrics, not on opportunities or diagnostics. However, keep in mind that improving your opportunities and diagnostics is likely to improve the metric values, so there is an indirect relationship between the three.
Much of the variation in your overall Performance score and metric values isn’t caused by Lighthouse itself. In most cases, fluctuations come from changes in underlying conditions. For example:
- A/B tests or changes in ads being served
- Internet traffic routing changes
- Testing on different devices, such as a high-performance desktop and a low-performance laptop
- Browser extensions that inject JavaScript and add/modify network requests
- Antivirus software
How the Performance score is weighted
The Performance score is a weighted average of the metric scores. Metrics with a higher weight have a larger impact on your overall Performance score than those with lower weights. The report doesn’t display the individual metric scores prominently; they’re calculated behind the scenes, and the weights may change over time as new field data informs the scoring model.
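As a rough illustration, here’s a minimal sketch of that weighted average in TypeScript. The metric names and weights below follow the Lighthouse v10 scoring documentation, but they change between Lighthouse versions, so treat them as an assumption rather than a fixed contract.

```ts
// Sketch: the Performance score as a weighted average of metric scores (each 0..1).
// Weights are from the Lighthouse v10 scoring docs; they vary by version.
const WEIGHTS: Record<string, number> = {
  'first-contentful-paint': 0.10,
  'speed-index': 0.10,
  'largest-contentful-paint': 0.25,
  'total-blocking-time': 0.30,
  'cumulative-layout-shift': 0.25,
};

function performanceScore(metricScores: Record<string, number>): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const [id, weight] of Object.entries(WEIGHTS)) {
    weighted += (metricScores[id] ?? 0) * weight;
    totalWeight += weight;
  }
  return weighted / totalWeight; // 0..1; multiply by 100 for the reported score
}
```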
How metric scores are determined
Once Lighthouse has gathered the performance metrics, it converts each raw metric value into a score from 0 to 100 by looking at where the value falls on its scoring distribution. The scoring distribution is derived from real website performance data in the HTTP Archive and follows a log-normal distribution.
For example, Largest Contentful Paint (LCP) measures the time between when a user initiates a page load and when the page renders its primary content.
Based on real website data, top-performing sites render the Largest Contentful Paint in about 1,220ms, so that metric value maps to a score of 99.
Going a bit deeper, the Lighthouse scoring curve model uses HTTP Archive data to determine two control points that set the shape of a log-normal curve. The 25th percentile of HTTP Archive data becomes a score of 50 (the median control point), and the 8th percentile becomes a score of 90 (the good/green control point). On this curve, note that between 0.50 and 0.92 there’s a near-linear relationship between metric value and score. A score of around 0.96 is the “point of diminishing returns”: above it, the curve pulls away, requiring increasingly large metric improvements to raise an already high score.
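To make the curve concrete, here’s a small TypeScript sketch of a log-normal scoring function pinned to two control points: a median value that scores 0.5 and a “good” value that scores 0.9. This mirrors the shape Lighthouse describes, but the normal-CDF approximation and the control-point values in the usage example are illustrative, not Lighthouse’s actual constants.

```ts
// Abramowitz & Stegun 7.1.26 approximation of erf(x), accurate to ~1.5e-7.
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1;
  const ax = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * ax);
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t +
      0.254829592) * t;
  return sign * (1 - poly * Math.exp(-ax * ax));
}

// Standard normal CDF via erf.
const normalCdf = (z: number): number => 0.5 * (1 + erf(z / Math.SQRT2));

// Maps a raw metric value (lower is better) to a 0..1 score on a log-normal curve,
// pinned so that `median` scores 0.5 and `p10` (the "good" control point) scores 0.9.
function logNormalScore(value: number, median: number, p10: number): number {
  const mu = Math.log(median);
  const sigma = (mu - Math.log(p10)) / 1.28155; // inverse normal CDF of 0.9 ≈ 1.28155
  const z = (Math.log(value) - mu) / sigma;
  return 1 - normalCdf(z); // complementary CDF: smaller values score higher
}

// Illustrative control points only (not Lighthouse's real constants):
console.log(logNormalScore(4000, 4000, 2500).toFixed(2)); // 0.50 at the median
console.log(logNormalScore(2500, 4000, 2500).toFixed(2)); // 0.90 at the good point
console.log(logNormalScore(1220, 4000, 2500).toFixed(2)); // ≈ 1.00 for a fast value
```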
Lighthouse accessibility scoring
The Lighthouse Accessibility score is a weighted average of all accessibility audits. Weighting is based on axe user impact assessments.
Each accessibility audit passes or fails. Unlike Performance audits, a page doesn’t get points for partially passing an accessibility audit. For example, if some buttons on a page have accessible names but others don’t, the page gets a 0 for the “Buttons do not have an accessible name” audit.
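Here’s a minimal sketch of that all-or-nothing weighting. The audit names and weights are hypothetical; the real weights come from axe’s user-impact assessments.

```ts
// Sketch: accessibility score as a weighted average of pass/fail audits.
// Audit ids and weights are hypothetical; real weights follow axe impact ratings.
interface A11yAudit {
  id: string;
  weight: number;
  passed: boolean; // all-or-nothing: no partial credit
}

function accessibilityScore(audits: A11yAudit[]): number {
  const totalWeight = audits.reduce((sum, a) => sum + a.weight, 0);
  const earned = audits.reduce((sum, a) => sum + (a.passed ? a.weight : 0), 0);
  return earned / totalWeight; // 0..1
}

// One failing high-impact audit drags the score down sharply.
console.log(
  accessibilityScore([
    { id: 'button-name', weight: 10, passed: false },
    { id: 'image-alt', weight: 10, passed: true },
    { id: 'html-has-lang', weight: 3, passed: true },
  ]),
); // ≈ 0.57
```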
Lighthouse Best Practices Audits scoring
The Lighthouse Best Practices Audits score is a weighted average of all Best Practices audits. These checks highlight opportunities to improve the overall code health of your web app.
Each Best Practices audit passes or fails. Unlike Performance audits, a page doesn’t get points for partially passing a Best Practices audit. For example, if a page’s HTML is missing the <!DOCTYPE html> declaration, the corresponding audit fails and Lighthouse flags the page so you can improve its code health.
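For a sense of what such a binary check looks like, here’s a hypothetical sketch of an HTML5 doctype test run in the page context; it’s illustrative, not Lighthouse’s actual audit code.

```ts
// Hypothetical sketch of an HTML5 doctype check, run in the page context.
// Not Lighthouse's actual audit implementation.
function hasHtml5Doctype(doc: Document): boolean {
  const dt = doc.doctype;
  return (
    dt !== null &&
    dt.name.toLowerCase() === 'html' &&
    dt.publicId === '' && // the HTML5 doctype has no public or system identifier
    dt.systemId === ''
  );
}

// In a browser, hasHtml5Doctype(document) is false when <!DOCTYPE html> is missing.
```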
Feedback
Your feedback always helps me improve, so please share it.