Digital Discovery 101: How to Technically Evaluate a Website

At the heart of determining what is and what isn’t working on a website is the technical evaluation. While one part of the equation is the heuristic evaluation, which looks at the front end of the site, the technical evaluation is meant to get under the hood to determine whether the infrastructure of the platform is working to benefit the site owner and its visitors, or whether it is hindering the site’s ability to provide the user with a simple, great web experience.

Be it for a new or an existing client, proactively assessing the front- and back-end issues of a website will help to keep that website fresh, fast, and error-free. To do this, you should be running a heuristic evaluation of the platform — UX, pathways, content, linking structures, etc. — and a technical evaluation of the platform — meta content, site structure, alt text, speed, coding issues, script load errors, etc.

For the sake of this article, we are going to tackle five how-to elements of technical website evaluation to learn what to look for under the hood.

1. Rip Apart the Infrastructure to Map the Bones

First things first, get yourself a copy of Screaming Frog. For those of you who do not know, Screaming Frog is an excellent web spider tool built to analyze the back-end technical infrastructure of a website. Screaming Frog will allow you to evaluate:

  • Meta Content (Title Tags, Descriptions, Alt Text, H1 — H6)
  • Status Codes (200, 301, 404, 500 etc.)
  • Image Libraries (Source, Destination, Alt Text)
  • Internal and External Linking Structures
  • Site Depth and Width
  • Site Speed
  • Website Content Based on URL Page Source

Although the tool is older than Moz, SEMrush and the like, think of Screaming Frog as your ability to apply the Google indexation algorithm directly to a single website. It is your tool to index and make sense of the infrastructural bones of a platform.

Screaming Frog is a consistently excellent tool for understanding what is under the hood of any given website.

When utilizing Screaming Frog, you should be technically evaluating the following:

  • Is the content infrastructure of the website set up for success? Does the website contain the basic back end content needed — meta — for the site to be indexed properly?
  • Does the site in question provide Google with the file indexes which help its spider map and index your website? Does it contain a working HTML and XML Sitemap? Does it make use of logically and thematically related external deep and internal contextual linking structures?
  • Is the site physically working? Is the status code of each page 200 (OK), or are the status codes segmented between 301s, 404s and 500s? If so, what are the underlying issues causing server-side errors, pages timing out, or dead links? Are the permanent redirects set up properly, and are they helping or hurting your site?
  • Are the file sizes of the site too large for the compute resources (bandwidth, CPU, RAM, Disk) which power it? Is the site loading within the optimal time span or is both the size and unstructured depth of the site negatively impacting page and domain load times?
  • Is the signal content of the site aligned to the page location that content lives on? Is there contextual relevancy between page URL, Title Tag, Meta Descriptions, Contextual Keywords, H1 — H6, and CTA’s? Does the content of the site send the right signals to Google for it to be indexed in the best fashion?
  • What is the structure of the website? Is the structure — the physical site map — set up in logical content funnels? Is the structure of the website balanced between all navigation content silos, or is it heavy-handed within two of the silos and light within the remaining three? To support the structure, are there natural linking points to drive and ping pong user traffic within and between content/conversion points?

Screaming Frog is an older SEO tool, but do not make the mistake of thinking that age makes it any less useful now than it was when it first launched. The first step to technically diagnosing a website is to understand what is under the hood. Screaming Frog is perfect for this.
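The checks above can be sketched with a few lines of Python. This is not a substitute for Screaming Frog; it is a minimal, illustrative example (all names and thresholds here are assumptions, not part of any tool) of the kind of on-page signals a spider collects from a single page — title, meta description, H1s, internal links, missing alt text — using only the standard library:

```python
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    """Collect the basic on-page signals a spider reports for one page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self.h1s = []
        self.links = []
        self.images_missing_alt = 0
        self._in_title = False
        self._in_h1 = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self._in_h1 = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1  # alt text absent or empty

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
        elif tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif self._in_h1:
            self.h1s.append(data.strip())

def audit(html: str) -> dict:
    """Summarize the meta content of a single page's HTML."""
    p = MetaAudit()
    p.feed(html)
    return {
        "title": p.title.strip(),
        "meta_description": p.meta_description,
        "h1_count": len(p.h1s),
        "internal_links": [h for h in p.links if h.startswith("/")],
        "images_missing_alt": p.images_missing_alt,
    }
```

To run it against a live page, you would feed it HTML fetched with urllib.request and then apply your own checks, e.g. flagging empty meta descriptions or pages with zero or multiple H1s.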

2. Inspect to Understand the Network

The second phase of the technical evaluation is inspecting the website itself with the developer tools most browsers — Chrome, IE, Firefox — provide you with. To do this, navigate to the Inspect element (Chrome shown below).

The inspect tool is your way to technically understand what is actually happening on your domain when a webpage loads and processes scripts, along with being a tool that allows you to temporarily edit webpages. Like most everything else, a website is a collection of processes all running in parallel, cascading, or in concert with one another. These processes — called scripts — are the coded parameters of the site which compose its content.

A great example of this is scripts for tracking user behaviors. The Google Analytics and Google Tag Manager scripts — shown within the network tab as analytics.js and gtm.js — are loaded on the site in combination with other similar scripts for the express purpose of capturing user behavior.

These scripts carry both load (size) and timing waterfall approximations (when each loads in sequence with other page scripts). As you can guess, the larger the script (how much code is needed to implement it), the slower the load speed will be. Likewise, and just as important, when the script loads on the page will have a variety of consequences. Those consequences, for GA and GTM, will be the successful or unsuccessful tracking of user behaviors.

When assessing the network, you should ask yourself:

  • What is the size of this script? Can it be optimized by cleaning up code or can it be optimized by deploying the script in a different language?
  • What is the load time of the script on page? Does the script load after a variety of other technologies or does it load at the start of the waterfall?
  • Are all tracking implementations firing correctly? Are all tracking implementations held within the correct JS container? Do any tracking implementations cause redundancies or add unnecessary data outputs for the platform?
  • Are the CSS and HTML implementations of each page element working with one another? Are there any errors in the syntax causing a link to time out or a visual element to show itself incorrectly across hardware platforms?
  • Are all elements of the site set up to perform under load or will a spike in traffic break certain containers? Are all elements of the site set up to perform between all hardware profiles (desktop to a variety of mobile) or will certain elements negatively impact mobile ranking factors?

The inspect element, given through browser client development tools, is another avenue to understand the physical scripting, CSS, padding, HTML, etc. elements of any given website.

Using the network tab, along with all elements of the provided dev tools, will allow you to directly understand which coded elements within the infrastructure of your site are working for you or against you. Don’t be scared to jump into dev tools; they will only help lead you to a smart optimization recommendation.
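To make the tracking-script questions concrete, here is a small Python sketch that scans a page’s source for well-known tracking script filenames. The marker table is a hypothetical, abbreviated subset I am assuming for illustration; real pages load many more, and a network-tab inspection remains the authoritative check:

```python
from html.parser import HTMLParser

# Hypothetical marker table: filename fragments that identify common trackers.
TRACKERS = {
    "analytics.js": "Google Analytics",
    "gtag/js": "Google Analytics (gtag)",
    "gtm.js": "Google Tag Manager",
    "fbevents.js": "Facebook Pixel",
}

class ScriptScan(HTMLParser):
    """Collect the src attribute of every external <script> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

def find_tracking_scripts(html: str) -> dict:
    """Map each recognized tracker name to the script URL that loads it."""
    scan = ScriptScan()
    scan.feed(html)
    found = {}
    for src in scan.srcs:
        for marker, name in TRACKERS.items():
            if marker in src:
                found[name] = src
    return found
```

A page that returns both Google Analytics and Google Tag Manager here is a candidate for the redundancy question above: is GA loaded directly *and* through the GTM container?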

3. View and Rip Apart Page Source

Just like the inspect element, another critical element of technically diagnosing a website is being able to competently make sense of a web page’s source. I am not going to get into the technicalities of page source — you can find a basic image guide below and a wonderful explanation here via Kissmetrics — however you should, at the very least, understand the different sections of page source code (header, title, body, footer, scripts, etc.) and how they relate to one another.

Page Source. Learn to read and make sense of it.

Whereas the inspect element will enable you to technically assess waterfall issues, page source will allow you to assess HTML, CSS, JS etc. code issues.
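A quick way to practice reading page source is to summarize its skeleton programmatically. The sketch below is deliberately rough — regex on markup is a shortcut, not a real parser — but it surfaces the sections named above and the inline-versus-external script split at a glance:

```python
import re

def source_summary(html: str) -> dict:
    """Rough, regex-based summary of the major sections of a page's source.

    Good enough for a quick read; a real audit should use a proper parser.
    """
    lower = html.lower()
    scripts = re.findall(r"<script\b[^>]*>", lower)
    external = [s for s in scripts if "src=" in s]
    return {
        "has_doctype": lower.lstrip().startswith("<!doctype"),
        "has_head": bool(re.search(r"<head[\s>]", lower)),
        "has_title": bool(re.search(r"<title[\s>]", lower)),
        "has_body": bool(re.search(r"<body[\s>]", lower)),
        "scripts_total": len(scripts),
        "scripts_external": len(external),
        "scripts_inline": len(scripts) - len(external),
    }
```

A missing doctype or a pile of inline scripts in the head are exactly the kinds of things this habit of reading source will start to surface for you.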

4. Cross-Compare Performance Metrics

Alright, so you have ripped apart your site and asked the right internal questions to begin to recommend changes. This is good, but the next step is to cross-compare the platform against thematically similar websites. To do this, three excellent tools are GTMetrix, Google Speed Test, and Pingdom.

GTMetrix, Google Speed Test, and Pingdom will enable you to directly compare and contrast multiple sites to determine:

  • Site speed
  • Page speed load times
  • Waterfall script issues
  • File size libraries
  • Website load time from multiple international locations/servers
  • Overall site health

The data output of the comparison will lead you to ask a basic question: if my competitors and the overall competition in the space are performing better than my site in terms of speed, size, load times, etc., what are they doing better that I can learn from, emulate, and build on?

Website landscape scrape.

One of the most important lessons of digital analysis is understanding that someone, somewhere, is doing it better than you. As such, locate those better examples to emulate rather than reinventing the wheel.
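As a rough illustration of the comparison step, the sketch below times a full-body fetch for each site and ranks them fastest-first. A single urllib fetch is a crude proxy I am using only to make the idea concrete; GTMetrix and Pingdom measure the full render waterfall across locations, which this does not:

```python
import time
import urllib.request

def time_fetch(url: str, timeout: float = 10.0) -> float:
    """Return the seconds taken to download the full response body for a URL."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.perf_counter() - start

def rank_sites(timings: dict) -> list:
    """Sort {url: seconds} fastest-first: the basic output of a cross-comparison."""
    return sorted(timings.items(), key=lambda kv: kv[1])
```

Usage would look like `rank_sites({url: time_fetch(url) for url in competitor_urls})`; if your site keeps landing at the bottom of that list, the landscape-scrape question above answers itself.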

5. Diagnose Traffic & Tech

Lastly, before you can determine and put forward a solid website optimization recommendation, you need to understand the traffic flow and the pure back-end tech of the platform.

For traffic, unless you have ownership or admin access to site metrics via free (GA) or paid (Tableau) tools, you could be left guessing at the overall traffic flow/performance of a site. To get around this, check out platforms like SimilarWeb.com, SEMRush.com, SpyFu.com, and Ahrefs. Along with these sites, download popular browser toolbars like Moz or PageRank to determine domain authority in combination with traffic flow.

3rd Party Website Traffic Analysis.

These tools will give you direct insight into the market performance of the site in question. This insight (while not 100% baked) will enable you to ask the right questions to determine what you can learn from your competitor set — both good and bad — and what you can build from.

Lastly, diagnosing the technologies behind a website, like using the Inspect element, is another avenue to understand how a site functions. For this, use Netcraft, BuiltWith, or SimilarTech (SimilarTech is good, but the first two will get the job done).

On a granular level, a tool like BuiltWith provides you access to the real-time functionality of a website. The platform provides you with excellent insight into what technologies make a site run. By understanding this, you can directly deduce what the capabilities of that site are, or what its downfalls are, given the current market evolution of the digital platform.

Web tools like these will give you direct insight into:

  • The type of CMS a website uses
  • The email providers they deploy to carry out messaging/DB management
  • The NameServer and Hosting profile of a website
  • Deployed website frameworks and code bases
  • JS Libraries
  • Website Tracking Implementations
  • Deployed CDNs
  • Website Encoding

…among a variety of other site diagnostics. The short of it being: by understanding the technology profile of a website, you can determine the capabilities and downfalls of that platform.
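The idea behind these detectors can be sketched as simple signature matching against response headers and page source. The marker tables below are a tiny, assumed subset of the signature databases BuiltWith-style tools actually maintain:

```python
# Hypothetical marker tables: a small subset of the signatures a tech
# profiler matches against HTTP headers and page source.
HEADER_MARKERS = {
    "server": {"cloudflare": "Cloudflare CDN", "nginx": "nginx", "apache": "Apache"},
    "x-powered-by": {"php": "PHP", "express": "Express (Node.js)"},
}
HTML_MARKERS = {
    "/wp-content/": "WordPress",
    "cdn.shopify.com": "Shopify",
    "jquery": "jQuery",
}

def fingerprint(headers: dict, html: str) -> set:
    """Return the set of technologies detected from headers and page source."""
    found = set()
    lower_headers = {k.lower(): v.lower() for k, v in headers.items()}
    for header, markers in HEADER_MARKERS.items():
        value = lower_headers.get(header, "")
        for marker, name in markers.items():
            if marker in value:
                found.add(name)
    lower_html = html.lower()
    for marker, name in HTML_MARKERS.items():
        if marker in lower_html:
            found.add(name)
    return found
```

Spotting `/wp-content/` paths or a `Server: cloudflare` header is exactly how you begin to infer a platform’s CMS, hosting profile, and CDN without any back-end access.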


If you start from this basic guide of five steps, you will have a framework for putting together a rock-solid technical website evaluation. Yet remember, this is only one part of the equation. The technical evaluation should work in combination with the heuristic and creative (look, feel, content) evaluations. All of which should lead to recommendations for betterment.


Brad Yale can be reached for comment at byale@thebloc.com. He enjoys getting into the guts of web platforms to understand mechanics from load speed to compute resources.

He is, as you might have guessed, a giant nerd.