The Worst Technical & Contextual SEO Mistakes You Can Make (2020 Update)

Roman Adamita
11 min read · Sep 12, 2019


Have you noticed that SEO over the last 3–4 years has become a trickier process, one that requires more strategic thinking and a better user experience than before? And that is only a small part of it: there is more here than we expected.

As far as I can tell, the most likely reason you are here is that you need more traffic, more leads, and more revenue from organic search. Each of us wants more traffic in the long term, and to get it we make various SEO efforts. But are these efforts efficient for the future of your brand? We need to measure all of them and make sure the improvements we invest in are the ones we can trust.

So what challenges will we see in the evergreen SEO process?

There are a few things we need to know:

  • In 2008, Google had 1 trillion unique URLs indexed in its search results.
  • In 2012 alone, Google users ran more than 1 trillion queries. Today we make over 70K searches per second.
  • In 2016, Google brought together search analysts and experienced engineers (like ex-Googler Matt Cutts) to make search results deliver a better user experience. Google regularly tests, improves, and updates its algorithms: in 2018, they ran over 650K experiments that led to 3K+ improvements (you can see the updated details there).
  • In 2017, Google ran over 2,453 changes, and a year later made over 3,234 improvements (the figure for 2016 was 1,653): these are confirmed counts.
  • 93% of online experiences begin with a search engine.
  • Zero-click searches are up 30% in 2 years (see Rand Fishkin’s full presentation). That makes sense when the search query is generic, regional, question-driven, and so on.
  • SEO will only be dead when Google (or the other big search engines) stops updating its algorithms, and when your competition dies.

I think that’s enough to understand how long the SEO process is now, and will likely stay. One last thing: do you know how to hide a brand in search results? Easy peasy!

The 2nd page of Google’s search results

Let’s get to the central point. Below you will see some of the worst SEO mistakes (and how to avoid them) that I have run into with various brands and fixed to get better performance (take a look at one example: Migros’ SEO Case Study).

In short, this post covers SEO issues in these areas:

  1. Crawling & Indexing
  2. Technical SEO
  3. Content/Context SEO

Now, focus and be ready!

Indexing Issues & How to Avoid Them

Forget or Skip Using Robots.txt Rules

You have probably heard about the robots.txt file, and maybe also how important it is to use this file carefully. In a robots.txt file, User-agent, Disallow, and Allow are the best-known lines; if you don’t understand how to use these rules, don’t guess, leave it to a webmaster. Robots.txt is used by over 500 million websites and has been adopted by all major search engines (Google, Yandex, Bing, etc.).

The Robots.txt File
https://yourdomain.com/robots.txt

How the robots.txt file works

It is not mandatory to place this TXT file in the root directory of your website. But if you don’t use one, you need to know that:

  • Major search engines will crawl every URL path on your website (search results, filtered URLs, thin pages, etc.),
  • You will waste crawl budget (your website’s crawl rate limit), which is very important for index health,
  • Many of your web pages’ rankings will change every day, because they may cannibalize each other.

For example, if you want to block your search results’ URL path from Google, use this rule:

User-agent: Googlebot

Disallow: /search

If you want to block the search results but allow some of them* to be indexed:

User-agent: Googlebot

Disallow: /search

Allow: /search?q=cheese

*If some of your search result pages are specialized and nicely optimized, leave them open to indexing. If there is no reason to keep them, block all of them. I’d like to share a fantastic guide about how robots.txt works and how to use its rules in the best way: here’s a blog post by Matthew Henry.
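Putting these pieces together, a fuller robots.txt might look like the sketch below. The paths are placeholders for illustration; the Sitemap line is a standard directive that points crawlers to the XML sitemap covered in the next section.

User-agent: *
# Keep internal search results out of the crawl...
Disallow: /search
# ...except the few optimized search pages you want indexed
Allow: /search?q=cheese
# Block endlessly filtered URLs that waste crawl budget
Disallow: /*?sort=

# Tell crawlers where your XML sitemap lives
Sitemap: https://yourdomain.com/sitemap.xml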

Failed URLs in XML Sitemap

Suppose you want to hike to a specific destination, you are alone, and there is no Wi-Fi and no GPS. How can you reach that destination? With a guide, am I right? In the same way, your website needs a guide that points the major search engines in the right direction.

Oops, I forgot one thing. You may not know what the XML sitemap is. It’s just one of those things a regular technical SEO doer takes for granted.

The XML sitemap is one of the most important files (after robots.txt) for your website’s URLs. There you can list all the web pages you want to show to Google or any other search engine. If you already have that file, Google will crawl it every day.

The file can live at any URL path. Here is an example:

Inside XML Sitemap File
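In case the screenshot is hard to read, here is a minimal sketch of the sitemap format; the URLs and dates are placeholders.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://yourdomain.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://yourdomain.com/cheese</loc>
    <lastmod>2020-01-10</lastmod>
  </url>
</urlset>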

Remember these things:

Forget to Use the Meta Robots “noindex” Tag

If you want to deindex a page (remove it from the index permanently), you should add the <meta name="robots" content="noindex"> code inside the <head> section. Robots.txt is only one part of your crawl management.

As John Mueller said:

“robots.txt is for controlling crawling, not for controlling indexing of URLs.”

— Twitter Status

Where should you use the “noindex” tag?

Meta Robots “noindex” tag code
  • URL paths that you disallow in robots.txt (keep in mind that Googlebot has to crawl a page at least once to see the noindex; robots.txt only controls crawling),
  • Pagination pages (these pages will never rank high); see the sketch below.
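For example, on a paginated category page the <head> could carry a tag like this. It is a minimal sketch; adding “follow” keeps crawlers following the links on the page.

<head>
  <!-- keep this page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>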

Pass Over the 404 Pages

Still? Really?!

A little break? → Still D.R.E.

Sometimes I can’t believe what I see when I start checking the health of a site’s URLs. It’s a long story, but yeah, it continues.

“404 Not Found” pages are a pain in the neck. They may still be in Google’s index and still get clicks. If any of those clicks land on an empty page, I’m sure most of the visitors will leave, meaning you lose potential revenue.

You have several ways to identify these 404 pages:

  1. Google Search Console’s Coverage Report
  2. Track with Google Analytics (requires Google Tag Manager)
  3. Analyze with DeepCrawl / Screaming Frog SEO Spider

…and so on.

After you find all the failed URLs, fire off 301 redirects. How? Where?

  • To a page similar to your failed URL’s path (most recommended).

or

  • To pages that already gain traffic.

or

  • Straight to a first-level category or the main page.
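How you fire the 301s depends on your server. As a sketch, assuming an Apache or Nginx setup and using placeholder URLs, the rules could look like this:

# Apache (.htaccess): send a dead URL to the closest matching page
Redirect 301 /old-product https://yourdomain.com/category/similar-product

# Nginx equivalent, inside the server block
location = /old-product {
    return 301 https://yourdomain.com/category/similar-product;
}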

Technical SEO Issues & How to Avoid Them

Killer of SEO: Pages That Load Slower on Mobile than on Desktop

Most users are on smartphones, so Googlebot also does most of its crawling with a smartphone user agent. Over 50% of search results come from mobile-first indexing, and if your website’s mobile version doesn’t load fast, you lose more than 53% of your potential customers.

“If your site takes longer than two seconds to load, 53% of your customers lose interest.”

Think with Google

We are in the age of mobile-first indexing, and Google dominates it. Providing a fast experience on every device will contribute to your sales and, in turn, to your SEO visibility.

I’d like to share three major brands’ experiences as examples:

#1 — Amazon (2012)

In 2012, Amazon calculated that a one-second increase in page load time would cost it $1.6 billion in sales per year.

Source: Fast Company

#2 — Walmart (2016)

For every 1-second improvement in page load time, they experienced up to a 2% increase in conversion rate.

Source: Web Performance Today

#3 — eBay (2020)

For every 100ms improvement in search page loading time, eBay.com saw a 0.5% increase in Add to Cart count.

Source: Web.dev

These are just three examples from prominent brands, and they all say the same thing: if you want a difference in visibility and revenue, be fast like a cheetah! Not only on mobile, but on all devices.

So how do you test the load speed of your pages? I have a few useful tools for you:

Above, I have only shared the tools I use; you can choose any of them. But remember, once you identify the page speed issues, you will need a full-stack developer and a knowledgeable SEO doer. Before that, I recommend taking a look at Ian Lurie’s blog post, “A developer’s guide to SEO.”
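If you prefer the command line, Lighthouse (the engine behind PageSpeed Insights) can generate the same lab data locally. A quick sketch, assuming Node.js is installed and using a placeholder URL:

npm install -g lighthouse
lighthouse https://www.yourdomain.com --output html --output-path ./report.html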

Choose who you want your website to be

Misuse of Canonical Tags

How the canonical tag works

The canonical tag tells search engines which URL is the original version and should be indexed. For example, say you have pagination pages and you add the code below to the /cheese?page=2 page:

<link rel="canonical" href="/cheese" />

Google will then index only the first page of the category. Now you understand a little of how it works, but I think you will be impressed to learn more details; they are waiting for you here.

I see many issues with canonical tags, such as:

  • Pages without any canonical URL,
  • The same canonical URL used across duplicated pages (e.g., https://yourdomain.com and http://www.yourdomain.com),
  • Canonicals pointing to an unrelated page (e.g., yourdomain.com/cheese and yourdomain.com/category/cheese),
  • Canonical URLs missing the full domain version (HTTP/HTTPS, WWW/non-WWW, or with/without a trailing slash),
  • Wrong quotation marks (curly quotes instead of straight ones),
  • Canonicals pointing to non-indexable pages (301, 404, 502, etc.),
  • Using the AMP (Accelerated Mobile Pages) URL version in the desktop source code (e.g., yourdomain.com/cheese and yourdomain.com/cheese/amp/),
  • Canonical values injected by JavaScript templates on every page (e.g., rel="canonical" href="{{MetaTags.canonical}}").
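For contrast, a safe canonical tag usually looks like the sketch below: an absolute URL on the preferred protocol and host, written with straight quotes and rendered directly in the HTML rather than injected by a JavaScript template (the domain is a placeholder).

<link rel="canonical" href="https://yourdomain.com/cheese" />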

There are too many cases to list; these are just a few of them. So what do you need to do to detect these canonical tag issues?

I will show you a comfortable way: start using DeepCrawl, crawl your website, and look at the “canonical” issue reports. If you have more than one report, resolve them step by step. After the canonical tag issues are fixed, download Screaming Frog SEO Spider and paste all the URLs there to see the changes your developer made. You will see many reports there, but the one you want is the “Canonicals” tab.

Screaming Frog SEO Spider, List Mode

Do Not Load Third-party Tools Immediately When the Page Opens

To be more precise, I’ve shared a screenshot below.

WebPageTest Waterfall Results

If you have an eCommerce website, you are surely using a few third-party tools for your users and customers. If you load all of those tools the moment visitors land on your website, you are in trouble with time to interactive*. Take a look at the waterfall view above; it shows how much time is needed to fully load a page.

*What is time to interactive?

Google PageSpeed Insights — Lab Data

What do you need to do with third-party tools?

  1. First, except for the Google tools (GA, GSC, GTM, etc.), gather up the JavaScript snippets of all your third-party tools.
  2. Add the code below to your source code.
The code that loads third-party tools after scrolling

window.addEventListener('scroll', () =>
  setTimeout(() => {
    // load third-party tools here
  }, 1000),
  { once: true }
)

The logic of this JS event: load the third-party tools only after the user scrolls your page. Tools like Hotjar and Taboola are there for your visitors, and that is why we can wait to load them until after a scroll.
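As an illustration of what could go inside that placeholder, the sketch below injects a vendor’s script tag dynamically after the first scroll; the helper function and the script URL are hypothetical.

// Hypothetical helper: inject a third-party script without blocking rendering
function loadThirdPartyScript(src) {
  const script = document.createElement('script');
  script.src = src;      // the vendor's snippet URL goes here
  script.async = true;   // load it asynchronously
  document.head.appendChild(script);
}

window.addEventListener('scroll', () =>
  setTimeout(() => {
    loadThirdPartyScript('https://example.com/widget.js'); // placeholder URL
  }, 1000),
  { once: true }
)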

Content/Context SEO Issues

Exceed the Meta Title Pixel Limit

Meta titles are a part of the <head> section and act as the main summary of the content on the page. They are what users see in the search engine results. If you want to level up your knowledge about the meta title tag, Moz has already written a guide.

Perhaps you’ve heard that there is a 60–70 character limit for your page’s meta title to appear fully in Google’s search results. However, Google doesn’t have a definite character limit; the real limit is a maximum of about 600 px.

What does the 600-pixel limit mean for meta titles?

One letter can be wider than another. For example, M is a letter and N is a letter, yet there is a pixel difference between the two. To give a more explicit example, I prepared the visual below.

Meta title pixel limit in Google SERP

As you can see, two titles with the same number of characters are displayed differently. Google cuts one of them off with “…” while the other appears in full. Here is what we need to understand: we should show clear titles to users searching on Google while not exceeding the pixel limit.

How to optimize meta titles and boost your C.T.R.?

I have no words to add after Brian Dean’s case study:
Here’s What We Learned About Organic Click Through Rate

Pass Over the High Traffic Potential of Old Blog Posts

You may have old blog posts with a low traffic rate. Keep your eyes on them before publishing new posts. If you don’t want to lose your blog’s potential traffic, measure the performance of all your blog posts.

A few questions to ask before publishing a new post on the blog:

— What is the average ranking of your old blog posts?

— Do you get the traffic performance you expect?

— What is the average session duration?

— What % of the monthly traffic do you get, relative to the search volume of the content’s main keywords?

— What is the CTR of the main keywords of the content?

— Do you have an extensive brief for each specific topic?

— Do you compare your post with the best competitors in terms of ranking?

— Could it bring natural backlinks?

Update your blog posts periodically, and prepare comprehensive content that will remain evergreen next year. Instead of focusing on a single keyword, use relevant keywords that many users may search for. You are not obligated to write blah blah blah just because a keyword has higher search volume; understand what intent-based means. Prepare one excellent brief; it helps the content writers.

Here is a part of my content brief:

You can catch a few ideas from this part of the content brief → You can download it and use it for free: Detailed Content Brief

I’m not going to stop here; I’ll keep working on this content to make it more comprehensive and evergreen.

If you have more time, you can read my latest Medium posts:

Future of Voice Searches & Impact on SEO
Do You Care About Your Website’s Externally Linked 404 Pages?

Follow me on Twitter: @AdamitaRoman
