Do you know how to deal with 404 errors?

Share!

A typical 404 error is returned when a user attempts to visit a web page that doesn't exist. Web crawlers like Bingbot and Googlebot discover content by following links from one page to another, so they run into these dead ends too. 404 errors may occur for the following reasons:

  1. Removed directories
  2. Renamed directories
  3. Broken links / misspelled URLs
  4. .htaccess misconfiguration
  5. Special cases

Dealing with Not Found errors:

It is very frustrating when a user visits a page and it returns a 404. 404 errors can drive up the bounce rate and kill potential leads; they hurt the user experience along with the trust factor of the website, and may ultimately affect its ranking. Google has officially stated in its support notes that 404 errors don't impact your site's ranking in Google, but many consultants agree that 404 errors can cause SERP issues in the long run. Typically, 404 errors happen because of typos and misconfiguration. Google has also increased its efforts to recognize and crawl links embedded in JavaScript, but it still finds it hard to crawl relative URLs inside JavaScript.
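To make the relative-URL problem concrete, here is a small, purely hypothetical sketch (the file name and paths are invented): a script references a path without a leading slash, so it resolves against whichever page it happens to run on, and a crawler that picks the string up can end up requesting a URL that was never meant to exist.

    // Hypothetical example: a relative path inside a script.
    // On a page like /blog/2015/04/my-post/ this resolves to
    // /blog/2015/04/my-post/assets/report.pdf, which probably does not exist,
    // and may show up in Crawl Errors as an unexpected 404.
    const reportPath = "assets/report.pdf"; // relative: no leading slash
    window.location.assign(reportPath);     // resolved against the current URL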

Here we go, let's dig into 404 errors.

Finding 404 Errors Using Google Webmaster Tools:

I love working with Google Webmaster Tools, and I always recommend using Google's own resources to figure out SEO issues. Webmaster Tools gives you the real picture of a website, including its errors.

Step 1: Verify your website in Google Webmaster Tools and go to Crawl Errors.

Step 2: Click on the listed URLs and get the exact path from where they are linked.

  • See where these links are coming from; you can check this by clicking on Linked from.
  • Fix or remove links from your own site.

Once you find the URLs, fix them. You can easily dig through the source code to find the broken links; in most cases they point to removed URLs or deleted directories.
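If you prefer to script the check instead of reading the source by hand, here is a minimal sketch (assuming Node 18+ with its built-in fetch; the page URL is just a placeholder). It downloads a page, pulls out the href targets with a naive regex, and reports any link that answers with a 404.

    // Minimal broken-link check for a single page (a sketch, not production code).
    const pageUrl = "https://www.example.com/"; // placeholder: the page to audit

    async function findBrokenLinks(url: string): Promise<void> {
      const html = await (await fetch(url)).text();
      // Naive href extraction; a real crawler would use a proper HTML parser.
      const hrefs = [...html.matchAll(/href="([^"#]+)"/g)].map((m) => m[1]);
      for (const href of hrefs) {
        const target = new URL(href, url).toString(); // resolve relative links
        const res = await fetch(target, { method: "HEAD" });
        if (res.status === 404) {
          console.log(`404: ${target} (linked from ${url})`);
        }
      }
    }

    findBrokenLinks(pageUrl).catch(console.error);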

Dealing with Incoming Broken Links:

Identify how much traffic the misspelled (typo) URLs actually receive to decide whether they are worth handling. For the URLs that matter, either set up a 301 redirect to the correct page or contact the webmasters who link to you and ask them to fix the broken links.
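For the redirect option, here is a minimal sketch assuming a Node/Express server (the typo paths and destinations are invented for illustration); on Apache the same mapping would live in .htaccess instead.

    import express from "express";

    const app = express();

    // Known misspelled incoming URLs mapped to their correct destinations.
    const typoRedirects: Record<string, string> = {
      "/contcat-us": "/contact-us",
      "/servcies": "/services",
    };

    app.use((req, res, next) => {
      const destination = typoRedirects[req.path];
      if (destination) {
        // 301 tells users and search engines the page has moved permanently.
        return res.redirect(301, destination);
      }
      next();
    });

    app.listen(3000);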

Removing Broken Links from Hidden Sources:

Visiting every page one by one can be a donkey's job, and you may well miss a few URLs along the way. To make the process more effective you can use the SEO Toolkit from Microsoft; another option is Xenu's Link Sleuth. Both are good at finding broken links (and much more).

Dealing with Broken Files Using the Browser:

If you are using Google Chrome or Firefox, you can check for broken links with the browser itself. Just press F12 to open the developer tools, and the Network panel will show you whether a URL (JS, CSS, or link) is broken. There are also a few add-ons available for tracking broken links, but I prefer working with the browser.
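If you want to check all the links on a page in one go, a small sketch you can paste into the browser console (F12) is below; it re-requests every link and logs the ones that come back as 404. Cross-origin links may be blocked by CORS and will show up as unreachable rather than broken.

    // Re-request every link on the current page and flag the 404s.
    document.querySelectorAll("a[href]").forEach(async (a) => {
      const href = a.getAttribute("href");
      if (!href) return;
      const target = new URL(href, location.href).toString();
      try {
        const res = await fetch(target, { method: "HEAD" });
        if (res.status === 404) console.warn("Broken link:", target);
      } catch {
        console.warn("Unreachable (possibly CORS-blocked):", target);
      }
    });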

404s are a perfectly normal (and in many ways desirable) part of the web. You will likely never be able to control every link to your site, or resolve every 404 error listed in Webmaster Tools. Instead, check the top-ranking issues, fix those if possible, and then move on.

When to return a 404 status code:

Removing a page from your website will lead to 404 errors. I would suggest a few steps before removing any URL from your website:

  • Redirect the old URLs to their new destinations so that users visiting the old URLs land on the new ones (see the sketch after this list).
  • If you are removing any content permanently, let the page return 404 or 410, but don't forget to remove that URL from Google's index (to avoid the organic bounce rate). Google treats 410s (Gone) the same as 404s (Not Found).
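As a sketch of both cases (again assuming a Node/Express server; the paths are hypothetical): content that moved gets a 301 to its new home, while content that is gone for good returns a 410, which Google treats the same as a 404.

    import express from "express";

    const app = express();

    // Content that moved: send users and crawlers to the new URL.
    app.get("/old-pricing", (_req, res) => res.redirect(301, "/pricing"));

    // Content removed permanently: return 410 (Gone) with a helpful message.
    app.get("/discontinued-product", (_req, res) =>
      res.status(410).send("This page has been permanently removed.")
    );

    app.listen(3000);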

Returning a code other than 404 or 410 for a non-existent page (or redirecting users to another page, such as the home page, instead of returning a 404) can be problematic. Such pages are called soft 404s, and can be confusing to both users and search engines.
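In code, a soft 404 usually looks like a catch-all handler that sends a "not found" message with the default 200 status; the fix is simply to set the real status code. A short sketch under the same Express assumption:

    import express from "express";

    const app = express();

    // Soft 404 (anti-pattern): the body says "not found" but the status is 200.
    // app.use((_req, res) => res.send("Sorry, this page does not exist."));

    // Better: the same message, but with a real 404 status code.
    app.use((_req, res) =>
      res.status(404).send("Sorry, this page does not exist.")
    );

    app.listen(3000);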

Unexpected 404 errors:

In Crawl Errors, you might occasionally see 404 errors for URLs you don’t believe exist on your own site or on the web. These unexpected URLs might be generated by Googlebot trying to follow links found in JavaScript, Flash files, or other embedded content.

For example, your site may use the following code to track file downloads in Google Analytics:

<a href="helloworld.pdf" onClick="_gaq.push(['_trackPageview','/download-helloworld']);">Hello World PDF</a>

When it sees this, as an example, Googlebot might try to crawl the URL http://www.example.com/download-helloworld, even though it’s not a real page. In this case, the link may appear as a 404 (Not Found) error in the Crawl Errors feature in Webmaster Tools.

Google strives to detect these types of issues and resolve them so that they will disappear from Crawl Errors.


Originally published at www.sudhanshu-seo.com on April 6, 2015.
