Three ways your website could be blocking search engines – and how to fix them

No matter how much search engine optimization work you do, all of your efforts will be in vain if your website is inadvertently blocking search engines.

There are three primary ways your website could be blocking search engines. In this post, we’ll walk you through each one and show you how to fix it.

1.) WordPress settings could be blocking search engines

If you’ve built your website or blog with WordPress, a common reason your site may be blocking search engines is the search engine visibility setting in the WordPress dashboard (labeled Privacy in older versions of WordPress).

To see if your website is inadvertently blocking search engines through WordPress’ settings, log in to your dashboard and choose Settings > Privacy (in newer versions of WordPress, Settings > Reading). If you would like search engines to index your site – and, in most cases, you do – your selection should be “Allow search engines to index this site” (in newer versions, leave “Discourage search engines from indexing this site” unchecked).

Setting WordPress to “Ask search engines not to index this site” (or checking “Discourage search engines from indexing this site”) doesn’t guarantee that your website won’t be crawled. Instead, it adds two signals that request search engines stay away: a robots meta tag and a robots.txt rule. For non-WordPress sites, these are the same two methods that may be used to block search engines manually, so you’ll want to check them, too.
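For reference, when this option is enabled, WordPress inserts a robots meta tag along these lines into each page’s header (the exact markup varies by WordPress version):

<meta name='robots' content='noindex, nofollow' />

It may also serve a blocking rule through the site’s robots.txt, which is covered in section 3 below.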

2.) A “robots” meta tag could be blocking search engines

Websites use “meta” tags to provide additional information about a page’s content. Found in the page’s header, meta tags convey information that may be relevant to search engines.

One of these is the “robots” meta tag, which gives search engines instructions on how to interact with the page. Here’s an example:

<meta name="robots" content="noindex, nofollow">

“Noindex” instructs search engines not to add the page to their indexes, while “nofollow” instructs them not to follow the links on the page.

If you want your website to be visible in search, make sure your pages don’t carry a noindex/nofollow robots meta tag. If the tag is present and you’d like search engines to crawl and index your site, remove it from your source code entirely. Because search engines index pages and follow links by default, you don’t need a tag that explicitly tells them to do so.
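For illustration only, the explicit “allow” version of the tag would look like the line below – but because it simply restates the default behavior, there is no need to include it:

<meta name="robots" content="index, follow">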

3.) Your robots.txt file may prevent search engines from crawling your site

Robots.txt is a simple text file stored at the root of your web server that, like the robots meta tag, gives search engines instructions on how to crawl and index your site.

To check for a robots.txt file, point your web browser to yourdomain.com/robots.txt. If you get a 404 (“Not found”) error, you don’t have a robots.txt file, so it isn’t blocking search engines.

If you do find a robots.txt file, the directive that instructs search engines not to crawl your site at all is Disallow: /
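In context, a robots.txt file that blocks all compliant crawlers from an entire site looks like this – the User-agent line specifies which crawlers the rule applies to, and the asterisk means all of them:

User-agent: *
Disallow: /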

Some content management systems, like WordPress, may add a Disallow rule for specific directories on your site (such as your administration area), which is harmless. As a general rule of thumb, though, if you see the site-wide Disallow: / in your robots.txt file, there is a good possibility you are blocking search engines.
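By contrast, a scoped rule like the example below blocks crawlers only from the named directory and leaves the rest of the site open – the /wp-admin/ path here is simply an illustration of the kind of directory a CMS might exclude:

User-agent: *
Disallow: /wp-admin/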

Read more about the robots.txt file and its capabilities in Google’s webmaster resources.

Correcting site visibility problems

To correct these issues, you or your webmaster can connect to your web server via FTP and edit the offending files. If you need more help, please get in touch! We offer reasonably priced SEO setup and maintenance packages for small- to medium-sized businesses.
