How To Ruin SEO Traffic – Robots.txt

As most people know, SEO traffic is built up over time. You spend time upfront doing research and picking your targets, then get your hands dirty with the details. It is very rewarding to take a site, start making improvements, and watch the traffic from the search engines increase. The #1 most annoying thing that can happen is a bad robots.txt file getting uploaded and tanking your search traffic. For some background, a robots.txt file is a simple text file that gives the search engines instructions, like where to find your sitemap and what not to crawl. You might use a robots.txt file to disallow a search engine from crawling your content management system's (CMS) admin area or another password-protected part of your site.

An example of a robots.txt file that disallows crawling of a WordPress CMS admin area would look like this:

User-agent: *
Disallow: /wp-admin
Sitemap: http://www.domain.com/sitemap.xml
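
If you want to sanity-check a file like this before uploading it, Python's standard-library urllib.robotparser evaluates the rules the same way a well-behaved crawler would. A minimal sketch, reusing the placeholder domain from the example above:

from urllib.robotparser import RobotFileParser

# The same rules as the example above, fed in as lines of text.
rules = """\
User-agent: *
Disallow: /wp-admin
Sitemap: http://www.domain.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The admin area is blocked for every user agent...
print(parser.can_fetch("*", "http://www.domain.com/wp-admin/index.php"))  # False

# ...but regular content is still crawlable.
print(parser.can_fetch("*", "http://www.domain.com/about/"))  # True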

If I didn’t want any search engines crawling my site, I would do this:

User-agent: *
Disallow: /
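
The same parser makes the damage easy to see: once that rule is in place, every URL on the site is off limits. Another quick sketch, again with a placeholder domain:

from urllib.robotparser import RobotFileParser

# The "block everything" file from above.
parser = RobotFileParser()
parser.parse(["User-agent: *", "Disallow: /"])

# No path survives: the homepage, the content, everything is blocked.
print(parser.can_fetch("*", "http://www.domain.com/"))          # False
print(parser.can_fetch("*", "http://www.domain.com/any-page"))  # False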

That simple little / can get an entire site removed from the search engine results pages (SERPs). Time and time again I've seen sites fall victim to this problem. One of the ways I've seen it happen is when a development company uses a separate server to test new features, code, etc. They use a robots.txt file to tell the search engines not to crawl the development site, since it would be a duplicate of the live site. When the programming is finished, they copy the files from the development server to the live site, forgetting about the robots.txt and taking it live along with everything else. Weeks or even months later, someone notices the site isn't ranking for even its own brand name.
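
One cheap safeguard, sketched below with the same placeholder domain, is a post-deploy check that fetches the live robots.txt and fails loudly if the homepage is blocked. Wired into a deploy script or a cron job, it turns months of lost rankings into a few minutes of downtime:

import sys
from urllib.robotparser import RobotFileParser

SITE = "http://www.domain.com"  # placeholder: swap in your live domain

# Fetch and parse the robots.txt that is actually serving right now.
parser = RobotFileParser()
parser.set_url(SITE + "/robots.txt")
parser.read()

# If a crawler can't fetch the homepage, something is badly wrong.
if not parser.can_fetch("*", SITE + "/"):
    sys.exit("robots.txt is blocking the homepage: investigate before the SERPs empty out!")

print("robots.txt looks fine: homepage is crawlable")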
