Robots, Agents and Spiders - Identifying Search Engine Crawlers

If you've been surfing search engine optimization web sites, you've no doubt come across the terms above on many occasions.

Crawlers, Agents, Bots, Robots and Spiders

Five terms, all describing basically the same thing; in this article they'll be referred to collectively as spiders or "agents". A search engine spider is an automated software program used to locate and collect data from web pages for inclusion in a search engine's database and to follow links to find new pages on the World Wide Web. The term "agent" is more commonly applied to web browsers and mirroring software.

If you've ever examined your server logs or web site traffic reports, you've probably come across some weird and wonderful names for search engine spiders, including "Fluffy the Spider" and Slurp. Depending upon the type of web traffic reports you receive, you may find spiders listed in the "Agents" section of your statistics.

Not all spiders are good

Who actually owns these spiders? It pays to be able to tell the beneficial ones from the bad. Some agents are generated by software such as Teleport Pro, an application that allows people to download a full "mirror" of your site onto their hard drives for offline viewing, or sometimes for more insidious purposes such as plagiarism. If you have a large or image-heavy site, this kind of web site stripping can also have a serious impact on your bandwidth usage each month.

Banning spiders and agents

If you notice entries like Teleport Pro and WebStripper in your traffic reports, someone's been busy attempting to download your web site. You don't have to just sit back and let this happen. If you are commercially hosted, you'll be able to add a couple of lines to your robots.txt file to prevent repeat offenders from stripping your site. 

The robots.txt file gives search engine spiders and agents direction by informing them which directories and files they are allowed to examine and retrieve. This set of rules is known as the Robots Exclusion Standard.

To prevent certain agents and spiders from accessing any part of your web site, simply enter the following lines into the robots.txt file:

User-agent: NameOfAgent
Disallow: /

Ensure that you enter the name of the agent as it appears in your reports/logs, e.g. Teleport Pro (the version number, such as the /1.29 in Teleport Pro/1.29, can generally be left off, as compliant robots match on the name itself), and that there is a separate entry for each agent. Skip a line between entries. You could do the same to exclude search engine spiders, but somehow I don't think you'll really want to do this :0). The "/" in the above example means disallow access to every directory. You can also disallow access by spiders and agents to certain directories, e.g.

User-agent: *
Disallow: /cgi-bin/

In this example the asterisk (wildcard) indicates "all". Don't use the asterisk in the Disallow statement to indicate "all", use the forward slash instead.
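
Putting those rules together, a robots.txt file that turns away the two stripper agents mentioned earlier and keeps all spiders out of a cgi-bin directory might look something like this (the agent names are illustrative; use the exact names that appear in your own logs):

User-agent: Teleport Pro
Disallow: /

User-agent: WebStripper
Disallow: /

User-agent: *
Disallow: /cgi-bin/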


If you don't have a robots.txt file, create one in Notepad (or any plain text editor) and upload it to the docs directory (or the root of whichever directory your web pages are served from). Never use a blank robots.txt file, as some search engines may see this as an indication that you don't want your site spidered at all! Have at least one entry in the file, such as the example below.
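
If you have nothing you want to block, a minimal file with a single entry will do; an empty Disallow line simply means nothing is off limits, so all compliant spiders may access everything:

User-agent: *
Disallow: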

Unfortunately, listing web stripper agents and spiders in your robots.txt file won't work in all cases, as some mirroring applications are able to mimic web browser identifiers; but it's at least some protection that may save you valuable bandwidth.

If you're not able to create a robots.txt file, which is usually the case if you are hosted by a free hosting service, use the robots exclusion meta tag on your pages.
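
The tag goes in the head section of each page you want excluded. For example, to ask compliant spiders not to index a page or follow any of its links:

<meta name="robots" content="noindex, nofollow">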

Search engine spider identification

The following is a basic listing of search engine spider names and their "owners". This is by no means complete, as there are many thousands of search engines on the Internet, but it covers the more common beneficial spiders. Look for these in your traffic reports, or search for the names in your server logs to discover which pages they have been spidering. You'll find that many of the entries will also have accompanying version numbers or letters, e.g. Googlebot/2.1 or Slurp.so/1.0.

Spider name             Spider owner
---------------------------------------------
Googlebot               Google.com
MSNbot                  Search.msn.com
Ask Jeeves/Teoma        Ask.com
Architext spider        Excite.com
FAST-WebCrawler         FAST (AllTheWeb.com)
Slurp                   Inktomi.com
Yahoo Slurp             Yahoo Web Search
ia_archiver             Alexa.com
Scooter                 AltaVista.com
crawler@fast            FAST (AllTheWeb.com)
Crawler                 Crawler.de
InfoSeek sidewinder     InfoSeek.com
Lycos_Spider_(T-Rex)    Lycos.com

If you have spotted any significant activity from these spiders in your reports or logs, there's a good chance that you'll be listed on that particular search engine. But you'll need to be patient; some search engines take far longer than others to refresh their databases!
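
If your host gives you access to the raw server logs rather than formatted reports, a quick way to check whether a particular spider has been visiting is to search the log for its name. As a rough sketch, assuming a Unix-style host and an access log named access.log (adjust the file name and spider name to suit your own setup):

grep -i "googlebot" access.log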


Further learning resources:

Learn more about positioning in our SE optimization tutorials section.

Studying Web Traffic and Server Logs. What is a hit? What is a visitor? What is a page view? Traffic statistics terminology and methods of web site traffic reporting.

A basic tutorial on the use of Meta Tags in improving search engine rankings. A solid set of meta-tags is an important component of any overall promotion strategy.

What do all those browser error codes and server response codes mean? Try our server response code reference.

Michael Bloch
Taming the Beast.net
http://www.tamingthebeast.net
Tutorials, web content, tools and software
Web Marketing, eCommerce & Development solutions. 
____________________________

Copyright information.... This article is free for reproduction but must be reproduced in its entirety & this copyright statement must be included.  Visit http://www.tamingthebeast.net to view great articles, tutorials and tools for site owners, web developers and Internet marketers! Subscribe for free to our popular ecommerce/web design ezine!
