Spider

A spider (also known as a crawler) is a program that browses websites, extracting information for [[search engine]] databases. A spider can be invited to a site through search engine registration, or it will eventually find the site on its own by following [[link|links]] from other sites (assuming such links exist).
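At its core, following links means parsing each fetched page's HTML and collecting the `href` values from anchor tags. A minimal sketch of that step, using only Python's standard library (the sample page and its URLs are made up for illustration):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, as a spider would
    before queueing them for its next round of fetches."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A hypothetical fetched page:
page = '<html><body><a href="/about">About</a> <a href="https://example.com/">Home</a></body></html>'

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/about', 'https://example.com/']
```

A real spider would repeat this on every discovered URL, which is why pages reachable only through script-generated links never enter its queue.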

Spider Tips

  • Spiders do not read pages the way browsers do. They generally cannot execute [[JavaScript]], so links generated by scripts, and sometimes links inside frames, may never be followed.
  • While spiders explore your site by following hyperlinks, they typically only go so many levels deep, so a visiting spider may not index your entire site. You may need to register multiple pages with a search engine.
  • If you have sensitive information you do not want indexed by a search engine, you can use robots meta tags or a special instruction file (robots.txt) to block spiders from certain pages.
  • Your web server logs can tell you when a spider visited and which pages it requested.
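The "special instruction file" mentioned above is robots.txt, placed at the root of a site. A well-behaved spider reads it before crawling and skips any paths it disallows. A small sketch using Python's standard `urllib.robotparser` to check a rule (the file contents and paths here are hypothetical):

```python
from urllib import robotparser

# A hypothetical robots.txt that blocks all spiders from /private/
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant spider asks before fetching each URL:
print(rp.can_fetch("*", "/private/secret.html"))  # False
print(rp.can_fetch("*", "/index.html"))           # True
```

Note that robots.txt is only a request: spiders that respect it will skip blocked pages, but it is not an access control, so truly sensitive content should also be protected on the server.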
