View Full Version : If it's impossible to examine all the documents on the Web, how do the search engines

10-11-2012, 02:28 PM
They use software programs known as robots, spiders, or crawlers. A robot is a piece of software that automatically follows hyperlinks from one document to the next across the Web. When a robot discovers a new site, it sends information about it back to the search engine to be indexed. Because Web documents are among the least static forms of publishing (i.e., they change frequently), robots also revisit and update previously catalogued sites. How quickly and how comprehensively they carry out these tasks varies from one search engine to the next.
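The link-following loop described above can be sketched in a few lines. This is a minimal illustration, not a real crawler: the `PAGES` dictionary is a hypothetical in-memory stand-in for fetching pages over HTTP, and all URLs and names are made up for the example.

```python
import re
from collections import deque

# Hypothetical in-memory "Web": URL -> HTML. A real robot would fetch
# each page over HTTP (and respect robots.txt) instead.
PAGES = {
    "http://example.com/": '<a href="http://example.com/a">A</a> <a href="http://example.com/b">B</a>',
    "http://example.com/a": '<a href="http://example.com/b">B</a>',
    "http://example.com/b": '<a href="http://example.com/c">C</a>',
    "http://example.com/c": "no links here",
}

LINK_RE = re.compile(r'href="([^"]+)"')  # naive hyperlink extractor

def crawl(seed):
    """Breadth-first crawl: follow hyperlinks, cataloguing each new page once."""
    seen = {seed}
    queue = deque([seed])
    index = {}  # URL -> page text: the information "sent back" to be indexed
    while queue:
        url = queue.popleft()
        html = PAGES.get(url)
        if html is None:  # dead or external link; skip it
            continue
        index[url] = html  # hand the page over for indexing
        for link in LINK_RE.findall(html):
            if link not in seen:  # don't re-crawl pages already discovered
                seen.add(link)
                queue.append(link)
    return index

index = crawl("http://example.com/")
```

Re-running `crawl` on a schedule, and comparing the result against the previous catalogue, is essentially how robots keep up with pages that change.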