seo off page



ajaykr
07-19-2011, 05:50 AM
Hi... what is a robot or spider?

konetkar500
07-19-2011, 09:21 AM
A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner.


This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses.


A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
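
To make the seeds-and-frontier idea concrete, here is a minimal breadth-first crawler sketch in Python. It is illustrative only: the seed URL is a placeholder, and a real crawler would also respect robots.txt, rate limits, and the crawl policies mentioned above.

import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    # collects href values from anchor tags on a page
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=10):
    frontier = list(seeds)      # the crawl frontier: URLs waiting to be visited
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.pop(0)   # breadth-first: take the oldest URL first
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue            # skip pages that fail to download
        parser = LinkParser()
        parser.feed(html)
        # every hyperlink found on the page joins the frontier
        frontier.extend(urljoin(url, link) for link in parser.links)
    return visited

print(crawl(["http://example.com/"]))   # placeholder seed URL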

kathrinefernand
07-19-2011, 03:27 PM
Many search engines use programs called robots to locate web pages for indexing. These programs are not limited to a predefined list of web pages; instead they follow links on the pages they find, which makes them a form of intelligent agent. The process of following links is called spidering, wandering, or gathering. Once the robot has a page or document, the parsing and indexing of that page begins.
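
For a rough picture of what "indexing" means here, this toy inverted index in Python maps each word to the pages that contain it. The documents are invented, and real indexes also handle stemming, stop words, ranking signals, and so on.

from collections import defaultdict

# two invented "downloaded pages"
docs = {
    "page1.html": "web crawlers visit and index pages",
    "page2.html": "search engines use crawlers to build indexes",
}

inverted_index = defaultdict(set)
for page, text in docs.items():
    for word in text.lower().split():
        inverted_index[word].add(page)

# a search becomes a fast lookup instead of re-reading every page
print(sorted(inverted_index["crawlers"]))   # ['page1.html', 'page2.html']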

WilliamBLee
07-20-2011, 06:59 AM
A robot is the program that crawls your website. Google Webmaster Tools is a good place to see this in action.

samueld
07-21-2011, 05:34 AM
Well explained, konetkar500. Thanks for sharing this information.

KimmyCool
07-21-2011, 05:48 AM
A spider is an automated program that moves through the web, examining websites so that they can be added to a search engine's database and ranked according to that engine's ranking criteria. Bot and crawler are other common names for a spider.

reinblend
07-22-2011, 06:47 AM
Many Internet users wonder how a search engine works. The robot is the component chiefly responsible for gathering the pages that appear in search engine results. It is also called a Web crawler.

markhenry
07-22-2011, 07:07 AM
I agree with reinblend, but this is very general. Please tell us how the spider gets information from our pages. I mean the roughly 200 ranking factors?

jasmine bell
07-23-2011, 12:23 AM
Hi... what is a robot or spider?

Robot - A robot is a program that runs automatically without human intervention. Typically, a robot is endowed with basic logic so that it can react to different situations it may encounter. One common type of robot is a content-indexing spider, or web crawler.

Spider - A spider is an automated program that "crawls" the Web, generally for the purpose of indexing web pages for use by search engines. Because most web pages contain links to other pages, a spider can start almost anywhere. As soon as it sees a link to another page, it goes off and fetches it. Large search engines have many spiders working in parallel.
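
As a rough illustration of spiders working in parallel, here is a sketch using Python's thread pool. The URLs are placeholders standing in for pages discovered earlier; real search engine crawlers are far more sophisticated.

from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch(url):
    # one "spider" downloading a single page
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return url, len(resp.read())
    except Exception:
        return url, None

# placeholder URLs standing in for pages discovered earlier
urls = ["http://example.com/a", "http://example.com/b", "http://example.com/c"]

# three spiders fetching pages at the same time
with ThreadPoolExecutor(max_workers=3) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, "fetched" if size is not None else "failed")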

schwa
07-23-2011, 03:44 AM
A robot is a program used by search engines such as Google to crawl your website and read its information and web content.