Thread: seo off page

  1. #1
    Senior Member
    Join Date
    Jun 2011
    Location
    Delhi
    Posts
    273

    seo off page

Hi, what is a robot or spider?

  2. #2
    Registered User
    Join Date
    Jun 2011
    Location
    New Jersy
    Posts
    63
    A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion.


    This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses.


    A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
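    The seeds-and-frontier process described above can be sketched in a few lines of Python. This is a minimal illustration, not a real crawler: it walks a toy in-memory link graph (the URLs are hypothetical) instead of fetching pages over HTTP, but the frontier logic is the same.

    ```python
    from collections import deque

    # Toy link graph standing in for real pages (hypothetical URLs);
    # a real crawler would fetch each URL and parse its <a href> links.
    LINKS = {
        "http://example.com/": ["http://example.com/a", "http://example.com/b"],
        "http://example.com/a": ["http://example.com/b"],
        "http://example.com/b": ["http://example.com/"],
    }

    def crawl(seeds):
        """Breadth-first crawl: start from the seed URLs, keep a frontier
        of URLs still to visit, and record the order pages are visited."""
        frontier = deque(seeds)      # the "crawl frontier"
        seen = set(seeds)
        visited = []
        while frontier:
            url = frontier.popleft()
            visited.append(url)      # a real crawler would index the page here
            for link in LINKS.get(url, []):
                if link not in seen:  # policy: visit each URL at most once
                    seen.add(link)
                    frontier.append(link)
        return visited

    print(crawl(["http://example.com/"]))
    # every reachable page is visited exactly once
    ```

    Real crawlers add more policies on top of this loop, such as politeness delays between requests to the same host and respecting robots.txt.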

  3. #3
    Senior Member
    Join Date
    Jun 2011
    Posts
    221
    Many search engines use programs called robots to locate web pages for indexing. These programs are not limited to a predefined list of web pages; instead, they follow links on the pages they find, which makes them a form of intelligent agent. The process of following links is called spidering, wandering, or gathering. Once the robot has a page or document, parsing and indexing of the page begins.

  4. #4
    Junior Member
    Join Date
    Jul 2011
    Location
    USA
    Posts
    3
    A robot is the program that crawls your website. The best example you can look at is Google Webmaster Tools.
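    As an aside: site owners can tell robots what they may crawl with a robots.txt file placed at the root of the site (tools like Google Webmaster Tools can test it). A minimal example, with made-up paths for illustration:

    ```
    # robots.txt at http://www.example.com/robots.txt
    User-agent: *          # rules for all robots
    Disallow: /private/    # ask robots not to crawl this directory
    Allow: /               # everything else may be crawled
    ```

    Well-behaved crawlers fetch this file before crawling and skip the disallowed paths; it is a convention, not an enforcement mechanism.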

  5. #5
    Member
    Join Date
    Jul 2011
    Posts
    77
    Quote Originally Posted by konetkar500 View Post
    A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. ...
    Well explained. Thanks for sharing nice information.

  6. #6
    Registered User KimmyCool's Avatar
    Join Date
    Jul 2011
    Posts
    1
    A spider is an automated program that moves through the web, examining websites so they can be added to search engine databases and ranked according to each engine's ranking criteria. Bot and crawler are other common names for a spider.

  7. #7
    Registered User
    Join Date
    Jul 2011
    Posts
    1
    Many Internet users wonder how a search engine works. The robot is the component mostly responsible for gathering the pages that show up in search results. It is also called a web crawler.

  8. #8
    Registered User
    Join Date
    Jul 2011
    Posts
    3
    I agree with reinblend, but this is very general. Can you explain how the spider gets information from our pages, i.e. the 200 or so ranking signals?

  9. #9
    Quote Originally Posted by ajaykr View Post
    Hi.. what is robot or spider?
    Robot - A robot is a program that runs automatically without human intervention. Typically, a robot is endowed with basic logic so that it can react to different situations it may encounter. One common type of robot is a content-indexing spider, or web crawler.

    Spider - A spider is an automated program that "crawls" the Web, generally for the purpose of indexing web pages for use by search engines. Because most web pages contain links to other pages, a spider can start almost anywhere. As soon as it sees a link to another page, it goes off and fetches it. Large search engines have many spiders working in parallel.

  10. #10
    Senior Member schwa's Avatar
    Join Date
    Mar 2011
    Location
    Chengdu, Sichuan, China
    Posts
    427
    A robot is a program Google uses to crawl your website and examine its information and web content.
