Results 1 to 7 of 7
  1. #1
    Senior Member
    Join Date
    Sep 2021
    Location
    India
    Posts
    464

    What do you understand by crawling?


  2. #2
    Member
    Join Date
    Oct 2021
    Location
    USA
    Posts
    64
Quote Originally Posted by Cyril30
    What do you understand by crawling?
Crawling is when Google or another search engine sends a bot to a web page or post and “reads” it. This is how Googlebot and other crawlers ascertain what is on the page. Don’t confuse this with the page being indexed: crawling is the first step in having a search engine recognize your page and show it in search results, but having your page crawled does not necessarily mean it was (or will be) indexed. To be found in a query on any search engine, your page must first be crawled and then indexed.

Once a page is created or updated, how does Google know to examine it?

    Pages are crawled for a variety of reasons including:

    Having an XML sitemap with the URL in question submitted to Google
    Having internal links pointing to the page
    Having external links pointing to the page
    Getting a spike in traffic to the page
To help ensure that your pages get crawled, submit an XML sitemap through Google Search Console (formerly Google Webmaster Tools) to give Google a roadmap for all of your new content.
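As a rough illustration of what such a sitemap contains, here is a minimal Python sketch that generates the XML by hand; the domain and URLs are placeholders, not taken from the thread:

```python
# Minimal sketch: build an XML sitemap for a list of (hypothetical) URLs.
# example.com is a placeholder domain.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Return an XML sitemap string listing the given URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        loc = ET.SubElement(entry, "loc")
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap(["https://example.com/",
                         "https://example.com/blog/post-1"])
print(sitemap)
```

In practice most CMSs generate this file for you; the point is just that a sitemap is nothing more than a list of URLs you want the search engine to know about.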

  3. #3
    Member
    Join Date
    Oct 2021
    Location
    Surat, Gujarat
    Posts
    59
Quote Originally Posted by Cyril30
    What do you understand by crawling?
    Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.
Googlebot starts out by fetching a few web pages, and then follows the links on those pages to find new URLs. By hopping along this path of links, the crawler finds new content and adds it to its index, called Caffeine (a massive database of discovered URLs), to be retrieved later when a searcher is seeking information that the content at that URL is a good match for.
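The link-following step described above can be sketched in a few lines of Python; the HTML snippet and URLs below are invented stand-ins for a really fetched page:

```python
# Sketch of the link-following step: extract href targets from a fetched
# HTML page so the crawler can queue new URLs to visit.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

page = '<a href="/about">About</a> <a href="https://other.example/">Out</a>'
parser = LinkExtractor("https://example.com/")
parser.feed(page)
print(parser.links)  # both internal and external links are discovered
```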

  4. #4
    Member
    Join Date
    Oct 2021
    Location
Surat, Gujarat
    Posts
    56
Quote Originally Posted by Cyril30
    What do you understand by crawling?
When a crawler visits a website, it picks over the entire website’s content (i.e. the text) and stores it in a database. It also stores all the internal and external links it finds on the site. The crawler visits those stored links at a later point in time, which is how it moves from one website to the next. Through this process the crawler captures and indexes every website that is linked from at least one other website.
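The store-links-now, visit-them-later loop described above can be sketched as a breadth-first crawl over a tiny simulated site; there is no real network access here, and the SITE graph is invented for illustration:

```python
# Sketch of the "store links, visit later" loop: a breadth-first crawl
# over a simulated site graph. SITE maps each URL to the links found
# on that page.
from collections import deque

SITE = {
    "a.example/":   ["a.example/p1", "b.example/"],
    "a.example/p1": ["a.example/"],          # links back to the home page
    "b.example/":   ["a.example/p1"],
}

def crawl(start):
    """Visit every page reachable from `start`, once each."""
    seen = {start}
    frontier = deque([start])
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)                    # "pick over" the page here
        for link in SITE.get(url, []):       # store links, visit later
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("a.example/"))
```

The `seen` set is what keeps the crawler from fetching the same page twice, even though pages link back to each other.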

  5. #5
    Member
    Join Date
    Oct 2021
    Location
    Surat, Gujarat, India
    Posts
    64
Quote Originally Posted by Cyril30
    What do you understand by crawling?
    Crawling is the discovery process in which search engines send out a team of robots (known as crawlers or spiders) to find new and updated content. Content can vary — it could be a webpage, an image, a video, a PDF, etc. — but regardless of the format, content is discovered by links.

Googlebot starts out by fetching a few web pages, and then follows the links on those pages to find new URLs. By hopping along this path of links, the crawler finds new content and adds it to its index, called Caffeine (a massive database of discovered URLs), to be retrieved later when a searcher is seeking information that the content at that URL is a good match for.

  6. #6
    Registered User
    Join Date
    Dec 2021
    Location
    Surat, Gujarat
    Posts
    41
Quote Originally Posted by Cyril30
    What do you understand by crawling?
In the context of SEO, ‘crawling’ refers to a robot automatically surfing the web, categorizing and analyzing each site it encounters.

There are two main types of such programs in SEO. The first is the crawlers used by the major search engines, such as Google, Bing or Yandex. The second is private commercial crawlers, used, for example, by SEO toolsets to build and maintain a fresh index of links and backlinks across the internet.

Crawling may sound pretty easy, but in practice, if you want to achieve a useful speed and maintain a current database of links, you’ll need a ton of equipment.
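Whichever type of crawler it is, a well-behaved one also checks robots.txt before fetching a page. A minimal sketch using Python’s standard library, with an invented robots.txt supplied inline rather than fetched from a live site:

```python
# Sketch of robots.txt politeness: a crawler asks whether it is allowed
# to fetch a URL before doing so. The rules below are made up.
from urllib.robotparser import RobotFileParser

robots_txt = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(robots_txt)

print(rp.can_fetch("MyCrawler", "https://example.com/blog/post"))  # True
print(rp.can_fetch("MyCrawler", "https://example.com/private/x"))  # False
```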

  7. #7
    Senior Member
    Join Date
    Aug 2021
    Location
    931 Clayton St San Francisco, CA 94117 United States
    Posts
    117
Quote Originally Posted by Cyril30
    What do you understand by crawling?
    Hello,

"In the SEO world, crawling means “following your links”, and indexing is the process of adding web pages to Google Search. ... Crawling is the process through which indexing happens: Google crawls through the web pages and indexes them."
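The crawl-then-index distinction can be made concrete with a toy example: the “crawl” step reads each page’s text, and the “index” step builds the searchable structure, here a tiny inverted index. The pages below are invented stand-ins for fetched content:

```python
# Toy crawl-vs-index demo: reading page text (crawling) versus building
# a word -> URLs inverted index (indexing). PAGES simulates fetched pages.
PAGES = {
    "example.com/cats": "cats are great pets",
    "example.com/dogs": "dogs are loyal pets",
}

def index_pages(pages):
    """Map each word to the set of URLs containing it."""
    inverted = {}
    for url, text in pages.items():      # "crawl": read each page's text
        for word in text.split():        # "index": record where words occur
            inverted.setdefault(word, set()).add(url)
    return inverted

index = index_pages(PAGES)
print(sorted(index["pets"]))  # both pages match a query for "pets"
```

A page that was crawled but never added to `inverted` would be invisible to queries, which is exactly the crawled-but-not-indexed case mentioned earlier in the thread.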
