
Robots Txt Crawl Delay

What is a robots.txt file?


A robots.txt file is one of the most common ways to inform a search engine where it can and cannot go on a website. It is simply a text file, written in a strict syntax, that search engine spiders (also known as robots) read and follow. The file can be used to tell search engines about the engagement guidelines for your website. Search engines check the robots.txt file on a regular basis for instructions on how to crawl the site; these instructions are called directives. If the robots.txt file is missing, the search engine will crawl the entire website.
Robots.txt is important for website SEO since it instructs search engines on how best to crawl the site. You can use this file to avoid duplicate content, keep search engines out of certain areas of your site, and direct them to crawl your site more efficiently. This post discusses the Crawl-delay directive in the robots.txt file and how to use it.
A robots.txt file informs search engines about the rules of engagement for your website. Sending the right signals to search engines is an important component of SEO, and the robots.txt file is one of the ways to express your crawling preferences to them.
The robots.txt standard underwent a number of changes in 2019: Google proposed a formalization of the Robots Exclusion Protocol and made their robots.txt parser open source.
TL;DR
Google's robots.txt parser is remarkably lenient and adaptable.
In the event of a conflict between directives, Google assumes sections should be restricted rather than left open.
Search engines check a website's robots.txt file regularly to determine whether it contains any instructions for crawling the site. These instructions are referred to as directives.
Search engines will crawl the entire website if there is no robots.txt file or if it contains no applicable directives.
Although all major search engines respect the robots.txt file, some search engines may choose to ignore (parts of) it. While robots.txt directives send a strong signal to search engines, it is vital to remember that the file is a set of optional directives for search engines rather than a mandate.
Why should you be concerned with robots.txt?
From an SEO standpoint, the robots.txt file is crucial. It instructs search engines on how to crawl your website most effectively.
You can use the robots.txt file to keep search engines out of particular sections of your website, avoid duplicate content issues, and give search engines useful guidance on how to crawl your site more efficiently.
However, be cautious when editing your robots.txt file: it has the ability to render large portions of your website unavailable to search engines.
Robots.txt is frequently overused to reduce duplicate content, which can strangle internal linking, so use it with caution. My advice is to only use it for files or pages that search engines should never see, or where letting crawlers in would have a substantial impact on crawling. Common examples include log-in areas that generate many URLs, test areas, and sections with multiple levels of faceted navigation. Also, keep an eye on your robots.txt file for any problems or changes.
The vast majority of problems I see with robots.txt files fall into one of three categories:
Incorrect use of wildcards. It is fairly common to see parts of the site blocked off that were never meant to be blocked. If you aren't careful, directives can also conflict with one another (see the sketch after this list).
Someone, such as a developer, has made an unintentional change to the robots.txt file (typically when releasing new code) without your knowledge.
Inclusion of directives in a robots.txt file that do not belong there. Robots.txt is a web standard with several limitations. I've seen a lot of developers make up directives that simply don't work (at least for the vast majority of crawlers). Sometimes that is harmless, but not always.
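To illustrate how easily wildcards go wrong, here is a hedged sketch; the paths are made up, and wildcard and $ support varies by crawler:
User-agent: *
# Blocks every URL that contains ".php" anywhere in its path, such as /report.php?id=1 and /notes.php.bak
Disallow: /*.php
# Blocks only URLs whose path ends exactly in ".php"
Disallow: /*.php$
Forgetting the trailing $ anchor is a common way to block far more than intended.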
What does a robots.txt file look like?
A simple robots.txt file for a WordPress website may look like this:
User-agent: *
Disallow: /wp-admin/
Based on the example above, let's go over the anatomy of a robots.txt file:
The user-agent specifies which search engines the directives that follow are intended for.
The symbol * denotes that the instructions are intended for use by all search engines.
Disallow: This directive tells the user-agent which content it may not access; /wp-admin/ is the path the user-agent is not allowed to crawl.
In a nutshell, the robots.txt file instructs all search engines to avoid the /wp-admin/ directory.
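To make the anatomy a little more concrete, here is a hedged sketch with two groups; the /test/ path is purely hypothetical:
User-agent: Googlebot
Disallow: /test/

User-agent: *
Disallow: /wp-admin/
Crawlers generally follow only the most specific user-agent group that matches them, so Googlebot would obey the first group and every other bot the second.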

Robots Txt Crawl Delay – What Is It?

Crawl-delay is an unofficial robots.txt directive that can be used to prevent servers from being overloaded with requests. Search engines such as Bing, Yahoo, and Yandex can become crawl-hungry at times, and they can be slowed down by employing this directive. Although different search engines interpret the directive in slightly different ways, the end result is generally the same.
The crawl delay is the amount of time a bot waits between two successive requests to a website; it determines how quickly the bot can crawl your pages. The crawl-delay directive instructs the bot to wait a certain number of seconds between requests.
Crawl-delay is a good way to keep bots from consuming a lot of hosting resources. However, caution is advised when using this directive in the robots.txt file. If you impose a delay of 10 seconds, the search engines that honor it can access at most 86,400 / 10 = 8,640 pages per day. That may sound like a lot for a small site, but it isn't for larger ones. If you don't get any traffic from those search engines, this strategy is an excellent way to conserve bandwidth.
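For illustration, a crawl-delay rule is just one extra line in the relevant user-agent group; the 10-second value is only an example:
User-agent: *
Crawl-delay: 10
Bots that honor the directive will wait roughly ten seconds between requests, while Googlebot, as covered below, simply ignores the line.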

How does Google interpret crawl-delay?

The crawl-delay directive is ignored by Google, so there is no need to worry about its impact on your Google rankings, and you can safely use it to deal with other aggressive search bots. Even though Googlebot crawling is unlikely to cause issues, you can still use Google Search Console to reduce the crawl rate for Google. Here's how to set the crawl rate for Googlebot in a few simple steps.
Go to Google Search Console (the old one) and sign in.
Choose the website for which you want to set the crawl-delay.
Choose ‘Site Settings' from the gear icon located in the upper right corner.
Look for the ‘Crawl rate' option, which has a slider for adjusting the crawl rate. By default, the rate is set to a recommended value.

Crawl-delay: 10 for Bing and Yahoo

Bing and Yahoo both support the crawl-delay directive; in the case of crawl-delay: 10, they'll divide a day into 10-second windows and crawl a maximum of one page in each window.
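As a hedged sketch, the delay can be applied only to those two crawlers; Bingbot and Slurp are the user-agent tokens for Bing's and Yahoo's crawlers:
User-agent: Bingbot
Crawl-delay: 10

User-agent: Slurp
Crawl-delay: 10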

Yandex and crawl-delay

Yandex recognizes the crawl-delay directive, and if crawl-delay: 10 is used, they will wait at least 10 seconds before requesting another URL.
Although Yandex supports this directive, they recommend that you use Yandex Webmaster, their own equivalent of Google Search Console, where you can adjust the crawl rate.

Baidu and crawl-delay

Because Baidu does not support the crawl-delay directive, it will disregard it, much like Google. You can set your preferred crawl frequency in Baidu Webmaster Tools.

Why Do We Use Crawl-Delay?

If your website has a significant number of pages, and many of them are linked from the index, the bot crawling the site may make too many requests in a short amount of time. The resulting traffic spike is likely to exhaust hosting resources on an hourly basis. If your website has this issue, one solution is to set a crawl delay of 1-2 seconds so that the search bot crawls the site at a moderate rate and avoids traffic spikes. Search engines such as Yahoo, Yandex, and Bing support the crawl-delay directive and can be held back for a while with it. Setting a crawl delay of 10 seconds means that, after crawling the website once, these search engines will wait ten seconds before accessing it again.
Every time a search bot crawls the site, it consumes a significant amount of bandwidth and other server resources. Crawlers can quickly exhaust the resources of websites with a large number of pages and heavy content, such as e-commerce sites. To preserve server resources for your visitors, you can use the robots.txt file to keep bots away from images and scripts.
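As a rough sketch, assuming images and scripts live under /images/ and /scripts/ (the directory names are hypothetical and depend on your site):
User-agent: *
Disallow: /images/
Disallow: /scripts/
Bear in mind that blocking scripts and other rendering resources can affect how search engines render your pages, so apply this selectively.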
Crawl-Delay Rule Ignored By Googlebot
Search engines such as Bing, Yahoo, and Yandex introduced the crawl-delay directive for robots.txt files, and they still respond to it. The goal was to let webmasters specify how long a search engine crawler should wait between individual requests in order to reduce server load. Although this is a sensible concept, Google does not support the crawl-delay rule: its crawling is dynamic, and keeping a fixed time frame between requests makes no sense for it. Because most servers can handle so many requests per second, a between-requests value stated in seconds is no longer very useful.
Rather than following a crawl-delay rule, Google adjusts its crawling based on the server's responses: if server errors or slowdowns are detected, crawling is slowed down. Webmasters can still designate which parts of their websites they don't want crawled in the robots.txt file.
The robots.txt file is a useful tool for controlling how crawlers access your website. Setting this file up properly can benefit both the user experience for visitors and the site's SEO. If you let bots spend their time crawling the most relevant content, they will be able to arrange and display your content in the SERPs the way you want it to be displayed. Crawl-delay is a handy directive for reining in aggressive search engine bots and conserving server resources for your site and your users.