I have an ASCII art Easter egg like this in an SEO product I made. :)
https://www.checkbot.io/robots.txt
I should probably add this SEO tip too because the purpose of robots.txt is confusing: If you want to remove/deindex a page from Google search, you counterintuitively need to allow the page to be crawled in the robots.txt file, and then add a noindex response header or noindex meta tag to the page. This way the crawler gets to see the noindex instruction. Robots.txt controls which pages can be crawled, not which pages can be indexed.
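To make that concrete, here's a minimal sketch in Python (the /old-page path and port are made up): the page stays crawlable in robots.txt, and the noindex instruction rides along as a meta tag and an X-Robots-Tag response header, either of which is enough on its own.

```python
# Minimal sketch: keep the page crawlable but tell crawlers not to index it.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if self.path == "/robots.txt":
            # Crawling stays allowed; otherwise the crawler never sees the noindex.
            body = b"User-agent: *\nAllow: /old-page\n"
            self.send_header("Content-Type", "text/plain")
        else:
            # Meta tag and header shown together here purely for illustration.
            body = (b'<html><head><meta name="robots" content="noindex"></head>'
                    b"<body>This page will drop out of the index.</body></html>")
            self.send_header("Content-Type", "text/html")
            self.send_header("X-Robots-Tag", "noindex")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```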
The consequences of robots.txt misuse can also be disastrous for a regular site. For example, I've seen instances where multiple 'page indexed but blocked by robots.txt' warnings have led to sites being severely down-ranked.
My assumption is that search engines don't want to list too many pages that everyone else can read but they cannot.
That’s a funny one!
Does anyone know of others like that?
Here is mine: https://FreeSolitaire.win/robots.txt
Google used to have a /killer-robots.txt which forbade the T-1000 and T-800 from accessing Larry Page and Sergey Brin, but they took that down at some point.
https://web.archive.org/web/20160530160330/https://www.googl...
Stripe has a humans.txt: https://stripe.com/humans.txt
One nice thing about CF's robots.txt is its inclusion of a sitemap:
https://www.cloudflare.com/sitemap.xml
which contains links to educational materials like
https://www.cloudflare.com/learning/ddos/layer-3-ddos-attack...
Potentially interesting to see their flattened IA....
Little-known fact: a syndication feed (RSS or Atom) can be used as a sitemap.
Quoting https://www.sitemaps.org/protocol.html#otherformats:
> The Sitemap protocol enables you to provide details about your pages to search engines, […] in addition to the XML protocol, we support RSS feeds and text files, which provide more limited information.
> You can provide an RSS (Real Simple Syndication) 2.0 or Atom 0.3 or 1.0 feed. Generally, you would use this format only if your site already has a syndication feed.
This is what happens if your robot isn't nice
That's not from robots.txt, but their Bot Management feature which blocks things calling themselves Googlebot that don't come from known Google IPs.
Are GCP IPs considered Google IPs?
For reference https://developers.google.com/search/docs/crawling-indexing/...
No.
No, I am very sure they are not.
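For what it's worth, the Google doc linked above describes verifying real Googlebot traffic with a reverse-then-forward DNS check rather than a fixed IP list. A rough sketch (the sample address is illustrative):

```python
import socket

def is_real_googlebot(ip: str) -> bool:
    """Reverse-then-forward DNS check, along the lines Google documents."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)              # reverse lookup
    except socket.herror:
        return False
    # The PTR name must belong to Google's crawler domains...
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        # ...and must resolve back to the same IP (forward confirmation).
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.gaierror:
        return False

print(is_real_googlebot("66.249.66.1"))  # sample address, replace with the client IP
```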
What does “OUR TREE IS A REDWOOD” refer to? A quick search doesn’t yield any definite results.
California’s state tree is the redwood, and that’s where their HQ is.
Right, that makes sense. But why would you mention your state’s tree anywhere, and why specifically in your robots.txt? Seems pretty random.
State pride I suppose.
Have you seen a redwood? They can create quite the impression amongst people.
The tree shape is fairly inaccurate though.
That’s cool, if any scrapers still respect the robots.txt, that is.
Think of robots.txt as less of a no trespassing sign and more of a, "You can visit but here are the rules to follow if you don't want to get shot" sign.
If you do not respect the sign I shall be very cross with you. Very cross indeed. Perhaps I shall have to glare at you, yes, very hard. I think I shall glare at you. Perhaps if you are truly irritating I shall be forced to remove you from the premises for a bit.
There's a lot of talk of deregulation in the air; maybe we'll see Gibson-esque Black Ice, where rude crawlers provoke an automated DoS. A new Wild West.
They may or may not, though respecting robots.txt is a nice way of not having your IP range end up on blacklists. With Cloudflare in particular, that can be a bit of a pain.
They're pretty nice to deal with if you're upfront about what you are doing and clearly identify your bot, as well as register it with their bot detection. There's a form floating around somewhere for that.
FWIW, that’s why I’m working on a platform[1] to help devs deploy polite crawlers and scrapers out of the box that respect robots.txt (and 429s, Retry-After response headers, etc). It also happens to be entirely built on Cloudflare.
[1] https://crawlspace.dev
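For anyone wondering what "polite" amounts to in practice, here's a rough sketch of a fetch loop (the bot name is hypothetical, and a real crawler would add per-host rate limiting on top): check robots.txt before each fetch, honour any Crawl-delay, and back off on 429 using Retry-After.

```python
import time
import urllib.request
import urllib.robotparser
from urllib.error import HTTPError
from urllib.parse import urlsplit

USER_AGENT = "ExampleBot/0.1 (+https://example.com/bot)"  # hypothetical bot identity

def polite_get(url: str) -> bytes:
    # Ask robots.txt first.
    parts = urlsplit(url)
    rp = urllib.robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    if not rp.can_fetch(USER_AGENT, url):
        raise PermissionError(f"robots.txt disallows fetching {url}")

    # Respect Crawl-delay if the site sets one.
    delay = rp.crawl_delay(USER_AGENT)
    if delay:
        time.sleep(delay)

    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    while True:
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.read()
        except HTTPError as exc:
            if exc.code == 429:  # rate limited: back off for Retry-After seconds
                retry_after = exc.headers.get("Retry-After", "")
                time.sleep(int(retry_after) if retry_after.isdigit() else 30)
                continue
            raise
```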
I was surprised any ever did, honestly
What's the purpose of "User-Agent: DemandbaseWebsitePreview/0.1"? I couldn't find anything about that agent, but I assume it's somehow related to demandbase.com?
But why are it and Twitter the only whitelisted entries? Google and Bing being missing is a bit surprising, but I assume they're whitelisted through a different mechanism (like a Google webmaster account)?
It is one of the services they use. Per the cookie policy page [1]:
> DemandBase - Enables us to identify companies who intend to purchase our products and solutions and deliver more relevant messages and offers to our Website visitors.
[1]: https://www.cloudflare.com/en-in/cookie-policy/
My guess is that the Twitter one is for previews when you link to a page on Twitter.
If those robots could read, they'd be very upset.
Has anyone worked on anything like this for AI scrapers?
https://github.com/ai-robots-txt/ai.robots.txt/blob/main/rob...
https://llmstxt.org/ https://www.answer.ai/posts/2024-09-03-llmstxt.html
A robots.txt that asks AI scrapers not to scrape?
There are a couple of services that keep updated lists of known scraper user agents. A quick search reveals a handful.
Easy guess that the length breaks some legacy stuff.
But every robots.txt should have an auto-ban trap line, i.e. "crawl it and die": basically a script that puts the requesting IP into the firewall.
Of course it's possible to abuse that, so it has to be monitored.
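Roughly this kind of thing, as a sketch (the /trap/ path, port, and nftables set name are all made up, and as noted it needs monitoring so you don't ban legitimate visitors):

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

TRAP_PREFIX = "/trap/"   # robots.txt would carry "Disallow: /trap/"

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        if self.path.startswith(TRAP_PREFIX):
            # Anything requesting this path ignored robots.txt: drop it at the firewall.
            # Assumes an existing nftables set "inet filter banned" that the ruleset blocks.
            subprocess.run(["nft", "add", "element", "inet", "filter", "banned",
                            f"{{ {ip} }}"], check=False)
            self.send_response(403)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), TrapHandler).serve_forever()
```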
I thought about doing something like that, but then I realised: what if someone linked to the trap URL from another site and a crawler followed that link to the trap?
You might end up penalising Googlebot or Bingbot.
If anyone knew what that trap URL did, and felt malicious, this could happen.
How do you discern a crawler agent from a human? Is it as simple as the fact that they might cover something like 80%+ of the site in one visit fairly quickly?
Crawlers/archivers will be hitting your site much faster than a human user.
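A crude version of that heuristic, with made-up thresholds: count requests per client over a short window and flag anything moving faster than a human plausibly would.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_HITS_PER_WINDOW = 30   # arbitrary: a human rarely opens this many pages a minute

_hits: dict[str, deque] = defaultdict(deque)

def looks_like_a_crawler(client_ip: str) -> bool:
    """Record one request and report whether this client is moving too fast."""
    now = time.time()
    window = _hits[client_ip]
    window.append(now)
    # Drop anything outside the sliding window before counting.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_HITS_PER_WINDOW
```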
Cute how they hashtag out so many lines thinking that the robots will ignore them. AI tools see past such tricks and no doubt have logged Cloudflare's use of anti-machine ASCII art. When humanity is put on trial, the AI jury will see this.
https://en.wikipedia.org/wiki/Roko%27s_basilisk ???