
Feedfetcher

Operated by Google

Verify Feedfetcher IP Address

Check whether an IP address truly belongs to Google using official verification methods. Enter both the IP address and User-Agent from your logs for the most accurate bot verification.

Feedfetcher is Google’s crawler responsible for retrieving RSS and Atom feeds used in Google News, Google Reader (historically), and other syndication-based services. It fetches feed URLs rather than full webpages. The bot does not index content for Google Search and does not follow links within feeds; its role is purely to collect updates for subscribed users or Google systems that aggregate feed content. Most publishers allow it to ensure timely distribution of updates.

Crawl activity is periodic and lightweight, triggered when feed subscribers or internal services request refreshes. Because it fetches feeds on behalf of users and Google services rather than crawling the open web, it ignores robots.txt rules. RobotSense.io verifies Feedfetcher using Google’s official validation methods, ensuring only genuine Feedfetcher traffic is identified.

This bot does not honor the Crawl-Delay rule.

User Agent Examples

FeedFetcher-Google (+http://www.google.com/feedfetcher.html)
Example user agent strings for Feedfetcher
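For log analysis, a minimal check for this user-agent string might look like the following Python sketch. Note that a match only suggests Feedfetcher: user agents are trivially spoofed, so this should only be a first filter before IP verification.

```python
import re

# Matches the Feedfetcher user-agent string shown above.
# A UA match alone is NOT proof the request came from Google.
FEEDFETCHER_UA = re.compile(r"FeedFetcher-Google", re.IGNORECASE)

def looks_like_feedfetcher(user_agent: str) -> bool:
    """Return True if the user-agent string claims to be Feedfetcher."""
    return bool(FEEDFETCHER_UA.search(user_agent))
```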

Robots.txt Configuration for Feedfetcher

No Robots.txt Identifier

Feedfetcher does not have a unique robots.txt User-Agent identifier, which means this bot cannot be specifically targeted in your robots.txt file.

Looking to detect or manage this bot? RobotSense.io provides real-time bot detection and management beyond robots.txt, helping you identify and control bots that cannot be blocked through traditional means.

Frequently Asked Questions

What is FeedFetcher-Google, and why is it visiting my website?
FeedFetcher-Google is a crawler operated by Google that retrieves RSS and Atom feeds for syndication services such as Google News. Its primary purpose is to fetch feed data so updates can be delivered to subscribers or aggregated by Google systems. Visits are triggered by feed refresh requests, and crawl behavior is limited to specific feed URLs rather than full website crawling. For sites that publish feeds, this bot traffic is expected, and visits from FeedFetcher-Google are benign.
Is FeedFetcher-Google a legitimate bot, or is it commonly spoofed?
FeedFetcher-Google is an official Google bot and is considered legitimate. However, like other widely recognized crawlers, its user-agent can be spoofed by malicious actors attempting to bypass restrictions or disguise scraping activity. Attackers may imitate it because feed endpoints are often accessible without strict controls. User-Agent strings alone cannot reliably verify authenticity. You can use Google's recommended methods mentioned below to verify a legitimate visit, or use the RobotSense.io API to easily verify FeedFetcher-Google bot visits.
How can I verify that a request is really coming from FeedFetcher-Google?
You can use Google's recommended official methods to verify FeedFetcher-Google bot visits; these include:
- IP range checks against Google's published IP ranges
- Reverse DNS → forward DNS verification

Do not rely on User-Agent based detection, as it can be easily spoofed. Alternatively, you can use the RobotSense.io API to easily verify FeedFetcher-Google and all other bots from Google.
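The reverse-then-forward DNS check can be sketched in Python using only the standard library. This is an illustrative sketch, not an official Google tool; the accepted domain suffixes below follow Google's published guidance for its crawlers.

```python
import socket

# Domains Google documents for its crawler hostnames (assumed list).
GOOGLE_DOMAINS = (".google.com", ".googlebot.com", ".googleusercontent.com")

def is_google_host(host: str) -> bool:
    """Check that a hostname falls under one of Google's crawler domains."""
    return host.endswith(GOOGLE_DOMAINS)

def verify_google_ip(ip: str) -> bool:
    """Reverse DNS -> forward DNS verification of a claimed Google IP."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)      # 1. reverse DNS lookup
    except OSError:
        return False
    if not is_google_host(host):                   # 2. must be a Google hostname
        return False
    try:
        forward_ips = socket.gethostbyname_ex(host)[2]  # 3. forward DNS lookup
    except OSError:
        return False
    return ip in forward_ips                       # 4. must map back to the same IP
```

Both lookups must succeed and agree: a spoofer can fake the reverse record for an IP they control, but cannot make Google's forward DNS point back at it.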
Should I allow or block FeedFetcher-Google on my website?
Allowing FeedFetcher-Google is generally recommended if you publish RSS or Atom feeds, as it enables timely content distribution through Google services. The crawler is lightweight and limited in scope. Blocking may be appropriate if:
- You do not use feed-based distribution
- You want to restrict automated access to feed content
- Your feeds contain sensitive or restricted data
- You are limiting non-essential bot traffic

If you are suddenly seeing too many visits, consider rate limiting at the server or WAF level before blocking entirely; note that this bot does not honor the Crawl-Delay rule.
How can I control or block FeedFetcher-Google using robots.txt or other methods?
You cannot add a rule in your robots.txt to control FeedFetcher-Google, as this crawler has no specific robots.txt user-agent. However, you can use controls in your WAF, or in RobotSense enforcement settings, to manage the bot's behavior.
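As one hypothetical server-side control (not a RobotSense.io or Google API), a WSGI middleware could reject requests that claim to be FeedFetcher-Google but fail IP verification. `verify_ip` here is a placeholder for whatever DNS- or range-based check you use.

```python
# Hypothetical WSGI middleware sketch: block spoofed Feedfetcher traffic.
def feedfetcher_gate(app, verify_ip):
    """Wrap a WSGI app; 403 requests with a Feedfetcher UA and an unverified IP."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        ip = environ.get("REMOTE_ADDR", "")
        if "FeedFetcher-Google" in ua and not verify_ip(ip):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)  # pass genuine traffic through
    return middleware
```

Because the gate only runs on requests that already claim the Feedfetcher user-agent, ordinary visitors never pay the cost of a DNS verification.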
How often does FeedFetcher-Google crawl websites, and can it impact server performance?
FeedFetcher-Google uses periodic crawling based on feed update intervals and subscriber demand. It focuses only on feed URLs, resulting in lightweight and predictable request patterns. Any impact is typically minimal:
- Bandwidth usage: low
- Request rates: periodic
- Dynamic load: minimal unless feeds are generated dynamically

Most websites will not experience noticeable performance impact, though some administrators choose to rate-limit or restrict it.
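If you do choose to rate-limit it, a minimal per-IP sliding-window limiter can be sketched as follows. The limits shown are illustrative assumptions, not Google guidance.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding-window limiter: allow at most max_hits requests per window seconds per IP."""

    def __init__(self, max_hits=10, window=60.0):
        self.max_hits = max_hits
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] > self.window:  # drop timestamps outside the window
            q.popleft()
        if len(q) >= self.max_hits:            # window full: reject
            return False
        q.append(now)                          # record this request and accept
        return True
```

In practice this logic usually lives in the web server or WAF rather than application code, but the sliding-window idea is the same.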
What happens if I block FeedFetcher-Google? SEO, visibility, and feature impact explained.
Blocking FeedFetcher-Google does not affect search rankings, but it may limit feed distribution through Google services. Potential effects include:
- Delayed or missing updates in Google News or other feed-based products
- Reduced visibility for users relying on feed subscriptions

Because Feedfetcher does not index content for Google Search, blocking it has no direct impact on search engine visibility.
Does FeedFetcher-Google collect, scrape, or use my content for training or reuse?
FeedFetcher-Google retrieves content directly from RSS or Atom feeds, including titles, summaries, and links. It does not crawl full pages or follow links beyond the feed. Usage typically includes:
- Feed aggregation and update delivery
- Content syndication in Google services
- Metadata extraction from feeds

It does not store full website content beyond what is present in feeds, and there is no documented use of this crawler for AI training or general data reuse.