
Meta-ExternalFetcher


Verify Meta-ExternalFetcher IP Address

Verify whether an IP address truly belongs to Meta / Facebook using official verification methods. For the most accurate result, check both the IP address and the User-Agent string from your server logs.
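One commonly documented approach for Meta crawlers is to check the source IP against the ranges announced by Meta's autonomous system (AS32934), which can be listed with a whois query such as: whois -h whois.radb.net -- '-i origin AS32934'. Below is a minimal Python sketch of that check, assuming you have already obtained such a list; the CIDR shown is a documentation placeholder (TEST-NET-3), not a real Meta range.

import ipaddress

# Illustrative placeholder: replace with the CIDRs returned by the
# whois query against AS32934. TEST-NET-3 is used here so the demo
# runs without hardcoding real Meta ranges.
META_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_meta_ip(client_ip):
    """Return True if client_ip falls inside one of the configured ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in META_RANGES)

if __name__ == "__main__":
    print(is_meta_ip("203.0.113.7"))   # True with the placeholder range
    print(is_meta_ip("198.51.100.1"))  # False

Because IP ranges change over time, refresh the list periodically rather than hardcoding it.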

Meta-ExternalFetcher is a Meta crawler that retrieves webpage content to support link previews, metadata extraction, and other external content processing across Facebook, Instagram, and related Meta products. It fetches the titles, descriptions, images, and structured data needed to render shared links or enrich user interactions. These requests are typically user-driven but may also support automated metadata refreshes. Crawl volume is lightweight and focused, targeting only the URLs needed for previews or content enrichment within Meta's ecosystem.

Note that this bot ignores the global (User-agent: *) rule in robots.txt, so rules must target it by name, as in the configuration examples below. RobotSense.io verifies Meta-ExternalFetcher using Meta's official validation methods, ensuring only genuine Meta-ExternalFetcher traffic is identified.

This bot does not honor the Crawl-delay rule.

User Agent Examples

Example user agent strings for Meta-ExternalFetcher:

meta-externalfetcher/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)
meta-externalfetcher/1.1
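If you want to flag these requests in your own logs, a minimal Python sketch like the following can serve as a first filter. The function name is hypothetical; note that a User-Agent header is trivially spoofed, so always confirm the source IP as sketched above.

def looks_like_meta_externalfetcher(user_agent):
    """Return True if the logged User-Agent contains the bot's token."""
    return "meta-externalfetcher/" in user_agent.lower()

examples = [
    "meta-externalfetcher/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)",
    "meta-externalfetcher/1.1",
    "Mozilla/5.0 (compatible; SomeOtherBot/2.0)",
]
for ua in examples:
    print(looks_like_meta_externalfetcher(ua), ua)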

Robots.txt Configuration for Meta-ExternalFetcher

Robots.txt User-agent: Meta-ExternalFetcher

Use this identifier in your robots.txt User-agent directive to target Meta-ExternalFetcher.

Recommended Configuration

Our recommended robots.txt configuration for Meta-ExternalFetcher:

User-agent: Meta-ExternalFetcher
Allow: /

Completely Block Meta-ExternalFetcher

Prevent this bot from crawling your entire site:

User-agent: Meta-ExternalFetcher
Disallow: /

Completely Allow Meta-ExternalFetcher

Allow this bot to crawl your entire site:

User-agent: Meta-ExternalFetcher
Allow: /

Block Specific Paths

Block this bot from specific directories or pages:

User-agent: Meta-ExternalFetcher
Disallow: /private/
Disallow: /admin/
Disallow: /api/

Allow Only Specific Paths

Block everything but allow specific directories:

User-agent: Meta-ExternalFetcher
Disallow: /
Allow: /public/
Allow: /blog/

Set Crawl Delay

Limit how frequently Meta-ExternalFetcher can request pages by setting a minimum delay between requests (in seconds):

User-agent: Meta-ExternalFetcher
Allow: /
Crawl-delay: 10

Note: Meta does not officially document support for the Crawl-delay directive, and this bot is not known to honor it, so the rule above may have no effect.
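Since Crawl-delay is not known to be honored, any request spacing has to be enforced server-side. Below is a minimal, hypothetical Python sketch of a per-IP throttle mirroring a 10-second delay; in production you would typically use your web server's or framework's built-in rate limiting instead.

import time

MIN_INTERVAL = 10.0  # seconds between requests per IP, mirroring Crawl-delay: 10

_last_seen = {}  # client IP -> timestamp of last allowed request

def allow_request(client_ip, now=None):
    """Return True if this IP has waited at least MIN_INTERVAL seconds."""
    now = time.monotonic() if now is None else now
    last = _last_seen.get(client_ip)
    if last is None or now - last >= MIN_INTERVAL:
        _last_seen[client_ip] = now
        return True
    return False

if __name__ == "__main__":
    ip = "203.0.113.7"  # placeholder address
    print(allow_request(ip, now=0.0))   # True: first request
    print(allow_request(ip, now=5.0))   # False: only 5 s elapsed
    print(allow_request(ip, now=12.0))  # True: 12 s elapsed

Requests that fail the check would normally receive an HTTP 429 (Too Many Requests) response.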