Meta-ExternalAgent

Verify Meta-ExternalAgent IP Address

Verify whether an IP address truly belongs to Meta / Facebook using Meta's official verification methods. For the most accurate result, enter both the IP address and the User-Agent string from your server logs.
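
Meta's crawler documentation lists the IP ranges its crawlers use under ASN AS32934, retrievable with a whois query against RADb (whois -h whois.radb.net -- '-i origin AS32934'). Below is a minimal Python sketch of that check; it assumes the whois command-line client is installed, and the function names are illustrative:

import ipaddress
import subprocess

def fetch_meta_ip_ranges():
    # Query RADb for routes announced by Meta's ASN (AS32934); this is
    # equivalent to: whois -h whois.radb.net -- '-i origin AS32934'
    out = subprocess.run(
        ["whois", "-h", "whois.radb.net", "--", "-i origin AS32934"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Keep the CIDR from each "route:" (IPv4) and "route6:" (IPv6) line.
    return [
        ipaddress.ip_network(line.split()[1])
        for line in out.splitlines()
        if line.startswith(("route:", "route6:"))
    ]

def is_meta_ip(ip, ranges):
    # True if the address falls inside any of Meta's announced ranges.
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)

An IP match alone is not conclusive proof of Meta-ExternalAgent specifically, since Meta's crawlers share these ranges; combine it with the User-Agent check described below.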

Meta-ExternalAgent is a Meta crawler that fetches webpage content for AI, integrity, and content-understanding systems operating outside the classic social-preview and ads workflows. It performs broader content retrieval to support tasks such as classification, safety analysis, and model training. This traffic is not user-triggered and is separate from Meta's ad-review and link-preview bots.

Crawl activity is moderate and targeted toward pages relevant to Meta's internal systems. The bot does not affect search rankings, since Meta operates no public web search engine. It ignores the global user agent (*) rule, so robots.txt directives must address it by name, as in the examples below.

RobotSense.io verifies Meta-ExternalAgent using Meta's official validation methods, ensuring that only genuine Meta-ExternalAgent traffic is identified.

This bot does not honor the Crawl-delay rule.

User Agent Examples

Contains: meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)

Contains: meta-externalagent/1.1

These are example User-Agent strings for Meta-ExternalAgent; "Contains" means a matching User-Agent header includes the given substring.
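
Since the patterns above are substring matches, a log filter reduces to a containment check. A minimal Python sketch (the function name is illustrative):

def looks_like_meta_externalagent(user_agent):
    # The documented token is "meta-externalagent"; lowercase the header
    # to be tolerant of casing variations.
    return "meta-externalagent" in user_agent.lower()

ua = "meta-externalagent/1.1 (+https://developers.facebook.com/docs/sharing/webmasters/crawler)"
assert looks_like_meta_externalagent(ua)

Remember that User-Agent headers are trivially spoofed; pair this check with the IP verification above before trusting the traffic.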

Robots.txt Configuration for Meta-ExternalAgent

Robots.txt User-agent: Meta-ExternalAgent

Use this identifier in your robots.txt User-agent directive to target Meta-ExternalAgent.

Recommended Configuration

Our recommended robots.txt configuration for Meta-ExternalAgent:

User-agent: Meta-ExternalAgent
Allow: /

Completely Block Meta-ExternalAgent

Prevent this bot from crawling your entire site:

User-agent: Meta-ExternalAgent
Disallow: /

Completely Allow Meta-ExternalAgent

Allow this bot to crawl your entire site:

User-agent: Meta-ExternalAgent
Allow: /

Block Specific Paths

Block this bot from specific directories or pages:

User-agent: Meta-ExternalAgent
Disallow: /private/
Disallow: /admin/
Disallow: /api/
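
Before deploying rules like these, you can sanity-check them with Python's standard robots.txt parser; https://example.com is a placeholder for your own site:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: Meta-ExternalAgent",
    "Disallow: /private/",
    "Disallow: /admin/",
    "Disallow: /api/",
])
print(rp.can_fetch("Meta-ExternalAgent", "https://example.com/private/x"))  # False
print(rp.can_fetch("Meta-ExternalAgent", "https://example.com/blog/post"))  # True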

Allow Only Specific Paths

Block everything but allow specific directories:

User-agent: Meta-ExternalAgent
Disallow: /
Allow: /public/
Allow: /blog/
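
This allow-list pattern relies on rule precedence: under RFC 9309, the longest (most specific) matching rule wins, and on a tie Allow is preferred, so Allow: /public/ overrides the shorter Disallow: / for paths under /public/. A minimal Python sketch of that precedence logic (not Meta's implementation):

def is_allowed(path, rules):
    # rules is a list of ("allow" | "disallow", path_prefix) pairs.
    best_kind, best_prefix = "allow", ""  # no match means allowed
    for kind, prefix in rules:
        if path.startswith(prefix):
            if len(prefix) > len(best_prefix) or (
                len(prefix) == len(best_prefix) and kind == "allow"
            ):
                best_kind, best_prefix = kind, prefix
    return best_kind == "allow"

rules = [("disallow", "/"), ("allow", "/public/"), ("allow", "/blog/")]
print(is_allowed("/public/page.html", rules))  # True: /public/ beats /
print(is_allowed("/checkout", rules))          # False: only / matches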

Set Crawl Delay

Limit how frequently Meta-ExternalAgent can request pages by setting a delay in seconds:

User-agent: Meta-ExternalAgent
Allow: /
Crawl-delay: 10

Note: Meta does not officially document support for the Crawl-delay directive, so Meta-ExternalAgent may ignore this setting.
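
Because Crawl-delay support is undocumented here, server-side throttling is the dependable fallback. A minimal sketch, assuming you key on the client IP and answer HTTP 429 when requests arrive too quickly; MIN_INTERVAL and the function name are illustrative:

import time
from collections import defaultdict

MIN_INTERVAL = 10.0          # seconds allowed between requests per client
last_seen = defaultdict(float)

def should_throttle(client_ip):
    # Return True when the caller should respond with HTTP 429.
    now = time.monotonic()
    if now - last_seen[client_ip] < MIN_INTERVAL:
        return True
    last_seen[client_ip] = now
    return False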