FacebookExternalHit

FacebookExternalHit is Facebook’s (Meta’s) crawler used to fetch webpage content for link previews across Facebook, Messenger, Instagram, and other Meta surfaces. It retrieves metadata such as Open Graph tags, titles, descriptions, images, and structured data. These requests are user-triggered: they occur when someone shares or pastes a URL on a Meta platform. The bot does not index or rank websites and has no connection to search algorithms, so blocking it may prevent accurate link previews.

Crawl activity is lightweight, fetching just enough content to generate rich social previews. Note that this crawler ignores the global wildcard (User-agent: *) rule in robots.txt, so it must be addressed by name. RobotSense.io verifies FacebookExternalHit using Meta’s official validation methods, ensuring only genuine FacebookExternalHit traffic is identified.
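Meta documents that its crawler traffic comes from IP ranges announced under its autonomous system, AS32934, which can be listed with: whois -h whois.radb.net -- '-i origin AS32934'. A minimal membership check against such a range list might look like the sketch below; the CIDR blocks shown are RFC 5737 documentation addresses standing in for real Meta blocks, not actual Meta ranges.

```python
import ipaddress

def ip_in_ranges(ip: str, cidr_ranges: list[str]) -> bool:
    """Return True if `ip` falls inside any of the given CIDR blocks.

    In production, populate `cidr_ranges` with the blocks returned by
    `whois -h whois.radb.net -- '-i origin AS32934'` and refresh them
    periodically, since the announced ranges change over time.
    """
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in cidr_ranges)

# Stand-in ranges from RFC 5737 documentation space, NOT real Meta blocks:
example_ranges = ["203.0.113.0/24", "198.51.100.0/24"]
print(ip_in_ranges("203.0.113.7", example_ranges))   # True
print(ip_in_ranges("192.0.2.1", example_ranges))     # False
```

Checking the IP range is the reliable half of verification; the User-Agent string alone can be trivially spoofed.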

This bot does not honor the Crawl-delay rule.

User Agent Examples

Contains: facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)

Contains: facebookexternalhit/1.1

Contains: facebookcatalog/1.0
Example user agent strings for FacebookExternalHit
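Since the tokens above appear as substrings anywhere in the header, a log filter can use a simple case-insensitive containment test. This is a sketch (the function name is illustrative), and because User-Agent strings are spoofable it should be combined with the IP verification described earlier:

```python
def matches_facebook_crawler(user_agent: str) -> bool:
    """Case-insensitive substring check against the tokens listed above."""
    ua = user_agent.lower()
    return "facebookexternalhit" in ua or "facebookcatalog" in ua

print(matches_facebook_crawler(
    "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)"))  # True
print(matches_facebook_crawler("facebookcatalog/1.0"))  # True
print(matches_facebook_crawler("Googlebot/2.1"))        # False
```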

Robots.txt Configuration for FacebookExternalHit

Robots.txt User-agent: FacebookExternalHit

Use this identifier in your robots.txt User-agent directive to target FacebookExternalHit.

Recommended Configuration

Our recommended robots.txt configuration for FacebookExternalHit:

User-agent: FacebookExternalHit
Allow: /

Completely Block FacebookExternalHit

Prevent this bot from crawling your entire site:

User-agent: FacebookExternalHit
Disallow: /

Completely Allow FacebookExternalHit

Allow this bot to crawl your entire site:

User-agent: FacebookExternalHit
Allow: /

Block Specific Paths

Block this bot from specific directories or pages:

User-agent: FacebookExternalHit
Disallow: /private/
Disallow: /admin/
Disallow: /api/
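You can sanity-check a configuration like this before deploying it, for example with Python's standard-library robots.txt parser:

```python
import urllib.robotparser

# The "Block Specific Paths" configuration from above.
robots_txt = """\
User-agent: FacebookExternalHit
Disallow: /private/
Disallow: /admin/
Disallow: /api/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("facebookexternalhit/1.1", "/private/page.html"))  # False
print(rp.can_fetch("facebookexternalhit/1.1", "/blog/post"))          # True
```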

Allow Only Specific Paths

Block everything but allow specific directories:

User-agent: FacebookExternalHit
Disallow: /
Allow: /public/
Allow: /blog/
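Be aware that mixing Disallow: / with Allow lines relies on rule precedence. Under RFC 9309 (the Robots Exclusion Protocol), the most specific, i.e. longest, matching rule wins, and Allow wins ties; some parsers (including Python's urllib.robotparser) instead apply rules in file order, which can give different answers for this config. A minimal sketch of the longest-match evaluation, with illustrative function and variable names:

```python
def allowed(path: str, rules: list[tuple[str, str]]) -> bool:
    """Longest-match evaluation per RFC 9309: the most specific
    matching rule wins; on a length tie, Allow beats Disallow."""
    matches = [(len(prefix), kind == "allow")
               for kind, prefix in rules if path.startswith(prefix)]
    if not matches:
        return True  # no rule matches: crawling is allowed by default
    # max() picks the longest prefix; at equal length, True ("allow")
    # sorts above False ("disallow"), so Allow wins ties.
    return max(matches)[1]

# The "Allow Only Specific Paths" configuration from above.
rules = [("disallow", "/"), ("allow", "/public/"), ("allow", "/blog/")]
print(allowed("/public/page.html", rules))  # True
print(allowed("/checkout", rules))          # False
```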

Set Crawl Delay

Limit how frequently FacebookExternalHit can request pages (in seconds):

User-agent: FacebookExternalHit
Allow: /
Crawl-delay: 10
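Even though FacebookExternalHit is not documented to honor Crawl-delay, you can confirm how the directive parses, again using Python's standard-library parser as one example:

```python
import urllib.robotparser

# The "Set Crawl Delay" configuration from above.
robots_txt = """\
User-agent: FacebookExternalHit
Allow: /
Crawl-delay: 10
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())
print(rp.crawl_delay("facebookexternalhit/1.1"))  # 10
```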

Note: Meta does not officially document FacebookExternalHit as honoring the Crawl-delay directive.