Amazonbot
Verify Amazonbot IP Address
Verify whether an IP address truly belongs to Amazonbot using Amazon's official verification methods. Enter both the IP address and the User-Agent from your logs for the most accurate bot verification.
[Note: Amazonbot can take up to 30 days to pick up changes to your robots.txt.] Amazonbot is Amazon's official web crawler, used to discover and fetch webpage content for applications such as Alexa, product-related features, and Amazon's AI and search systems. Crawl activity varies with the Amazon services that rely on external web content, but it is generally moderate and focused on structured data, text content, and page metadata. Its purpose is to improve Amazon's search, AI models, and user-facing features. Note that it does not honor rules set under the global wildcard (User-agent: *) group; directives must target Amazonbot by name. RobotSense.io verifies Amazonbot using Amazon's official validation methods, ensuring only genuine Amazonbot traffic is identified.
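Amazon's documented verification method is a double DNS lookup: reverse-resolve the client IP to a hostname, confirm the hostname falls under Amazonbot's crawl domain, then forward-resolve that hostname and check it maps back to the same IP. A minimal Python sketch of this flow, assuming the documented hostname suffix is `crawl.amazonbot.amazon` (confirm the exact suffix against Amazon's Amazonbot developer page before relying on it):

```python
import socket

# Hostname suffix for genuine Amazonbot hosts (assumption -- verify
# against Amazon's official Amazonbot documentation).
AMAZONBOT_SUFFIX = ".crawl.amazonbot.amazon"

def is_amazonbot_hostname(hostname: str) -> bool:
    """Pure check: does a reverse-DNS name fall under Amazonbot's domain?"""
    return hostname.rstrip(".").lower().endswith(AMAZONBOT_SUFFIX)

def verify_amazonbot(ip: str) -> bool:
    """Double DNS lookup: reverse-resolve the IP, check the hostname
    suffix, then forward-resolve the hostname back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # reverse lookup
        if not is_amazonbot_hostname(hostname):
            return False
        _, _, addresses = socket.gethostbyname_ex(hostname)  # forward lookup
        return ip in addresses
    except OSError:  # covers socket.herror / socket.gaierror
        return False
```

Matching on the User-Agent string alone is not sufficient, since any client can spoof it; the DNS round trip is what proves the request originated from Amazon's infrastructure.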
User Agent Examples
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36
Robots.txt Configuration for Amazonbot
Amazonbot
Use this identifier in your robots.txt User-agent directive to target Amazonbot.
Recommended Configuration
Our recommended robots.txt configuration for Amazonbot:
User-agent: Amazonbot
Allow: /
Completely Block Amazonbot
Prevent this bot from crawling your entire site:
User-agent: Amazonbot
Disallow: /
Completely Allow Amazonbot
Allow this bot to crawl your entire site:
User-agent: Amazonbot
Allow: /
Block Specific Paths
Block this bot from specific directories or pages:
User-agent: Amazonbot
Disallow: /private/
Disallow: /admin/
Disallow: /api/
Allow Only Specific Paths
Block everything but allow specific directories:
User-agent: Amazonbot
Disallow: /
Allow: /public/
Allow: /blog/
Set Crawl Delay
Limit how frequently Amazonbot can request pages (in seconds):
User-agent: Amazonbot
Allow: /
Crawl-delay: 10
Note: Amazonbot's official documentation does not state whether it honors the Crawl-delay directive.