Rogerbot
Verify Rogerbot IP Address
Verify whether an IP address truly belongs to Moz using Moz's official verification methods. Checking both the IP address and the User-Agent string from your logs gives the most accurate bot verification.
Rogerbot is the web crawler operated by Moz, used to collect link and page data for Moz's SEO tools and analytics platform. It crawls webpages to discover links, anchor text, page metadata, and technical signals that inform metrics such as Domain Authority and link profiles. Rogerbot powers research features rather than a public search engine. Crawl activity varies with Moz's indexing cycles and site characteristics but is generally moderate and predictable. Its purpose is to support SEO analysis, competitive research, and web visibility insights for Moz users. RobotSense.io verifies Rogerbot using Moz's official validation methods, ensuring only genuine Rogerbot traffic is identified.
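Because Rogerbot announces itself in the User-Agent header (its UA string contains "rogerbot", per the examples below), a first-pass log filter can be a simple substring check. UA strings are trivially spoofed, so treat a match only as a screen before Moz's official IP verification. A minimal Python sketch; the sample UA strings are illustrative, not exact crawler signatures:

```python
def looks_like_rogerbot(user_agent: str) -> bool:
    """First-pass check: Moz's crawler includes "rogerbot" in its User-Agent.

    UA strings can be spoofed, so confirm any match with IP verification.
    """
    return "rogerbot" in user_agent.lower()

# The UA strings below are hypothetical examples for illustration.
print(looks_like_rogerbot("Mozilla/5.0 (compatible; rogerbot; +https://moz.com)"))  # True
print(looks_like_rogerbot("Mozilla/5.0 (compatible; Googlebot/2.1)"))               # False
```

The case-insensitive match mirrors how robots.txt User-agent tokens are compared, so log filtering and robots.txt targeting stay consistent.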
User Agent Examples
Contains: rogerbot

Robots.txt Configuration for Rogerbot
rogerbot
Use this identifier in your robots.txt User-agent directive to target Rogerbot.
Recommended Configuration
Our recommended robots.txt configuration for Rogerbot:
User-agent: rogerbot
Allow: /

Completely Block Rogerbot
Prevent this bot from crawling your entire site:
User-agent: rogerbot
Disallow: /

Completely Allow Rogerbot
Allow this bot to crawl your entire site:
User-agent: rogerbot
Allow: /

Block Specific Paths
Block this bot from specific directories or pages:
User-agent: rogerbot
Disallow: /private/
Disallow: /admin/
Disallow: /api/

Allow Only Specific Paths
Block everything but allow specific directories:
User-agent: rogerbot
Disallow: /
Allow: /public/
Allow: /blog/

Set Crawl Delay
Limit how frequently Rogerbot can request pages (in seconds):
User-agent: rogerbot
Allow: /
Crawl-delay: 10

Note: This bot officially honors the Crawl-delay directive.
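Before deploying rules like the ones above, you can sanity-check them offline with Python's standard-library robots.txt parser. A small sketch combining the "Block Specific Paths" and "Set Crawl Delay" examples; the example.com URLs are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Rules taken from the "Block Specific Paths" and "Set Crawl Delay" examples.
rules = """\
User-agent: rogerbot
Disallow: /private/
Disallow: /admin/
Disallow: /api/
Crawl-delay: 10
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Blocked directory: Rogerbot may not fetch it.
print(parser.can_fetch("rogerbot", "https://example.com/private/page.html"))  # False
# Paths outside the Disallow rules remain crawlable.
print(parser.can_fetch("rogerbot", "https://example.com/blog/post.html"))     # True
# The parser also surfaces the Crawl-delay value (in seconds).
print(parser.crawl_delay("rogerbot"))                                         # 10
```

Note that `urllib.robotparser` applies rules in file order rather than by longest match, so "Allow Only Specific Paths" patterns (a broad `Disallow: /` followed by `Allow:` lines) may evaluate differently here than in crawlers that use longest-match precedence.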