DuplexWeb-Google
Verify DuplexWeb-Google IP Address
Verify whether an IP address truly belongs to Google using Google's official verification methods. Enter both the IP address and the User-Agent from your logs for the most accurate bot verification.
Note: Google has officially retired this crawler. DuplexWeb-Google is the Google crawler associated with Duplex and Assistant-related technologies; it fetches web content to help generate conversational responses and perform task-oriented actions. It retrieves the page information needed to understand structured data, business details, menus, appointment flows, and other interactive elements. Crawl activity is selective and generally tied to user-initiated tasks or to systems that prepare content for automated assistance. Its purpose is to support natural-language interactions by ensuring Google's assistant technologies can interpret and use real-time webpage information accurately. It ignores the global user agent (*) rule in robots.txt, so it must be addressed by name. RobotSense.io verifies DuplexWeb-Google using Google's official validation methods, ensuring only genuine DuplexWeb-Google traffic is identified.
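Google's published verification method is a reverse DNS lookup on the logged IP followed by a forward lookup that must resolve back to the same address. The Python sketch below implements that loop; the accepted hostname suffixes are assumptions drawn from Google's general crawler-verification guidance, so confirm them against the current documentation before relying on them.

import socket

# Minimal sketch of the reverse/forward DNS verification loop.
# The accepted suffixes are assumptions; check Google's current docs.
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com", ".googleusercontent.com")

def is_google_ip(ip: str) -> bool:
    """Return True if ip reverse-resolves to a Google hostname that forward-resolves back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS (PTR) lookup
    except socket.herror:
        return False
    if not hostname.endswith(GOOGLE_SUFFIXES):
        return False
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)  # forward DNS lookup
    except socket.gaierror:
        return False
    return ip in addresses

# Example with a hypothetical IP taken from your access logs:
# print(is_google_ip("66.249.66.1"))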
User Agent Examples
Mozilla/5.0 (Linux; Android 11; Pixel 2; DuplexWeb-Google/1.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Mobile Safari/537.36
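In access logs, the crawler identifies itself with the DuplexWeb-Google token inside an otherwise ordinary Chrome-on-Android user agent, so a substring check on the User-Agent field is enough to flag candidate requests before running the IP verification above. A small sketch, assuming the common combined log format where the User-Agent is the final quoted field:

import re

# In combined log format the User-Agent is the last quoted field.
# This regex is a simplification; adapt it to your actual log format.
UA_FIELD = re.compile(r'"([^"]*)"\s*$')

def is_duplexweb_request(log_line: str) -> bool:
    """Flag a log line whose User-Agent contains the DuplexWeb-Google token."""
    match = UA_FIELD.search(log_line)
    return bool(match) and "DuplexWeb-Google" in match.group(1)

# Hypothetical log line using the example user agent above:
line = ('203.0.113.7 - - [01/Jan/2024:00:00:00 +0000] "GET /menu HTTP/1.1" 200 1234 "-" '
        '"Mozilla/5.0 (Linux; Android 11; Pixel 2; DuplexWeb-Google/1.0) '
        'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Mobile Safari/537.36"')
print(is_duplexweb_request(line))  # True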
Robots.txt Configuration for DuplexWeb-Google

DuplexWeb-Google
Use this identifier in your robots.txt User-agent directive to target DuplexWeb-Google.
Recommended Configuration
Our recommended robots.txt configuration for DuplexWeb-Google:
# This bot is officially retired by Google
User-agent: DuplexWeb-Google
Disallow: /

Completely Block DuplexWeb-Google
Prevent this bot from crawling your entire site:
User-agent: DuplexWeb-Google
Disallow: /

Completely Allow DuplexWeb-Google
Allow this bot to crawl your entire site:
User-agent: DuplexWeb-Google
Allow: /

Block Specific Paths
Block this bot from specific directories or pages:
User-agent: DuplexWeb-Google
Disallow: /private/
Disallow: /admin/
Disallow: /api/

Allow Only Specific Paths
Block everything but allow specific directories:
User-agent: DuplexWeb-Google
Disallow: /
Allow: /public/
Allow: /blog/
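To sanity-check a ruleset like the one above before deploying it, Python's standard-library robots.txt parser can be pointed at the rules and queried with the DuplexWeb-Google identifier. Note one assumption in this sketch: urllib.robotparser applies rules in file order (first match wins) rather than Google's most-specific-match rule, so the Allow lines are listed before the blanket Disallow here.

from urllib.robotparser import RobotFileParser

# Rules mirroring the "allow only specific paths" example, reordered so
# that the stdlib's first-match evaluation reaches the Allow lines first.
rules = """
User-agent: DuplexWeb-Google
Allow: /public/
Allow: /blog/
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# example.com URLs are placeholders for your own site.
print(parser.can_fetch("DuplexWeb-Google", "https://example.com/blog/post"))     # True
print(parser.can_fetch("DuplexWeb-Google", "https://example.com/private/page"))  # False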
Set Crawl Delay
Limit how frequently DuplexWeb-Google can request pages (in seconds):
User-agent: DuplexWeb-Google
Allow: /
Crawl-delay: 10

Note: Google's documentation does not state that this bot honors the Crawl-delay directive.