DuplexWeb-Google


Note: Google has officially retired this crawler. DuplexWeb-Google is a Google crawler associated with Duplex and Assistant-related technologies; it fetches web content to help generate conversational responses and perform task-oriented actions. It retrieves the page information needed to understand structured data, business details, menus, appointment flows, and other interactive elements. Crawl activity is selective and generally tied to user-initiated tasks or to systems that prepare content for automated assistance. Its purpose is to support natural-language interactions by ensuring Google's assistant technologies can interpret and use real-time webpage information accurately. It ignores the global user-agent rule (User-agent: *) in robots.txt, so it must be addressed by name. RobotSense.io verifies DuplexWeb-Google using Google's official validation methods, ensuring only genuine DuplexWeb-Google traffic is identified.

This bot does not honor the Crawl-delay rule.

User Agent Examples

Mozilla/5.0 (Linux; Android 11; Pixel 2; DuplexWeb-Google/1.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 Mobile Safari/537.36
Example user agent strings for DuplexWeb-Google
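A quick way to spot claimed DuplexWeb-Google visits in your logs is to match the token in the User-Agent string. This is a hypothetical first-pass filter only; as noted elsewhere on this page, the User-Agent header is trivially spoofed and must not be treated as proof of a genuine Google visit.

```python
import re

# Matches the DuplexWeb-Google product token and captures its version number.
# A match only means the request *claims* to be DuplexWeb-Google.
DUPLEX_UA = re.compile(r"\bDuplexWeb-Google/(?P<version>[\d.]+)")

def matches_duplexweb(user_agent: str) -> bool:
    """Return True if the User-Agent string claims to be DuplexWeb-Google."""
    return DUPLEX_UA.search(user_agent) is not None

ua = ("Mozilla/5.0 (Linux; Android 11; Pixel 2; DuplexWeb-Google/1.0) "
      "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.193 "
      "Mobile Safari/537.36")
print(matches_duplexweb(ua))       # True
print(matches_duplexweb("curl/8.0"))  # False
```

Follow up any matches with the IP-based verification described below before trusting them.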

Robots.txt Configuration for DuplexWeb-Google

Robots.txt User-Agent: DuplexWeb-Google

Use this identifier in your robots.txt User-agent directive to target DuplexWeb-Google.

Recommended Configuration

Our recommended robots.txt configuration for DuplexWeb-Google:

# This bot is officially retired by Google
User-agent: DuplexWeb-Google
Disallow: /

Completely Block DuplexWeb-Google

Prevent this bot from crawling your entire site:

User-agent: DuplexWeb-Google
Disallow: /

Completely Allow DuplexWeb-Google

Allow this bot to crawl your entire site:

User-agent: DuplexWeb-Google
Allow: /

Block Specific Paths

Block this bot from specific directories or pages:

User-agent: DuplexWeb-Google
Disallow: /private/
Disallow: /admin/
Disallow: /api/

Allow Only Specific Paths

Block everything but allow specific directories:

User-agent: DuplexWeb-Google
Disallow: /
Allow: /public/
Allow: /blog/
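Before deploying rules like these, it can help to sanity-check them with Python's standard-library robots.txt parser. One caveat in this sketch: Python's parser applies the first matching rule in file order, so the Allow lines are listed before the blanket Disallow here; Google's own parser uses most-specific (longest-path) matching, so the order does not matter for Google's crawlers.

```python
from urllib.robotparser import RobotFileParser

# Allow-only-specific-paths rules, ordered for Python's first-match semantics.
rules = """\
User-agent: DuplexWeb-Google
Allow: /public/
Allow: /blog/
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("DuplexWeb-Google", "https://example.com/blog/post"))  # True
print(rp.can_fetch("DuplexWeb-Google", "https://example.com/private/x"))  # False
```

The same approach works for testing any of the other configurations on this page.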

Set Crawl Delay

Limit how frequently DuplexWeb-Google can request pages (in seconds):

User-agent: DuplexWeb-Google
Allow: /
Crawl-delay: 10

Note: Google does not officially document Crawl-delay support for this bot, so this directive may be ignored.
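Because Crawl-delay may be ignored, throttling is more reliably enforced server-side. Below is a minimal token-bucket sketch (a hypothetical helper, not a RobotSense.io API) that allows a steady request rate per client with a small burst allowance; in practice you would key one bucket per verified client IP.

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=0.1, capacity=2)  # ~1 request per 10s, burst of 2
print([bucket.allow() for _ in range(4)])   # [True, True, False, False]
```

A web application firewall or reverse proxy can apply the same policy without application changes.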

Frequently Asked Questions

What is DuplexWeb-Google, and why is it visiting my website?
DuplexWeb-Google is a crawler operated by Google to support Assistant and Duplex-related features. Its primary purpose is to fetch webpage content needed for task-oriented interactions, such as retrieving business details, menus, or booking flows. Visits are typically triggered by user actions or by systems preparing data for conversational responses, and crawl behavior is selective rather than broad. For publicly accessible pages that support such use cases, this bot's traffic is expected, and genuine visits from DuplexWeb-Google are harmless.
Is DuplexWeb-Google a legitimate bot, or is it commonly spoofed?
DuplexWeb-Google is an official Google crawler and is considered legitimate. However, like other Google bots, its user-agent may be spoofed by malicious actors attempting to bypass security controls or disguise automated requests. Attackers may impersonate it because Google-related traffic is often trusted. User-Agent strings alone cannot reliably confirm whether requests are authentic. You can use Google's recommended methods, described below, to verify a legitimate visit, or use the RobotSense.io API to easily verify DuplexWeb-Google visits.
How can I verify that a request is really coming from DuplexWeb-Google?
You can use Google's recommended official methods to verify DuplexWeb-Google visits. These include:
- IP range checks
- Reverse DNS → forward DNS lookups
Do not rely on User-Agent-based detection, as the User-Agent string can be easily spoofed. Alternatively, you can use the RobotSense.io API to easily verify the DuplexWeb-Google crawler and all other Google bots.
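The reverse-then-forward DNS check can be sketched as follows. The accepted hostname suffixes below follow Google's published guidance for its crawlers, but treat them as an assumption and confirm against Google's current documentation; the demo exercises only the pure suffix check, since the full verification requires network access.

```python
import socket

# Hostname suffixes Google documents for its crawlers (assumed; verify against
# Google's current crawler-verification documentation).
GOOGLE_SUFFIXES = (".googlebot.com", ".google.com", ".googleusercontent.com")

def has_google_suffix(hostname: str) -> bool:
    """Does the reverse-resolved hostname end in a Google-operated domain?"""
    return hostname.rstrip(".").lower().endswith(GOOGLE_SUFFIXES)

def verify_google_ip(ip: str) -> bool:
    """Reverse DNS -> suffix check -> forward DNS back to the same IP.
    Requires network access, so it is not called in the demo below."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)        # reverse lookup
        if not has_google_suffix(hostname):
            return False
        _, _, addrs = socket.gethostbyname_ex(hostname)  # forward lookup
        return ip in addrs
    except (socket.herror, socket.gaierror):
        return False

print(has_google_suffix("crawl-66-249-66-1.googlebot.com"))  # True
print(has_google_suffix("fake.example.com"))                 # False
```

The forward lookup step is essential: a suffix match alone can be faked by an attacker who controls reverse DNS for their own IP range.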
Should I allow or block DuplexWeb-Google on my website?
Allowing DuplexWeb-Google is generally beneficial if your site provides structured content such as business information, booking systems, or menus that may be used in Assistant-driven interactions. The bot helps ensure accurate data retrieval for these use cases. Blocking may be appropriate if:
- Your content is not intended for automated interaction
- You restrict machine-driven access to workflows or APIs
- You are concerned about exposing structured or transactional data
- Your infrastructure has strict resource limitations
If you suddenly see too many visits, consider adding a small Crawl-delay in your robots.txt before disallowing the bot entirely.
How can I control or block DuplexWeb-Google using robots.txt or other methods?
You can add a rule to your robots.txt, as shown above, to throttle (Crawl-delay) or disallow the DuplexWeb-Google crawler. You can also apply further controls in your WAF or in RobotSense enforcement settings to manage the bot's behavior.
How often does DuplexWeb-Google crawl websites, and can it impact server performance?
DuplexWeb-Google uses event-driven crawling tied to user interactions or Assistant-related processes. It targets specific pages needed for tasks rather than performing continuous crawling. Any impact is typically low:
- Bandwidth usage: minimal
- Request rates: limited and context-driven
- Dynamic load: slight, if accessing interactive endpoints
Most websites will not experience a noticeable performance impact, though some administrators choose to rate-limit or restrict it.
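To see how often this crawler actually hits your site, you can count matching lines in your access log per source IP. The log format and field positions below are assumptions based on a common combined-log layout; adjust them to your server's configuration, and remember that a User-Agent match alone does not prove the traffic is genuine.

```python
from collections import Counter

# Sample access-log lines (combined log format assumed; IP is the first field).
sample_log = [
    '66.249.66.1 - - [10/May/2024:10:00:01 +0000] "GET /menu HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (Linux; Android 11; Pixel 2; DuplexWeb-Google/1.0) ..."',
    '203.0.113.7 - - [10/May/2024:10:00:02 +0000] "GET / HTTP/1.1" 200 1024 '
    '"-" "curl/8.0"',
]

# Count requests per IP for lines claiming to be DuplexWeb-Google.
hits_by_ip = Counter(
    line.split()[0] for line in sample_log if "DuplexWeb-Google" in line
)
print(hits_by_ip)  # Counter({'66.249.66.1': 1})
```

If the per-IP counts look unusually high, verify the IPs before deciding whether to rate-limit.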
What happens if I block DuplexWeb-Google? SEO, visibility, and feature impact explained.
Blocking DuplexWeb-Google does not affect traditional search engine rankings but may limit Assistant-related functionality. Potential effects include:
- Reduced ability for Google Assistant to access or interpret your content
- Limited functionality for automated tasks like bookings or information retrieval
Blocking DuplexWeb-Google has no direct impact on SEO performance.
Does DuplexWeb-Google collect, scrape, or use my content for training or reuse?
DuplexWeb-Google retrieves page content needed to support real-time interactions, such as structured data, text, and workflow-related elements. It is not designed for broad indexing or public dataset creation. Usage typically includes:
- Supporting conversational responses
- Extracting structured data for task execution
- Interpreting page content for Assistant workflows
There is no public documentation indicating that this crawler stores full-page content for open datasets or uses it for general AI training purposes.