Web Light / googleweblight
Verify Web Light / googleweblight IP Address
Verify whether an IP address truly belongs to Google, using Google's official verification methods. Enter both the IP address and the User-Agent from your logs for the most accurate bot verification.
[This crawler is officially retired, per Google.] googleweblight was a Google fetcher used by the now-deprecated Google Web Light service, which served simplified, faster-loading versions of webpages to users on slow mobile networks. It requested pages in order to generate lightweight, transcoded versions optimized for low-bandwidth conditions, and site owners allowed it so their content stayed accessible on slow connections.

Since Web Light has been discontinued, activity from this user agent is now rare or legacy in nature: any remaining traffic is typically minimal and comes from leftover systems or outdated clients rather than from active Google services. Note that googleweblight ignored the global user-agent (*) rule in robots.txt, so it had to be targeted by name. RobotSense.io verifies Web Light / googleweblight using Google's official validation methods, ensuring only genuine Web Light / googleweblight traffic is identified.
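Google's documented way to verify a crawler IP is a two-step DNS check: a reverse DNS lookup on the IP must return a hostname under an official Google domain, and a forward lookup on that hostname must resolve back to the same IP. A minimal Python sketch of that check (function names are illustrative, and the lookups require network access):

```python
import socket

# Official domains Google uses for its crawler reverse-DNS hostnames.
GOOGLE_DOMAINS = (".googlebot.com", ".google.com")

def is_google_hostname(hostname: str) -> bool:
    """True if the hostname belongs to an official Google crawler domain."""
    return hostname.rstrip(".").endswith(GOOGLE_DOMAINS)

def verify_google_ip(ip: str) -> bool:
    """Reverse-DNS the IP, check the domain, then confirm with forward DNS."""
    # Step 1: reverse DNS lookup on the claimed crawler IP.
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False
    if not is_google_hostname(hostname):
        return False
    # Step 2: forward DNS on that hostname must resolve back to the same IP.
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except OSError:
        return False
    return ip in addresses
```

Matching on the User-Agent string alone is not enough, since anyone can spoof it; the DNS round-trip is what ties the request to Google's infrastructure.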
User Agent Examples
Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 5 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko; googleweblight) Chrome/38.0.1025.166 Mobile Safari/535.19
Robots.txt Configuration for Web Light / googleweblight
googleweblight
Use this identifier in your robots.txt User-agent directive to target Web Light / googleweblight.
Recommended Configuration
Our recommended robots.txt configuration for Web Light / googleweblight:
User-agent: googleweblight
Allow: /
Completely Block Web Light / googleweblight
Prevent this bot from crawling your entire site:
User-agent: googleweblight
Disallow: /
Completely Allow Web Light / googleweblight
Allow this bot to crawl your entire site:
User-agent: googleweblight
Allow: /
Block Specific Paths
Block this bot from specific directories or pages:
User-agent: googleweblight
Disallow: /private/
Disallow: /admin/
Disallow: /api/
Allow Only Specific Paths
Block everything but allow specific directories:
User-agent: googleweblight
Disallow: /
Allow: /public/
Allow: /blog/
Set Crawl Delay
Limit how frequently Web Light / googleweblight can request pages (in seconds):
User-agent: googleweblight
Allow: /
Crawl-delay: 10
Note: this bot is not officially documented as honoring the Crawl-delay directive, so it may ignore this rule.
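Before deploying any of the configurations above, you can check them locally with Python's standard urllib.robotparser module. This sketch parses a hypothetical robots.txt using the "block specific paths" rules and confirms which URLs googleweblight would be allowed to fetch:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt applying the "block specific paths" rules above.
ROBOTS_TXT = """\
User-agent: googleweblight
Disallow: /private/
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("googleweblight", "/blog/post"))   # unblocked path -> True
print(parser.can_fetch("googleweblight", "/private/x"))   # blocked path -> False
```

One caveat: robotparser applies the global (*) group to unmatched agents, whereas googleweblight reportedly ignored the (*) rule, so only directives in its own named group reflect what the fetcher would actually have obeyed.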