
YandexAdditional


Verify YandexAdditional IP Address

Verify whether an IP address truly belongs to Yandex using official verification methods. Enter both the IP address and the User-Agent from your logs for the most accurate bot verification.
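As a rough illustration of that flow, the Python sketch below follows the reverse-DNS check Yandex documents: resolve the IP to a hostname, confirm the hostname ends in a Yandex-owned domain, then resolve that hostname forward and confirm it maps back to the same IP. The sample IP is a placeholder from the documentation range; replace it with an address from your own logs.

import socket

# Yandex-owned host suffixes per Yandex's published verification guidance.
YANDEX_SUFFIXES = (".yandex.ru", ".yandex.net", ".yandex.com")

def is_yandex_ip(ip: str) -> bool:
    """True if `ip` reverse-resolves to a Yandex hostname that forward-resolves back to it."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)             # reverse DNS (PTR) lookup
    except OSError:
        return False
    if not hostname.lower().endswith(YANDEX_SUFFIXES):
        return False
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)   # forward DNS lookup
    except OSError:
        return False
    return ip in addresses                                    # must round-trip to the same IP

print(is_yandex_ip("203.0.113.10"))  # placeholder IP; use one from your access logs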

YandexAdditional is a Yandex service crawler that interprets and applies robots.txt rules governing whether indexed page content may be used in Yandex AI-generated responses. It operates only on pages already indexed by Yandex’s primary crawler; it does not request new pages or trigger indexing. The bot evaluates access directives so that content usage in AI features complies with site owner preferences, and it does not perform independent crawling. Its activity is internal and policy-driven, supporting correct enforcement of robots-based restrictions for Yandex’s AI response systems. Note that it ignores the global robots.txt wildcard group (User-agent: *). RobotSense.io verifies YandexAdditional using Yandex’s official validation methods, ensuring that only genuine YandexAdditional traffic is identified.
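Because the wildcard group is ignored, any rule intended for YandexAdditional must appear in its own named group. A minimal illustration (the /private/ path is only an example):

# Ignored by YandexAdditional
User-agent: *
Disallow: /private/

# Applied by YandexAdditional
User-agent: YandexAdditional
Disallow: /private/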

This bot does not honor the Crawl-delay rule.

User Agent Examples

YandexAdditional

YandexAdditional/1.0

Mozilla/5.0 (compatible; YandexAdditional/1.0; +http://yandex.com/bots)

Example user agent strings for YandexAdditional

Robots.txt Configuration for YandexAdditional

Robots.txt User-Agent: YandexAdditional

Use this identifier in your robots.txt User-agent directive to target YandexAdditional.

Recommended Configuration

Our recommended robots.txt configuration for YandexAdditional:

User-agent: YandexAdditional
Allow: /

Completely Block YandexAdditional

Block this bot from your entire site:

User-agent: YandexAdditional
Disallow: /

Completely Allow YandexAdditional

Allow this bot across your entire site:

User-agent: YandexAdditional
Allow: /

Block Specific Paths

Block this bot from specific directories or pages:

User-agent: YandexAdditional
Disallow: /private/
Disallow: /admin/
Disallow: /api/

Allow Only Specific Paths

Block everything but allow specific directories:

User-agent: YandexAdditional
Disallow: /
Allow: /public/
Allow: /blog/

Set Crawl Delay

Limit how frequently YandexAdditional can request pages (in seconds):

User-agent: YandexAdditional
Allow: /
Crawl-delay: 10

Note: this bot is not documented as honoring the Crawl-delay rule, so this directive may have no effect.
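If you want to sanity-check how a rule set like the ones above evaluates for a given path, Python's standard urllib.robotparser offers a rough approximation. Note that it applies generic robots.txt semantics (including falling back to the * group and first-match rule ordering), so it does not reproduce YandexAdditional's exact behavior; the rule set and paths below are illustrative only.

from urllib.robotparser import RobotFileParser

# Hypothetical rule set mirroring the "Block Specific Paths" example above.
RULES = """\
User-agent: YandexAdditional
Disallow: /private/
Disallow: /admin/
Disallow: /api/
"""

parser = RobotFileParser()
parser.parse(RULES.splitlines())   # parse the rules directly, without fetching anything

for path in ("/blog/post", "/private/report", "/api/v1/users"):
    allowed = parser.can_fetch("YandexAdditional/1.0", path)
    print(f"{path}: {'allowed' if allowed else 'blocked'}")
# /blog/post is allowed; the /private/ and /api/ paths are blocked.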