The tool provides two ways to supply a robots.txt file: Live Fetch (enter a domain and we fetch its robots.txt for you) and Editor Mode (paste or type the content directly). Once you have the content, enter a URL, pick a bot from the dropdown, and click Test. Parsing happens automatically; there is no separate parse step.
Under the hood, the matching engine works as follows:
- Parse: the file is sent to our backend parser, which extracts every User-agent group together with its Allow, Disallow, and Crawl-delay directives.
- User-agent matching: for the bot you selected, the engine finds the most specific matching User-agent group (exact match > substring match > wildcard *).
- Path matching: the URL path is tested against all Allow and Disallow rules in the matched group. Wildcards (*) and end-of-URL anchors ($) are fully supported.
- Precedence: the longest (most specific) match wins. If two rules tie in length, Allow takes precedence over Disallow.
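The path-matching and precedence steps above can be sketched in a few lines of Python. This is an illustrative model, not the tool's actual implementation; the function names (`rule_to_regex`, `is_allowed`) and the character-length tie-breaking are assumptions based on the rules described here:

```python
import re

def rule_to_regex(rule_path: str) -> re.Pattern:
    """Translate a robots.txt path rule into a regex:
    '*' matches any character sequence; a trailing '$'
    anchors the rule to the end of the URL path."""
    anchored = rule_path.endswith("$")
    body = rule_path[:-1] if anchored else rule_path
    pattern = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile(pattern + ("$" if anchored else ""))

def is_allowed(path: str, allows: list[str], disallows: list[str]) -> bool:
    """Longest matching rule wins; on a length tie, Allow beats
    Disallow. If no rule matches at all, the path is allowed."""
    best_len, best_verdict = -1, True
    # Iterate Allow rules first so a tie never lets Disallow win.
    for rules, verdict in ((allows, True), (disallows, False)):
        for rule in rules:
            if rule and rule_to_regex(rule).match(path):
                if len(rule) > best_len or (len(rule) == best_len and verdict):
                    best_len, best_verdict = len(rule), verdict
    return best_verdict
```

For example, with `Disallow: /private/` and `Allow: /private/page`, the URL path `/private/page.html` is allowed because the Allow rule (13 characters) is longer than the Disallow rule (9 characters).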
If no rule matches the URL, it is allowed by default. After the test, you will see a clear Allowed / Blocked verdict, the exact matching rule, a diagnostics panel, and an expandable view of all parsed rules with the matched group highlighted.