Check if a path is crawlable by your bot in one click.
Test whether a URL path is allowed or disallowed for any user agent in your robots.txt. Paste your rules, choose a bot, and get instant results.
1. Copy the full content of your robots.txt file into the left panel.
2. Type the bot name (e.g. Googlebot) or use * for all bots.
3. Type the path you want to test, starting with /.
4. Get an instant Allow or Disallow verdict with the matching rule.
The Robots.txt Tester lets you paste any robots.txt and instantly check whether a specific URL path is allowed or blocked for a given crawler. It follows the same precedence rules as Google: longer, more specific rules win, and Allow overrides Disallow when the matches are of equal length.
It follows Google's precedence rule: the most specific (longest matching) rule wins. If an Allow and a Disallow rule match the same path with equal length, Allow takes priority.
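As a rough sketch of how that precedence can be implemented (the Rule type and decide function are illustrative names, not this tool's actual API, and wildcard handling is omitted here):

```typescript
type Rule = { directive: 'allow' | 'disallow'; path: string };

// Pick the winning rule: longest matching pattern first,
// Allow over Disallow on ties, allow by default if nothing matches.
function decide(rules: Rule[], path: string): 'allow' | 'disallow' {
  let best: Rule | null = null;
  for (const rule of rules) {
    if (!path.startsWith(rule.path)) continue; // plain prefix match only
    if (
      best === null ||
      rule.path.length > best.path.length ||
      (rule.path.length === best.path.length && rule.directive === 'allow')
    ) {
      best = rule;
    }
  }
  return best ? best.directive : 'allow';
}

const rules: Rule[] = [
  { directive: 'disallow', path: '/admin/' },
  { directive: 'allow', path: '/admin/login' },
];
decide(rules, '/admin/login');    // 'allow' (the longer match wins)
decide(rules, '/admin/settings'); // 'disallow'
```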
Yes. The tester supports the * wildcard (matches any sequence of characters) and the $ end-of-string anchor, matching the behavior of Googlebot and most major crawlers.
The rules under the specific user-agent group (e.g. Googlebot) are always checked first. If no specific group matches, the tool falls back to the * wildcard group.
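A sketch of that fallback, assuming parsed groups keyed by their User-agent names (hypothetical shapes, not the tool's internals):

```typescript
type Rule = { directive: 'allow' | 'disallow'; path: string }; // as in the sketch above
type Group = { userAgents: string[]; rules: Rule[] };

// Prefer the group that names this agent; otherwise fall back to the * group.
function selectGroup(groups: Group[], agent: string): Group | undefined {
  const wanted = agent.toLowerCase();
  return (
    groups.find(g => g.userAgents.some(ua => ua.toLowerCase() === wanted)) ??
    groups.find(g => g.userAgents.includes('*'))
  );
}
```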
No. Everything runs entirely in your browser. Paste the robots.txt content manually; no data is sent to any server, which means your private configurations stay private.
If no Disallow or Allow directive matches the path for the chosen agent (or the wildcard group), the path is considered Allowed by default: crawlers are permitted unless explicitly blocked.
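With the hypothetical decide sketch above, that default is simply the fallback return value:

```typescript
decide([], '/any/path'); // 'allow' because no rule matched
```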
Yes. Enter the exact path you want to check, including trailing slashes if relevant. Wildcards in the robots.txt rules (not in your test path) are matched against your input.
Yes. The parser splits the robots.txt into separate groups, each with its own directives. You can inspect all parsed groups in the output panel after running a test.
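A simplified sketch of such a group parser, assuming one directive per line and ignoring fields other than User-agent, Allow, and Disallow (the real parser may handle more):

```typescript
type Rule = { directive: 'allow' | 'disallow'; path: string };
type Group = { userAgents: string[]; rules: Rule[] };

function parseRobots(text: string): Group[] {
  const groups: Group[] = [];
  let current: Group | null = null;
  let lastWasAgent = false;

  for (const raw of text.split('\n')) {
    const line = raw.replace(/#.*/, '').trim(); // strip comments
    const sep = line.indexOf(':');
    if (sep === -1) continue;
    const field = line.slice(0, sep).trim().toLowerCase();
    const value = line.slice(sep + 1).trim();

    if (field === 'user-agent') {
      // Consecutive User-agent lines share one group.
      if (!lastWasAgent || current === null) {
        current = { userAgents: [], rules: [] };
        groups.push(current);
      }
      current.userAgents.push(value);
      lastWasAgent = true;
    } else if ((field === 'allow' || field === 'disallow') && current) {
      current.rules.push({ directive: field, path: value });
      lastWasAgent = false;
    }
  }
  return groups;
}
```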
In practice, both block all crawling. Disallow: / matches every path starting with /. Disallow: /* also matches all paths because * matches any characters. They are functionally equivalent for most crawlers.
A robots.txt tester is a developer tool that simulates how search engine crawlers read and interpret the directives in your robots.txt file. Rather than waiting for Google Search Console to report a crawl issue days later, you can paste your robots.txt, pick a user agent, enter a URL path, and immediately see whether that path would be allowed or blocked, all without deploying anything or sending a single request to Google.
This tool is especially useful during development, site migrations, and SEO audits. It helps you confirm that your robots.txt is doing exactly what you intend: no more guessing whether that Disallow: /checkout* accidentally blocks something it shouldn't.
💡 Looking for SEO-optimized themes to complement your technical SEO setup? MonsterONE offers unlimited downloads of premium WordPress themes, HTML templates, and site assets, worth exploring before your next launch.
The robots.txt protocol (formally called the Robots Exclusion Standard) is a plain-text file placed at the root of a domain, typically at https://yourdomain.com/robots.txt. It consists of one or more groups, each starting with one or more User-agent directives followed by Allow and Disallow rules.
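For example, a small hypothetical file with two groups:

```
User-agent: Googlebot
Disallow: /search
Allow: /search/about

User-agent: *
Disallow: /admin/
```

Against this file, Googlebot is blocked from /search/results (matched by Disallow: /search) but allowed on /search/about (the longer Allow rule wins), and it ignores Disallow: /admin/ entirely because that rule lives in the * group.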
Each rule is evaluated against the path portion of a URL. The key rules to understand are:

- The most specific rule wins: when several rules match a path, the longest matching pattern takes precedence.
- Allow beats Disallow on ties: if an Allow and a Disallow rule match with equal length, the path is allowed.
- Crawlers use exactly one group: if a file contains both a group for Googlebot and a group for *, Googlebot uses only its own group, not the wildcard group.

Google and most modern crawlers support two special characters in robots.txt directives:
- * matches any sequence of characters (zero or more). For example, Disallow: /search* blocks /search, /search?q=test, and /search/results.
- $ anchors the pattern to the end of the URL. For example, Disallow: /*.pdf$ blocks any URL ending in .pdf.

This tester handles both wildcards correctly, converting them into the appropriate matching logic so you get accurate results without needing to understand the underlying regex.
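One way such a conversion can work, sketched here under the assumption that patterns are compiled to regular expressions (patternToRegExp is an illustrative name, not the tool's actual function):

```typescript
// Convert a robots.txt path pattern into an equivalent RegExp:
// '*' becomes '.*', a trailing '$' stays an end anchor, and the
// whole pattern is anchored to the start of the path.
function patternToRegExp(pattern: string): RegExp {
  const endAnchored = pattern.endsWith('$');
  const body = endAnchored ? pattern.slice(0, -1) : pattern;
  const escaped = body.replace(/[.+?^${}()|[\]\\]/g, '\\$&'); // escape regex specials except '*'
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + (endAnchored ? '$' : ''));
}

patternToRegExp('/*.pdf$').test('/files/report.pdf'); // true
patternToRegExp('/search*').test('/search?q=test');   // true
patternToRegExp('/search*').test('/blog/search');     // false (anchored to the path start)
```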
Many robots.txt errors are subtle. Here are the most common ones developers and SEOs encounter:
- Disallow: / under User-agent: * blocks all crawlers from everything. Easy to add accidentally during staging and forget to remove.
- Blocking CSS and JavaScript directories such as /wp-content/ or /assets/ prevents Googlebot from rendering pages correctly, which can hurt rankings.
- Disallow: /*? blocks all query strings, which may inadvertently block paginated URLs or product filters you want indexed.
- Combining Disallow: /admin/ with Allow: /admin/login is valid and works, but only if you understand the precedence rules.
- Paths are case-sensitive: Disallow: /Admin/ does not block /admin/.

Using this tool is straightforward. Open your site's robots.txt file (usually accessible at yourdomain.com/robots.txt), copy its entire content, and paste it into the input field on the left. Then enter the user agent you want to simulate; for most SEO purposes this will be Googlebot or *. Finally, type the URL path you want to check, such as /checkout/ or /wp-admin/, and click Test Path.
The results panel shows you a clear Allow or Disallow verdict, the exact rule that produced the verdict, and a full list of all applicable rules for the chosen agent. You can also inspect all parsed user-agent groups to understand the full structure of your robots.txt.
It's important to understand that robots.txt controls crawling, while the noindex meta tag (or X-Robots-Tag header) controls indexing. A page blocked by robots.txt may still appear in search results if it has backlinks pointing to it β Google can infer the page exists without crawling it. Conversely, a noindex page can be crawled but won't be added to the index. For pages you want completely removed from search results, use noindex rather than (or in addition to) robots.txt rules.
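For reference, a noindex directive can be set in the page markup, shown below, or sent as an X-Robots-Tag: noindex HTTP response header for non-HTML resources such as PDFs:

```html
<!-- In the page's <head>: tells crawlers not to index this page -->
<meta name="robots" content="noindex">
```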
Google Search Console includes a built-in robots.txt tester, but it requires a verified property and only tests against the live file on your server. This standalone tool lets you test locally, during development, or against any robots.txt content you paste β making it faster for iterative debugging and pre-deployment validation. Once you're satisfied with your configuration here, you can verify it again in Search Console as a final check.