Robots.txt Validator

Validate robots.txt syntax, test URLs against rules, and batch-test multiple paths and user-agents.

Robots.txt validation tips

  • The longest matching pattern wins: Allow: /admin/css overrides Disallow: /admin/ for that specific path (see the sketch after these tips).
  • An empty Disallow line means "allow everything" for that user-agent.
  • Googlebot ignores Crawl-delay. Use Google Search Console to set crawl rate.
  • robots.txt cannot prevent indexing. Use a meta noindex tag or the X-Robots-Tag header instead (example below).
  • Path matching in robots.txt is case-sensitive: /Admin/ and /admin/ are different paths.
  • Test with multiple user-agents. A page may be blocked for one bot but not another.
  • All processing happens in your browser. No data is sent to any server.
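
To make the precedence rules concrete, here is a minimal TypeScript sketch of the matching logic a validator like this typically implements. It is an illustration of the common (RFC 9309 / Google) interpretation, not this tool's actual source: * matches any run of characters, a trailing $ anchors the end of the URL, matching is case-sensitive, and among all matching rules the longest pattern wins, with Allow beating Disallow on a tie.

    type Rule = { type: "allow" | "disallow"; pattern: string };

    // Convert a robots.txt pattern to a RegExp: escape regex metacharacters,
    // then translate * to ".*" and honor a trailing $ as an end anchor.
    function patternToRegExp(pattern: string): RegExp {
      const anchored = pattern.endsWith("$");
      const body = (anchored ? pattern.slice(0, -1) : pattern)
        .replace(/[.+?^${}()|[\]\\]/g, "\\$&") // escape everything except *
        .replace(/\*/g, ".*");                 // robots.txt wildcard
      return new RegExp("^" + body + (anchored ? "$" : ""));
    }

    // Decide whether `path` is allowed: the longest matching pattern wins,
    // and Allow beats Disallow when the lengths tie. If nothing matches,
    // the path is allowed. (Empty Disallow values are assumed to have been
    // dropped at parse time, since they disallow nothing.)
    function isAllowed(rules: Rule[], path: string): { allowed: boolean; rule?: Rule } {
      let winner: Rule | undefined;
      for (const rule of rules) {
        if (!patternToRegExp(rule.pattern).test(path)) continue;
        if (
          !winner ||
          rule.pattern.length > winner.pattern.length ||
          (rule.pattern.length === winner.pattern.length && rule.type === "allow")
        ) {
          winner = rule;
        }
      }
      return { allowed: !winner || winner.type === "allow", rule: winner };
    }

Applied to the example from the first tip:

    const rules: Rule[] = [
      { type: "disallow", pattern: "/admin/" },
      { type: "allow", pattern: "/admin/css" },
    ];
    isAllowed(rules, "/admin/css/site.css").allowed; // true  - /admin/css is the longer match
    isAllowed(rules, "/admin/users").allowed;        // false - blocked by /admin/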
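
And because robots.txt only controls crawling, not indexing, keeping a page out of search results requires a noindex signal instead: either <meta name="robots" content="noindex"> in the HTML, or an X-Robots-Tag response header. A minimal Node sketch of the header approach (the /private/ path is illustrative):

    import { createServer } from "node:http";

    // Send a noindex signal via the X-Robots-Tag response header.
    createServer((req, res) => {
      if (req.url?.startsWith("/private/")) {
        res.setHeader("X-Robots-Tag", "noindex");
      }
      res.end("ok");
    }).listen(3000);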

About this tool

  1. Enter robots.txt content

    Paste your robots.txt rules or fetch them automatically by entering your domain URL.

  2. Add test URLs

    Enter one or more page URLs to check whether they would be allowed or blocked by the current rules.

  3. Select the user agent

    Choose which crawler to simulate (Googlebot, Bingbot, etc.), since rules may differ per agent (see the sketch after these steps).

  4. Review results

    See a clear allow or block verdict for each URL with the specific rule that matched.
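
As a rough sketch of the per-agent group selection behind step 3 (again illustrative, following the common interpretation that a crawler obeys the single group whose user-agent token most specifically matches its name, with * as the fallback):

    type Group = { agents: string[]; rules: string[] };

    // Pick the group a crawler should obey: the most specific (longest)
    // matching user-agent token wins; the "*" group is only a fallback.
    function selectGroup(groups: Group[], crawler: string): Group | undefined {
      const name = crawler.toLowerCase();
      let fallback: Group | undefined;
      let best: Group | undefined;
      let bestLen = 0;
      for (const group of groups) {
        for (const token of group.agents.map((a) => a.toLowerCase())) {
          if (token === "*") fallback ??= group;
          else if (name.startsWith(token) && token.length > bestLen) {
            best = group;
            bestLen = token.length;
          }
        }
      }
      return best ?? fallback;
    }

    // "Googlebot-Image" obeys a "Googlebot" group if one exists,
    // and falls back to the "*" group otherwise.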

  • Test URLs with and without trailing slashes: "Disallow: /blog" and "Disallow: /blog/" can match differently (see the snippet after this list).
  • Check your most important pages (homepage, category pages, product pages) to ensure they are not accidentally blocked.
  • Test with the wildcard (*) user agent as well as specific bots to catch rule conflicts.
  • Instant allow/block verdict for any URL against your robots.txt rules
  • Highlights the exact matching rule responsible for each decision
  • Multi-URL batch testing for quick site-wide validation
  • Support for wildcard patterns and $ end-of-URL anchors
  • Validate robots.txt changes before deploying to production
  • Debug why certain pages are not appearing in search results
  • Audit a client site for accidental crawl blocking during an SEO review
  • Verify that sensitive directories remain blocked after a site restructure
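
On the trailing-slash point above, the difference comes down to plain prefix matching. A two-line illustration:

    // robots.txt rules (without wildcards) are simple prefix matches.
    const blocks = (rule: string, path: string) => path.startsWith(rule);

    blocks("/blog", "/blogger"); // true  - broader than you may intend
    blocks("/blog/", "/blog");   // false - /blog itself stays crawlable
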
Why is every URL reported as blocked?

Check for a broad disallow rule like "Disallow: /" that blocks everything. Also check whether the rule applies to all user agents via the wildcard (*).
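
For example, this two-line file blocks every compliant crawler from the entire site:

    User-agent: *
    Disallow: /
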
Do I have to paste my robots.txt manually?

You can paste content manually or enter a domain to fetch the live file. Either way, validation runs locally in your browser.
