Test your robots.txt rules with a single input: paste a domain or full URL and instantly see whether the page is crawlable.
A robots.txt file tells search engine crawlers which pages or files they can or can't request from your site. It's used mainly to avoid overloading your site with requests.
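You can reproduce the same crawlability check programmatically. Here is a minimal sketch using Python's standard-library `urllib.robotparser`; `example.com` and the tested path are placeholders for your own domain and URL.

```python
from urllib.robotparser import RobotFileParser

# Placeholder domain; substitute the site you want to test.
ROBOTS_URL = "https://example.com/robots.txt"

parser = RobotFileParser()
parser.set_url(ROBOTS_URL)
parser.read()  # fetches and parses the live robots.txt

# can_fetch() answers the tester's question:
# may this user-agent request this URL?
for agent in ("Googlebot", "GPTBot", "*"):
    allowed = parser.can_fetch(agent, "https://example.com/private/page.html")
    print(f"{agent}: {'crawlable' if allowed else 'blocked'}")
```

Note that `can_fetch()` evaluates the rules for the specific user-agent string you pass, falling back to the `User-agent: *` group when no more specific group matches.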
Many AI crawlers respect robots.txt directives. Keep your rules explicit and avoid non-standard patterns so both search engines and LLM crawlers can interpret your policy correctly.
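As an illustration of explicit, standards-compliant rules, here is a sketch of a robots.txt file; the paths and sitemap URL are placeholders, while Googlebot and GPTBot (OpenAI's crawler) are real user-agent tokens.

```
# One group per crawler keeps the policy unambiguous.
User-agent: Googlebot
Disallow: /private/

User-agent: GPTBot
Disallow: /drafts/

# Fallback group for all other crawlers.
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Each `User-agent` group stands on its own: a crawler uses the most specific group that matches its name and ignores the rest, so explicit per-crawler groups leave less room for divergent interpretations than complex wildcard patterns.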