A robots.txt generator helps you create a correct robots.txt file that guides search engine crawlers. SWEDevTools: Prism generates robots.txt rules for a specific user-agent and supports allow/disallow paths, an optional crawl-delay, and an optional sitemap URL line. Use it to block admin areas, keep crawlers off staging sites, or document expected crawler behavior. Everything is generated locally in your browser as plain text.
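For instance, a generated file using all of these options (the paths and sitemap URL here are example values) might look like:

```text
User-agent: *
Disallow: /admin/
Allow: /admin/help/
Crawl-delay: 5
Sitemap: https://example.com/sitemap.xml
```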
robots.txt controls crawling, not indexing. A page blocked by robots.txt can still appear in search results if other sites link to it. To prevent indexing, use a noindex meta tag or an X-Robots-Tag response header where appropriate; note that a crawler must be able to fetch the page to see a noindex directive, so don't also block such pages in robots.txt.
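For example, a page can opt out of indexing either in its HTML or via an HTTP response header; both directives are standard and recognized by the major search engines:

```text
<!-- In the page's HTML head -->
<meta name="robots" content="noindex">

# Or as an HTTP response header
X-Robots-Tag: noindex
```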
Place it at the root of your site, e.g. https://example.com/robots.txt (not inside a subfolder). Crawlers request exactly /robots.txt on each host, including subdomains, so a file anywhere else is ignored.
Each crawler applies its own precedence logic. Major crawlers such as Googlebot use the most specific (longest) matching rule, so a more specific Allow can override a broader Disallow for a given bot.
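As a sketch (hypothetical paths), under longest-match precedence:

```text
User-agent: *
Disallow: /private/
Allow: /private/press/
```

Crawlers that use longest-match, such as Googlebot, will crawl /private/press/ while the rest of /private/ stays blocked; simpler parsers that apply rules in file order may behave differently, so test with the crawlers you care about.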
Use User-agent: * and Disallow: /. This blocks crawling site-wide, which is common for staging environments but not recommended for production.
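The complete file for this case is just:

```text
User-agent: *
Disallow: /
```

Remember this only asks well-behaved crawlers to stay away; a staging site that must remain private should also sit behind authentication.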
If you have a sitemap.xml, adding a Sitemap line is a good idea; it helps crawlers discover canonical URLs faster. The value must be an absolute URL.
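The Sitemap line can appear anywhere in the file, independent of any User-agent section, and multiple Sitemap lines are allowed (example URLs):

```text
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/news-sitemap.xml
```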
This generator focuses on one user-agent section at a time. If you need multiple, generate each section and combine them into one robots.txt file, separating the sections with blank lines.
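For example, two generated sections combined into one file (hypothetical rules):

```text
User-agent: Googlebot
Disallow: /search/

User-agent: *
Disallow: /admin/
Disallow: /search/
```

A crawler uses only the most specific User-agent group that matches it, falling back to *; groups don't inherit from each other, so repeat any shared rules (like /search/ above) in every group that needs them.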
Not uniformly. Some crawlers ignore Crawl-delay entirely (Googlebot, for example), while others such as Bingbot honor it. If you need real rate limiting, use server-side throttling and caching.
The tool generates plain text output; validate the result by requesting /robots.txt in your deployed environment and by testing it in your search engine's tooling, such as Google Search Console's robots.txt report.
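You can also run a quick local sanity check on generated rules with Python's standard-library robots.txt parser. The rules and URLs below are example values:

```python
# Smoke-test generated robots.txt rules with the stdlib parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /admin/help/
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "https://example.com/admin/help/page"))  # True
print(parser.can_fetch("*", "https://example.com/admin/"))           # False
print(parser.can_fetch("*", "https://example.com/blog/"))            # True
```

Note that Python's parser applies rules in file order (first match wins) rather than Google's longest-match precedence, which is why the Allow line comes first here; treat this as a convenient smoke test, not an exact emulation of any particular crawler.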
smalldev.tools is no longer available. Prism by SWEDevTools offers the same developer tools and more, with offline support, pipeline chaining, and completely free usage — no signup required.