Robots.txt


Robots.txt is a text file placed at the root of a website that tells search engine robots which pages and sections they are allowed to crawl. In effect, it controls whether a page can be crawled, and therefore whether it can be found and shown in organic (natural) search.
The robots.txt file, also known as the robots exclusion protocol or standard, is a simple text file that follows a specific syntax.

Each rule in a robots.txt file names a user agent (the crawler or robot it applies to) and uses Disallow and Allow directives to list the directories or files on the website that are excluded from or included in crawling.
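For example, a minimal robots.txt might look like the sketch below; the paths and sitemap URL are placeholders, not recommendations for any particular site:

    User-agent: *
    Disallow: /admin/
    Allow: /admin/public/

    User-agent: Googlebot-Image
    Disallow: /photos/

    Sitemap: https://www.example.com/sitemap.xml

Here, every crawler is kept out of /admin/ except its public subfolder, while Google’s image crawler is additionally kept out of /photos/.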

Here’s how it works in two simple phases:
1. Before crawling a site, the robot requests the robots.txt file from the root of the domain (for example, https://www.example.com/robots.txt).
2. The robot then applies the rules that match its user agent to decide which URLs it may fetch.

robots.txt can be incredibly useful if you want to keep crawlers away from large files or sections of your site that are not relevant to search results.
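As a quick sketch (the directory name and file pattern are hypothetical), blocking a downloads area and all PDF files for every crawler looks like this; the * and $ wildcards are supported by major crawlers such as Googlebot and Bingbot, though they are not part of the original standard:

    User-agent: *
    Disallow: /downloads/
    Disallow: /*.pdf$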
To use robots.txt effectively, you will need to:
1. Create a plain text file named robots.txt.
2. Add rules made up of a User-agent line followed by Disallow (and optionally Allow) directives.
3. Upload the file to the root of your domain so it is reachable at /robots.txt.
4. Test the file and keep it up to date as your site changes.
If you’re not sure whether your robots.txt is set up correctly, test it in Google Search Console and make sure you’re not accidentally blocking pages you want crawled and indexed.
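If you prefer to spot-check rules programmatically, Python’s standard-library urllib.robotparser can read a live robots.txt and answer allow/deny questions; this is a minimal sketch, and the domain, path, and user-agent string are placeholders:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the live robots.txt (placeholder domain)
    parser = RobotFileParser()
    parser.set_url("https://www.example.com/robots.txt")
    parser.read()

    # Ask whether a given crawler may fetch a specific URL
    allowed = parser.can_fetch("Googlebot", "https://www.example.com/admin/")
    print("Googlebot may crawl /admin/:", allowed)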