Robots.txt is the common name of a text file that is placed in the root directory of a Web site. The robots.txt file is used to give instructions about the site to web robots and spiders (crawlers). The authors of web pages can use robots.txt so that cooperating robots that crawl the site do not access the entire site, or the parts of it that the authors want to keep private.
Robots.txt is a plain text file (not HTML) that tells search robots which pages you would like them not to visit. It does not force search engines to comply, but in general well-behaved search engines obey what they are asked. It is important to clarify that robots.txt is not a way to keep content out of reach (i.e. it is not a kind of password protection); putting up a robots.txt file is something like putting a note saying "please do not enter" on an open door.
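As a minimal sketch of what such instructions look like (the directory names here are placeholders, not part of any real site), a robots.txt file that asks all robots to stay out of two private directories while leaving the rest of the site open might read:

```
# Applies to all robots
User-agent: *
# Please do not crawl these directories
Disallow: /private/
Disallow: /tmp/
```

A robot that honors the file will skip any URL whose path begins with `/private/` or `/tmp/`; a robot that ignores the file can still fetch those pages, which is exactly why robots.txt is a request rather than protection.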
Location of robots.txt
The location of robots.txt is very important. It must be in the site's root directory, because that is the only place search engines look for it. If they don't find it there, they will simply assume that the site has no robots.txt file and will therefore index everything they find along the way.
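For example, using example.com as a placeholder domain, crawlers request the file only at the root of the host:

```
https://example.com/robots.txt         <- checked and obeyed by crawlers
https://example.com/pages/robots.txt   <- never requested, has no effect
```

A copy of the file placed anywhere other than the root is simply never fetched, so its rules are never applied.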