What is Robots.txt?

Robots.txt is a plain-text file placed at the root of a web server that gives instructions to search engine crawlers, the automated bots that visit and index web pages for search engines. A robots.txt file tells crawlers which parts of a site they may visit and which they should ignore.

Example


User-agent: *
Disallow: /admin/
Disallow: /private/

In this example, the robots.txt file instructs all user-agents (the asterisk is a wildcard matching every crawler) that they are not allowed to crawl the "/admin/" and "/private/" directories on the website. Keeping administrative pages and other low-value content out of crawl results can improve the site's search engine optimization (SEO). Note, however, that robots.txt is advisory: well-behaved crawlers honor it, but it does not block access, so it should never be relied on to protect sensitive data.
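You can check these rules programmatically. The sketch below uses Python's standard-library `urllib.robotparser` to parse the example rules above and test whether specific URLs may be crawled; the `example.com` URLs are placeholders.

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt rules from above.
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Paths under /admin/ are disallowed for all user-agents.
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False

# Other paths remain crawlable.
print(rp.can_fetch("*", "https://example.com/blog/post"))  # True
```

In production you would point the parser at a live site with `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()` instead of parsing an inline string.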
