2 Simple Ways to Create a robots.txt File

One of the most important and most often overlooked aspects of website optimization is the robots.txt file.

The robots.txt file is a plain text file designed to instruct web robots (most commonly search engine crawlers) on how to crawl the pages on your website.

By using the robots.txt file, you can tell search engines which pages you want them to crawl and which pages you don’t, which will help you get more traffic from search engines.

What is a robots.txt file in SEO

Robots.txt is a text file that webmasters use to tell web robots (search engine crawlers) how to crawl the pages of a website. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that govern how robots crawl the web, access and index material, and serve that content to people. The REP also contains directives such as meta robots, as well as instructions for how search engines should treat links (such as rules to “Allow” or “Disallow” crawling) on a page, subdirectory, or site-wide basis.

You can see the robots.txt format of my website below:

Image showing the sample format of a robots.txt file.
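In plain text, a typical robots.txt for a WordPress site looks like this (a sketch; the sitemap URL is a placeholder you would replace with your own):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://example.com/sitemap.xml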

Each disallow or allow rule in a robots.txt file applies only to the user agent(s) named in that line break-separated group. If the file contains groups that match more than one user agent, a crawler will only pay attention to (and follow the directives in) the most specific group of instructions.
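For example, consider this sketch (Googlebot is a real user agent; the paths are made up for illustration). Googlebot matches the second, more specific group, so it follows only those rules and ignores the first group entirely:

User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /drafts/

In this file, Googlebot may still crawl /private/, because the group it obeys does not disallow that path.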

How Does the robots.txt File Work

To understand how a Robots.txt file works, you must first understand how search engines crawl and index a website.

Search engines crawl the sites published on the Internet by going from one site to another and following links. They visit billions of links and websites in this process, which is commonly referred to as “spidering.”

How web spiders check your robots.txt file:

  • When search engine bots visit a website, the first thing they look for is the robots.txt file.
  • Search engines such as Google and Bing examine a website’s robots.txt file on a regular basis to determine the instructions for crawling it. These instructions are called “directives.” Because the robots.txt file specifies how search engine bots should crawl the site, it directs all further crawler activity on the site.
  • A user-agent is the name by which a search engine identifies itself in the robots.txt file. If a website’s robots.txt file contains no directive restricting a user agent from crawling (or if there is no robots.txt file at all), that user agent can go on to crawl the other information and webpages on the site.
  • The .txt extension is used because a robots.txt file is simply a plain text file; there is no HTML markup in it. (See the annotated example after this list.)
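Putting those points together, here is a minimal annotated file (the /cgi-bin/ path is just an illustrative placeholder; crawlers ignore comment lines starting with #):

# Crawlers request this file from the site root
# (e.g. https://example.com/robots.txt) before crawling other pages.
User-agent: *
Disallow: /cgi-bin/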

How to Create a robots.txt File

Because this is a text file, you can create a robots.txt file using practically any text editor, including Notepad, TextEdit, vi, and emacs.

Robots.txt should not be created with a word processor, because a word processor saves the file in its own format (such as Word’s) rather than plain text.

  • Open Notepad now and add these two lines:

User-agent: *
Disallow:

The term ‘user-agent’ refers to robots or search engine spiders. The asterisk (*) shows that this line applies to all spiders. The Disallow line lists no files or folders, meaning that every directory on your site may be visited. This is the simplest possible robots.txt file.

1. If you want to block search engine spiders from crawling your entire site, then write these two lines:

User-agent: *
Disallow: /

If you put a / after Disallow, this rule tells all search engines not to crawl any part of your site.
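A related sketch, in case you want to block only one specific crawler while leaving everyone else unrestricted (Bingbot is Bing's real user agent; the choice of bot is just an example):

# Block only Bing's crawler from the whole site
User-agent: Bingbot
Disallow: /

# All other crawlers may visit everything
User-agent: *
Disallow: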

2. If you want to block certain pages and areas of your site, then write this:

User-agent: *
Disallow: /category/
Disallow: /feeds/
Disallow: /database/

3. The normal robots.txt format that you need to write for your website is:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

admin-ajax.php: All of the code for routing Ajax requests in WordPress is contained in the admin-ajax.php file. Its main goal is to use Ajax to establish a connection between the client and the server. WordPress uses it to update the contents of a page without reloading it, making pages more dynamic and interactive for users.
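Annotated with comments (which crawlers ignore), the same three lines read like this:

# Applies to every crawler
User-agent: *
# Keep bots out of the WordPress admin area
Disallow: /wp-admin/
# ...but allow the Ajax endpoint that front-end features rely on
Allow: /wp-admin/admin-ajax.php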

Now save your Notepad file, but remember to save it with the exact name robots.txt. Do not make any spelling mistakes and do not use capital letters in the filename.

Image showing how to create a robots.txt file using Notepad.

Go to the root directory of your website (in your hosting server’s File Manager) and upload this robots.txt file.

Now check the URL https://example.com/robots.txt in your browser (using your own domain) and you will see the robots.txt file you successfully created and uploaded.

Create robots.txt File Using Yoast SEO plugin

If you are using the Yoast SEO plugin on a WordPress site, then you can easily create a robots.txt file for your site.

Just go to your WordPress Dashboard, click on SEO >> Tools >> File editor, and you will see the robots.txt file option.

Image showing how to open the file editor.

Now click on Create robots.txt. You don’t need to write anything here, because the Yoast plugin automatically writes the rules for you and adds the file to your root directory.

Image showing how to create a robots.txt file using the Yoast SEO plugin.

Now check your file at https://example.com/robots.txt (with your own domain); you’re all done here.

Why the robots.txt File is Important

A robots.txt file is valuable for any website, no matter how small or large it is. Every search engine’s resources are limited, and the robots.txt file prevents the search engine’s crawl budget from being squandered on crawling superfluous webpages.

This gives you more control over how search engine spiders or bots navigate your website. A single incorrect disallow instruction in your robots.txt file can block Googlebot from crawling your entire website. In other words, the file has the potential to make or break a website’s SEO.

Handled correctly, however, the robots.txt file serves significant purposes on a website, as detailed below:

  • The robots.txt file is critical for blocking non-public pages from being crawled. The login page, for example, is not for everyone, and you can stop crawlers from visiting it.
  • The robots.txt file on your website prevents superfluous files (such as images, videos, and PDFs) from being crawled.
  • This keeps the web server from becoming overloaded.
  • This saves the search engine’s crawl budget from being squandered.
  • It prevents duplicate content from being crawled on the website, so duplicate content does not surface in SERPs.
  • A robots.txt file can also prohibit crawling of internal search results pages.
  • You can use it to keep the sections or parts of your website that you don’t want crawled by search engines private, and it helps keep sensitive areas of the website from being accessed by crawlers. (A combined example follows this list.)
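As a sketch covering several of those points at once, the following file blocks a login page, internal search results, and printer-friendly duplicates (the paths and query patterns are made-up, WordPress-style examples; wildcards like * are supported by major crawlers such as Google and Bing):

User-agent: *
# Non-public page: the login screen
Disallow: /wp-login.php
# Internal search results pages (WordPress uses the ?s= query parameter)
Disallow: /?s=
# Printer-friendly duplicates of normal pages
Disallow: /*?print=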

By the way, many websites do not need a robots.txt file, because Google does not automatically index unimportant pages or duplicate versions of other URLs.

In general, Google indexes just the most essential web pages.

Types of User-agent in robots.txt:

Each search engine has its own user-agent name in the robots.txt file. Google’s crawl robot, for example, is known by the user agent Googlebot.

Bingbot refers to the equivalent robot from Microsoft’s Bing search engine. The following are some of the most prevalent user agents:

Google:

  • Googlebot
  • Googlebot Smartphone (for mobile)
  • Googlebot-Image (for images)
  • Googlebot-News (for news)
  • Googlebot-Video (for video)

Bing:

  • Bingbot
  • MSNBot-Media (for images and video)

Baidu:

  • Baiduspider

Any directives that appear between two user-agent lines in a robots.txt file apply only to the first (preceding) user-agent. Directives might apply to a single user-agent or to all user-agents; when they are meant for all user-agents, this is stated with the wildcard: User-agent: *
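Here is a sketch combining several of the user agents above in one file (the blocked paths are placeholders):

# Keep Google's image crawler out of a private photo directory
User-agent: Googlebot-Image
Disallow: /private-photos/

# Give Bing's crawler one extra restriction
User-agent: Bingbot
Disallow: /beta/

# Every other crawler may crawl the whole site
User-agent: *
Disallow: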

Frequently Asked Questions

  1. What is robots.txt used for?

    A robots.txt file tells search engine crawlers which URLs on your site they can access. This is mostly intended to prevent your site from becoming overburdened with requests; it is not a mechanism for keeping a web page out of Google. To keep a page out of Google, block indexing with noindex or password-protect the page.

  2. Do you need a robots.txt file?

    A website does not require a robots.txt file. If a bot visits your website and you don’t have one, it will crawl and index your pages as it normally would. A robots.txt file is only needed if you want additional control over the crawling process.

  3. What is Disallow in robots.txt?

    Disallow is a directive in the robots.txt file that tells crawlers “do not access this.” With it, you can restrict search engines’ access to specific files, pages, or areas of your website. The Disallow directive is followed by the path that should not be visited, as in the example below.
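For instance (the path is a made-up example; Disallow matches by prefix, so subpaths are blocked too):

User-agent: *
# Blocks /members-only/ and everything under it, such as /members-only/archive.html
Disallow: /members-only/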

Conclusion:

I’m always excited to share little-known SEO “hacks” that can help you in a variety of ways.

You’re not simply improving your own SEO by properly configuring your robots.txt file. You are also helping your visitors.

If search engine bots can allocate their crawl budgets carefully, they’ll organize and display your content in the SERPs as effectively as possible, making you more visible.
