Robots.txt is a text file that contains crawling instructions for a website. Sites use it to tell bots which parts of the site should be crawled and indexed, and which should not; common exclusions are pages with duplicate content or sections still under construction. Keep in mind that not all bots honor this standard. Malicious bots, such as malware detectors and email harvesters, ignore it, probe your site for security weaknesses, and may well begin scanning your site in exactly the areas you asked crawlers to avoid.
First, you'll be given the option of either allowing or refusing all web crawlers access to your website. This setting determines whether search engines such as Google are allowed to crawl your site at all.
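For reference, here is what those two choices look like in robots.txt syntax; the asterisk in the User-agent line matches every crawler:

```
# Allow all crawlers to access the entire site
User-agent: *
Disallow:

# Alternatively, block all crawlers from the entire site
User-agent: *
Disallow: /
```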
The second choice is whether to reference your XML sitemap file. Simply paste its full address into the field provided.
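The generator writes this as a Sitemap directive. Using example.com as a placeholder domain, the resulting line looks like this:

```
Sitemap: https://example.com/sitemap.xml
```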
Finally, you can prevent search engines from crawling specific pages or directories. This is typically done for pages that offer no useful information to Google or to users, such as login, cart, and URL-parameter pages.
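Using hypothetical paths purely for illustration, the generated rules for such pages might look like the following; note that wildcard patterns like the last line are honored by Google and Bing but are not part of the original standard:

```
User-agent: *
Disallow: /login/
Disallow: /cart/
# Wildcard rule for parameter pages; support varies by crawler
Disallow: /*?sort=
```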
When it's finished, save the text file to your computer and upload it to the root directory of your site. Crawlers look for the file in one place only, so it must be reachable at yourdomain.com/robots.txt.
A robots.txt file begins with a "User-agent" directive, and below it you may insert other directives such as "Allow", "Disallow", "Crawl-delay", and so on. Writing the file manually can take quite a long time: to omit a page, you must add a line reading "Disallow:" followed by the URL you don't want bots to access, and repeat this for every page. That may seem like all there is to the robots.txt file, but one incorrect line can prevent your pages from being indexed. So it's best to leave the chore to the experts and let our Robots.txt generator handle the file for you.
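To show how these directives fit together, here is a sketch of a complete file; example.com and the listed paths are placeholders, and note that Crawl-delay is honored by some crawlers (such as Bing) but ignored by Google:

```
User-agent: *
# Ask crawlers to wait 10 seconds between requests (not all obey)
Crawl-delay: 10
Allow: /blog/
Disallow: /login/
Disallow: /cart/

Sitemap: https://example.com/sitemap.xml
```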
If you don't have a sitemap yet, you can create one with our XML Sitemap Generator: https://theseotools.co.uk/xml-sitemap-generator