Now you need to decide which type of bot will crawl your site. There are four possible combinations: the mobile or desktop version of either SemrushBot or GoogleBot.
Then set the crawl delay. Decide between three options: "Minimum delay between pages," "Respect robots.txt," or "1 URL every 2 seconds."
Choose "Minimum delay" to allow the bot to crawl at its db telegram normal speed. For SemrushBot, this means it will wait about a second before starting to crawl the next page.
"Respect robots.txt" is ideal when your site's robots.txt file already specifies the crawl delay you want the bot to follow.
If you are concerned that our crawler will slow down your website, or you don't have a crawl-delay directive yet, you will probably want to choose "1 URL every 2 seconds." The audit may take longer, but your visitors' experience won't suffer while it runs.
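If you're not sure whether your robots.txt already sets a delay, it's worth a quick look: a crawl-delay rule is just two lines, along the lines of the example below (the bot name and two-second value here are purely illustrative):

User-agent: SemrushBot
Crawl-delay: 2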
Step 3: Enable/Disable URLs
This is where you can really customize your audit by deciding which subfolders you want crawled and which ones you don't.
To do this correctly, you need to enter everything that appears in the URL after the TLD. The subfolders you want crawled go in the box on the left:
And the ones you don't want crawled go in the box on the right:
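As a quick illustration (the folder names here are hypothetical, so substitute your own site's structure), you might put /blog/ and /products/ in the left box to make sure they're crawled, and /cart/ and /admin/ in the right box to skip them. Notice that each entry is just the part of the URL after the TLD, so /blog/ rather than www.example.com/blog/.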
Step 4: Remove URL parameters
This step helps us ensure that your crawl budget isn't wasted by crawling the same page twice. Simply list the URL parameters your site uses so they can be removed before crawling.
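For example (the parameter names here are only illustrative), if your site tags URLs with something like sessionid or utm_source, then www.example.com/shoes?sessionid=123 and www.example.com/shoes lead to the same content. Listing sessionid and utm_source in this step tells the crawler to treat them as one page instead of two.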
The next step is perfect when you need a small workaround. Say, for example, your website is still in pre-production or hidden behind basic login authentication. If you think that means we can't perform an audit for you, think again.