I have a robots.txt on my Confluence site and would like to disallow crawlers from crawling the label pages. I've got the following Disallow lines in my robots.txt, but they don't appear to be working:
Disallow: /label/
Disallow: /labels/
Disallow: /*label*
I borrowed the last line from https://github.com/childnode/atlassian-confluence-config/blob/master/robots.confluence.txt - I didn't think Disallow supported wildcards, but I threw it in there anyway.
Does anyone know a better path to disallow that would prevent the label pages from being crawled?
(By the way, I know it's not working because I have a crawler that is crawling the site. It's possible there is a bug in the crawler, but it is respecting all my other rules, so it seems unlikely that it would arbitrarily ignore a valid rule.)
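One way to sanity-check which of these rules actually block a given URL is Python's standard `urllib.robotparser`. Note that it implements the original robots.txt convention, so it honors plain path-prefix rules like `/label/` but not wildcard patterns such as `/*label*` (wildcard support is a crawler-specific extension, which may be exactly why results vary between crawlers). A minimal sketch, with `example.com` standing in for your Confluence host:

```python
from urllib.robotparser import RobotFileParser

# Feed the rules in directly instead of fetching robots.txt over HTTP.
rules = """\
User-agent: *
Disallow: /label/
Disallow: /labels/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Prefix rules match any URL whose path starts with the given string.
print(rp.can_fetch("*", "https://example.com/label/foo"))     # blocked -> False
print(rp.can_fetch("*", "https://example.com/labels/bar"))    # blocked -> False
print(rp.can_fetch("*", "https://example.com/display/page"))  # allowed -> True
```

If the prefix rules check out here but your crawler still fetches label URLs, the crawler is likely hitting the labels through a path that doesn't start with `/label/` or `/labels/`, or it interprets rules differently from the original spec.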
Thanks in advance for any help.