How to make a robots.txt that disallows 'label' paths

I have a robots.txt on my Confluence site and would like to stop crawlers from crawling the label pages. I've got the following Disallow lines in my robots.txt, but they don't appear to be working:

Disallow: /label/
Disallow: /labels/
Disallow: /*label*

I borrowed the last line from https://github.com/childnode/atlassian-confluence-config/blob/master/robots.confluence.txt. I thought Disallow didn't support regex, but I threw it in anyway.
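
For reference, here is a minimal sketch of how these rules would sit in a complete robots.txt, assuming they should apply to every crawler. Two things worth noting: Disallow lines only take effect inside a User-agent group, and the * in the last rule is a wildcard (not regex) extension that Googlebot and Bingbot honor but the original robots.txt spec doesn't define, so other bots may ignore it:

User-agent: *
Disallow: /label/
Disallow: /labels/
Disallow: /*label*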

Does anyone know a better path to disallow that would prevent the label pages from being crawled?

(By the way, I know it's not working because I have a crawler that is crawling the site. It's possible there is a bug in the crawler, but it is respecting all my other rules, so it seems unlikely that it would arbitrarily ignore a valid one.)
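
One way to check the rules independently of any particular crawler is Python's standard urllib.robotparser. This is just a sketch; the site URL and label paths are hypothetical placeholders, and note that this parser implements the original spec, so it won't interpret the /*label* wildcard rule, only the literal /label/ and /labels/ prefixes:

from urllib import robotparser

# Point the parser at the live robots.txt (hypothetical URL).
rp = robotparser.RobotFileParser()
rp.set_url("https://confluence.example.com/robots.txt")
rp.read()

# can_fetch returns False if the path is disallowed for the agent.
# This parser follows the original 1994 spec: the literal prefixes
# /label/ and /labels/ match, but the /*label* wildcard is ignored.
print(rp.can_fetch("*", "https://confluence.example.com/label/some-label"))
print(rp.can_fetch("*", "https://confluence.example.com/labels/listlabels-alphaview.action"))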

Thanks in advance for any help.

1 answer

The robots.txt looks correct to me. I don't have a way to test it at the moment, but just wondering: have you restarted the application so that it picks up the new settings? Cheers
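
If a restart is in the picture, it may also be worth confirming what the server is actually serving before and after. A quick sketch with Python's standard library (hypothetical URL):

from urllib import request

# Fetch and print the live robots.txt so it can be compared
# against the edited file on disk.
with request.urlopen("https://confluence.example.com/robots.txt") as resp:
    print(resp.read().decode("utf-8"))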
