How to make a robots.txt that disallows 'label' paths

I have a robots.txt on my Confluence site and would like to stop crawlers from crawling the label pages. I've got the following Disallow lines in my robots.txt, but they don't appear to be working:

Disallow: /label/
Disallow: /labels/
Disallow: /*label*

I borrowed the last line from https://github.com/childnode/atlassian-confluence-config/blob/master/robots.confluence.txt - I thought Disallow didn't support regex (as far as I know the * wildcard isn't in the original robots.txt standard, though major crawlers honor it as an extension), but I threw it in there anyway.
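
In case it matters, Disallow lines only take effect inside a User-agent group, so the minimal shape I'm aiming for is roughly this (the catch-all group here is just for illustration):

# Minimal example - the catch-all User-agent group is an assumption
User-agent: *
Disallow: /label/
Disallow: /labels/
Disallow: /*label*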

Does anyone know a better path to disallow that would prevent the label pages from being crawled?

(By the way, I know it's not working because I have a crawler that is crawling the site. It's possible there is a bug in the crawler, but it is respecting all my other rules, so it seems unlikely that it would just arbitrarily ignore a valid rule.)
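
If anyone wants to double-check the prefix matching offline, here's a rough sketch using Python's standard-library urllib.robotparser. Note it implements the original spec and doesn't understand the * wildcard, so only the plain prefix rules are testable this way; example.com is a placeholder.

from urllib.robotparser import RobotFileParser

# Parse the rules directly instead of fetching them from a live site.
rules = """
User-agent: *
Disallow: /label/
Disallow: /labels/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Both label URLs should come back disallowed for any user agent...
print(rp.can_fetch("*", "https://example.com/label/mylabel"))      # False
print(rp.can_fetch("*", "https://example.com/labels/anything"))    # False
# ...while an ordinary page stays allowed.
print(rp.can_fetch("*", "https://example.com/display/DOCS/Home"))  # True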

Thanks in advance for any help.

1 answer


The robots.txt looks correct to me. I don't have a way to test it at the moment, but just wondering: have you restarted the application so that it picks up the new settings? Cheers
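
One quick sanity check, if you can run a few lines of Python, is to fetch the robots.txt the live site is actually serving and confirm your new rules are in it (example.com is a placeholder for your Confluence base URL):

from urllib.request import urlopen

# Fetch the robots.txt the running site is serving right now.
with urlopen("https://example.com/robots.txt") as resp:
    body = resp.read().decode("utf-8", errors="replace")

# If either of these prints False, the running app never picked up the edit.
print("Disallow: /label/" in body)
print("Disallow: /labels/" in body)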
