Afternoon,
I'm using the TOC macro in 4.3.7 Download (in Firefox), and I want to exclude all H2s. However, the only way to do this seems to be to take a degree in programming, i.e. the documentation tells you to look here: http://download.oracle.com/javase/1.5.0/docs/api/java/util/regex/Pattern.html
The contents of that page will mean nothing to the average user. So, two things: the documentation could really do with a plain-English example, and if anyone knows how to set this up without going back to university for three years, I'd be most grateful to know how it works. :)
Cheers.
In case anyone else is looking for an idea here, I found this thread particularly helpful: https://community.atlassian.com/t5/Confluence-questions/TOC-Macro-Exclude-two-headings-with-different-names/qaq-p/95702
Hi Mick,
You can exclude certain headings by specifying the Minimum Heading Level and the Maximum Heading Level in the macro configuration table.
As the field descriptions explain, these let you exclude any headings that fall outside that range, i.e. below the minimum or above the maximum level.
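For anyone who wants to see the idea spelled out, here is a rough sketch in Java of what a minimum/maximum level filter amounts to. This is illustrative only, not the macro's actual code, and all the names are made up; the point is simply that headings whose level falls outside the configured range are dropped from the table of contents.

import java.util.List;
import java.util.stream.Collectors;

public class LevelFilterSketch {

    // A heading is just its level (1 for H1, 2 for H2, ...) plus its text.
    record Heading(int level, String text) {}

    // Keep only headings whose level falls inside [minLevel, maxLevel].
    static List<Heading> filterByLevel(List<Heading> headings, int minLevel, int maxLevel) {
        return headings.stream()
                .filter(h -> h.level() >= minLevel && h.level() <= maxLevel)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Heading> page = List.of(
                new Heading(1, "Overview"),
                new Heading(2, "Details"),
                new Heading(3, "Edge cases"));

        // Minimum Heading Level = 1, Maximum Heading Level = 1:
        // only the H1 survives, so every H2 (and deeper) is excluded.
        System.out.println(filterByLevel(page, 1, 1));
    }
}

So for the original question (drop all H2s), setting the maximum level to 1 should do it without any regular expressions at all, assuming you don't also need H3s and deeper in the TOC.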
Regards,
Jing Hwa
Thanks.
Following on from this, I've been using the Exclude Headings field to exclude specific headings. However, it only works on the first heading you enter; if you add a second, separated by a comma, it is ignored. Is this expected behaviour?
The text under this field says:
If set, any headings that match the regular expression are ignored. Example: .*\.[1//2]
This makes no sense whatsoever. What is the use of this information, and what am I supposed to understand by it? For a start, why would any of those characters be in my headings? Or is the example something that has been mangled when rendered?
Cheers.
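For what it's worth, the field description points at java.util.regex, which suggests the whole box is read as a single regular expression rather than a comma-separated list. The community thread linked above reaches the same conclusion: join the headings with | (alternation) instead of a comma. Here is a sketch of that idea in Java; whether the macro does a full match (as shown here) or a partial match is an assumption on my part.

import java.util.List;
import java.util.regex.Pattern;

public class ExcludeTwoHeadingsSketch {
    public static void main(String[] args) {
        // One pattern, two alternatives: either heading text matches in full.
        Pattern exclude = Pattern.compile("First Heading|Second Heading");

        List<String> headings = List.of("First Heading", "Second Heading", "Keep Me");
        for (String h : headings) {
            boolean excluded = exclude.matcher(h).matches();
            System.out.println(h + " -> " + (excluded ? "excluded" : "kept"));
        }

        // By contrast, "First Heading,Second Heading" is one literal pattern
        // containing a comma, so it is unlikely to behave the way a
        // comma-separated list would.
    }
}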
Mick, I share your confusion. Here is where I am looking for some help.

Regular expressions: I have to say that I think the use of regular expressions could be powerful here, but I am not sure how to actually implement them. I copied the text into RegExHero and tried your example and a few of my own, but none of them had any effect. Could someone from Atlassian explain what the example demonstrates? What is .*\.[1//2] supposed to return? Is it supposed to select all level 1 and level 2 headers? Could an expert provide an example of selecting the correct headers, or update the help documentation with some sample text and then show how they would exclude or include it?

CSS classes or HTML heading levels: alternatively, it would seem more consistent with how the editor is used to make the TOC inclusion/exclusion work on the existing heading levels (<H2>, <H3>) that you already have us selecting, or on custom classes that could be applied through your templates and tools. Is there already a feature request for that?
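Not an official answer, but reading the documented example purely as a java.util.regex pattern: [1//2] is a character class containing the characters 1, / and 2 (the repeated / adds nothing), so .*\.[1//2] matches any heading that ends in a dot followed by 1, / or 2. Presumably it was meant to show how to drop numbered headings such as 2.1 or 3.2, and the stray slashes do look like something mangled in rendering; .*\.[12] would say the same thing more cleanly. Here is a quick check in Java of what the pattern, exactly as printed, matches.

import java.util.regex.Pattern;

public class DocExampleCheck {
    public static void main(String[] args) {
        // The example from the field description, taken literally.
        // [1//2] is a character class: it matches '1', '/' or '2'.
        Pattern docExample = Pattern.compile(".*\\.[1//2]");

        System.out.println(docExample.matcher("Release 2.1").matches());  // true  (ends in ".1")
        System.out.println(docExample.matcher("Release 2.3").matches());  // false (ends in ".3")
        System.out.println(docExample.matcher("Introduction").matches()); // false (no dot at all)
    }
}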
In any case, have a read of https://www.geany.org/manual/gtk/glib/glib-regex-syntax.html
In my case I needed a regex to exclude any paragraph that begins with a space, and the solution is:
(^[\s])
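To unpack that for anyone else reading: ^ anchors at the start of the text and [\s] is a single whitespace character, so the pattern flags any heading whose text begins with a space or tab. A small check in Java follows; note it uses find(), i.e. a partial match, since whether the macro requires the whole heading text to match is an assumption I can't confirm.

import java.util.regex.Pattern;

public class LeadingSpaceCheck {
    public static void main(String[] args) {
        // (^[\s]) : one whitespace character, anchored at the start of the heading.
        Pattern leadingSpace = Pattern.compile("(^[\\s])");

        System.out.println(leadingSpace.matcher(" Indented heading").find()); // true
        System.out.println(leadingSpace.matcher("Normal heading").find());    // false
    }
}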
Same problem here too ...
I simply need to exclude several headings and expected to enter them in the Exclude Headings field separated by commas.
I'm a documenter, not a programmer. Why is this simple task so ridiculously difficult?
Has anyone found a simple solution?
Thanks,
Darren.
Same problem here... has anyone figured this out?