We are running Confluence Data Center version 9.2.17 with two cluster nodes.
The Content Index queue continuously grows to hundreds of thousands and even millions of items, while the Change Queue remains idle.
As a result, search results are delayed or missing, Page Properties / Page Properties Report macros do not return results, and newly created pages and spaces cannot be found via search.
A full site reindex temporarily resolves the issue, but the Content Index queue starts growing again after some time.
We suspect a problem with incremental indexing in our Data Center environment.
Hello and welcome, @Ladislav Turic!
A full site reindex is probably only hiding the symptom.
A short backlog can happen, yes. But if the Content Index queue keeps growing into the hundreds of thousands or even millions, and search, newly created spaces/pages, and the Page Properties Report only become correct again after a full site reindex, I would not treat that as a harmless spike.
That points much more to incremental indexing not running properly. And since the Page Properties Report macro depends on CQL/search, it is expected to break as well once the index falls behind.
In a 2-node Data Center environment, start with these checks:
Scheduled Jobs on both nodes
Go to General Configuration → Scheduled Jobs and verify that Flush Content Index Queue and Flush Change Index Queue are enabled, running, and showing recent successful executions. Those are per-node jobs and should run every minute, so one bad node can already cause this kind of ongoing lag.
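Beyond eyeballing the Scheduled Jobs screen, it can help to check the "Last Execution" times against the expected one-minute cadence in a script. A minimal sketch (the job names are the ones from the Scheduled Jobs screen; the helper itself is hypothetical, for your own monitoring):

```shell
# Warn when a per-node flush job has not run recently.
check_job_age() {
  job="$1"; last_run="$2"; max_age="${3:-300}"   # 5 min of slack by default
  now=$(date +%s)
  age=$(( now - last_run ))
  if [ "$age" -gt "$max_age" ]; then
    echo "STALE: $job last ran ${age}s ago (threshold ${max_age}s)"
    return 1
  fi
  echo "OK: $job last ran ${age}s ago"
}

# Run on each node, feeding in the job's last-run time as epoch seconds, e.g.:
# check_job_age "Flush Content Index Queue" "$(date -d '2025-01-01 12:00' +%s)"
# check_job_age "Flush Change Index Queue"  "$(date -d '2025-01-01 12:00' +%s)"
```

If one node reports STALE while the other is fine, that already explains an ever-growing queue on a 2-node cluster.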
Time sync / NTP on both nodes
In DC, clock drift is a real thing and Atlassian has a KB around index flush jobs not behaving correctly when system time is out of sync. In a cluster, that is one of the first things I would rule out.
Index flush job does not always run due to time synchronization | Confluence | Atlassian Support
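To quantify drift quickly, sample both nodes' clocks from one place and compare. A minimal sketch, assuming hypothetical hostnames and passwordless ssh; whether you have `timedatectl`/`chronyc` depends on your distro:

```shell
# Absolute difference between two epoch-second samples.
drift_seconds() {
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  echo "$d"
}

# Sample both nodes as close together as possible:
# t1=$(ssh confluence-node1 date -u +%s)
# t2=$(ssh confluence-node2 date -u +%s)
# drift_seconds "$t1" "$t2"   # anything beyond a second or two deserves a look

# And on each node, confirm NTP is actually in sync:
# timedatectl show --property=NTPSynchronized   # expect NTPSynchronized=yes
# chronyc tracking | grep 'System time'         # offset from the NTP source
```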
Logs for actual indexing failures
I would not only watch queue size. I would also enable the indexing debug packages Atlassian recommends and review the logs on both nodes. That is usually where you can tell whether indexing is just slow, or whether one specific task or extractor is blocking progress.
Enabling Debug classes for Indexing Troubleshooting | Confluence | Atlassian Support
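Once the debug packages from that KB are enabled, a quick grep over both nodes' logs is usually enough to spot a stuck or failing task. The grep patterns below are assumptions, not an exhaustive list; adjust them to what you actually see in your logs:

```shell
# List indexing-related failures in a Confluence application log.
# Patterns are a starting point only -- tune to your own log output.
scan_index_errors() {
  grep -inE 'index(ing)?.*(fail|error|exception)|lucene' "$1"
}

# Typical use, on each node:
# scan_index_errors /var/atlassian/application-data/confluence/logs/atlassian-confluence.log | tail -n 50
```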
Attachment extraction issues
Also worth checking, especially if the backlog starts growing again after uploads. Attachment text extraction problems can absolutely create the pattern of “full reindex helps, then it falls behind again.”
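A quick way to hunt for extraction suspects is to list the largest files under the attachments directory, since oversized or exotic documents are a common trigger for slow or failing text extraction. A sketch, assuming the default Confluence home layout (adjust the path for your install; `-printf` needs GNU find):

```shell
# List the biggest files under a directory, largest first.
largest_files() {
  # $1: directory, $2: minimum size in find(1) syntax, e.g. +100M
  find "$1" -type f -size "$2" -printf '%s\t%p\n' | sort -rn | head -n 20
}

# e.g. largest_files /var/atlassian/application-data/confluence/attachments +100M
```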
I would also keep cluster/storage health in mind. Atlassian’s monitoring metrics for queue size, processed vs added items, and node connection state can help confirm whether Confluence is simply receiving more work than it can process, or whether one node is the real problem.
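The "more work than it can process" question comes down to simple arithmetic on two samples of the queue size. However you obtain the number (the monitoring metrics above, JMX, or the admin UI), the rate calculation is just:

```shell
# Net queue growth per minute from two samples taken some seconds apart.
queue_growth_per_min() {
  # $1: first sample, $2: second sample, $3: seconds between samples
  echo $(( ($2 - $1) * 60 / $3 ))
}

# A sustained positive rate means the node never catches up:
# queue_growth_per_min 120000 150000 600   # -> 3000 items/min of net growth
```

If the rate only goes positive on one node, or only during business hours, that narrows the problem down considerably.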