After a recent upgrade, our Bitbucket search started returning zero results. We are currently on v6.7.0.
I tried the following solution, but it still produces the same 403 error. I also tried resolution #2 in this solution, but I still get the same error.
The application is running under Docker with an NFS volume mounted on the host. It currently has 253 GB of free disk space.
When I tried resolution #2 from the second solution above, the "shared/search/data/bitbucket_search" directory was never recreated.
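For reference, this is roughly how I checked whether the directory came back and how much space is free. The Compose service name "app" and the home path /var/atlassian/application-data/bitbucket are assumptions based on my setup; adjust them for yours.

# Check whether the search index directory was recreated inside the container
# (service name "app" and the Bitbucket home path are assumptions; adjust as needed)
docker-compose exec app ls -la /var/atlassian/application-data/bitbucket/shared/search/data

# Confirm free space on the NFS-backed volume
docker-compose exec app df -h /var/atlassian/application-data/bitbucket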
Below is an excerpt from the logs. Any idea what might be causing this, or other solutions that might resolve it?
Thanks,
Weldon
app_1 | 2019-11-13 01:19:16,053 INFO [Caesium-1-3] c.a.b.i.s.i.jobs.StartupChecksJob Running startup jobs for search
app_1 | 2019-11-13 01:19:16,321 INFO [Caesium-1-3] c.a.b.i.s.i.u.DefaultUpgradeService Executing upgrade task:[Update path and filename fields for file search]
app_1 | 2019-11-13 01:19:17,462 INFO [Caesium-1-3] c.a.b.i.s.i.u.DefaultUpgradeService Successfully completed upgrade task:[Update path and filename fields for file search]
app_1 | 2019-11-13 01:19:17,505 ERROR [Caesium-1-3] c.a.b.i.s.i.IndexingSynchronizationService An error was encountered while checking or creating the mapping in Elasticsearch
app_1 | com.atlassian.bitbucket.internal.search.indexing.exceptions.IndexException: update index-version yielded 403 response code.
app_1 | at com.atlassian.bitbucket.internal.search.indexing.upgrade.DefaultIndexVersionService.lambda$execute$6(DefaultIndexVersionService.java:107)
app_1 | at io.atlassian.fugue.Either$Right.fold(Either.java:641)
app_1 | at com.atlassian.bitbucket.internal.search.indexing.upgrade.DefaultIndexVersionService.execute(DefaultIndexVersionService.java:102)
app_1 | at com.atlassian.bitbucket.internal.search.indexing.upgrade.DefaultIndexVersionService.setCurrentVersion(DefaultIndexVersionService.java:73)
app_1 | at com.atlassian.bitbucket.internal.search.indexing.upgrade.DefaultUpgradeService.upgradeVersion(DefaultUpgradeService.java:60)
app_1 | at com.atlassian.bitbucket.internal.search.indexing.upgrade.DefaultUpgradeService.upgrade(DefaultUpgradeService.java:52)
app_1 | at com.atlassian.bitbucket.internal.search.indexing.IndexingSynchronizationService.synchronizeMapping(IndexingSynchronizationService.java:112)
app_1 | at com.atlassian.bitbucket.internal.search.indexing.IndexingSynchronizationService.synchronizeStores(IndexingSynchronizationService.java:84)
app_1 | at com.atlassian.bitbucket.internal.search.indexing.jobs.StartupChecksJob.run(StartupChecksJob.java:80)
app_1 | at com.atlassian.bitbucket.internal.search.common.cluster.ClusterJobRunner.runJob(ClusterJobRunner.java:81)
app_1 | at com.atlassian.scheduler.core.JobLauncher.runJob(JobLauncher.java:134)
app_1 | at com.atlassian.scheduler.core.JobLauncher.launchAndBuildResponse(JobLauncher.java:106)
app_1 | at com.atlassian.scheduler.core.JobLauncher.launch(JobLauncher.java:90)
app_1 | at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.launchJob(CaesiumSchedulerService.java:435)
app_1 | at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJob(CaesiumSchedulerService.java:430)
app_1 | at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeClusteredJobWithRecoveryGuard(CaesiumSchedulerService.java:454)
app_1 | at com.atlassian.scheduler.caesium.impl.CaesiumSchedulerService.executeQueuedJob(CaesiumSchedulerService.java:382)
app_1 | at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeJob(SchedulerQueueWorker.java:66)
app_1 | at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.executeNextJob(SchedulerQueueWorker.java:60)
app_1 | at com.atlassian.scheduler.caesium.impl.SchedulerQueueWorker.run(SchedulerQueueWorker.java:35)
app_1 | at java.lang.Thread.run(Unknown Source)
I just got this as well. We use an external Elasticsearch setup where Bitbucket Data Center connects to Elasticsearch on a separate server cluster. I hit this error because I upgraded to Bitbucket 6.x, then downgraded to 5.x (which created its own problems with Elasticsearch), then got this error again when upgrading back to 6.x.
I found there was an Elasticsearch index called "bitbucket-index-version" that got left behind when downgrading, and I'm guessing it gets created when upgrading to 6.x. I believe the 403 I received is because you can't create indices with the same name as an existing one. The index showed as "close" when I listed the Elasticsearch indices: GET http://<elasticserverURL>:9200/_cat/indices
I then deleted this index by sending a DELETE to http://<elasticserverURL>:9200/<indexname>
The next time the update ran, it completed successfully.
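A rough sketch of the two calls, assuming an Elasticsearch instance reachable at <elasticserverURL>:9200 with no authentication; substitute your own host, and double-check the index name before deleting anything.

# List all indices with their status; look for bitbucket-index-version showing "close"
curl -X GET "http://<elasticserverURL>:9200/_cat/indices?v"

# Delete the stale index so Bitbucket can recreate it on its next startup check
curl -X DELETE "http://<elasticserverURL>:9200/bitbucket-index-version"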
WARNING: I found this issue in my non-prod environment. I recommend doing some research or contacting Atlassian support before modifying Elasticsearch indices, and take an Elasticsearch backup first.