Holes in Database connection pools chart

Marcin Wroniak May 18, 2023

Hi all,

We are experiencing strange behavior in our Jira. The overall performance is unstable and often very slow. Looking at our Database connection pools chart, it looks as if there are many interruptions. I'm attaching a screenshot of it.

Has anyone experienced similar behavior and can advise what's the problem and how to find the root cause?

We've already spent a lot of time investigating the db, logs and support.zip, and have explored many options, but haven't found the solution.

I would greatly appreciate your help.

[Screenshot attachment: 2023-05-18 10_08_13-Database monitoring.png]

1 answer

Answer accepted

Radek Dostál
Rising Star
May 18, 2023

There are too many components at play, so looking at just one thing doesn't always tell you whether it is or isn't the problem - it's interlaced with everything else.

In general it's a good idea to raise a ticket with Atlassian support and send them the support zip along with thread dumps taken during the bad performance (https://confluence.atlassian.com/adminjiraserver/generating-a-thread-dump-938847731.html) as well as during good performance (useful for comparing the two).
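For what it's worth, here is a minimal sketch of how you could script taking several thread dumps a few seconds apart - this assumes the JDK's jstack is on the PATH and that you've looked up the PID of the Jira JVM (the Atlassian page above describes the supported ways of generating dumps as well):

import datetime
import subprocess
import time

JIRA_PID = "12345"      # hypothetical PID of the Jira JVM - replace with yours
NUM_DUMPS = 6           # a handful of dumps gives a better picture than one
INTERVAL_SECONDS = 10   # spacing between dumps

for _ in range(NUM_DUMPS):
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    # Capture a full thread dump with lock information (-l) from the running JVM.
    result = subprocess.run(["jstack", "-l", JIRA_PID], capture_output=True, text=True)
    with open(f"threaddump_{stamp}.txt", "w") as f:
        f.write(result.stdout)
    time.sleep(INTERVAL_SECONDS)

A set of dumps like this, taken both while things are slow and while they are fine, is exactly the kind of data support (and the analyzers below) can work with.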

They have plenty of tools to analyse the data, plus good overall knowledge, since they know their app and see performance data from many different environments.

 

Just going by the screenshot though, this doesn't look like a problem with the database, but then again that depends on the nature of the performance degradation, the timings, whether it's anything specific or just generally everything, and so forth. Sure, there may be a few spikes there, but that doesn't mean the db isn't capable of handling them (which again requires looking at how the database server itself is doing during the spikes).
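If you do want to sanity-check the database side during one of those spikes, here is a minimal sketch - assuming a PostgreSQL backend and the psycopg2 driver, with hypothetical connection details - that simply counts connections by state from pg_stat_activity:

import psycopg2

# Hypothetical connection details - replace with your actual Jira database settings.
conn = psycopg2.connect(host="db-host", dbname="jiradb", user="jira", password="secret")
with conn, conn.cursor() as cur:
    # One row per connection state (active, idle, idle in transaction, ...).
    cur.execute("SELECT state, count(*) FROM pg_stat_activity GROUP BY state")
    for state, count in cur.fetchall():
        print(state, count)
conn.close()

If nothing there looks saturated while Jira is slow, that's another hint that the JVM side is the better place to dig.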

Rather than the db, which is typically not the cause, look at the thread dumps / JVM - that's where you would typically find resource-heavy activity to single out as a suspect and look into further.

 

A couple of tools I'm familiar with / easy to use on the internet:

fastthread.io (thread analyzer)

gceasy.io (gc analyzer)

heaphero.io (heap dump analyzer)

https://guardiaivs.bitbucket.io/perflog - atlassian analyzer for support zips

https://drauf.github.io/watson - atlassian analyzer for thread dumps

 

There are a couple more, but those are more in the form of scripts, such as for cache statistics or access logs or whatever really, so I can't include those here.

All of them do something different, and all of them are likely to point at something, but again that doesn't mean that particular something is the cause, because you need to think about all of the relations, and there are many.

 

In general, if you have a valid license, get in touch with Atlassian - they have plenty of automated checks to identify anything obvious, and plenty of tooling to analyze the data and suggest the cause.

DEPLOYMENT TYPE: SERVER
VERSION: 9.4.2