How to stop an inactive JIRA from writing to the database so much?

We run a small JIRA instance on a system with ZFS. While trying to figure out why our hourly snapshots had a pretty high cost on an effectively inactive system, we determined that JIRA is doing a LOT of writes to our postgres database. It does this all day, even though we aren't using JIRA at all.

Here is a short (< 10 minutes) capture with `iotop -a` showing how busy JIRA is on our system, when there is *zero* JIRA activity:

You can see that, in less than 10 minutes, an inactive JIRA accounted for over 20 MB of disk writes, the majority of them through postgres. That makes it by far the largest disk writer on our system, while also being one of the least used processes on it. For reference, we also run Confluence on the same system and it is very quiet.
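
As a cross-check on the iotop numbers, something like the following against Postgres's standard pg_stat_user_tables statistics view should show which tables the inserts/updates/deletes are landing in (a generic sketch, nothing JIRA-specific; the counters accumulate since the last stats reset, so sampling twice a few minutes apart and diffing gives a rate):

-- Busiest tables in the JIRA database by row churn
SELECT relname,
       n_tup_ins AS inserts,
       n_tup_upd AS updates,
       n_tup_del AS deletes
FROM pg_stat_user_tables
ORDER BY (n_tup_ins + n_tup_upd + n_tup_del) DESC
LIMIT 20;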

Trying to debug this, we found JIRA's "Database monitoring" tool and thought it might help us see what is happening. The charts it shows are below this message. You can see that there isn't much happening on the server, and it doesn't seem like it should account for the large volume of disk writes (again, while JIRA is not being used).

What is happening? How do we cut this database traffic down?

Thanks,

Russ

2 answers

Firstly, does the DB traffic cause any problem? If not, leave it alone.

JIRA has services that run periodically; it could be that one of those is hitting the DB steadily.

Enable SQL profiling in Admin, System, Logging & profiling and the log file will tell you what is running.

> Firstly, does the DB traffic cause any problem? If not, leave it alone.

The traffic does, unfortunately, cause us a problem. It is costing upwards of 500 MB/day of filesystem snapshot storage (and SSD write wear, I suppose).

> Enable SQL profiling in Admin, System, Logging & profiling and the log file will tell you what is running.

Thanks for the tip on the SQL logging. Here is a brief snippet of what it showed:

https://bpaste.net/raw/abe8d4438790

Basically it is a tonne of hits from their Caesium service runner. There are lots of tracebacks in there, but I'm not sure whether that is because they are errors or whether that logging mode just dumps them regardless.

Sifting through the cruft, we found that the database hits amount to a whole lot of sequences like this (https://bpaste.net/show/7e0bf3c98fac):

-- <snip>
SELECT DISTINCT ID FROM public.rundetails WHERE JOB_ID='JiraPluginScheduler:LEXO_RANK_STATS_COLLECTOR_JOB'
INSERT INTO public.rundetails (ID, JOB_ID, START_TIME, RUN_DURATION, RUN_OUTCOME, INFO_MESSAGE) VALUES ('27083874', 'JiraPluginScheduler:LEXO_RANK_STATS_COLLECTOR_JOB', '2017-07-25 09:48:47.52', '0', 'S', '')
DELETE FROM public.rundetails WHERE ID IN ('27083822')
SELECT ID, JOB_ID, JOB_RUNNER_KEY, SCHED_TYPE, INTERVAL_MILLIS, FIRST_RUN, CRON_EXPRESSION, TIME_ZONE, NEXT_RUN, VERSION, PARAMETERS FROM public.clusteredjob WHERE JOB_ID='0afa4454-b6e2-450b-8707-34bb38d133a3'
UPDATE public.clusteredjob SET VERSION = '477912' , NEXT_RUN = '1500990587561' WHERE JOB_ID='0afa4454-b6e2-450b-8707-34bb38d133a3' AND VERSION='477911'

SELECT DISTINCT ID FROM public.rundetails WHERE JOB_ID='0afa4454-b6e2-450b-8707-34bb38d133a3'
INSERT INTO public.rundetails (ID, JOB_ID, START_TIME, RUN_DURATION, RUN_OUTCOME, INFO_MESSAGE) VALUES ('27083875', '0afa4454-b6e2-450b-8707-34bb38d133a3', '2017-07-25 09:48:47.561', '6', 'S', '')
DELETE FROM public.rundetails WHERE ID IN ('27083823')
SELECT ID, JOB_ID, JOB_RUNNER_KEY, SCHED_TYPE, INTERVAL_MILLIS, FIRST_RUN, CRON_EXPRESSION, TIME_ZONE, NEXT_RUN, VERSION, PARAMETERS FROM public.clusteredjob WHERE JOB_ID='9e106d1a-a0a7-45be-ae85-b3793e7ef228'
UPDATE public.clusteredjob SET VERSION = '75463' , NEXT_RUN = '1500990588105' WHERE JOB_ID='9e106d1a-a0a7-45be-ae85-b3793e7ef228' AND VERSION='75462'

SELECT DISTINCT ID FROM public.rundetails WHERE JOB_ID='9e106d1a-a0a7-45be-ae85-b3793e7ef228'
INSERT INTO public.rundetails (ID, JOB_ID, START_TIME, RUN_DURATION, RUN_OUTCOME, INFO_MESSAGE) VALUES ('27083876', '9e106d1a-a0a7-45be-ae85-b3793e7ef228', '2017-07-25 09:48:48.105', '5', 'S', '')
DELETE FROM public.rundetails WHERE ID IN ('27083824')
SELECT ID, JOB_ID, JOB_RUNNER_KEY, SCHED_TYPE, INTERVAL_MILLIS, FIRST_RUN, CRON_EXPRESSION, TIME_ZONE, NEXT_RUN, VERSION, PARAMETERS FROM public.clusteredjob WHERE JOB_ID='com.thepluginpeople.jira.emq:PurgeAuditHistory'
UPDATE public.clusteredjob SET VERSION = '85876' , NEXT_RUN = '1500990648167' WHERE JOB_ID='com.thepluginpeople.jira.emq:PurgeAuditHistory' AND VERSION='85875'

SELECT DISTINCT ID FROM public.rundetails WHERE (JOB_ID='com.thepluginpeople.jira.emq:PurgeAuditHistory') AND (RUN_OUTCOME <> 'S' )
INSERT INTO public.rundetails (ID, JOB_ID, START_TIME, RUN_DURATION, RUN_OUTCOME, INFO_MESSAGE) VALUES ('27083877', 'com.thepluginpeople.jira.emq:PurgeAuditHistory', '2017-07-25 09:48:48.167', '4', 'U', 'Job runner key 'com.thepluginpeople.jira.emq:PurgeAuditHistory' is not registered')
DELETE FROM public.rundetails WHERE ID IN ('27083771')
SELECT ID, JOB_ID, JOB_RUNNER_KEY, SCHED_TYPE, INTERVAL_MILLIS, FIRST_RUN, CRON_EXPRESSION, TIME_ZONE, NEXT_RUN, VERSION, PARAMETERS FROM public.clusteredjob WHERE JOB_ID='1bc3f846-d912-4b6d-8804-9bf08c26c4f0'
UPDATE public.clusteredjob SET VERSION = '68758' , NEXT_RUN = '1500990588613' WHERE JOB_ID='1bc3f846-d912-4b6d-8804-9bf08c26c4f0' AND VERSION='68757'
-- <snip>


Note that each cycle inserts a rundetails record and immediately deletes an older one, over and over again! It seems like a bug, or a problem in our setup.
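
One way to see how much churn this pattern leaves behind on just those two tables is to look at their entries in pg_stat_user_tables (again a generic sketch, using the table names from the log above):

-- Insert/update/delete counters and dead-tuple counts for the two scheduler tables
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname IN ('rundetails', 'clusteredjob');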

Here's a look at a filtered list of `INSERT INTO` calls alone:

https://bpaste.net/raw/6bbbe6504939

This seems like pretty high-frequency traffic for an idle server. Is it normal? I'm hoping not, so that we can fix it.
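
For what it's worth, the rundetails table that shows up in the log can also be queried directly to see which jobs have been running and when; since the DELETEs above appear to prune older rows, this mostly reflects recent activity (column names taken from the INSERT statements above):

-- Recent run records per job, most recently run first
SELECT JOB_ID,
       COUNT(*)        AS records,
       MAX(START_TIME) AS last_run
FROM public.rundetails
GROUP BY JOB_ID
ORDER BY last_run DESC;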

Looks like it's the Lexorank balancer job, which is used for maintaining the rank (order) of issues, and the EMQ mail-related add-on. You can change the frequency of the latter, I think.

You can also go to System, Scheduler Administration (secure/admin/SchedulerAdmin.jspa) to see the list of jobs that are scheduled.

I would expect to see around a dozen jobs there.
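
If you would rather check from the database side than the UI, the clusteredjob table in your log holds each job's schedule; a sketch using the columns from the SELECTs above lists the most frequently scheduled jobs first (interval jobs carry INTERVAL_MILLIS, cron jobs carry CRON_EXPRESSION):

-- Scheduled jobs, shortest repeat interval first
SELECT JOB_ID,
       JOB_RUNNER_KEY,
       SCHED_TYPE,
       INTERVAL_MILLIS,
       CRON_EXPRESSION
FROM public.clusteredjob
ORDER BY INTERVAL_MILLIS ASC NULLS LAST;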

I agree, that second snippet shows unexpected DB activity. I'd file a ticket with Atlassian Support as well.

Hi Warren,

Let me know if you are still seeing problems with EMQ.

Thanks,

Mike
