Email from an address that does not have permission to create issues never seems to be cleared.

david.stringer January 8, 2020

Good day knowledgeable people,

 

About every 5 seconds we get a log entry like this:

2020-01-08 11:03:10,762 Caesium-1-4 WARN anonymous    Add New Issue/Comment [c.a.mail.incoming.mailfetcherservice] Add New Issue/Comment[10100]: Reporter (alert@newrelic.com) does not have permission to create an issue. Message rejected.
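For reference, a rough way to count the rejections per sender from the log (the path assumes a default Jira home; adjust for your setup):

grep "does not have permission to create an issue" /var/atlassian/application-data/jira/log/atlassian-jira.log \
  | grep -oE '\([^)]+@[^)]+\)' | sort | uniq -c | sort -rn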

We did once have emails from that service going to JSD, but it was removed some time later. No emails from alert@newrelic.com have been received in that mailbox for many months at a minimum. That address is not the only one being WARNed about, just the main one.

I completely cleared the mailbox of everything to ensure it's not somehow picking up old emails.

The mailservice log shows other emails being processed correctly as expected.

It _feels_ like the details of these emails are stored in the database but never cleared. Poking around in the database without guidance looks risky, as it's rather complex.

 

Although the above is only a warning, we also often have JSD's garbage collector max out, and the service then requires a restart to be usable (error below). I am hoping these two items are related. -Xmx appears to be set to 2.5g.

"java.lang.OutOfMemoryError: GC overhead limit exceeded"

The actual thread that triggers this seems random.

The mailfetcherservice entries are the only consistent log lines I find near the GC crash, although that may just be because they're logged all the time (every 5 seconds).

 

I've googled for information and it's all about granting that email sender the right access, but I don't want it to have access; I'd like JSD to drop those emails and never try again, heh.

Really, though, I want to resolve the GC issue, as that's my main problem, but I'm having trouble finding enough information. Most of the advice seems to be to increase the heap size, but it looks like it's already 2.5g.

 

Any advice would really be appreciated as I'm fed up with restarting the service twice a day :)

 

Many thanks,

Dave

 

1 answer

Adrian Stephen
Atlassian Team
January 8, 2020

Hi @david.stringer,

 

How many issues have already been created in your Jira instance? This is just to get an idea of how much heap should be allocated. The best way to determine this is by checking the GC logs.

 

You may use the Jira Server sizing guide as a rough indication of how much heap, RAM, and CPU your Jira instance needs, but it will also depend on other factors, such as how many plugins you have installed and how memory-intensive those plugins' operations are.
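If you do end up needing more heap, it is set in setenv.sh in the Jira installation directory, along these lines (the values below are placeholders, not a sizing recommendation for your instance):

# <jira-install>/bin/setenv.sh -- placeholder values, not a recommendation
JVM_MINIMUM_MEMORY="2560m"
JVM_MAXIMUM_MEMORY="2560m"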

david.stringer January 8, 2020

We have ~25k tickets. From what I've read this is not considered large.

At this time I cannot just add -XX:+PrintGCCause etc., as there are some practical considerations, so it'll be a few days before I can get that set up. Hopefully that'll provide more useful information.
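For reference, this is roughly what I was planning to append to JVM_SUPPORT_RECOMMENDED_ARGS in setenv.sh once I get a window (Java 8 flag names; the log path is just a placeholder, and Java 9+ would use -Xlog:gc* instead):

JVM_SUPPORT_RECOMMENDED_ARGS="${JVM_SUPPORT_RECOMMENDED_ARGS} \
  -Xloggc:/path/to/jira-home/log/gc-%t.log \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCCause \
  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=20M"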

Sadly, due to the cost of licensing, we are running a Docker version of JSD, and the boss has added plugins galore, which I am fairly sure is making this more difficult. I'd much rather use Atlassian's cloud service.
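If the heap does turn out to need raising, I believe the official atlassian/jira-servicedesk image takes the values as environment variables rather than editing setenv.sh inside the container, something like the below (image tag, sizes, ports, and volume name are illustrative only):

docker run -d --name jira-sd \
  -e JVM_MINIMUM_MEMORY=4096m \
  -e JVM_MAXIMUM_MEMORY=4096m \
  -v jiraVolume:/var/atlassian/application-data/jira \
  -p 8080:8080 atlassian/jira-servicedesk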

david.stringer January 8, 2020

Also, thanks for the amazingly fast response :D

Adrian Stephen
Atlassian Team
January 8, 2020

Jira releases after version 7.4 should already have the GC logs automatically generated in the jira_install/logs directory. You will only need to drag one of the GC logs into the GCViewer tool.
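You can also point GCViewer at a log straight from the command line instead of dragging it in, for example (the jar version and log file name below are only illustrative):

java -jar gcviewer-1.36.jar jira_install/logs/atlassian-jira-gc-2020-01-08_11-00-00.log.0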

You're welcome 😊
