When the Time to SLA Plugin (https://marketplace.atlassian.com/plugins/plugin.tts) is active, logging work on issues with a large number of worklog entries is no longer possible. The "Log Work" window does not disappear after pressing "Log". Pressing "Cancel" after a few minutes closes the window, but no entry appears in the calendar. Shortly afterwards, an error message appears: "Internal Server Error".
If I log work on the same issue via JIRA (More → Log Work), I get the message: "The call to the JIRA server did not complete within the timeout period. We are unsure of this operation. Close this dialog and press refresh in your browser."
If the Time to SLA Plugin is deactivated, these errors do not occur. Could you please test this on your system and fix this issue?
Just installed the Time to SLA Plugin and reproduced exactly the same behavior for an issue that has about 1,100 worklog entries.
The problem is caused by the Time to SLA Plugin rather than by the Work Calendar plugin. For time logging, Work Calendar relies on the JIRA REST API, and Time to SLA "kills" worklog reporting for issues that match defined rules and have many worklog entries.
To be sure, I uninstalled Work Calendar and tried to report time for an issue with 1,100+ worklogs through JIRA (More → Log Work), and it hung my instance.
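For anyone who wants to reproduce this outside the UI, the same worklog call can be issued directly against JIRA's REST API (`POST /rest/api/2/issue/{issueKey}/worklog`). This is a minimal sketch only; the base URL, credentials, and issue key below are placeholders (nothing from this thread), and the actual request is left commented out so you can set a client-side timeout deliberately:

```python
# Minimal sketch: create a worklog entry via the JIRA REST API.
# BASE_URL, AUTH, and ISSUE_KEY are hypothetical placeholders.
import base64
import json
import urllib.request

BASE_URL = "https://jira.example.com"   # placeholder JIRA base URL
ISSUE_KEY = "PROJ-123"                  # placeholder issue with 1,100+ worklogs
AUTH = base64.b64encode(b"user:password").decode()  # placeholder credentials

payload = {
    "timeSpent": "1h",
    "comment": "Testing worklog creation on an issue with many worklog entries",
}

request = urllib.request.Request(
    f"{BASE_URL}/rest/api/2/issue/{ISSUE_KEY}/worklog",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {AUTH}",
    },
    method="POST",
)

# Sending with a short timeout surfaces the hang as an exception
# instead of leaving the browser dialog stuck:
# urllib.request.urlopen(request, timeout=30)
```

With Time to SLA active on an affected issue, this call is what times out; with the plugin deactivated, it returns promptly.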
Time to SLA team, could you please check this?
Hi Barbara, Volodymyr,
I am very sorry that I did not notice this question earlier.
Are you using the latest version of TTS? TTS had performance issues, and I think you will not face the same problem with the latest version. There are also further improvements so that JIRA issue transitions are no longer badly affected.
Hi all, I am watching this thread and also working on reproducing the problem. I tried with an issue that has 1K worklog entries, but I could not reproduce any performance problems (such as operations taking 20-40 seconds). I will be diving deeper into this next week. Thanks
Hi @Tuncay Senturk [Snapbytes]! We have been using JIRA 6.4.12 with TTS 5.6.0 and Work Calendar 2.5.6 in production for a couple of weeks now and performance is very good! I suppose the longer response times at the beginning must have been just initial cache issues. Well done, thank you, case closed :)