We do a lot of internal logging for performance monitoring. We don't have much data from which to determine times that would indicate a problem in progress, so what should we be looking for?
We have "System A," which can access arbitrary URLs and measure response time, and we can also analyze all of the access logs to see actual response times for all requests of a certain type. The ones we're measuring now (with entirely arbitrary threshold guesses) are:
So my questions are: are those reasonable endpoints to check (for instance, I question the value of BrowseProjects.jspa)? Are there others we should be checking? And how about the times we've chosen?
The biggest thing I would ask is: what are you actually measuring as response time? Is it the time to get a valid HTTP response? First response? Last response? When the page finishes rendering client-side? Is your test system actually running the JS as well?
We run Apache as a proxy in front of Tomcat, so we're presently analyzing the duration of requests at that layer. For this purpose, we're less interested in browser performance and more interested in server performance. For instance, recent log analysis showed us just how hard we were getting hammered by https://jira.atlassian.com/browse/GHS-8775.
I think if you are capturing all request durations at the proxy level, then the best approach is to run an analysis based on frequency of requests multiplied by their total duration. A request that takes 3 minutes to return but happens once a month is not as severe as a request that happens 100 times a day and takes 10 seconds. Rather than asking what endpoints to monitor (which will vary from system to system), this kind of analysis will show you which endpoints matter for your system, and allow you to track their performance over time.
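As a rough sketch of that count-times-duration analysis: the script below assumes your Apache access log has the request duration appended as the last field (e.g. via `%D`, which logs microseconds) — the log format and field positions are assumptions you'd adjust to match your own `LogFormat` directive.

```python
import re
from collections import defaultdict

# Assumes a combined-log-style line with %D (duration in microseconds)
# appended as the final field, e.g.:
# 1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET /browse/ABC-1 HTTP/1.1" 200 512 2000000
LINE_RE = re.compile(
    r'"(?:GET|POST|PUT|DELETE)\s+(\S+)\s+HTTP/[\d.]+"\s+\d+\s+\S+\s+(\d+)$'
)

def aggregate(lines):
    """Return {path: [request_count, total_seconds]}, keyed by path
    with the query string stripped so variants of an endpoint group together."""
    totals = defaultdict(lambda: [0, 0.0])
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        path = m.group(1).split("?")[0]        # drop query string
        micros = int(m.group(2))
        totals[path][0] += 1
        totals[path][1] += micros / 1_000_000  # microseconds -> seconds
    return totals

def top_offenders(totals, n=10):
    """Rank endpoints by count * total duration: aggregate impact,
    not worst single request."""
    return sorted(
        totals.items(),
        key=lambda kv: kv[1][0] * kv[1][1],
        reverse=True,
    )[:n]
```

Running `top_offenders(aggregate(open("access.log")))` would then surface the endpoints where frequency and slowness combine, which is exactly the ranking that distinguishes the once-a-month 3-minute request from the 100-times-a-day 10-second one.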