Ask most IT leaders what they wish they had better visibility into, and chances are you’ll hear four (technically, three) familiar letters: MTTR.
And more, of course. But I want to focus on Mean Time to Resolution. In my opinion, it may be the most important acronym in service management.
When it’s measured well, it tells you exactly how responsive your service organization really is when the chips are down. But more often than not, MTTR is treated like a vague benchmark: hard to define, harder to measure, harder still to trust, and rarely used to make actual decisions.
It doesn’t have to be that way. And it shouldn’t be that way.
MTTR is the average time it takes to resolve an incident or request. That’s it. But don’t let the simplicity fool you.
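If you ever want to sanity-check what your tooling reports, the arithmetic really is that simple. Here's a minimal Python sketch, using a few made-up created/resolved timestamps in place of a real export:

```python
from datetime import datetime

# Hypothetical export: (created, resolved) timestamps for a handful of tickets
tickets = [
    ("2024-06-03 09:15", "2024-06-03 11:45"),
    ("2024-06-03 14:00", "2024-06-04 10:30"),
    ("2024-06-05 08:20", "2024-06-05 09:05"),
]

FMT = "%Y-%m-%d %H:%M"
hours = [
    (datetime.strptime(resolved, FMT) - datetime.strptime(created, FMT)).total_seconds() / 3600
    for created, resolved in tickets
]

mttr = sum(hours) / len(hours)   # the whole metric: an average of resolution times
print(f"MTTR: {mttr:.1f} hours")
```

The hard part isn't the arithmetic; it's making sure the timestamps feeding it mean the same thing on every ticket, every time.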
MTTR reflects everything from how well you detect issues, to how quickly the right people are looped in, to how efficiently the resolution gets done. It’s a lagging indicator of process health and a leading indicator of customer satisfaction (CSAT).
Teams with tight MTTRs tend to have strong operational habits: clear ownership, crisp triage, good automation, and low reliance on heroics. And if your organization is serious about service reliability, you can’t improve what you aren’t measuring.
Denis Boisvert, an ITSM Solution Architect and my colleague, puts it this way: “Reducing MTTR is about intelligent measurement and decisive action. It means understanding the anatomy of delay—where time is spent, where it’s lost, and where it can be optimized through automation, documentation, and culture.”
That framing is a helpful reminder: MTTR isn’t about speed for speed’s sake. It’s a proxy for clarity, capability, and organizational rhythm.
Plenty of teams try to measure MTTR. Fewer do it well. Here’s why it often goes sideways:
Academic and industry research backs this up. Measurement gaps almost always come down to two things: unclear workflows and inconsistent data capture. Not tooling. Not complexity.
There’s more empirical research on this than you might expect:
As Denis explains in his MTTRx model, real improvement comes when you stop treating MTTR as a scorecard and start treating it like an X-ray. Where are the bottlenecks? Where do tickets stall? Where is there unnecessary thrash?
This is where Atlassian’s Service Management Collection, centered on Jira Service Management (JSM), can make a real difference. It’s not about having more dashboards. It’s about having the right architecture to track MTTR meaningfully.
JSM lets you set time-to-resolution targets, attach them to workflows, and pause them during weekends or while waiting on users. That alone cleans up a lot of noise.
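JSM's SLA engine handles that pausing for you; the sketch below is just the same idea in plain Python, with made-up timestamps and a hypothetical "waiting on user" interval, in case you ever want to validate the numbers against an export of your own:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

def working_duration(opened, resolved, pauses):
    """Elapsed time minus intervals where the clock was paused
    (e.g., waiting on the user). Assumes pauses don't overlap."""
    total = resolved - opened
    for start, end in pauses:
        start, end = max(start, opened), min(end, resolved)  # clamp to the ticket's lifetime
        if end > start:
            total -= end - start
    return total

opened   = datetime.strptime("2024-06-03 09:00", FMT)
resolved = datetime.strptime("2024-06-05 15:00", FMT)
pauses   = [(datetime.strptime("2024-06-03 17:00", FMT),
             datetime.strptime("2024-06-04 10:00", FMT))]   # waiting on the user overnight

print(working_duration(opened, resolved, pauses))   # 54h elapsed minus 17h paused = 37h
```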
You can automatically capture when key moments happen: triage start, fix deployed, resolution verified. No more relying on someone to click the right button at the right time.
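Once those milestones land in the data automatically, the "anatomy of delay" Denis describes falls out of simple subtraction. A sketch, with hypothetical milestone names and timestamps:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

# Hypothetical milestones, stamped by automation rather than by someone
# remembering to click the right button at the right time
milestones = {
    "created":             "2024-06-03 09:00",
    "triage_started":      "2024-06-03 09:40",
    "fix_deployed":        "2024-06-03 13:10",
    "resolution_verified": "2024-06-03 14:00",
}

t = {name: datetime.strptime(ts, FMT) for name, ts in milestones.items()}
phases = [
    ("created -> triage", t["triage_started"] - t["created"]),
    ("triage -> fix",     t["fix_deployed"] - t["triage_started"]),
    ("fix -> verified",   t["resolution_verified"] - t["fix_deployed"]),
]

for name, delta in phases:
    print(f"{name:20} {delta}")   # shows where the minutes actually went
```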
JSM lets you tie incidents to services and changes, giving you traceability that helps make sense of the MTTR story.
You can use built-in reports or beef them up with marketplace add-ons. The key is being able to segment by team, severity, and service area. No more flying blind.
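If you prefer to slice an export outside the tool, a few lines of pandas will do it. The file name and columns below are hypothetical stand-ins for whatever your report actually contains:

```python
import pandas as pd

# Hypothetical export of resolved tickets, one row each:
# key, team, severity, service, resolution_hours
df = pd.read_csv("resolved_tickets.csv")

# One global average hides more than it reveals; segment instead
by_severity = df.groupby("severity")["resolution_hours"].mean().round(1)

by_team_service = (
    df.groupby(["team", "service"])["resolution_hours"]
      .agg(tickets="count", mttr_hours="mean")
      .round(1)
      .sort_values("mttr_hours", ascending=False)
)

print(by_severity)
print(by_team_service.head(10))
```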
Originally found in Opsgenie but now part of JSM, alerts start the clock sooner, right from the moment of detection. That’s going to be crucial to your MTTR reduction efforts.
Together, these capabilities give you a fuller picture of where time is really going.
A customer, CIO Scott Checkoway of DentaXChange, once told me, “Reducing MTTR with Jira Service Management (plus a couple of key add-ons) has helped us significantly improve customer satisfaction. And that has helped to set us apart from our competition.”
To learn more, see the full interview with Scott here in the community.
Once you trust your MTTR data, it becomes something you can use to achieve your CSAT goals.
To make “resolved” a truly meaningful metric, define it clearly and don’t leave it open to interpretation. Be specific about what resolution means in your context so that every team shares the same understanding. Once defined, break the measurement down by severity, service, or team; the more you segment the data, the more actionable the insights become. Don’t stop at averages, either. Watch the trends: a flat mean can mask important issues, while a sudden spike in the 95th percentile is often a signal worth investigating. And always pair the numbers with context. Metrics tell you what happened; stories explain why. Make sure your PIRs (post-incident reviews) capture the insight and depth that charts alone can’t provide.
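On the "don't stop at averages" point, here's a small pandas sketch (again with hypothetical file and column names) that puts the monthly mean next to the 95th percentile so the long tail has nowhere to hide:

```python
import pandas as pd

# Hypothetical export: one row per resolved ticket
df = pd.read_csv("resolved_tickets.csv", parse_dates=["resolved_at"])

monthly = (
    df.assign(month=df["resolved_at"].dt.to_period("M"))
      .groupby("month")["resolution_hours"]
      .agg(mean="mean", p95=lambda s: s.quantile(0.95))
      .round(1)
)

# A flat mean alongside a climbing p95 usually means a growing tail of stalled tickets
print(monthly)
```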
When you get this right, MTTR stops being a metric you report up the chain and becomes a lever you pull to drive change.
One thing I’ve noticed: high-performing service organizations typically have MTTR KPIs that align with their CSAT KPIs.
MTTR might not be glamorous, but it’s honest. It reflects how your org behaves when the pressure’s on.
With the right structure and tools in place, you can stop guessing and start learning. And the best part? You don’t need perfection. You just need consistency.
Atlassian’s Service Management Collection gives you what you need to start that journey—and build from there.
Want to chat about it?
If you're headed to Team'25 Europe in Barcelona next week, I'll be hosting a Braindate on this very topic on Wednesday morning.
Dave Rosenlund is an Atlassian Community Champion for two virtual Atlassian Community Events chapters, ITSM/ESM Masters and Program/Project Masters. In his day job, he leads the product team for Platinum Atlassian Solution Partner, Trundl.
Denis Boisvert contributed to this article. He’s an Atlassian Community Champion and leader of the Montreal ACE chapter. He’s also the Lead ITSM Solution Architect at Trundl. He recently authored “The Architecture of Intelligent Resolution (MTTRx).”