The 3-Day Code Review Problem (And What It's Actually Costing You)

Björn Brynjar - Smart Guess
Atlassian Partner
January 8, 2026

"People often take around a week to review my changes... having to wait half a sprint is pretty insane in my opinion."

A developer posted this on Reddit recently. Their team runs two-week sprints. Reviews take a week. They suggested "same or next day" turnaround and got "interesting looks from a couple people, like I was saying something crazy or being unreasonable."

When they asked their teammates how long reviews should take, most said "3-4 days." The thread that followed had over 50 responses.

What struck me wasn't the advice—it was how wildly different "normal" looks across teams.

 


The gap between what's possible and what's accepted

Some teams operate reviews like this:

"My team does same or next day and favors small, focused change sets as much as possible."

"I aim to get a review done within an hour of receiving it. It's built into my job description and it affects team velocity."

"My Jira metrics say under 20 hours for all my client companies (from open to merged), this includes addressing comments, change requests, potential reworks."

Other teams have normalized something very different:

 "My old team would have it done same day. Then I went to a team where it would take a week. Now I'm on a team where 2-3 days is normal."

"When I started on my current contract, reviews took 1-3 weeks."

Same industry. Same type of work. Completely different expectations.

 


The real cost of waiting

One reply laid it out clearly:

"Any wait is bad and costs money in all sorts of ways—context switching, merge conflicts, finding out about bugs a week later instead of 15 minutes after writing it."

The original poster described what this looks like in practice:

"I am faster than my other team mates, so my MRs in this team pile up like train cars... I actually just avoid picking up new tasks to avoid context overload because I need to wrap up what's pending."

Read that again. A developer is intentionally slowing down because the review process can't keep up. They're managing their own throughput to compensate for a broken system.

Another response connected it to broader research:

"It's a major problem, for many organizations, and it pays to solve it (refer: 'Accelerate' by Dr. Forsgren)."

The Accelerate research shows elite teams achieve lead times—from commit to production—measured in hours, not weeks. If your reviews alone take 3-4 days, you've already blown past what high performers accomplish end-to-end.

 


What high-performing teams actually do

The thread surfaced several patterns from teams that have solved this:

They treat reviews as blockers, not backlog

"PRs should be treated as blockers and dealt with ASAP."

"In the best teams I've been a part of, the guideline was to start the day by reviewing any pending MR, before producing any additional code."

They have explicit time agreements

"We have a 24 hour turn around time expectation for most code reviews."

"I expect 24 hour turnaround per iteration from required reviewers... 24 hours is when pings start going out."

"Four hours, tops. Then I start DMing people for the review."

They keep changes small and reviewable

"I learned to pair down my commit sizes. Now my commits are less than 100 lines. They are so small that when I ask for reviews, people readily look at my PRs because they know the PR will be easy to review. I routinely get reviews done same day now."

"We review all changes within a day. Though we keep our tickets and changes small so they are easy to read through and understand."

They make reviews part of the job, not an interruption

"We do same day code reviews for all merge requests, with the goal to do them within an hour of being submitted. We all have to have our code merged, so it's really just like we need it done by other people as much as we need it done so it basically averages out."

 


Making it work without breaking flow

The common objection is that fast reviews mean constant interruptions. But the best advice flips this:

 "When you start a work period, do reviews first, then go to coding. When you come in in the morning, get your cup of coffee, and then do reviews. Once reviews are done, start coding. When you come back from lunch, do any pending reviews, then get back to coding. No interruptions, no leaving the state of flow."

Another pattern:

"Code in small batches, and when you're done a batch of coding, reviewing other people's code is higher priority than the next batch of coding."
Reviews don't break flow if you design them as transitions, not interruptions.

 


Why nothing changes

The most honest observation in the thread:

"The situation is surreal, but it's also disturbingly normal and organizations tend to be very slow to move off their broken process no matter how large the evidence that it works badly."

Teams know long reviews hurt. Many have read Accelerate. They've felt the pain of context-switching back to week-old code. But the process persists because:

1. No visibility — You can't improve what you can't see. Without data, "reviews are slow" is just a feeling, easily dismissed.

2. No baseline — What's normal for your team? Is this sprint worse than last sprint? Without tracking, you can't know.

3. No accountability — A 24-hour agreement means nothing if no one can tell when it's been violated.

The original poster captured the dysfunction perfectly:

"Teams putting processes before people and then claiming to do agile."

 


How to improve: debug your process, not your team

Your reviews take exactly as long as the system you've designed allows. Reviews aren't slow because developers are lazy; they're slow because of how work flows through the system you built. And since you designed that system, it's yours to redesign. But like any debugging, redesign starts with visibility.

When a deploy fails, you don't guess at the cause. You look at the logs. You trace the error. You find the line that broke.

Your review process deserves the same treatment:

  1. How long does your PR actually sit in review?
  2. Which transitions are slow?
     - Dev done → In Review?
     - In Review → Review done?
     - Review done → In Test?
  3. Do you have an agreement on how long a review may wait before it's started?

These questions are all easy to answer—if you collect the data.
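
Collecting that data doesn't have to wait for tooling. As a minimal sketch, assuming Jira Cloud's REST API with an API token (the site URL, credentials, and issue key are placeholders), you can reconstruct time-in-status from a single issue's changelog:

```python
"""Reconstruct time-in-status for one issue from the Jira changelog.

A minimal sketch, assuming Jira Cloud's REST API with basic auth;
the site URL, credentials, and issue key are placeholders.
"""
import os
from datetime import datetime, timezone

import requests  # pip install requests

SITE = "https://your-site.atlassian.net"  # placeholder
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])
ISSUE = "PROJ-123"                        # placeholder

resp = requests.get(
    f"{SITE}/rest/api/2/issue/{ISSUE}",
    params={"expand": "changelog", "fields": "created,status"},
    auth=AUTH, timeout=30,
)
resp.raise_for_status()
issue = resp.json()

def parse(ts: str) -> datetime:
    # Jira timestamps look like 2026-01-08T09:30:00.000+0000
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")

# Walk transitions oldest-first, crediting time to the status just left.
# Note: very long changelogs are paginated; this assumes one page.
durations: dict[str, float] = {}
prev_time = parse(issue["fields"]["created"])
for history in issue["changelog"]["histories"]:
    for item in history["items"]:
        if item["field"] == "status":
            when = parse(history["created"])
            left = item["fromString"]  # status the issue just left
            hours = (when - prev_time).total_seconds() / 3600
            durations[left] = durations.get(left, 0.0) + hours
            prev_time = when

# Time in the current status, up to now.
current = issue["fields"]["status"]["name"]
durations[current] = durations.get(current, 0.0) + (
    (datetime.now(timezone.utc) - prev_time).total_seconds() / 3600
)

for status, hours in durations.items():
    print(f"{status:<15} {hours:6.1f} h")
```

Run it over a sprint's worth of issues and "reviews are slow" stops being a feeling: the Dev done → In Review gap becomes a number you can track sprint over sprint.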

The developer who started this thread isn't unreasonable for expecting same-day reviews. They're just on a team that doesn't see where the time goes.

 


I'm the lead developer of Time In Status by Smart Guess. It shows you how long Jira issues spend in each status, so you can see where work waits, spot review bottlenecks before they pile up, and finally have data for those "why did this take so long?" conversations.

1 comment
Stephen_Lugton
Community Champion
January 8, 2026

One thing I've seen across various teams is that reviewing PRs isn't included in the estimate for a work item; as such, reviewing a PR for a colleague isn't seen as something the team has committed to.

Secondly, there is an aspect of context switching in reviewing someone else's work. For some teams there won't be much, but for developers focussing on their own ticket, having to review a PR changes their focus and affects their own work, so it will take longer before their mind is in a place to look at something else.

My thoughts are that the developer who started this thread may be unreasonable for expecting same-day reviews if that would mean that their colleagues have to prioritise doing the review over their own work.  

However if the team works on lots of small and quick items, then reviewing PRs after finishing one item and before starting the next is more reasonable.
