Stop guessing where to refactor: meet Code Heatmap (built for Codegeist 2025)

If you’ve ever led (or inherited) a codebase that “mostly works,” you know the feeling:

You don’t need more activity metrics.
You need to know where the risk lives.

Not “which files changed a lot,” but:

  • Which files keep collecting bugs and risky changes?
  • Where do work items actually land?
  • If we can only refactor one area this sprint… where do we get the biggest reliability win?

During Codegeist 2025: Atlassian Williams Racing Edition, we built Code Heatmap — a Forge app for Bitbucket that turns repository activity into a visual hotspot map you can use for planning, refactoring “pit-stops,” and technical debt prioritization. 


The problem: churn doesn’t equal risk

Most repo dashboards tell you things like:

  • commit counts
  • lines changed
  • “top changed files”

Those can be useful, but they miss the part engineering leads care about most:

Where does the team keep paying the same cost again and again?
Where are bugs and tasks repeatedly touching the same surface area?
Where is change concentrated enough that it threatens delivery speed and quality?

In racing terms: counting how many times a car went into the pit lane isn’t the same as knowing which component keeps overheating.

What problems it solves

Here’s what we kept hearing from engineering managers and tech leads:

  • “We think this module is messy, but we can’t prove it.”
  • “We keep fixing bugs in the same area, but it’s hard to show why it deserves refactor time.”
  • “Sprint planning gets hijacked by opinions.”

Code Heatmap helps by:

  • Identifying files that repeatedly attract bugs or risky changes
  • Highlighting the strongest refactoring candidates (with evidence)
  • Revealing clusters where many issues land — often a hint of deeper architectural tension
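The "files that repeatedly attract bugs" idea can be sketched as a simple weighted score. Everything below (the type, the field names, the 3× bug weight) is illustrative, not the app's actual formula:

```typescript
// Per-file activity counters, as a hypothetical input shape.
interface FileActivity {
  path: string;
  totalChanges: number;     // commits that touched the file
  bugLinkedChanges: number; // changes linked to bug-type work items
}

// Weight bug-linked changes more heavily than routine churn,
// so "hot" means "keeps collecting risky changes", not just "busy".
function hotspotScore(f: FileActivity): number {
  return f.totalChanges + 3 * f.bugLinkedChanges;
}

// Rank files so the strongest refactoring candidates surface first.
function rankHotspots(files: FileActivity[]): FileActivity[] {
  return [...files].sort((a, b) => hotspotScore(b) - hotspotScore(a));
}
```

The point of a score like this is the evidence it produces: a file with modest churn but many bug-linked changes outranks a busy-but-healthy one.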


The build story: the data reality check (Jira vs Bitbucket)

Our first assumption was:
“Jira + GraphQL will give us a clean list of work items with linked commits.”

On paper, that sounded perfect.

In practice… it didn’t. We couldn’t get reliable commit-to-task details the way we needed, which forced a pivot: we leaned much more heavily on Bitbucket data and iterated on different endpoints + mapping strategies until we could consistently connect work items to real code changes.
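One mapping strategy of the kind we iterated on is low-tech but reliable: pull Jira issue keys (e.g. `PROJ-123`) out of commit messages. A minimal sketch, with the regex and helper name ours for illustration:

```typescript
// Jira issue keys look like "PROJ-123": an uppercase project key,
// a dash, and a number. This pattern is an approximation of that convention.
const ISSUE_KEY = /\b[A-Z][A-Z0-9]+-\d+\b/g;

// Return the unique issue keys mentioned in a commit message,
// preserving first-seen order.
function extractIssueKeys(commitMessage: string): string[] {
  return [...new Set(commitMessage.match(ISSUE_KEY) ?? [])];
}
```

It only works as well as the team's commit hygiene, which is exactly why we had to iterate on several strategies rather than trust one.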

This ended up being one of the biggest lessons of the project:

Validate data availability early. Don’t design around what you hope the API exposes.

How Rovo Dev helped us move at hackathon speed

From day one, we treated Rovo Dev as our productivity engine — not as a magic button, but as an accelerator for the boring parts:

  • bootstrapping the Forge project structure
  • generating UI and backend scaffolding
  • spinning up small “integration slices” to test APIs quickly

Our pattern looked like this:

  1. Generate a small module
  2. Connect an endpoint
  3. Send a test request
  4. Inspect the returned shape
  5. Keep it or throw it away (fast)
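Step 4 (inspect the returned shape) can itself be a tiny guard function you run against a real response before building anything on top of it. The shape below loosely mirrors a Bitbucket Cloud commit-list page (`values` / `hash` / `message`); treat it as an assumption, not a documented contract:

```typescript
// The fields this slice depends on, nothing more.
type CommitPage = { values: { hash: string; message: string }[] };

// Check a payload actually has the shape we plan to rely on.
function hasExpectedShape(payload: unknown): payload is CommitPage {
  if (typeof payload !== "object" || payload === null) return false;
  const values = (payload as { values?: unknown }).values;
  return Array.isArray(values) &&
    values.every(v => typeof v?.hash === "string" && typeof v?.message === "string");
}
```

If the guard fails, the slice gets thrown away and we try a different endpoint, which is exactly the fast keep-or-discard loop described above.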

That “throw it away” step mattered more than we expected.

Rovo Dev didn’t replace engineering judgment — but it absolutely helped us spend more time on the parts that matter:

  • data modeling
  • mapping work to code changes
  • visualization choices that are actually readable
  • turning outputs into decisions, not just charts

What we’re proud of

Honestly? The best moment was seeing the first hotspot map that made everyone on the team go:

“Oh… that’s why this area always feels painful.”

We built a tool that makes codebase hotspots visible — not guessed. 

What’s next

We don’t just want to show where code is “hot” — we want to show why it’s changing and how it impacts delivery and risk. Our roadmap includes:

  • AI code visibility (where AI-assisted code is likely introduced and by whom)
  • impact analytics (connect patterns to sprint metrics, bug rates, technical debt)
  • security + quality signals (static analysis / scan results layered onto hotspots) 

If you want to check it out (and tell us what you’d improve)

If you’re a tech lead or manager: what would you want to see on a “repo health snapshot”?
We’d love to hear which signals you trust (and which ones you ignore).

 
