Bubble Framework blog post: Share your thoughts and questions here!

Hi, Atlassian Community! I’m Maui Duclercq, an R&D program manager based in Sydney. 

Last year, I worked closely with the Confluence team on the release of their 7.0 platform for Server and Data Center customers, and we released within a week of our goal. Our success was thanks to the team's hard work and good decision-making, helped along by a new kind of project management technique I developed. 

In essence, the “Bubble Framework” is a data-driven feedback loop that enables dev teams to actively respond to change – specifically by accurately tracking progress and anticipating risks of delay – and course-correct accordingly. You can learn more about it in our blog post.

We’d love to know your thoughts on the project, and I’m game to answer any questions you might have about how we did it and how to execute your own version. Fire away in the comments below!

10 comments

Paul-Émile Migneault March 18, 2020

How does it work in Jira? Which fields do you use in the JQL?

If you use tags, components, links, or Bitbucket branches, does ticket creation require extra overhead? Does the analysis time (before starting the work) need some relative ticket-size comparison?

Maui (Atlassian Team) March 18, 2020

Thanks for reaching out @Paul-Émile Migneault!

This early version of the estimator is a standalone program. The JQL itself uses a relatively limited set of fields to keep things simple, e.g., issue key, issue type, created, resolved, and resolution. The estimator then queries the API to retrieve additional info such as a start date (e.g., the status transition from 'To Do' to 'In Progress') and an end date (e.g., the transition from 'In Review' to 'Done') where applicable.
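
To make that concrete, here's a minimal Python sketch of what such a query could look like against the Jira REST API. The estimator itself isn't public, so the JQL, site URL, status names, and credentials below are illustrative assumptions, not the actual implementation:

```python
import requests

# Hypothetical site and query -- adjust to your own instance.
JIRA_BASE = "https://your-site.atlassian.net"
JQL = "project = CONF AND created >= -52w ORDER BY created ASC"

def fetch_issues(session):
    """Run a JQL search over the limited field set, expanding the
    changelog so status transitions can be read afterwards."""
    issues, start_at = [], 0
    while True:
        resp = session.get(
            f"{JIRA_BASE}/rest/api/2/search",
            params={
                "jql": JQL,
                "fields": "issuetype,created,resolutiondate,resolution",
                "expand": "changelog",
                "startAt": start_at,
                "maxResults": 100,
            },
        )
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        if start_at >= page["total"]:
            return issues

def start_and_end(issue):
    """Derive a start date (first move into 'In Progress') and an end
    date (move into 'Done') from the issue's status transitions."""
    start = end = None
    for history in issue["changelog"]["histories"]:  # assumed oldest-first
        for item in history["items"]:
            if item["field"] != "status":
                continue
            if item["toString"] == "In Progress" and start is None:
                start = history["created"]
            elif item["toString"] == "Done":
                end = history["created"]
    return start, end

session = requests.Session()
session.auth = ("you@example.com", "api-token")  # placeholder credentials
for issue in fetch_issues(session):
    print(issue["key"], *start_and_end(issue))
```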

This mechanism is very unobtrusive for teams: it doesn't require any additional effort from them beyond regular backlog-grooming discipline. We have two modes of computation: one based on throughput (think Kanban) and the other based on lead-time analysis (more sophisticated). Teams are encouraged to keep using estimation games (e.g., planning poker) if that helps flesh out gaps, but the estimator doesn't use team estimates at all.
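
The post doesn't spell out either computation, but the throughput mode can be roughly sketched as a bootstrap simulation over historical weekly throughput (the sample numbers and the 85% percentile below are made up for illustration):

```python
import random

def forecast_weeks(remaining_items, weekly_throughput, trials=10_000, pct=85):
    """Monte Carlo forecast: replay randomly resampled historical weeks
    until the remaining scope is exhausted, then read off a percentile.
    Assumes weekly_throughput has at least one positive sample and pct < 100."""
    outcomes = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_items:
            done += random.choice(weekly_throughput)  # resample a past week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[trials * pct // 100]

# 40 items left; observed throughput over the last 8 weeks.
history = [3, 5, 2, 6, 4, 5, 3, 4]
print(f"85% confident of finishing within {forecast_weeks(40, history)} weeks")
```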

Hope this answers your questions.

Brett Evans April 8, 2020

The approach sounds good, but it's not very useful without the program or some form of example.
Do you have plans to release something we could take a look at, so a wider audience could try this approach?

Maui (Atlassian Team) April 14, 2020

Thanks for reaching out @Brett Evans!

I agree an example would definitely be helpful. This could be the topic of another blog post.

This framework is still very much experimental, despite its expanding internal use. To be honest, there are no plans to make it a feature at this stage. There's still a lot for us to learn before we'd consider releasing it.

That said, in what scenario would you be interested in trying it? Some context would be really helpful.

rohit kundu September 30, 2020

@Maui Duclercq This is very interesting! Are you planning to publish a blog post on the engineering piece anytime soon?

Maui (Atlassian Team) September 30, 2020

Thanks for reaching out @rohit kundu!

The framework has gone through a few iterations since that last blog post. We've been tracking project performance (average delay) for over a year now, and results across dozens of medium-to-large projects suggest a strong correlation between the framework and much greater predictability!

I will try to carve out some time in early 2021 to write another post which will present our findings and elaborate on the more technical side of things.

Stay tuned.

Jennifer Kinard October 22, 2021

@Maui Just curious if you are still planning on posting any more details about this?

Maui (Atlassian Team) October 24, 2021

Thanks for following up, @Jennifer Kinard!

Here's a very quick summary of this experiment since the original post:

  • We distilled the framework into a simple tool that runs on Google Sheets! We named it Soda. Soda has gone through a dozen iterations and has been stable for a little while now.
  • So far we've tracked over 80 projects across 25 dev teams and observed an overall reduction in project delays of roughly 70% compared to baseline*.
  • Although this result is impressive, we learned that project-level buffering trades uncertain earliness for certain lateness: you commit to a later date up front in exchange for hitting it reliably. So if predictable delivery is important in your circumstances, e.g. a program with lots of dependencies, you may want to add this framework to your toolkit (see the buffer-sizing sketch after this list). 
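
For context on that last point: the post doesn't reveal Soda's buffering math, but a common Critical Chain heuristic sizes a project buffer from the gap between 'safe' and 'aggressive' task estimates, aggregated in quadrature (the task numbers below are made up):

```python
import math

def project_buffer(tasks):
    """Square-root-of-sum-of-squares buffer: each task contributes the
    difference between its safe and aggressive estimates."""
    return math.sqrt(sum((safe - aggr) ** 2 for aggr, safe in tasks))

# (aggressive_days, safe_days) per task on the critical chain.
tasks = [(3, 6), (5, 9), (2, 4), (8, 14)]
chain = sum(aggr for aggr, _ in tasks)
buf = project_buffer(tasks)
print(f"Commit to ~{chain + buf:.0f} days ({chain} aggressive + {buf:.1f} buffer)")
```

Committing to the buffered date up front is exactly the "certain lateness" trade-off mentioned above.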

Here's a snapshot of the tool for reference:

[Screenshot: the Soda spreadsheet]

* Average delay across projects delivered by roughly the same 25 dev teams over a period of 1 year prior to the experiment with Soda.

Jennifer Kinard October 27, 2021

Thanks @Maui ! This is very interesting!

To clarify, does Soda run off an existing Critical Chain software tool, or did you create it within Google Sheets?

Maui (Atlassian Team) October 27, 2021

Soda runs entirely on Google Sheets without any custom scripts, @Jennifer Kinard. We really wanted a tool that wouldn't fail, was relatively easy to maintain, and could scale easily.
