Decreasing the importance of quick wins

Pieter Helsen
Contributor
March 11, 2024

Hi,

Our team has just made the switch to Product Discovery and I really like the way it allows me to visualize ideas on our roadmap. 

We currently use the following weighted formula to calculate the Impact Score:

[Screenshot: weighted Impact Score formula]

While I like the simplicity of the Weighted Score, it ends up favoring Quick Wins (low effort, high impact) over Major Projects (high effort, high impact). 
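
To illustrate what I mean, here's a rough sketch with made-up field names and weights (not our exact formula from the screenshot):

```python
# Made-up weights and 1-5 field values, purely to illustrate the bias.
def weighted_score(impact, effort, w_impact=2, w_effort=1):
    # Effort enters as a negative input, so at equal impact the
    # lower-effort idea always ranks higher.
    return w_impact * impact - w_effort * effort

print(weighted_score(impact=5, effort=1))  # quick win: 9
print(weighted_score(impact=5, effort=5))  # major project: 5
```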

I'm looking for a way to bring down the importance of the Effort field.

How is everyone calculating their impact scores?

6 comments

Denis Paul
Contributor
March 11, 2024

I haven't found a way to show this in one number.

My workaround: I've built a matrix view with a sum of the positive inputs on one axis and a sum of the negative inputs on the other. This way I can look at the ideas that are high on the impact axis and decide from time to time if we want to start working on these.

[Screenshot: matrix view with positive inputs on one axis and negative inputs on the other]
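
Roughly what that looks like in pseudocode (the field names here are just examples, not my actual fields):

```python
# Rough illustration of the matrix idea; field names are examples only.
ideas = [
    {"name": "Idea A", "impact": 4, "goal_alignment": 3, "effort": 2, "risk": 1},
    {"name": "Idea B", "impact": 5, "goal_alignment": 4, "effort": 5, "risk": 2},
]

for idea in ideas:
    positive = idea["impact"] + idea["goal_alignment"]  # one axis of the matrix
    negative = idea["effort"] + idea["risk"]             # the other axis
    print(f"{idea['name']}: positive={positive}, negative={negative}")
```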

Christian Wietholt
Contributor
March 11, 2024

This is a very nice topic, and I've put some thought into it myself. Of course I make sure that I am adding "Insights" to my ideas, which also receive a weight. What I also found important is to incorporate blocked ideas into the weighting. To do this, I created automations that increase a counter each time an idea is blocked by another idea, or blocks another idea. If the current idea blocks one or more ideas, that counts as a positive input; if it is blocked by another idea, that counts as a negative input. I also introduced a confidence factor (0-100) as another positive input, to reflect the human perception of an idea a bit. Here is what my formula looks like. I would love to hear some feedback:

[Screenshot: idea-scoring formula with insights, blocking counters and confidence]
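
Roughly the shape of it, with placeholder multipliers (the actual weights are in the screenshot):

```python
# Placeholder multipliers; the exact weights are in the screenshot above.
def idea_score(insight_weight, blocks_count, blocked_by_count, confidence):
    positives = (
        2 * insight_weight    # weighted Insights attached to the idea
        + 1 * blocks_count    # this idea blocks (gates) other ideas
        + confidence / 100    # confidence factor, 0-100 normalized to 0-1
    )
    negatives = 1 * blocked_by_count  # this idea is blocked by other ideas
    return positives - negatives
```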

Pieter Helsen
Contributor
March 11, 2024

For now, my team doesn't have that many blocked-by relationships, but I like the idea of adding a second negative input so you can play around with the relative weight Effort has on the scoring.

I do feel that the more fields you add, the more the score starts to feel like 'black magic', which I instinctively want to avoid.

Stephen_Lugton
Community Leader
March 11, 2024

You could use a calculated formula instead of a weighted score, for example:

[Screenshot: calculated formula using (6 - effort)]

You can adjust the multipliers etc. as you wish, but the (6 - effort) term gives you a value of 1 to 5 inversely related to effort.



[Screenshot: example idea scores using the calculated formula]
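
Something along these lines, with arbitrary multipliers (adjust to taste):

```python
# Arbitrary multipliers; the point is the (6 - effort) inversion, which maps
# an effort rating of 1-5 onto 5-1 so effort no longer acts as a big negative term.
def calculated_score(impact, goal_value, effort):
    return 2 * impact + goal_value + 0.5 * (6 - effort)

print(calculated_score(impact=5, goal_value=4, effort=1))  # quick win: 16.5
print(calculated_score(impact=5, goal_value=4, effort=5))  # major project: 14.5
```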

Pieter Helsen
Contributor
March 11, 2024

This is what I used to use in Foxly prior to switching to Product Discovery, and it's definitely a worthwhile option, but I liked the simplicity of the weighted formula.

I haven't played around with the custom formula yet, but does it allow you to take the individual insight weights into account?

Bill Sheboy
Rising Star
March 11, 2024

Hi @Pieter Helsen 

How are your teams using the resulting "Impact Score" values?  What process steps does it drive?

Isn't the result of quick-win items scoring to the top (and then quickly completing to leave the list) an indication that the inputs to the scoring are accurate?  Or perhaps instead that there is something missing, such as finer granularity in what composes "Impact"?  For example: revenue generation, opportunity enablement, cost reduction, risk reduction, cost avoidance, etc.

Kind regards,
Bill

Pieter Helsen
Contributor
March 12, 2024

A very good point. 

I think part of it stems from our idea intake process, where our entire company suggests improvements and new ideas. Due to a -very- generous inflow of improvements, we end up working on quick wins that are valuable to the overall stability of the product, but are not necessarily features that delight our users.

Since switching to Product Discovery, I've been able to remedy that somewhat by using weighted goals (such as Delight Users), but effort still seems to play an inordinate role in the scoring.

Paul Williams
I'm New Here
March 11, 2024

Great topic!

I try to look at user counts, or estimate time savings for process changes and total market $ changes for products. For a small project that only touches a fraction of your market or labor force, skipping the total-impact view skews these quick wins to look more valuable than they are, sapping resources from the bigger projects. What I mean is: a 30-minute daily time saving that only affects 10% of the workforce might not be high impact for the global enterprise (6% for the affected group but only 3% overall). It has happened here quite a bit in the past.
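
As a rough sketch, with made-up numbers rather than the exact figures above:

```python
# Rough sketch of the normalization described above, with made-up inputs.
def enterprise_wide_saving(minutes_saved_per_day, workday_minutes, share_affected):
    group_saving = minutes_saved_per_day / workday_minutes  # saving within the affected group
    return group_saving * share_affected                    # diluted across the whole workforce

# Hypothetical: 20 minutes saved on an 8-hour day, 15% of staff affected.
print(f"{enterprise_wide_saving(20, 480, 0.15):.1%}")  # ~0.6% enterprise-wide
```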

Pieter Helsen
Contributor
March 12, 2024

Yes, agree. A lot of changes to our product's settings are handled by our customer support team. Even halving the time they spend on these tasks does not outweigh the potential gain of building one delighting feature. 

Maybe reducing the positive inputs (impact, weighted goals) will be enough to negate the low effort tied to these. 

Like andrew_dunne likes this
Daniel Naumann
March 17, 2024

We use the same idea/approach as @Stephen_Lugton

Additionally, we use a buff/fudge factor that we picked up from an Atlassian team demo on how they use JDP: https://community.atlassian.com/t5/Jira-Product-Discovery-articles/How-one-team-in-Atlassian-uses-Jira-Product-Discovery/ba-p/2452156

It can be used, for example, for projects that have lower value/impact themselves but unlock higher-value work. We found these projects rank poorly with the regular effort vs. impact formulas, but sometimes doing them allowed other, much higher-value work to be started.
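
Roughly the idea, with placeholder values (the linked article has the details of how the Atlassian team actually applies it):

```python
# The gist as described; the article's exact formula isn't reproduced here,
# so the base formula and factor values are placeholders.
def adjusted_score(impact, effort, buff_factor=1.0):
    base = 2 * impact + (6 - effort)  # any base scoring formula would do
    return base * buff_factor          # bump the factor up for enabler work

print(adjusted_score(impact=2, effort=4))                   # enabler on its own merits: 6.0
print(adjusted_score(impact=2, effort=4, buff_factor=2.0))  # with the buff applied: 12.0
```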

