Hi,
Our team has just made the switch to Product Discovery and I really like the way it allows me to visualize ideas on our roadmap.
We currently use a weighted formula to calculate the Impact Score.
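Roughly, it looks like this (the weights below are placeholders, not our exact configuration):

```python
# Sketch of a weighted Impact Score: positive fields (Impact, weighted
# goals) push the score up, Effort pulls it down. Weights are illustrative.
def weighted_score(impact, goal_weight, effort):
    return 3 * impact + 2 * goal_weight - 4 * effort
```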
While I like the simplicity of the Weighted Score, it ends up favoring Quick Wins (low effort, high impact) over Major Projects (high effort, high impact).
I'm looking for a way to bring down the importance of the Effort field.
How is everyone calculating their impact scores?
This is a very nice topic, and I've put some thought into it myself. Of course I make sure that I am adding "Insights" to my ideas, which also receive a weight. What I also found important is to incorporate blocked ideas in the weighting. To do this, I created automations that increase a counter each time an idea gets blocked by another idea, or blocks another idea. If the current idea blocks one or multiple ideas, that counts as a positive input; if an idea is blocked by another one, that counts as a negative input. I also introduced a confidence factor (0-100) as another positive input, to reflect the human perception of an idea. Here is what my formula looks like; I would love to hear some feedback.
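In rough pseudocode (the exact weights are placeholders; only the positive and negative inputs follow the description above):

```python
# Reconstructed sketch of the formula described above; weights are
# illustrative, the signs match the described inputs.
def idea_score(impact, insights, blocks, blocked_by, confidence, effort):
    return (
        3 * impact
        + 2 * insights          # weighted Insights: positive input
        + 1 * blocks            # count of ideas this idea blocks: positive
        - 1 * blocked_by        # count of ideas blocking this one: negative
        + confidence / 100      # confidence factor 0-100, scaled to 0-1
        - 4 * effort            # effort still counts against the idea
    )
```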
For now, my team doesn't have that many blocked-by relationships, but I like the thought of adding a second negative input so you can play around with the relative weight Effort has on the scoring.
I do feel that the more fields you add, the more the score starts to feel like 'black magic', which I instinctively want to avoid.
You could use a calculated formula instead of a weighted score.
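For example (assuming Impact and Effort are both 1-5 fields; written in Python rather than JPD's formula syntax):

```python
# Multiplicative score instead of a weighted sum: low effort scales the
# impact up rather than effort being subtracted from it.
def calculated_score(impact, effort):
    return impact * (6 - effort)  # (6 - effort) maps effort 1..5 to 5..1
```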
You can adjust the multipliers etc. as you wish, but the (6 - effort) term gives you a value of 1 to 5, inversely related to effort.
This is what I used to use in Foxly prior to switching to Product Discovery, and it's definitely a worthwhile option, but I liked the simplicity of the weighted formula.
I haven't played around with the custom formula yet, but does it allow you to take the individual insight weights into account?
How are your teams using the resulting "Impact Score" values? What process steps do they drive?
Isn't the fact that quick-win items score to the top (and then quickly complete and leave the list) an indication that the inputs to the scoring are accurate? Or perhaps that something is missing instead, such as finer granularity in what composes "Impact"? For example: revenue generation, opportunity enablement, cost reduction, risk reduction, cost avoidance, etc.
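To make that concrete, here is one way to decompose it (dimension names taken from the list above; the weights are purely illustrative):

```python
# Composite Impact built from weighted sub-dimensions, each rated 1-5.
IMPACT_DIMENSIONS = {
    "revenue_generation": 0.3,
    "opportunity_enablement": 0.2,
    "cost_reduction": 0.2,
    "risk_reduction": 0.2,
    "cost_avoidance": 0.1,
}

def composite_impact(ratings):
    # ratings: dict mapping dimension name -> 1-5 rating
    return sum(w * ratings[dim] for dim, w in IMPACT_DIMENSIONS.items())
```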
Kind regards,
Bill
A very good point.
I think part of it stems from our idea intake process, where our entire company suggests improvements and new ideas. Due to a -very- generous inflow of improvements, we end up working on quick wins that are valuable to the overall stability of the product but are not necessarily features that delight our users.
Since switching to Product Discovery, I'm able to remedy that already by using weighted goals (such as Delight Users), but the Effort field still seems to play an inordinate role in the scoring.
Great topic!
I try to look at user counts or estimated time savings for process changes, and total market $ changes for products. If a small project only moves part of your market or labor force, looking at the group-level change without the total impact will skew these quick wins to look more valuable than they are, sapping resources from the bigger projects. What I mean is: a 30-minute daily time saving that only affects 10% of the workforce might not be high impact for the global enterprise (about 6% of the day for the affected group, but under 1% overall). It has happened here quite a bit in the past.
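The arithmetic, spelled out (using the numbers from the example above):

```python
# Scale a local time saving up to the enterprise level.
daily_saving_min = 30
workday_min = 8 * 60            # assuming an 8-hour day
affected_share = 0.10           # 10% of the workforce

group_impact = daily_saving_min / workday_min    # ~6.3% of the group's day
overall_impact = group_impact * affected_share   # ~0.6% enterprise-wide
print(f"{group_impact:.1%} for the group, {overall_impact:.1%} overall")
```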
Yes, agree. A lot of changes to our product's settings are handled by our customer support team. Even halving the time they spend on these tasks does not outweigh the potential gain of building one delighting feature.
Maybe reducing the positive inputs (impact, weighted goals) for these will be enough to offset the boost they get from their low effort.
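For example, with the same placeholder weights as in my sketch above, re-rating a quick win's positive inputs flips the ranking even with a heavy Effort weight:

```python
# Weighted score with a heavy Effort weight (weights illustrative).
def score(impact, goals, effort):
    return 3 * impact + 2 * goals - 4 * effort

major_project = score(impact=5, goals=5, effort=5)  # 5
quick_win     = score(impact=3, goals=2, effort=1)  # 9  -> outranks it
rerated_win   = score(impact=2, goals=1, effort=1)  # 4  -> no longer does
```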
We use the same idea/approach as @Stephen_Lugton.
Additionally, we use a buff/fudge factor that we picked up from an Atlassian team demo on how they use JPD: https://community.atlassian.com/t5/Jira-Product-Discovery-articles/How-one-team-in-Atlassian-uses-Jira-Product-Discovery/ba-p/2452156
It can be used, for example, for projects that have lower value/impact themselves but unlock higher-value work. We found these projects rank poorly under the regular effort vs. impact formulas, but sometimes doing them allowed other, much higher-value work to be started.
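A rough sketch of how we apply it (the function name and multiplier range are our own convention, not something from JPD or the article):

```python
# Buff/fudge factor: a manual multiplier layered on top of the base score.
def buffed_score(base_score, buff=1.0):
    return base_score * buff    # buff > 1 boosts enabler projects

# e.g. a low-scoring platform change that unlocks two big features:
print(buffed_score(base_score=4.0, buff=1.5))  # 6.0
```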