We have incorporated impact assessments into our project-level requests for JPD. However, we are handling this simply by adding additional tables to the Description field. This has led to a few instances where two people were updating impacts at once, and one person's changes were overwritten. When we try to look at the history, the length of the Description field makes it extremely difficult to find the actual changes.
I'm wondering if there is, or could be, a way to better support this additional information. I know I can add a link to a Confluence page or some such thing, but being able to have a form, an additional section, etc., so that the changes are compartmentalized and easier to share, would be amazing.
This is interesting because my team and I have encountered the same thing. I'm thinking of working around it by leaving notes or comments before updating the impacts.
Still, I think the best way to fix this is to add something like a required person tag, so we know who made the adjustment and when.
Hi Beau,
I understand that you are adding tables within the description of the idea, and then looking in the history tab to identify what was modified and by whom (please let me know if that's not correct).
Regarding the impact assessment, could you please share a bit more on the type of data that you would place in these tables?
As a first thought, I think it might be more suitable to create dedicated fields (one per criterion or team) so you can easily track the changes and who made them, and use Jira Product Discovery's custom formula capabilities to compute these data automatically and provide you with a score.
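To make the formula idea concrete, here is a rough sketch in plain Python (not JPD's formula editor, and with made-up field names and weights) of the kind of weighted score that dedicated fields could feed into:

```python
# Illustrative sketch only: the field names and weights below are
# hypothetical, and JPD would compute this inside its own formula
# editor rather than in Python.

# Dedicated numeric fields, one per criterion, instead of a table
# embedded in the Description.
idea_fields = {
    "revenue_impact": 4,   # 1-5 rating from the sales team
    "customer_reach": 3,   # 1-5 rating from product
    "effort": 2,           # 1-5 rating from engineering
}

# Hypothetical weights agreed on by the collaborating teams.
weights = {
    "revenue_impact": 0.5,
    "customer_reach": 0.3,
    "effort": -0.2,        # higher effort lowers the score
}

def impact_score(fields: dict[str, int], weights: dict[str, float]) -> float:
    """Weighted sum over the dedicated fields, mirroring what a
    custom formula could compute automatically per idea."""
    return sum(weights[name] * value for name, value in fields.items())

print(f"Impact score: {impact_score(idea_fields, weights):.2f}")
```

Because each criterion lives in its own field, the history would show exactly which value changed and who changed it, rather than one long Description diff.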
I've recorded a Loom to explain in more detail what I mean, and how it can help when you have multiple criteria (numbers or text) and multiple teams collaborating:
https://www.loom.com/share/c4b82fd9bc474191aab972205e53ee5b
Cheers,
Hermance
@Hermance NDounga thanks so much for the video! I really appreciate the time!
In my case, the trouble I'm having is that JPD seems friendlier for evaluating the impacts/costs of individual features than of overall projects. We use both, and I've spent a bit of time working on impact scores, but when we get into projects I run into the following challenges:
At the feature level, these things aren't really a problem, because they're easily translated and have very few inputs. It's when the scope gets larger that it gets more difficult.
Other things I've considered doing to manage this (for context, to show how I'm thinking about solving the problem):
Again, thanks for the support so far!