Because every Bugzilla that Mike and Scott ran into had a severity field that contained utter rubbish. They dumped the field because it was (and still is) usually useless. I still work with systems where it's built in or has been added, and that still stands.
It's not that the idea of a severity is wrong, the problem is that to a human raising issues, the severity is not quantitative, it's subjective. Every issue is "severe" to the person raising it.
Consider a simple case: My cat is ill. That's critical to her, very important to me and the vet that I take her to, mildly upsetting to the neighbours and friends who like her and utterly irrelevant to the rest of the world. What severity would you choose?
Severity was dumped because it's subjective and hence useless. When you do think you need it, use a custom field to gather the information, but quantify it, or better, replace it with a proper measurable set of fields: Impact (how many people are affected) and Functional loss (ranging from trivial, such as something being the wrong colour, to critical, such as the application crashing outright).
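To make the idea concrete, here is a minimal sketch of how a calculated severity could be derived from two measurable custom fields. The field names, scales, and the multiplication rule are all assumptions for illustration; nothing here is a Jira default.

```python
# Illustrative sketch only: field names and 1-4 scales are assumptions,
# not part of any Jira default configuration.

IMPACT = {"one user": 1, "one team": 2, "many teams": 3, "all users": 4}
FUNCTIONAL_LOSS = {
    "cosmetic": 1,
    "workaround exists": 2,
    "feature unusable": 3,
    "application crashes": 4,
}

def calculated_severity(impact: str, loss: str) -> int:
    """Combine two measurable values into one score (1-16).

    Higher means worse; the product keeps "crash affecting everyone"
    clearly above "cosmetic issue for one user".
    """
    return IMPACT[impact] * FUNCTIONAL_LOSS[loss]

print(calculated_severity("all users", "application crashes"))  # 16
print(calculated_severity("one user", "cosmetic"))              # 1
```

The point of the calculation is that both inputs are observable facts anyone can agree on, so the resulting number means the same thing to everyone involved.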
Your QC requirements should describe severity levels for bugs. For example: 'Highest' severity for core functionality, 'Medium' for secondary functionality, 'Low' for UI issues, and 'Blocker' for obvious crashes, show-stoppers, etc. Meanwhile, priority should be defined per task by one responsible person or group.
Nic, let me answer your example:
1. How objective is it? It may be based on a risk analysis, and such an approach is more or less objective. Even if it's subjective, as long as it's set consistently by one person (e.g. the PO), that's acceptable.
2. My understanding of your cat example:
I'm afraid you have completely misunderstood the point. The severity of my cat's illness is critical to her and irrelevant to most of the rest of the world. You can't judge any of what you've said without looking at the different points of view. The priority is something the developer (the carer for the cat) will set, and your priority calculation is wrong for them too, because your severity is wrong: you are guessing at something you don't really grasp, which is exactly what severity usually ends up being.
The point is that severity is a poor attempt to capture something that is too subjective, and it should really be calculated from objective, quantified values that mean something to everyone involved. That's why Jira does not have it by default: most people don't understand it.
Severity should be considered for bugs. We are clearly dealing with two dimensions of the problem: how fast we need to fix the bug (Priority) and what the impact on production is (Severity). So we could define Priority from Low to Highest, for example, and Severity from Low to Blocker. Like this:
Priority:
4-Highest: Should be fixed immediately
3-High: Should be addressed quickly for some reason, e.g. an unexpected bug or a project deadline
2-Medium: Errors that can be addressed in future sprints
1-Low: Errors that do not affect functionality
Severity:
4-Blocker: Tests are not executable, the application crashes, or a feature is not working at all
3-Critical: Feature is working poorly
2-Medium: Feature does not meet some acceptance criteria
1-Low: Does not affect functionality (UI issues in most cases)
The most important thing to consider is that you can have problems that need to be fixed ASAP (priority) but that are not critical to the system (severity). In most cases a Blocker severity means Highest priority, but not always. I believe that with these two dimensions you can triage bugs in a way that better matches reality.
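The two-dimensional triage described above can be sketched as a simple sort: order the backlog by priority first and use severity only as a tie-breaker, so an urgent low-severity bug outranks a non-urgent blocker. The bug records and numeric scales below are hypothetical, just to illustrate the ordering.

```python
# Hypothetical sketch: each bug carries independent priority and
# severity scores (4 = Highest/Blocker, 1 = Low). Work is ordered by
# priority first, with severity only breaking ties.

bugs = [
    {"key": "BUG-1", "priority": 2, "severity": 4},  # blocker in a rarely used flow, not urgent
    {"key": "BUG-2", "priority": 4, "severity": 1},  # cosmetic error, but launch is tomorrow
    {"key": "BUG-3", "priority": 3, "severity": 3},
]

work_order = sorted(
    bugs,
    key=lambda b: (b["priority"], b["severity"]),
    reverse=True,
)
print([b["key"] for b in work_order])  # ['BUG-2', 'BUG-3', 'BUG-1']
```

Note how the high-priority cosmetic bug comes first and the low-priority blocker last, which is exactly the "Blocker does not always mean Highest priority" case.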