I'm a software developer, and I've worked on many projects that use Jira. I think it's an awesome tool for agile teams. Are you with me on that? If so, I'd really appreciate your feedback, but first let me tell you about a recent experience.
A few months ago I joined a team with probably the worst backlog I've ever seen. It just didn't make any sense. Why? Because user stories had generic titles like "User Login" and no description whatsoever, and forget about prioritization, estimation, or anything else.
I tried my best to explain how to write user stories (based on experience; I don't consider myself an expert on the matter), the importance of backlog grooming sessions, the importance of defining a proper ticket workflow (what's your definition of DONE?!), and other practices that I believe help the whole team understand what they're building and give management good visibility of the project.
Through meetings, examples, and presentations I tried to enforce some rules, e.g.
It was useless. I was barking up the wrong tree. Everyone just kept writing tickets and using Jira the way they wanted. I finally quit the project because there were too many broken windows and too much technical debt.
After that frustrating experience I started thinking: can we enforce rules automatically in Jira? When writing code I almost always use a linter or other static analysis tools that flag deviations from coding conventions and guidelines, stylistic errors, and potential bugs.
I searched the Marketplace for a tool that would help me enforce at least a very basic set of rules, e.g. syntax, format, and the presence of certain content on a ticket. I couldn't find anything useful: one story quality tool is for Jira Server only, and the other, which is compatible with Jira Cloud, just wouldn't install.
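To make "basic rules" concrete, here's the kind of check I have in mind, as a minimal TypeScript sketch. None of this comes from an existing plugin; the rule names, shapes, and thresholds are all invented for illustration:

```typescript
// Minimal sketch of "basic rules" -- rule names, shapes, and thresholds
// are invented for illustration, not taken from any existing plugin.
interface Story {
  summary: string;
  description: string | null;
}

interface Finding {
  rule: string;
  message: string;
}

// A rule inspects a story and returns a finding, or null if the story passes.
type Rule = (story: Story) => Finding | null;

const hasDescription: Rule = (s) =>
  s.description && s.description.trim().length > 0
    ? null
    : { rule: "has-description", message: "Story has no description." };

const summaryNotGeneric: Rule = (s) =>
  s.summary.trim().split(/\s+/).length >= 3
    ? null
    : { rule: "summary-not-generic", message: `Summary "${s.summary}" looks too generic.` };

// Run every rule and collect the findings.
function lint(story: Story, rules: Rule[]): Finding[] {
  return rules.flatMap((rule) => rule(story) ?? []);
}

// A bare "User Login" ticket fails both checks.
console.log(lint({ summary: "User Login", description: null }, [hasDescription, summaryNotGeneric]));
```

That's the linter analogy in a nutshell: small, composable checks that report findings instead of blocking anyone.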
A good friend of mine pointed me to an interesting tool, but it's only a prototype (the theoretical background is what makes it really interesting!). Still, I don't want to export a CSV, start up a Python backend, and make a request just to get an evaluation of my stories.
So that's how I decided to start coding a Jira Cloud app that validates my stories. It's a work in progress: https://github.com/jmigueprieto/ticket-linter. I'm thinking about putting it on the Marketplace, but first I want to get some feedback from the community. Specifically, I'm curious about the following:
Hey @Kat, that's awesome! I'll check it out and see if I can submit what I've been working on. Thank you!
Looks very interesting. I think there is a need for this. It would need to be highly configurable though. For example, I sometimes use the "Checklist" plugin for acceptance criteria rather than listing them in the description.
In addition, I tend to make the "Summary" field something short like "User Login" but put the full AS A... I WANT... SO THAT... in the description.
I'll start with an opinionated set of rules, based on a conceptual framework which I'll explain in the project.
But I totally agree with you, it should be highly configurable.
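Concretely, I picture the configuration as plain data that switches individual rules on or off per project, so a team that keeps acceptance criteria in a Checklist plugin, or puts the full "As a… I want… so that…" template in the description instead of the summary, can toggle exactly those checks. A rough sketch, with invented rule and option names:

```typescript
// Rough sketch of per-project configuration -- rule and option names invented.
interface RuleConfig {
  enabled: boolean;
  options?: Record<string, unknown>;
}

// Each project overrides these defaults to match how the team actually works.
const defaultConfig: Record<string, RuleConfig> = {
  "has-description": { enabled: true },
  // Off by default for teams that keep acceptance criteria in a checklist plugin.
  "acceptance-criteria-in-description": { enabled: false },
  // Looks for the story template in the description, not in the summary.
  "story-template-in-description": {
    enabled: true,
    options: { pattern: "As an? .+,? I want .+,? so that .+" },
  },
};

// The linter only runs rules that the project has switched on.
function activeRules(config: Record<string, RuleConfig>): string[] {
  return Object.entries(config)
    .filter(([, rule]) => rule.enabled)
    .map(([name]) => name);
}

console.log(activeRules(defaultConfig)); // ["has-description", "story-template-in-description"]
```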
Thanks for the feedback!
I have also been looking for a plug-in to do quality checks on user stories. Have you made any progress with this?
To answer your questions:
A dynamic dashboard with quality results is a "must-have" for me. I would like to see things like:
- the number of issues per PI, iteration, project, reporter, assignee, etc., and the quality thereof;
- issues that were committed to a PI or iteration but not completed;
- a graph showing improvement of quality over time, e.g. per PI;
- IMs logged and linked to stories, which could indicate poorly written or incomplete stories.
Traceability is also important: stories must be linked to features, features to capabilities, and capabilities to epics.
I would also like to do similar checks on Epics, capabilities, and features, but with different rules. For instance: does each Epic have a hypothesis statement?
That should be enough to keep you busy :)
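Different rules per issue type should fit the same model: a rule registry keyed by issue type. A minimal sketch along those lines, again with invented names (the hypothesis check is just a keyword heuristic, not a real implementation):

```typescript
// Sketch of per-issue-type rule sets -- same invented shapes as before;
// the hypothesis check is only a keyword heuristic, not a real implementation.
interface Issue {
  type: "Story" | "Epic" | "Capability" | "Feature";
  summary: string;
  description: string | null;
}

type Finding = { rule: string; message: string };
type Rule = (issue: Issue) => Finding | null;

const hasDescription: Rule = (i) =>
  i.description?.trim()
    ? null
    : { rule: "has-description", message: `${i.type} has no description.` };

// Hypothetical heuristic: an Epic's description should state a hypothesis.
const epicHasHypothesis: Rule = (i) =>
  /hypothesis/i.test(i.description ?? "")
    ? null
    : { rule: "epic-has-hypothesis", message: "Epic has no hypothesis statement." };

// Each issue type gets its own rule set, so Epics are checked differently from Stories.
const rulesByType: Record<Issue["type"], Rule[]> = {
  Story: [hasDescription],
  Epic: [hasDescription, epicHasHypothesis],
  Capability: [hasDescription],
  Feature: [hasDescription],
};

function lintIssue(issue: Issue): Finding[] {
  return rulesByType[issue.type].flatMap((rule) => rule(issue) ?? []);
}

// Flagged: the description never states a hypothesis.
console.log(lintIssue({ type: "Epic", summary: "Checkout revamp", description: "Rebuild the checkout flow." }));
```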