Wanting a robust Defect Tracking system

Terry October 4, 2024

I am trying to find a template so I do not have to build my own homegrown defect-tracking application. The instructions I found while searching said to create a new project and select the Defect Tracking template. This gives me a very simple "bug" template. I want more in-depth fields so I can capture root-cause information and tie it back to the application and user story. I think a good reference would be

Defect Tracking Best Practices for Software QA (daily.dev)

Has anyone created this sort of project? 

1 answer

1 vote
Łukasz Modzelewski _Lumo_
Marketplace Partner
October 5, 2024

Oh, there are multiple factors, and it depends on the team, the product, who is reporting it, etc.

If it's for external customers: they may or may not follow the guidance. What's worse, they might not report bugs at all if the form is complicated, because it would require too much effort on their side. It's better to start small and ask for more as the relationship is established.

If it's an internal team, then you need to define what you need: version? Browser? Maybe a backup is needed? Maybe the environment needs to be shared with the developer?
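
If you want to script those extra fields instead of clicking through project settings, the Jira Cloud REST API can create them. A minimal sketch in Python with requests, assuming Jira Cloud and an API token; the site URL, credentials, and the "Root Cause" field name are placeholders I made up:

import requests

JIRA = "https://your-site.atlassian.net"   # placeholder site URL
AUTH = ("you@example.com", "api-token")    # placeholder email + API token

# Create a free-text custom field for capturing root-cause analysis.
resp = requests.post(
    f"{JIRA}/rest/api/3/field",
    auth=AUTH,
    json={
        "name": "Root Cause",  # example field name, not a Jira default
        "description": "Root cause identified during defect analysis",
        "type": "com.atlassian.jira.plugin.system.customfieldtypes:textarea",
        "searcherKey": "com.atlassian.jira.plugin.system.customfieldtypes:textsearcher",
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # e.g. customfield_10123

The new field still has to be added to the relevant screens before it shows up on the bug form.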

Then, once the bug is found, what next? Can it ever happen again? Do we need to test for it with each release, or is it an undocumented feature?

In my experience, some bugs are so easy that a summary, the affected version, one screenshot with the problem highlighted, and a description of what is expected are enough. Then again, if there is a lot of rotation and the person verifying != the reporter, or the team doesn't know the product deeply enough, then even simple things can go wrong ;)

As for the template from the link:

Title: [Short description]
Environment: [OS, browser, version]
Steps:
1. [Step 1]
2. [Step 2]
3. [Step 3]
Expected: [What should happen]
Actual: [What does happen]
Proof: [Screenshots/logs]
Severity/Priority: [High/Medium/Low]
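
If defects are filed through the API rather than the create screen, the template above maps directly onto an issue-create call. A rough sketch with the same placeholder site and credentials; it uses the v2 endpoint because it still accepts a plain-text description (v3 expects Atlassian Document Format), and the project key and field values are invented:

import requests

JIRA = "https://your-site.atlassian.net"   # placeholder
AUTH = ("you@example.com", "api-token")    # placeholders

description = """Environment: Windows 11, Chrome 129
Steps:
1. Open the report page
2. Click Export
3. Wait for the download
Expected: a CSV file is downloaded
Actual: the export fails with a 500 error
Proof: screenshot attached"""

resp = requests.post(
    f"{JIRA}/rest/api/2/issue",
    auth=AUTH,
    json={
        "fields": {
            "project": {"key": "QA"},          # invented project key
            "issuetype": {"name": "Bug"},
            "summary": "Export fails on the report page",
            "description": description,
            "priority": {"name": "High"},
        }
    },
)
resp.raise_for_status()
print(resp.json()["key"])  # e.g. QA-123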

I like to switch the order of Expected <=> Actual -> when Expected is the last thing on the ticket, it is easier to check what needs to be done.

As for priorities, we also used "None" and "Blocker" - if any blocker is found, then we are not releasing the application, because one of the core features is not working.
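
That release gate is easy to automate: a single JQL query says whether any open Blocker exists. A sketch against the classic search endpoint, with the same placeholder site and credentials; the project key is invented:

import requests

JIRA = "https://your-site.atlassian.net"   # placeholder
AUTH = ("you@example.com", "api-token")    # placeholders

# Any unresolved Blocker keeps the release gate closed.
jql = "project = QA AND priority = Blocker AND resolution = EMPTY"
resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    auth=AUTH,
    params={"jql": jql, "maxResults": 0},  # only the total count is needed
)
resp.raise_for_status()
if resp.json()["total"] > 0:
    print("Release blocked: open Blocker(s) found")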

Environment - in most of our cases it is useless, because if it does not affect anything, then it's a waste of time. So we focused on the things that matter.

If there are any 'required' fields, like the affected version (if the app can be used in different versions, e.g. on Jira Data Center), then they should be filled in by users. On the other hand, even for cloud-based apps there should be info about the affected version, because if a bug was reported on version v1.0.0 (and it is not fixed right away), it can still be valid in v2.0.0.
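
A quick way to check for exactly that, reusing the search call from the blocker sketch above; the project key and version are examples:

# Bugs reported against 1.0.0 and never resolved may still be valid in 2.0.0.
jql = ('project = QA AND issuetype = Bug '
       'AND affectedVersion = "1.0.0" AND resolution = EMPTY')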

I was also teaching my QAs to mark issues on screenshots and, if possible, mark the order of actions (arrows with numbers) - as I mentioned, one good screenshot is enough (sometimes this can be three browser windows, but with everything visible in one file). If it requires more screens, then as a good practice I recommend naming files in order with human-friendly names (just like preparing images for SEO). It is easier to refer to screen "05 jira issue view" than "<timestamp>". Similarly, when a bug is resolved, for verification one screen "Done..." added in a comment was enough in most cases.

Circling back to writing tests: Xray does a great job of helping teams write tests -> when a defect is observed while a test is being executed, all the steps are added to the description of the defect.
https://docs.getxray.app/display/XRAYCLOUD/Test

Considering X apps, we had X projects with a slightly different configuration for each one - because each team works a bit differently :)
