While using the Jira Xray plug-in for Test Management, our Product Team has observed that the tool often displays very different Requirement Test Statuses in different locations. The differences cannot be explained by differences in the settings (Test Plan, Fix Version/s, Test Environments, etc.) of the filters associated with the Requirement Test Statuses in those locations. We believe they are instead caused by differences in the logic that calculates the Status value in each location where the Requirement Test Status is displayed.
The Xray Documentation does not explain in enough detail how Xray calculates the Requirement Test Status in each location, so we are reaching out to the community for help understanding the criteria and logic/algorithms by which each Requirement Test Status is calculated, and why we see different Requirement Test Statuses in different locations even when the filter settings for those locations are the same.
The Jira/Xray tool locations providing Requirement Test Statuses that we have questions about are as follows:
Screenshots of each of the Requirement Test Status locations listed above, together with our questions about how these Statuses are calculated, are shown below. (Many of the same questions are asked for each location but, since the Status values differ, we figure the answers are probably different in each location.)
Note: in each of the descriptions/questions below, the term “Test Execution” is understood to mean a Test Execution Issue Type that contains the Test(s) under discussion.
Thank you very much for any help you can provide in deciphering how Jira/Xray arrives at the Requirement Test Status values it displays in each of these locations.
REQUIREMENT TEST STATUS 1
Location: “Requirement Status” field in a Requirement (Story or Bug) Issue
Xray Documentation indicates that the Requirement Status combines the results of the latest Test Execution in each Test Environment for which there is a Test Execution containing the Test to arrive at its overall value.
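To make the documented behavior concrete, here is a minimal sketch of how we understand that aggregation: keep only the latest Test Execution result per Test Environment, then combine them. The field names, status labels, and precedence order (any FAIL wins, all PASS means OK, otherwise NOTRUN) are our assumptions for illustration, not Xray's actual implementation.

```python
from datetime import date

# Hypothetical run records: (environment, finished_on, status).
# These names and values are illustrative assumptions only.
runs = [
    ("Chrome", date(2023, 1, 10), "PASS"),
    ("Chrome", date(2023, 1, 12), "FAIL"),
    ("Firefox", date(2023, 1, 11), "PASS"),
]

def latest_status_per_environment(runs):
    """Keep only the most recent run's status for each environment."""
    latest = {}
    for env, finished_on, status in runs:
        if env not in latest or finished_on > latest[env][0]:
            latest[env] = (finished_on, status)
    return {env: status for env, (_, status) in latest.items()}

def requirement_status(runs):
    """Combine per-environment latest statuses into one overall value.
    Assumed precedence: any FAIL -> NOK, all PASS -> OK, else NOTRUN."""
    statuses = latest_status_per_environment(runs).values()
    if not statuses:
        return "NOTRUN"
    if "FAIL" in statuses:
        return "NOK"
    if all(s == "PASS" for s in statuses):
        return "OK"
    return "NOTRUN"
```

Under this sketch, the Chrome environment's latest run (FAIL on Jan 12) overrides its earlier PASS, so the overall status becomes NOK even though Firefox passed; confirming whether Xray applies this precedence in every display location is part of what we are asking.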
Assumptions:
Question:
REQUIREMENT TEST STATUS 2
Location: Overall Requirement Coverage Gadget and Overall Requirement Coverage Report with Scope = Test Plan
Assumption:
Questions:
REQUIREMENT TEST STATUS 3
Location: Overall Requirement Coverage Gadget Filter (i.e., the list of Requirements displayed when an area on the Overall Requirements Coverage Gadget representing the Requirements that have the specified Requirement Test Status is clicked).
Note: Jira/Xray shows that the Filter used to pull the listed Requirements is
filter = <Requirements-specifying Filter> AND issue in requirementsWithStatusByTestPlan(<Requirements Status clicked on the Gadget>, "<Test Plan ID>", "", "true","","","<Requirements-specifying Filter>")
Example: filter = 12345 AND issue in requirementsWithStatusByTestPlan(OK, "PROJ-1111", "", "true","","","12345")
Notice that, although the Overall Requirements Gadget (shown in Requirement Test Status 2) indicates that there are ninety-eight (98) Requirements with Requirement Status = OK, only twenty-nine (29) Requirements are listed when the green OK area in the Gadget is clicked. The other sixty-nine (69) of the ninety-eight (98) Requirements indicated in the Gadget appear only when the “OK” Requirement Status in the Filter is replaced with “NOTRUN”.
Questions:
REQUIREMENT TEST STATUS 4
Location: Xray Traceability Report
Questions:
1. The Requirement Test Status results for the Xray Traceability Report appear to be the same as the Requirement Test Status Results displayed in the Overall Requirements Coverage Gadget (Requirement Test Status 2 above). Do the Gadget and the Report use the same criteria and algorithm to calculate the results shown in each?
If not, we have the same questions about the calculation of Requirement Test Statuses for the Xray Traceability Report as we have for the Overall Requirements Coverage Gadget:
1.1 Which Tests linked to each Requirement does Xray include in its calculation of the Requirement Test Status shown in the Xray Traceability Report - all linked Tests or only those Tests linked to the Requirement via a “Tested by” link?
1.2 Does Xray only look at the Test Plan specified in the Test Execution or does Xray also check that the Test itself is associated with the specified Test Plan? (I.e., how does Xray treat a case in which a Test itself is not associated with the specified Test Plan but one or more of the Test’s Test Executions are associated with the specified Test Plan?)
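To make question 1.2 concrete, here are the two interpretations we can imagine, sketched as scope checks. The data model and names below are purely hypothetical illustrations of the two possible rules, not Xray's internals.

```python
# Hypothetical Test Plan membership data (illustrative assumptions only).
plan_tests = {"TEST-1"}                    # Tests directly added to the Test Plan
plan_executions = {"EXEC-9"}               # Test Executions associated with the plan
execution_tests = {"EXEC-9": {"TEST-2"}}   # Tests run by each Test Execution

def in_scope_strict(test):
    """Interpretation A: the Test itself must be on the Test Plan."""
    return test in plan_tests

def in_scope_via_execution(test):
    """Interpretation B: it is enough that some Test Execution on the
    plan ran the Test, even if the Test is not on the plan itself."""
    return test in plan_tests or any(
        test in tests
        for exec_id, tests in execution_tests.items()
        if exec_id in plan_executions
    )
```

In this sketch, TEST-2 is out of scope under interpretation A but in scope under interpretation B; knowing which rule each Xray location applies would explain at least some of the discrepancies we see.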
REQUIREMENT TEST STATUS 5
Location: Quick Filter Results bar displayed above the Xray Traceability Report
Questions:
REQUIREMENT TEST STATUS 6
Location: Xray “requirementsWithStatusByTestPlan” Filter, described in the “Enhanced querying with JQL” section of the Xray Documentation
The requirementsWithStatusByTestPlan Filter parameters shown in the “Enhanced querying with JQL” section are:
issue in requirementsWithStatusByTestPlan(<desired Requirement Status(es)>, "<Test Plan ID>", "", "true","","","<Requirements-specifying Filter>")
Note: the Xray requirementsWithStatusByTestPlan Filter appears to be the driver for the Requirement Test Status values displayed in both the Overall Requirements Coverage Gadget Filter (i.e., the list of Requirements displayed when one of the areas on the Overall Requirement Coverage Gadget is clicked; see Requirement Test Status 3 above) and the Traceability Report Quick Filters bar (see Requirement Test Status 5 above). This therefore appears to be a key Xray Filter, and understanding the criteria and algorithm it uses to arrive at its results is important to making full use of the analytical features Xray offers.
Questions:
REQUIREMENT TEST STATUS 7
Location: in the Summary bar at the top of the Test Coverage Panel in a Requirement (Story or Bug) Issue
We have experienced cases in which the Summary bar at the top of the Test Coverage Panel in a Requirement Issue displays a Requirement Test Status that is inconsistent with the actual Test results contributing to it.
The example above is one such situation:
Question:
Why is the Requirement Test Status shown in the bar at the top of the Test Coverage Panel above “NOTRUN” instead of “OK”?