Requirement Test Status discrepancies in Jira Xray plug-in

Karyn Knoll
March 2, 2021

While using the Jira Xray plug-in for test management, our Product Team often sees very different Requirement Test Statuses displayed in different locations in the tool. The differences cannot be explained by differences in the settings (Test Plan, Fix Version/s, Test Environments, etc.) of the filters associated with the Requirement Test Statuses in those locations. We believe they are likely caused by differences in the logic that calculates the Status value in each location where the Requirement Test Status is displayed.

We cannot find detailed enough explanations in the Xray documentation to determine exactly how Xray calculates the Requirement Test Status in each location. We are therefore reaching out to the community for help in understanding the criteria and logic/algorithms by which each Requirement Test Status is calculated, and why we see different Requirement Test Statuses in different locations even when the filter settings for those locations are the same.

The Jira/Xray tool locations providing Requirement Test Statuses that we have questions about are as follows:

  1. “Requirement Status” field in a Requirement (Story or Bug) Issue.
  2. Overall Requirement Coverage Gadget and Overall Requirement Coverage Report with Scope = Test Plan.
  3. Overall Requirement Coverage Gadget Filter (i.e., the list of Requirements displayed when an area on the Overall Requirements Coverage Gadget representing the Requirements that have the specified Requirement Test Status is clicked).
  4. Xray Traceability Report.
  5. Quick Filter Results bar displayed above the Xray Traceability Report.
  6. Xray “requirementsWithStatusByTestPlan” Filter.
  7. Summary bar at the top of the Test Coverage Panel in a Requirement (Story or Bug) Issue.

Screenshots of each of the Requirement Test Status locations listed above, along with our questions about how these Statuses are calculated, appear below. (Many of the same questions are asked for each location but, since the Status values differ, the answers are probably different in the different locations.)

Note: in each of the descriptions/questions below, the term “Test Execution” is understood to mean a Test Execution Issue Type that contains the Test(s) under discussion.

Thank you very much for any help you can provide in deciphering how Jira/Xray arrives at the Requirement Test Status values it displays in each of these locations.

REQUIREMENT TEST STATUS 1

Location: “Requirement Status” field in a Requirement (Story or Bug) Issue

ReqTestStatus1.jpg

Xray Documentation indicates that the Requirement Status combines the results of the latest Test Execution in each Test Environment for which there is a Test Execution containing the Test to arrive at its overall value.

Assumptions:

  1. Xray first finds the Tests linked to the Requirement and then, for each Linked Test, retrieves its latest Test Execution in each Test Environment. (Because only Tests, not Test Executions, are associated with Requirements, the only way Xray can determine that a Test Execution is testing a given Requirement is via the Tests linked to the Requirement.)
  2. The Requirement Status field calculation combines results for Test Executions
    • that have the same Fix Version/s as the Requirement,
    • that are associated with different Test Environments (i.e., one Test Execution for each Test Environment specified across the Test’s Test Executions is included in the Requirement Test Status calculation), but
    • other differences in Test Executions (e.g., different Test Plans, different Revisions) are ignored.
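To make these assumptions concrete, here is a small Python sketch of the calculation we believe Assumptions 1 and 2 describe. Everything in it — the field names, the status names, and the combination rule — is our own guess for illustration, not Xray's actual data model or algorithm:

```python
def requirement_status(linked_tests, executions, requirement_fix_versions):
    """Combine the latest run of each Test per Test Environment (Assumption 2).

    Hypothetical data model: each execution is a dict with our own invented
    keys "tests" (test key -> run status), "fix_versions", "environment",
    and "executed_at". None of these names come from Xray's API.
    """
    statuses = []
    for test in linked_tests:
        # Keep only Executions sharing a Fix Version with the Requirement
        # (Assumption 2a), and retain the latest one per Test Environment.
        latest_per_env = {}
        for ex in executions:
            if test not in ex["tests"]:
                continue
            if not (set(ex["fix_versions"]) & set(requirement_fix_versions)):
                continue
            env = ex["environment"]
            if env not in latest_per_env or ex["executed_at"] > latest_per_env[env]["executed_at"]:
                latest_per_env[env] = ex
        if not latest_per_env:
            statuses.append("NOTRUN")  # Test never executed in scope
        else:
            statuses.extend(ex["tests"][test] for ex in latest_per_env.values())
    # Guessed precedence: any failure wins, then any unexecuted run, else OK.
    if any(s == "FAIL" for s in statuses):
        return "NOK"
    if any(s in ("TODO", "NOTRUN", "EXECUTING") for s in statuses):
        return "NOTRUN"
    return "OK"
```

If Xray's real precedence between failed, unexecuted, and passed runs differs from this guess, that alone could explain some of the discrepancies described below.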

Question:

  1. Which Tests linked to the Requirement does Xray include in its calculation of the Requirement Test Status - all linked Tests or only those Tests linked to the Requirement via a “Tested by” link?

REQUIREMENT TEST STATUS 2

Location: Overall Requirement Coverage Gadget and Overall Requirement Coverage Report with Scope = Test Plan

ReqTestStatus2.jpg

Assumption:

  1. Xray first finds the Tests linked to the Requirement and then retrieves the latest Test Execution of each Test associated with the specified Test Plan in the specified Test Environments.

Questions:

  1. Which Tests linked to each Requirement does Xray include in its calculation of the Requirement Test Status - all linked Tests or only those Tests linked to the Requirement via a “Tested by” link?
  2. Does Xray only look at the Test Plan specified in the Test Execution, or does Xray also check that the Test itself is associated with the specified Test Plan? (I.e., how does Xray treat a case in which a Test itself is not associated with the specified Test Plan but one or more of the Test’s Test Executions are?)
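To make Question 2 concrete, the two possible readings of "associated with the specified Test Plan" can be sketched as follows. The data model and function are hypothetical, invented only to state the question precisely:

```python
def runs_in_scope(test, executions, plan, tests_in_plan, strict):
    """Select a Test's runs for a Test Plan-scoped status calculation.

    strict=False: only the Test Execution must reference the Test Plan.
    strict=True:  the Test itself must also belong to the Test Plan.
    (Both "executions" entries and "tests_in_plan" use invented field names.)
    """
    if strict and test not in tests_in_plan:
        # Test not on the plan: contributes no runs at all.
        return []
    return [ex for ex in executions
            if test in ex["tests"] and ex["test_plan"] == plan]
```

Whether Xray behaves like `strict=True` or `strict=False` would change which runs feed the Requirement Test Status whenever a Test's Executions reference a Test Plan the Test itself is not on.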

REQUIREMENT TEST STATUS 3

Location: Overall Requirement Coverage Gadget Filter (i.e., the list of Requirements displayed when an area on the Overall Requirements Coverage Gadget representing the Requirements that have the specified Requirement Test Status is clicked).

Note: Jira/Xray shows that the Filter used to pull the listed Requirements is

filter = < Requirements-specifying Filter> AND issue in requirementsWithStatusByTestPlan(<Requirements Status clicked on the Gadget>, "<Test Plan ID>", "", "true","","","<Requirements-specifying Filter>")

Example: filter = 12345 AND issue in requirementsWithStatusByTestPlan(OK, "PROJ-1111", "", "true","","","12345")

ReqTestStatus3.jpg

Notice that, although the Overall Requirements Gadget (shown in Requirement Test Status 2) indicates that there are ninety-eight (98) Requirements with Requirement Status = OK, only twenty-nine (29) Requirements are shown in the list when the green OK area in the Gadget is clicked. The other sixty-nine (69) of the ninety-eight (98) Requirements indicated in the Gadget show up when the “OK” Requirement Status in the Filter is replaced with “NOTRUN”.

Questions:

  1. Are the Tests and Test Executions used to calculate the Requirement Test Status of each Requirement counted in an area of the Overall Requirements Coverage Gadget the same as those used to calculate the Requirement Test Status of each Requirement listed when that Gadget area is clicked, or are different Tests and Test Executions used in the two calculations?
  2. Why is the Requirement Test Status of the Requirements shown in the Overall Requirements Gadget different from the Requirement Test Status of the Requirements listed when a Gadget area is clicked? (In the example shown, the Gadget indicates that ninety-eight (98) Requirements have Requirement Status = OK, but the list shown when the green OK Gadget area is clicked indicates that only twenty-nine (29) Requirements have Requirement Status = OK; the other sixty-nine (69) Requirements have Requirement Status = NOTRUN.)

REQUIREMENT TEST STATUS 4

Location: Xray Traceability Report

ReqTestStatus4.jpg

Questions:

1. The Requirement Test Status results for the Xray Traceability Report appear to be the same as the Requirement Test Status Results displayed in the Overall Requirements Coverage Gadget (Requirement Test Status 2 above). Do the Gadget and the Report use the same criteria and algorithm to calculate the results shown in each?

If not, we have the same questions about the calculation of Requirement Test Statuses for the Xray Traceability Report as we have for the Overall Requirements Coverage Gadget:

1.1     Which Tests linked to each Requirement does Xray include in its calculation of the Requirement Test Status shown in the Xray Traceability Report - all linked Tests or only those Tests linked to the Requirement via a “Tested by” link?

1.2     Does Xray only look at the Test Plan specified in the Test Execution, or does Xray also check that the Test itself is associated with the specified Test Plan? (I.e., how does Xray treat a case in which a Test itself is not associated with the specified Test Plan but one or more of the Test’s Test Executions are?)

REQUIREMENT TEST STATUS 5

Location: Quick Filter Results bar displayed above the Xray Traceability Report

ReqTestStatus5.jpg

Questions:

  1. The Requirement Test Status results for the Xray Traceability Report Quick Filters bar appear to be the same as the Requirement Test Status Results displayed in the Overall Requirements Coverage Gadget Filter (Requirement Test Status 3 above). Do the Gadget Filter and the Quick Filters bar use the same criteria and algorithm to calculate the results shown in each?
  2. Why is the Requirement Test Status of the Requirements shown in the Xray Traceability Report itself different from the Requirement Test Status of the Requirements shown in the Traceability Quick Filters bar? (In the example shown, the Xray Traceability Report indicates that ninety-eight (98) Requirements have Requirement Status = OK, but the Traceability Report Quick Filters bar indicates that twenty-nine (29) Requirements have Requirement Status = OK and sixty-nine (69) Requirements have Requirement Status = NOTRUN.)

REQUIREMENT TEST STATUS 6

Location: Xray “requirementsWithStatusByTestPlan” Filter, described in the Xray Documentation Enhanced querying with JQL section

requirementsWithStatusByTestPlan Filter parameters shown in the Enhanced querying with JQL section are:

issue in requirementsWithStatusByTestPlan(<desired Requirement Status(es)>, "<Test Plan ID>", "", "true","","","<Requirements-specifying Filter>")

Note: the Xray requirementsWithStatusByTestPlan Filter appears to drive the Requirement Test Status values displayed in both the Overall Requirements Coverage Gadget Filter (i.e., the list of Requirements displayed when one of the areas on the Overall Requirement Coverage Gadget is clicked; see Requirement Test Status 3 above) and the Traceability Report Quick Filters bar (see Requirement Test Status 5 above). This therefore seems to be a key Xray Filter, and understanding the criteria and algorithm it uses to arrive at its results is important to fully using the analytical features Xray offers.

Questions:

  1. Which Tests linked to a Requirement does the requirementsWithStatusByTestPlan Filter include when calculating the Requirement Test Status of the Requirement – all linked Tests or only those Tests linked to the Requirement via a “Tested by” link?
  2. Also, in determining which Tests to include in its Requirement Status calculation, does Xray only look for Test Executions that are associated with the specified Test Plan or does Xray also check that the Test itself is associated with the specified Test Plan? (I.e., how does the Xray requirementsWithStatusByTestPlan Filter treat a case in which a Test itself is not associated with a specified Test Plan but one or more Test Executions that include the Test are associated with the specified Test Plan?)

REQUIREMENT TEST STATUS 7

Location: Summary bar at the top of the Test Coverage Panel in a Requirement (Story or Bug) Issue

ReqTestStatus7.jpg

We have experienced cases in which the Summary bar at the top of the Test Coverage Panel in a Requirement Issue displays a Requirement Status that is inconsistent with the actual Test results that contribute to it.

The example above is one such situation:

  • The Requirement Issue (a Story) is linked to seven (7) Tests via seven (7) “Tested by” links.
  • Four (4) of the linked Tests are in the specified Test Plan; three of the linked Tests are associated with a different Test Plan.
  • All four (4) of the linked Tests in the specified Test Plan Passed in the specified Test Environment. All four (4) of these Tests are shown as Passed in the Test Coverage Panel list below the Summary bar; the remaining three (3) Tests, which are not associated with the specified Test Plan, are shown as N/A in the Test Coverage Panel list.
  • Additionally, an Overall Requirements Coverage Report that was run using the same Test Plan and Test Environment as that shown in the Test Coverage Panel above indicated that this Requirement Passed four (4) of a total of four (4) Tests. When the “4” in the Total Tests list was clicked, the result listed the same four (4) Tests shown in the Test Coverage Panel, which all have a Status = Passed. (I.e., the Overall Requirements Coverage Report correctly included only the four (4) linked Tests that are associated with the specified Test Plan and Test Environment; the three (3) linked Tests not associated with the specified Test Plan were ignored.)
  • Given the set of circumstances described above, we expect the Requirement Status shown in the Test Coverage Panel Summary bar to be “OK”, yet the Requirement Status shown in the bar is “NOTRUN”.
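To pin down where our expectation diverges from what Xray shows, here is a toy Python calculation of the scenario above, using an entirely speculative combination rule of our own. The result flips depending on whether the three N/A Tests are counted as unexecuted:

```python
# Toy model of the scenario above: seven linked Tests, four on the specified
# Test Plan (all Passed), three on a different plan (shown as N/A).
panel = {"T-1": "PASS", "T-2": "PASS", "T-3": "PASS", "T-4": "PASS",
         "T-5": "N/A", "T-6": "N/A", "T-7": "N/A"}

def combine(statuses):
    """Speculative rule: any FAIL -> NOK, any unexecuted -> NOTRUN, else OK."""
    statuses = list(statuses)
    if any(s == "FAIL" for s in statuses):
        return "NOK"
    if any(s in ("TODO", "NOTRUN", "N/A") for s in statuses):
        return "NOTRUN"
    return "OK"

# If the Summary bar combined only the in-plan Tests, the Requirement would be OK:
in_plan_only = combine(s for s in panel.values() if s != "N/A")
# But if the N/A Tests were counted as unexecuted, the bar would show NOTRUN:
all_linked = combine(panel.values())
```

The NOTRUN we observe would be consistent with the Summary bar counting all linked Tests while the Overall Requirements Coverage Report counts only the in-plan Tests - but that is only our guess.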

Question:

Why is the Requirement Test Status shown in the bar at the top of the Test Coverage Panel above “NOTRUN” instead of “OK”?

1 answer

1 vote
Ignacio Pulgar
Rising Star
December 20, 2017

Tell the customer to sign a document with a disclaimer exempting you from responsibility for any bugs and architectural inconsistencies that may arise as a result of your work with a lack of perspective on the whole picture.

If they want to work that way, that's OK as long as you are covered and not responsible for any disasters.

Besides, exposure to that document might make them change their opinion on the way everything is managed.
