We are currently working on migrating our old test protocols to JIRA Zephyr and I stumbled across a problem I couldn't find a nice solution for yet.
We are developing a product that also has a webpage that users need to use.
We support multiple browsers (Chrome, Firefox, IE 11 and Safari).
The thing is that when I have a test (e.g. "Registrationprocess") that I want to run in all browsers, I don't want to create a new test for every browser, because the test is always the same; only the platform changes (same thing with a program I want to test on different Windows versions). If I just copied/pasted the test, I would have to edit every single copy whenever something changes.
What we first thought of was just adding the test multiple times to the same cycle and putting the info about the browser into the execution's comment (not an ideal solution, but it would at least work).
Then I came across the issue that you can't add the same test multiple times to the same cycle (https://answers.atlassian.com/questions/257866), which I kind of understand, but it makes my life harder.
We also don't want to make a new cycle for every browser, because it just adds unnecessary complexity and kind of breaks the idea of a test cycle.
My question now is: how can I have the same test executed in the same cycle, but with different "platforms" (or at least find a nice workaround)?
Yes, this is probably the biggest drawback to Zephyr in my estimation. I want to run the same test case in 5 different environments, and report it as Passed in one environment, Failed in another, and WIP in a third. Yes, I could create 4 clones of every test case, but then if I want to change a step, I have to make the same change in all 4 clones. It makes maintaining test cases 5 times as long and hard as it ought to be.
It should be possible to record multiple executions of each test case, with a text field (called, say, 'Iteration' or 'Execution') that indicates what is unique about that execution: for example, 5 different environments, 3 different input data values, or logged in as Administrator User vs. non-Administrator User. That way, if you make an improvement to a test case, you only have to make it once and it affects every future execution.
It is going to boil down to how you want JIRA to report status.
If all the Browsers/OS have to be tested on one of your web pages before the web page passes testing, then you can:
Use the JIRA test case called "Registrationprocess", build out your test condition(s) for the webpage in the DESCRIPTION, then have the JIRA test steps outline your different browsers/OS. Put the test case into a test cycle, then start executing the test steps.
One potential downside is that this can make it a little more complicated for members outside your team to view the status of testing.
Here are three simple suggestions for handling your situation in the same test cycle.

1 - Clone/copy the original test case (with all of its steps) and put the browser in the summary so you know at a glance which browser each copy applies to within your cycle... e.g. "Registration Process (Chrome)", "Registration Process (FireFox)", "Registration Process (IE 11)", "Registration Process (Safari)".

2 - In addition (or instead), use the labels field and consistently label each separate test with the browser it tests ("Chrome", "FireFox", "IE11", "Safari"). You could also use components for this, but that requires adding the component to the project.

3 - If you want to manage the testing within the same test case (i.e. only one Z4J test), you could clone each step multiple times and identify each cloned/copied/repeated step with the browser it relates to... so steps 1-4 would have the same text, and the test data field would contain Step 1-Chrome, 2-FireFox, 3-IE11, 4-Safari. You could also handle all the test steps for Chrome first (say, steps 1-11), then steps 12-22 as the iteration for Firefox, 23-33 for IE11, and 34-44 for Safari. A problem would exist if any of the browser steps fail: the whole test case would fail (or be blocked, or WIP, whatever). BUT when opening the execution you could at least see which browser steps have passed and which step(s) are being held up.
Other handling methods also come to mind. Maybe some of the above can inspire your solution.
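If maintaining the per-browser clones by hand becomes tedious, the cloning idea above could be scripted against JIRA's standard REST API (`POST /rest/api/2/issue`). This is only a sketch: the project key "WEB", the issue type name "Test", and the label scheme are assumptions to adjust for your own JIRA/Zephyr setup.

```python
# Sketch: generate one browser-specific clone of a test case via JIRA's REST API.
# PROJECT_KEY, the "Test" issue type name, and the labels are illustrative
# assumptions -- adapt them to your own JIRA/Zephyr configuration.
import json
import urllib.request

BROWSERS = ["Chrome", "FireFox", "IE11", "Safari"]


def build_clone_payload(base_summary, browser, project_key="WEB"):
    """Build the JIRA 'create issue' payload for one browser-specific clone."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Test"},  # assumed Zephyr test issue type
            "summary": f"{base_summary} ({browser})",
            "labels": [browser],  # consistent per-browser label, as suggested
        }
    }


def create_clones(base_summary, jira_url, auth_header):
    """Prepare one POST request per browser (network call left commented out)."""
    for browser in BROWSERS:
        payload = build_clone_payload(base_summary, browser)
        req = urllib.request.Request(
            f"{jira_url}/rest/api/2/issue",
            data=json.dumps(payload).encode(),
            headers={
                "Content-Type": "application/json",
                "Authorization": auth_header,
            },
            method="POST",
        )
        # urllib.request.urlopen(req)  # uncomment to actually create the issues
```

This only automates creating the copies; keeping the steps of all clones in sync remains a manual chore, which is the core complaint in this thread.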
You've got this covered :-)