Automation concepts – Testing Practices for Rule Writing

Estimated time to read: 5 minutes

 

TL;DR: Let's cover testing basics for rule-writing to ensure you get the desired results and to save your team time in the future.

 

When testing automation rules, particularly more complicated ones, I may use these practices:

  • When possible, pair program on the rule. This will lead to a better solution, knowledge sharing, and improved maintainability when external events happen, such as your product admin leaving unexpectedly.
  • Define the problem you are trying to solve; that is, answer "why do this?" Doing so will save frustration when writing the rule and checking the test results.
  • For more complicated scenarios, I always pause to draw a diagram or use blank index cards, moving them around to simulate how I expect the logic to work. (Using blank cards this way is called the "naked CRC" method, where CRC is class, responsibility, and collaboration. There is an old software engineering rule of thumb where if you cannot explain a system with 10 blank index cards, you do not understand it.)
  • Define your test cases before you start, perhaps in GIVEN...WHEN...THEN... format or a matrix of inputs / results (an example follows this list). This may help you do a bit of Pareto analysis (i.e., the 80 / 20 rule) to reduce testing, and perhaps even stop adding rule components once you have solved most of the problem with less effort.
  • Create a test project in a free-license instance, test site, or sandbox. This limits side effects and production impact, and reduces the chance of consuming automation usage limits.
  • When writing the rule, leverage the power of the audit log with Log actions and {{#debug}} ... {{/}} sections (a sample Log action follows this list). Beware that debug syntax can break some smart value expressions, so watch for unexpected results. When that happens, try removing the debug and retesting.
  • Unless work item continuity is needed for a test scenario, always create new work items for each test case. This fault-isolates testing and produces consistent results for later review: that is, you have a known starting and ending point. Consider these methods to create test work items for each test round:
    • Build a CSV import file that creates a set of test work items meeting your needs (a sample CSV follows this list)
    • Create a separate rule that creates new work items
    • With either method, add a unique identifier to distinguish items, such as a date / time stamp or "test #" in the Summary
  • Before each test / automation rule execution, confirm what you expect and always slow down to understand results before changing the rule again. Better still, consider if you could automate validation in the rule itself, perhaps removing that validation later. Then the audit log will document your test results.
  • Unfortunately, there is no version control for rules. When making many or larger-scale changes to a rule, instead of updating it in place, disable that rule and copy it, making the changes in the copy. This allows both before / after comparison and preservation of the audit logs. When everything works, decide what to do with the previous, in-progress rules.
  • When you do not understand why a test is failing, verbally explain the situation to a teammate. One may be surprised how often just slowing down to describe the symptom reveals the cause...even before your teammate says anything :^)
  • Provide rule context when asking the community to help resolve a rule problem. This usually includes: the problem being solved, your Atlassian products and their versions, the complete rule, the audit log, what is not working as expected, and why you believe that to be the case.
  • When ready for production, copy or export-import your rule to the prod environment, and retest there to confirm behavior.
  • When completely done, pause to document and explain the rule to another team member, letting them ask questions. This knowledge sharing will pay dividends for years to come.
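
For example, here is a sketch of GIVEN...WHEN...THEN... test cases for a hypothetical rule that transitions a parent work item when its last sub-task is done; the scenario and field values are only illustrative:

  GIVEN a work item with three sub-tasks, two done and one in progress
  WHEN the remaining sub-task transitions to Done
  THEN the parent work item transitions to Done and a comment is added

  GIVEN a work item with three sub-tasks, only one done
  WHEN a second sub-task transitions to Done
  THEN the rule takes no action on the parent

Writing even a couple of cases like these up front makes it obvious which ones cover most of the value, which is where the 80 / 20 judgment comes in.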
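
A Log action's text might wrap a smart value in a debug section like this; the smart value shown is only an example of something you may want to inspect:

  Assignee check: {{#debug}}{{issue.assignee.displayName}}{{/}}

When the rule runs, the evaluated value is written to the audit log, so you can see what the rule actually received. If the surrounding expression stops resolving, remove the debug wrapper and retest, as noted above.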
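
And here is a minimal sketch of a CSV import file for creating test work items; the column names and values are illustrative, so adjust them to the fields your project requires (at minimum, the CSV importer needs a Summary column):

  Summary,Issue Type,Description
  "Test #1 2024-05-01 09:30 - last sub-task done",Task,"Happy path case"
  "Test #2 2024-05-01 09:30 - sub-task still open",Task,"No transition expected"

The date / time stamp and "test #" in the Summary make it easy to tell one test round's work items from another in search results and the audit log.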

 

I hope this article offered you some new ideas for testing rules. Please let me know your feedback, and…happy rule writing!

 

1 comment

Rock
Contributor
December 8, 2025

Great overview of testing practices! A few key takeaways for beginners:

  1. Always define the problem and expected outcome before writing the rule.
  2. Use a sandbox or test project to avoid impacting production.
  3. Create new work items for each test case to isolate results.
  4. Leverage audit logs and debug actions to track behavior.
  5. Keep versions of rules by copying instead of updating, since there’s no built-in version control.

Following these steps consistently saves time and prevents frustration when rules get complex.
