Automation concepts – Testing Practices for Rule Writing

Estimated time to read: 5 minutes

 

TL;DR: Let's cover testing basics for rule-writing to ensure you get the desired results and to save your team time in the future.

 

When testing automation rules, particularly more complicated ones, I may use these practices:

  • When possible, pair program on the rule. This will lead to a better solution, knowledge sharing, and improved maintainability when external events happen, such as your product admin leaving unexpectedly.
  • Define the problem you are trying to solve; that is, "why do this?" That will save frustration when writing the rule and checking the test results.
  • For more complicated scenarios, I always pause to draw a diagram or use blank index cards, moving them around to simulate how I expect the logic to work. (Using blank cards this way is called the "naked CRC" method, where CRC is class, responsibility, and collaboration. There is an old software engineering rule of thumb where if you cannot explain a system with 10 blank index cards, you do not understand it.)
  • Define your test cases before you start, perhaps in GIVEN...WHEN...THEN... format or a matrix of inputs / results. This may help you do a bit of Pareto analysis (i.e., the 80/20 rule) to reduce testing, and perhaps even stop adding rule components once you have solved most of the problem with less effort.
  • Create a test project under a free license, in a test site, or in a sandbox. This reduces side effects and production impacts, and lowers the chance of affecting automation usage limits.
  • When writing the rule, leverage the power of the audit log with Log actions and {{#debug}} ... {{/}} sections. Beware that debug syntax can break some smart value expressions, so watch out for unexpected results. When that happens, try removing the debug and retesting.
  • Unless work item continuity is needed for a test scenario, always create new work items for each test case. This helps fault-isolate testing and produces consistent results for later review: that is, you have a known starting and ending point. Try these methods to create test work items for each test round:
    • Create a CSV import file to create a set of test work items meeting your needs
    • Create a separate rule which creates new work items
    • In both of the above methods, add a unique identifier to distinguish items, such as adding a date/time stamp or "test #" to the Summary
  • Before each test / automation rule execution, confirm what you expect and always slow down to understand results before changing the rule again. Better still, consider if you could automate validation in the rule itself, perhaps removing that validation later. Then the audit log will document your test results.
  • Unfortunately, there is no version control for rules. When making many or larger-scale changes to a rule, instead of updating it, disable that rule and copy it to a new one for changes. This allows both comparison of before / after and preservation of the audit logs. When everything works, decide what to do with the previous, in-progress rules.
  • When not understanding why a test is failing, verbally explain the situation to a teammate. One may be surprised how often just slowing down to describe the symptom reveals the cause...even before your teammate says anything :^)
  • Provide rule context when asking the community to help resolve a rule problem. This usually includes: the problem being solved, your Atlassian products and their versions, the complete rule, the audit log, what is not working as expected, and why you believe that to be the case.
  • When ready for production, copy or export-import your rule to the prod environment, and retest there to confirm behaviors.
  • When completely done, pause to document and explain the rule to another team member, letting them ask questions. This knowledge sharing will pay dividends for years to come.
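To make the test-case definition step above more concrete, here is a hypothetical example in GIVEN...WHEN...THEN... format. The rule, project, and field values are illustrative assumptions, not from any real rule:

```
GIVEN a work item of type Bug in project TEST
WHEN its Priority is changed to Highest
THEN the rule assigns it to the on-call user
AND the label "escalated" is added

GIVEN a work item of type Task in project TEST
WHEN its Priority is changed to Highest
THEN the rule makes no changes (work item type does not match)
```

Writing a few cases like this, including at least one "nothing should happen" case, often reveals missing conditions before you build the rule.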
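The Log action with debug sections mentioned above might look like the following inside a rule's Log action. The field references are common smart values, shown here purely for illustration:

```
Transition check: {{#debug}}{{issue.status.name}}{{/}}
Current assignee: {{#debug}}{{issue.assignee.displayName}}{{/}}
```

Each debug section writes the evaluated expression to the audit log, which helps pinpoint whether a smart value is empty or not what you expected. As noted above, remove the debug wrappers if results look odd, and retest.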
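The CSV-import idea above, combined with the unique-identifier tip, can be sketched with a short script. This is a minimal example, assuming standard Jira CSV import columns (Summary, Issue Type, Description); your site's required columns may differ:

```python
import csv
from datetime import datetime, timezone

def write_test_items_csv(path, count=5, issue_type="Task"):
    """Write a CSV of test work items, tagging each Summary with a
    timestamp and test number so every test round is distinguishable."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Summary", "Issue Type", "Description"])
        for n in range(1, count + 1):
            writer.writerow([
                f"Automation test #{n} ({stamp})",
                issue_type,
                f"Created for rule-testing round at {stamp}",
            ])
    return path

write_test_items_csv("test_items.csv")
```

Import the resulting file before each test round, and the timestamped summaries give you the known starting point described above.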

 

I hope this article offered you some new ideas to test rules. Please let me know your feedback, and…Happy rule writing!

 

4 comments

Rock
Contributor
December 8, 2025

Great overview of testing practices! A few key takeaways for beginners:

  1. Always define the problem and expected outcome before writing the rule.
  2. Use a sandbox or test project to avoid impacting production.
  3. Create new work items for each test case to isolate results.
  4. Leverage audit logs and debug actions to track behavior.
  5. Keep versions of rules by copying instead of updating, since there’s no built-in version control.

Following these steps consistently saves time and prevents frustration when rules get complex.

Darryl Lee
Community Champion
December 14, 2025

@Bill Sheboy I will pitch this in here (since it's too late for an official submission) for a Team session:

You and me (or some other more accomplished Automation rule writers) on stage with our laptops, a microphone out in the audience, asking for people to step up and give suggestions of rules they need written.

We try to implement them right then and there.

Of course, I wouldn't want to follow any of your best practices; I'd just dive right in and start hacking. And then there's you with your measured, methodical approach.

It might be entertaining to see how that turns out. :-D

Alas, what Atlassian seems to want is well-rehearsed, slide-driven, practiced and repeatable content. Especially stuff that sells more subscriptions, upgrades, or third-party apps.

Well... there's always Braindates.

Would really love to meet in person one of these years for something like this, even if we don't end up doing a session together. (And I promise not to try too hard to sway you into joining the Atlassian Champion program.)

Bill Sheboy
Rising Star
December 15, 2025

Hi @Darryl Lee !!

Thanks for your feedback, and...as you may have read in my posts before: I do not believe in best practices, only better or worse ones for a specific team, context, and point in time.

That noted, I clearly missed describing "better" testing practices used incrementally for many of the ideas, while others provide a safe place to stand and experiment. For example, using a test project in a test site provides a safety foundation; then one can incrementally create a test and write the code / rule to make the test pass (i.e., TDD / BDD), reducing total risk and effort using Pareto. When there is more problem complexity, add the drawing techniques, etc.

Have a great day, and kind regards,
Bill

 

Dave Liao
Community Champion
January 8, 2026

That last bullet is so important. I'd add:

  • If the rule is super duper important, document it in a place like Confluence, and be clear on who the owner is.
    • Hopefully the Rule Owner is an account with a shared email inbox. 🥲
