
JQL Nuance: WAS IN vs automated snapshot

I have an automation that sends me a set of issues on the first of the month at 01:00 CST.

issuetype = bug AND status in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three)

I have a query for the same info for past months

issuetype = bug AND status was in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") ON (2022-08-01) AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three)

Both return applicable issues, though each has some unique issues. Since they all fit the criteria, I'd expect the lists would be identical. What nuances am I overlooking? 

Answer accepted

The first one is looking at the status of the issues at 01:00 CST.

The second is checking to see if the issues were in any of the specified statuses at any time in the 24 hours of the date specified.
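The difference between the two predicates can be sketched with a small simulation. This is only an illustration with hypothetical data, assuming (as described above) that `status in` looks at a single point in time while `WAS IN ... ON date` matches any status the issue held during that day:

```python
from datetime import datetime, date

# Toy status-change history for one issue: (timestamp, new status).
# Hypothetical data for illustration only.
history = [
    (datetime(2022, 8, 1, 9, 0), "In Progress"),
    (datetime(2022, 8, 1, 16, 0), "Done"),
]
initial_status = "Backlog"

def status_at(history, initial, moment):
    """Status of the issue at one point in time (what `status in` sees at run time)."""
    status = initial
    for ts, new in history:
        if ts <= moment:
            status = new
    return status

def was_in_on(history, initial, statuses, day):
    """True if the issue held any of `statuses` at any point during `day`
    (one reading of JQL's `status WAS IN (...) ON date`)."""
    # Statuses held during the day: the status at the start of the day,
    # plus every status entered during the day.
    held = {status_at(history, initial, datetime(day.year, day.month, day.day))}
    for ts, new in history:
        if ts.date() == day:
            held.add(new)
    return bool(held & set(statuses))

# At 10:00 the issue is "In Progress", so a point-in-time check misses "Backlog"...
print(status_at(history, initial_status, datetime(2022, 8, 1, 10, 0)))  # In Progress
# ...but the day-based check still matches, because it started the day in Backlog.
print(was_in_on(history, initial_status, {"Backlog"}, date(2022, 8, 1)))  # True
```

An issue that changed status during the day can therefore match the `WAS IN ... ON` query even though the point-in-time snapshot excludes it, which is one source of extra results.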

You're correct, though I'm not certain that explains the discrepancy between the two results. 

I ran the automation and it found 100 issues. I took the timestamp and plugged it into the query, which returned 118. 

This leads me to believe there's a difference in what's being searched for, though it's not clear what.

What do you mean you took the time stamp and "plugged it into the query"? What did that query look like after you updated it?

Did you review the two lists to find the issues that were in one and not in the other, and then review the History for those issues?

Here's a screenshot from the automation log


I took that date and time and ran the query looking for issues matching at that point in time. 

issuetype = bug AND "Product Component[Select List (multiple choices)]" not in (One, Two, Three) AND status was in (Backlog, Blocked, "To Do", Icebox, Open, "In Progress", "In Review") ON "2022/09/09 14:47"

They return a different number of results.

Hm, this is interesting.

I tried a very small data set of 4 newly created issues. I changed one to In Progress.

I ran a scheduled automation like you have for 

status in (Backlog)

...and got the three issues I expected. Then, like you, I took the timestamp from the rule run and used Advanced Issue Search to execute the query:

status was in (Backlog) on "<timestamp>"

...and that result included the fourth issue currently in the In Progress status.

I think I know what happened in my case. Maybe this will explain your discrepancies also.

My user account Preference time zone is set to Los Angeles. The default time zone for my instance is UTC; a 7-hour difference.

In the Audit Log for the Rule, the timestamp displayed is adjusted to my account Preference time zone.

Timestamps for issue changes, however, are stored in the database in the instance's time zone.

I have determined that the <timestamp> value used in the Advanced Issue Search screen for the ON predicate is being compared to the timestamp of status changes as they are recorded in the database - which for me is UTC (the time zone setting of the instance).

So, when I put "2022-09-09 14:44" (the Los Angeles-relative timestamp from the audit log) into my advanced search, the comparison treated it as UTC (the time zone of the date/times in the issue change log). Seven hours earlier, in local terms, that fourth issue of mine was still in the Backlog status, so it matched. When I adjusted the timestamp to the UTC time at which the fourth issue's status change actually occurred, that issue was properly excluded.
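The offset arithmetic can be double-checked with a short script. This is a sketch, assuming the instance time zone is UTC and the account Preference is America/Los_Angeles; it converts the audit-log timestamp into the value to paste into the `ON` predicate:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Timestamp as displayed in the rule's audit log
# (rendered in the account-preference time zone).
local = datetime(2022, 9, 9, 14, 44, tzinfo=ZoneInfo("America/Los_Angeles"))

# Convert to the instance time zone (assumed UTC here) before
# using the value in the WAS IN ... ON predicate.
utc = local.astimezone(ZoneInfo("UTC"))
print(utc.strftime("%Y/%m/%d %H:%M"))  # 2022/09/09 21:44
```

On that date Los Angeles is on PDT (UTC-7), so 14:44 local corresponds to 21:44 UTC.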

Given that information, does that explain the discrepancies in your case also?

Interesting! Let me run everything with UTC and see if it's 1:1.

Alright, headway!

Searching for current issues while noting the UTC time, then running the WAS IN query with that UTC timestamp, returns an essentially 1:1 match, about 120 issues.

It seems the 'bug' is in the automation. For half the projects, it returns a 1:1 match with the manually run queries. It's not returning any results for the other half.

It's a global automation, set to All Projects. All parameters match the manually executed queries. Hitting validate query returns 120 issues. 

Is something truncating how many issues it'll email?

{{issueType}}, {{key}}, {{customfield_13261.value}}, {{customfield_13224.value}}, {{customfield_13264.value}}, {{created.jiraDate}}

For the Free plan there is a 100 emails/24 hours limit, but your post is tagged as Premium so that should not apply to you.

More info on automation limits can be found here.

Otherwise I'm not aware of a limitation on sending emails, but I wouldn't consider myself an authority on that.

It's definitely the automation, truncating at 100 issues. Thanks for your help with this!
