In a simple query (like worklogDate = -1d && worklogAuthor = currentuser() ), where I'll see columns like Time Spent and "Σ" Time Spent, I believe the totals are driven by worklogDate - meaning the last day ("-1d") in which any sub-tasks were modified or work time was added. Today, for several sub-tasks, I added times for the past 5 days. When I run the above query, instead of seeing *just* yesterday's hours, I actually see the total of all of the past 5 days.
Is there another column I can look at that would show only the hours I actually worked on those sub-tasks yesterday/today?
Hi @Larry Weisberg,
When you use JQL search results, the data returned is a single field: Time Spent includes all time logged at the issue level, and there is no field per worklog event, so you get a total sum rather than individual actions.
Directly on an issue, you can review the "History" tab or the "Work Log" tab to see the individual values added to the field, each with a timestamp, and sum them manually, but natively there is no built-in time-tracking feature with more detailed reporting options.
To expand on this functionality you have a few options. First, there are a lot of third-party apps that add time-tracking features, viewable in the Marketplace here:
Alternatively, you could write a custom script to parse the data out of an API call. You could look at GET /rest/api/3/issue/{issueIdOrKey}/worklog, or pull a JQL result set via GET /rest/api/3/search and retrieve the issue history items (including time spent) by appending the following to the search request:
?expand=changelog
Looking at your JQL filter, you could use something like the following to get the desired data from the API:
https://<YOUR_BASE_URL>.atlassian.net/rest/api/3/search?jql=worklogDate%20%3D%20-1d%20%26%26%20worklogAuthor%20%3D%20currentuser()&expand=changelog
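For example, a rough sketch of that kind of script in Python might look something like the following. This assumes Jira Cloud, basic auth with an API token, and the third-party "requests" library; pagination and error handling are omitted for brevity, and the site URL, email, and token are placeholders you would swap in:

# Sketch only: sums the time the current user logged in the last day across
# the issues returned by the JQL filter from the question.
from datetime import datetime, timedelta, timezone
import requests

BASE_URL = "https://<YOUR_BASE_URL>.atlassian.net"
AUTH = ("you@example.com", "<YOUR_API_TOKEN>")  # placeholder credentials
JQL = "worklogDate >= -1d AND worklogAuthor = currentUser()"

# 1. Find the issues matching the JQL filter.
search = requests.get(
    f"{BASE_URL}/rest/api/3/search",
    params={"jql": JQL, "fields": "key"},
    auth=AUTH,
).json()

# 2. Look up the current user's accountId so only their worklogs are counted.
my_account_id = requests.get(f"{BASE_URL}/rest/api/3/myself", auth=AUTH).json()["accountId"]

cutoff = datetime.now(timezone.utc) - timedelta(days=1)
total_seconds = 0

for issue in search.get("issues", []):
    # 3. Pull the individual worklog entries for each issue and keep only the
    #    ones the current user started within the last day.
    worklogs = requests.get(
        f"{BASE_URL}/rest/api/3/issue/{issue['key']}/worklog",
        auth=AUTH,
    ).json()
    for w in worklogs.get("worklogs", []):
        started = datetime.strptime(w["started"], "%Y-%m-%dT%H:%M:%S.%f%z")
        if w["author"]["accountId"] == my_account_id and started >= cutoff:
            total_seconds += w["timeSpentSeconds"]

print(f"Time logged in the last day: {total_seconds / 3600:.2f} hours")

You could adjust the cutoff or group the totals per issue or per day depending on how you want to report the numbers.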
There is a really good example of how to do this on a per-user basis from the API, with example scripts linked via a GitHub repo, in the following community post. It looks like it lines up with what you are going for and would be a good starting point for pulling and collating the data set:
There are some more discussions of this approach, with additional examples, in the following threads if you wanted to go this route:
Regards,
Earl
Thanks @Earl McCutcheon - that looks like something interesting to play around with, writing a script to get at what we are looking for.
Thanks!
Larry