The documentation promises to send the nextPageToken at the end of the output if there are more than maxResults entries in the output.
The maxResults cap is ridiculously low (100) per the docs, and it is subject to change.
Two days ago the limit was in place and the nextPageToken appeared at the bottom of the output page.
Today I got all items matching the query, and the endpoint seems to ignore the maxResults parameter completely.
Naturally, the 'nextPageToken' is not there.
Not only is it a bad idea to include the parameter only when it has a value rather than always (with null as the value), but now the whole thing is broken.
How do you manage to use this API that is so inconsistent?
At least as of September 2025, I'm able to work with 43 fields. I don't touch the *all option because we have 3000+ custom fields in our organization. {{baseURL}}/rest/api/3/field will return field names and their friendly names, and you have to use the field names (not the friendly names) in the {{baseURL}}/rest/api/3/search/jql API calls. Field names are unique, but friendly names can have duplicates.
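Here's a minimal sketch of that field lookup, assuming an Atlassian Cloud site with basic auth; the base URL, email, and token are placeholders for your own values:

```python
import requests

JIRA_BASE_URL = "https://your-site.atlassian.net"  # hypothetical site
AUTH = ("you@example.com", "API_TOKEN")            # hypothetical credentials

def build_field_map():
    """Map each friendly name to the unique field id(s) behind it."""
    resp = requests.get(f"{JIRA_BASE_URL}/rest/api/3/field", auth=AUTH)
    resp.raise_for_status()
    field_map = {}
    for field in resp.json():
        # Friendly names ("name") can repeat, so collect ids in a list;
        # the "id" is what you pass in &fields= on /rest/api/3/search/jql.
        field_map.setdefault(field["name"], []).append(field["id"])
    return field_map
```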
As far as the speed of any ETL goes, finding a way to split up your data via JQL helps. For example, instead of one giant category= JQL, I've switched to giving our ETL a list of project keys, using jql=project=keyname1 AND updated>=-Xd, with a Python worker for each project (a fan-out sketch follows below).
(nothing is as good as startAt)
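A rough sketch of that per-project fan-out; the project keys are placeholders, and extract_project is a hypothetical helper whose paging body is stubbed out here:

```python
from concurrent.futures import ThreadPoolExecutor

PROJECT_KEYS = ["PROJ1", "PROJ2", "PROJ3"]  # placeholder project keys
WINDOW_DAYS = 7                             # the -Xd window in the JQL

def extract_project(key: str) -> int:
    """Run one project's JQL end to end (paging body stubbed out)."""
    jql = f"project = {key} AND updated >= -{WINDOW_DAYS}d"
    # ... page through /rest/api/3/search/jql with this JQL ...
    return 0  # e.g. number of issues extracted

# One worker per project, mirroring the per-project split described above.
with ThreadPoolExecutor(max_workers=len(PROJECT_KEYS)) as pool:
    results = list(pool.map(extract_project, PROJECT_KEYS))
```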
maxResults=5000 is basically ignored: it only returns 100 per call (although if you don't pass at least 100 there, it defaults to 50).
I haven't had an issue with the nextPageToken missing, but it isn't supposed to be present when "isLast": true (because it's the last page). The worker for the Extract part of our ETL basically stops once isLast = true; a sketch of that loop follows.
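A minimal sketch of that extract loop, with the same placeholder base URL and credentials as above; the field list is illustrative:

```python
import requests

JIRA_BASE_URL = "https://your-site.atlassian.net"  # hypothetical site
AUTH = ("you@example.com", "API_TOKEN")            # hypothetical credentials

def extract_all(jql: str) -> list[dict]:
    """Page through /rest/api/3/search/jql until isLast (or no token)."""
    issues, token = [], None
    while True:
        params = {"jql": jql, "maxResults": 100,  # anything above 100 is capped anyway
                  "fields": "issuetype,status,summary,assignee"}
        if token:
            params["nextPageToken"] = token
        resp = requests.get(f"{JIRA_BASE_URL}/rest/api/3/search/jql",
                            params=params, auth=AUTH)
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page.get("issues", []))
        token = page.get("nextPageToken")
        # Stop on isLast, and also bail defensively if the token is absent,
        # since (per this thread) it isn't always returned.
        if page.get("isLast") or not token:
            return issues
```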
For the updated>=-7d part, I basically delete the rows whose keys are in the result set, then insert the rows returned from the JQL. This Update/Delete/Insert method is fairly speedy (<30 seconds using -7d), and we run it a few times per day, so our reporting is accurate when our users need the data.
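For illustration only, here is roughly what that refresh could look like with sqlite3; the table and column names are made up:

```python
import sqlite3

def refresh_issues(conn: sqlite3.Connection, issues: list[dict]) -> None:
    """Delete the returned keys' rows, then insert the fresh versions."""
    with conn:  # a single transaction keeps reporting consistent mid-refresh
        conn.executemany("DELETE FROM jira_issues WHERE issue_key = ?",
                         [(i["key"],) for i in issues])
        conn.executemany(
            "INSERT INTO jira_issues (issue_key, status, summary) "
            "VALUES (?, ?, ?)",
            [(i["key"],
              i["fields"]["status"]["name"],
              i["fields"]["summary"]) for i in issues])
```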
However, we still need a RESEED option where we get everything periodically, for when we add a new project to our team or just to make sure nothing is missed in the long term. The problem I'm running into is that after 50 pages everything gets really slow and times out. I've tried variable back-off and retry options (roughly as sketched below), and it eventually finishes, but not in a timely manner.
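For reference, a variable back-off wrapper along those lines might look like this; the retryable status codes and delays are assumptions, not documented Atlassian behavior:

```python
import random
import time
import requests

RETRYABLE = (429, 500, 502, 503, 504)  # assumed retryable status codes

def get_with_backoff(url: str, *, params=None, auth=None, attempts: int = 6):
    """GET with exponential back-off plus jitter; honors Retry-After."""
    for attempt in range(attempts):
        resp = requests.get(url, params=params, auth=auth)
        if resp.status_code not in RETRYABLE:
            return resp
        # Prefer the server's Retry-After header; otherwise back off
        # 1s, 2s, 4s, ... with up to 1s of jitter on top.
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay + random.random())
    resp.raise_for_status()  # give up: surface the last failure
    return resp
```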
The Extract time is basically throttled to 100 results per API call; our largest project sadly has over 100 pages.
The RESEED option basically takes ~2 hours for the Extract part. I'd rather not break up the RESEED mode into windows of time.
Wondering if anyone else has experienced this slowness after page 50, and whether there are faster workarounds that don't involve dicing up project JQLs.
An update here:
The number of fields returned from this endpoint also appears to be limited:
&fields="issuekey,issuetype,status,summary,assignee"
produces key, type, status and summary, but not assignee.
But if I move the fields around and put the "summary" after "assignee",
like this: &fields="issuekey,issuetype,status,assignee,summary",
we now have the assignee, but not summary.
The fields=*all is ignored as well.
This thing seems totally broken. Still no response from support.
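A hedged aside for anyone debugging the same thing: if the quotes around the fields value are actually sent on the wire, they would corrupt the first and last names in the list, which would match the symptom of exactly one end dropping out depending on order. A quick sketch to compare requested vs. returned fields, letting the HTTP library do the encoding (base URL, auth, and the bounding JQL are placeholders):

```python
import requests

JIRA_BASE_URL = "https://your-site.atlassian.net"  # hypothetical site
AUTH = ("you@example.com", "API_TOKEN")            # hypothetical credentials
WANTED = ["issuetype", "status", "summary", "assignee"]

resp = requests.get(f"{JIRA_BASE_URL}/rest/api/3/search/jql",
                    params={"jql": "updated >= -7d order by created desc",
                            "maxResults": 1,
                            "fields": ",".join(WANTED)},  # no quotes added
                    auth=AUTH)
resp.raise_for_status()
returned = set(resp.json()["issues"][0]["fields"])
print("missing:", [f for f in WANTED if f not in returned])
```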
Thank you @Renata_Getint for the insights!
I use this particular endpoint to pull data into PowerBI.
It's been a nightmare all along because of the very small page size; then came the change, and now the nextPageToken is not even present.
There is no 'total' in this new version of the endpoint, which is yet another problem.
Everything has to be redone, because you now can't calculate the number of pages to loop over. Instead we are supposed to rely on the nextPageToken, but it is not guaranteed to be there either.
I'd be happy to get all the items at once, in a single page, but I can't rely on that being the case in the integrations, because again, nothing is stable with Atlassian: not the page size, not the output format, not the rate limits, not the data provided by the endpoint.
I do not think our use case would count as a large-scale integration, but even this one small thing seems impossible to use.
Btw, I did raise a support ticket and never heard back from them.