On Oct 31, 2024, we announced the deprecation of several widely used REST APIs in Jira, namely:
GET /rest/api/2|3|latest/search - Search for issues using JQL (GET)
POST /rest/api/2|3|latest/search - Search for issues using JQL (POST)
POST /rest/api/2|3|latest/search/id - Search issue IDs using JQL
POST /rest/api/2|3|latest/expression/eval - Evaluate Jira expression
You can find additional information about the deprecation here; click “More details” to see the complete migration guide if necessary. In recent months, we have identified common problems related to migrating to the new enhanced APIs. In this blog post, we aim to clear up common misconceptions about the new APIs and show ways to avoid these challenges.
Read on for more details.
A common challenge that teams face after migrating to the Enhanced JQL API is performance-related. The earlier APIs allowed for random access when searching for issues, which meant that teams could pull multiple issues through parallel requests all at once. However, the Enhanced JQL API changes things up a bit with its pagination mechanism, requiring a more sequential approach. To get results from the next page, you'll need to grab the pagination token from the current page.
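For illustration, here is a minimal sketch of that sequential flow, assuming the /rest/api/3/search/jql endpoint used later in this post and placeholder credentials; iterate_pages is an illustrative helper name, not part of the API:

import requests
from requests.auth import HTTPBasicAuth

session = requests.Session()
session.auth = HTTPBasicAuth("Email", "Token")
session.headers.update({"Accept": "application/json", "Content-Type": "application/json"})

def iterate_pages(jql):
    # Each page returns a nextPageToken; the next request cannot be built until
    # the previous response arrives, so pages are fetched one after another.
    next_page_token = None
    while True:
        body = {"jql": jql, "maxResults": 100}
        if next_page_token:
            body["nextPageToken"] = next_page_token
        page = session.post("https://JIRA_CLOUD/rest/api/3/search/jql", json=body).json()
        yield page.get("issues", [])
        next_page_token = page.get("nextPageToken")
        if not next_page_token:
            break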
The table below illustrates how the new search engine, which powers the Enhanced Search APIs, greatly outperforms the earlier architecture in head-to-head comparisons. That said, we understand that performance might take a hit if your team was previously relying on parallel calls.
Search service | P90 Elapsed Time (ms) | P99 Elapsed Time (ms) |
OLD | 209.363 | 1502.477 |
NEW | 144.281 | 357.715 |
To reach the best throughput when retrieving data via Enhanced Search, we recommend splitting the retrieval into two separate phases. This is a slightly more complicated solution; however, it yields the best performance. The process looks like this:
To begin, sequentially gather all relevant issue IDs. Here are a few essential points to consider for ensuring optimal performance:
- To boost performance and keep latency low, it's best to skip specifying any fields or expands. By doing this, Jira will return only issue ids, which allows bigger batch sizes.
- Utilize the maximum offered batch size and ask for 5,000 issues in each call.
Once you have gathered all the IDs that match your query, organize them into batches of 100 within your application.
Call the Bulk Fetch Issues API with multiple requests in parallel, one per batch, to get all the issue details you need.
For certain applications, utilizing the Enhanced Search or Expression Evaluate endpoints may be more appropriate than the bulk fetch API. You can achieve equivalent results by supplying JQL in the form issue in (1001, …, 1100) to these endpoints.
Example code that illustrates the pattern:
import concurrent.futures
import json

import requests
from requests.auth import HTTPBasicAuth

enhanced_search_url = "https://JIRA_CLOUD/rest/api/3/search/jql"
bulk_fetch_url = "https://JIRA_CLOUD/rest/api/3/issue/bulkfetch"

auth = HTTPBasicAuth("Email", "Token")
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json"
}


def enhanced_search(next_page_token=None):
    # Phase 1 call: ids only (no fields/expands), maximum batch size of 5000.
    payload = json.dumps({
        "jql": "created >= -2d",
        "maxResults": 5000,
        "nextPageToken": next_page_token
    })
    return requests.request("POST", enhanced_search_url, data=payload, headers=headers, auth=auth)


def bulk_fetch(field, issue_ids):
    # Phase 2 call: fetch the details of up to 100 issues in a single request.
    payload = json.dumps({
        "fields": [field],
        "issueIdsOrKeys": issue_ids,
    })
    return requests.request("POST", bulk_fetch_url, data=payload, headers=headers, auth=auth)


def get_all_matching_issues():
    # Walk the result set sequentially, carrying the nextPageToken between pages.
    next_page_token = None
    issue_ids = []
    while True:
        response = enhanced_search(next_page_token)
        data = json.loads(response.text)
        for issue in data["issues"]:
            issue_ids.append(issue["id"])
        if "nextPageToken" in data:
            next_page_token = data["nextPageToken"]
        else:
            break
    return issue_ids


def partition_list(lst, batch_size):
    for i in range(0, len(lst), batch_size):
        yield lst[i:i + batch_size]


def get_issue_details(field, batch):
    response = bulk_fetch(field, batch)
    return json.loads(response.text)


def fetch_issues(field):
    # Phase 1: gather ids sequentially; Phase 2: bulk-fetch details in parallel.
    issue_ids = get_all_matching_issues()
    batches = list(partition_list(issue_ids, 100))
    issues = []
    with concurrent.futures.ThreadPoolExecutor(10) as executor:
        futures = [executor.submit(get_issue_details, field, batch) for batch in batches]
        for future in concurrent.futures.as_completed(futures):
            issues.append(future.result())
    return issues


results = fetch_issues("summary")
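If the Enhanced Search endpoint suits your application better than bulk fetch for the detail phase, the detail lookup from the example above could be swapped for something along these lines. This is a sketch only: it reuses enhanced_search_url, headers, and auth from the example, get_issue_details_via_search is an illustrative name, and the ids returned by Jira are assumed to be strings.

def get_issue_details_via_search(field, batch):
    # Same outcome via Enhanced Search: turn the batch of ids into an
    # "issue in (...)" JQL and request only the fields that are needed.
    payload = json.dumps({
        "jql": "issue in (" + ", ".join(batch) + ")",
        "fields": [field],
        "maxResults": len(batch)
    })
    response = requests.request(
        "POST", enhanced_search_url, data=payload, headers=headers, auth=auth
    )
    return json.loads(response.text)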
Many teams display issues in their UI with the AtlasKit Pagination component, which requires:
- the pages a user may navigate to
- the total count of matching issues
- the issue details
The Enhanced Search API does not return all of this information in a single call and does not support random page access, which makes it challenging to power such UI components without making multiple additional requests.
We noticed some common challenges in the Jira user experience and made a few updates to bring it in line with modern cloud technologies. Here’s a quick rundown of the key changes in how Jira displays issues these days:
We’ve swapped out the old random page pagination for a smooth live scrolling experience.
The next batch of issues loads automatically as you scroll down.
The total number of matching issues is no longer shown up front; users can easily request the exact count through a dedicated link.
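If your app wants to mirror this load-as-you-scroll pattern on top of the Enhanced Search API, one possible shape is to keep the last nextPageToken between UI events. This is a sketch only, reusing enhanced_search_url, headers, and auth from the example above; IssueFeed, the field list, and the page size are illustrative assumptions, not part of the API.

class IssueFeed:
    # Holds the pagination cursor between "load more" events so each scroll
    # triggers exactly one sequential Enhanced Search call.
    def __init__(self, jql):
        self.jql = jql
        self.next_page_token = None
        self.exhausted = False

    def load_more(self):
        if self.exhausted:
            return []
        payload = json.dumps({
            "jql": self.jql,
            "maxResults": 50,
            "fields": ["summary", "status"],
            "nextPageToken": self.next_page_token
        })
        response = requests.request(
            "POST", enhanced_search_url, data=payload, headers=headers, auth=auth
        )
        data = json.loads(response.text)
        self.next_page_token = data.get("nextPageToken")
        self.exhausted = self.next_page_token is None
        return data.get("issues", [])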
In addition to the two common problems, there are several key considerations to keep in mind for mastering JQL performance:
Utilize the most restrictive JQL possible.
Utilize project-scoped JQLs! We call a JQL project-scoped when we can confidently say that all issues matching it come from a specific project or a defined set of projects. For example:
- project = TEST and issueType = Bug - is project-scoped
- project = TEST or issueType = Bug - is not, because of the OR condition
- updated > -30d - is not project-scoped, as it doesn't use the required clauses at all
We recognize the project and key clauses to determine whether a JQL is project-scoped.
Don’t specify fields that you don’t need.
Consider utilizing the bulk fetch API rather than relying on the plain key in (EXAMPLE-1, EXAMPLE-2) to retrieve issues.
Search performs better when issue ids are used. It is a common case that JQLs are built with issue keys, e.g. key in (EXAMPLE-1, EXAMPLE-2) and statusCategory != Done. If you have access to issue ids, prefer using them, e.g. key in (10001, 10002) and statusCategory != Done.
If you haven't started planning your migration yet, now is the time to act. The deprecation period for the old endpoints will conclude on April 30, 2025; after this date, these endpoints may cease to function. If you have any questions or suggestions, raise them on the CDAC forum; we're happy to help.
Grzegorz Lewandowski