Hi everyone,
I manage a utility bill calculator website (lescobil.pk/lesco-bill-calculator) whose backend fetches internal issue-tracking data from Atlassian Cloud for auditing and customer support. Recently we started seeing an unusual problem:
Issue:
A very small percentage (roughly 2–4%) of API requests from our backend to Atlassian return a 403 (Insufficient permissions) error, and only in production. Retrying the same request (same user, same token, same endpoint) succeeds after 1–2 attempts. The behavior is not reproducible in staging.
Setup:
Node.js backend (server-to-server integration; the rough shape of each call is sketched after this list)
OAuth 2.0 (3LO)
Correct scopes applied (read:jira-work, read:issue.jira)
Rate limits are not being hit
Tokens are not expired
Requests are queued and retried safely
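For context, this is roughly the shape of each call. It targets the documented 3LO gateway (api.atlassian.com/ex/jira/{cloudId}); names like CLOUD_ID, fetchIssue, and how we obtain accessToken are placeholders, not our actual code:

```js
// Minimal sketch of a single request (Node 18+, built-in fetch).
// CLOUD_ID and the issue key are placeholders for our real values.
const CLOUD_ID = process.env.ATLASSIAN_CLOUD_ID;

async function fetchIssue(issueKey, accessToken) {
  const url = `https://api.atlassian.com/ex/jira/${CLOUD_ID}/rest/api/3/issue/${issueKey}`;
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: 'application/json',
    },
  });
  if (!res.ok) {
    // This is where the intermittent 403s surface.
    throw new Error(`Jira API returned ${res.status} for ${issueKey}`);
  }
  return res.json();
}
```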
Tried so far:
Regenerated client secrets
Rotated refresh tokens
Validated scopes
Logged request and response headers (diagnostic capture sketched after this list)
Checked rate limit headers
Verified IP allowlisting rules
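To be concrete about what we capture when a call fails, this is roughly the diagnostic logging we added (a sketch; the function name and log shape are illustrative). We dump all response headers rather than picking specific ones, so any rate-limit or trace identifiers Atlassian includes end up in the log:

```js
// Record status, headers, and a bounded slice of the body for each failed call.
async function logFailure(res, issueKey) {
  const headers = Object.fromEntries(res.headers.entries());
  const body = await res.text().catch(() => '<unreadable body>');
  console.error('Atlassian 403', {
    issueKey,
    status: res.status,
    headers,
    body: body.slice(0, 500), // keep log entries bounded
  });
}
```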
Patterns noticed:
This happens only during brief traffic bursts
403 responses come from a few specific Atlassian edge IPs
Debug logs show identical payloads on success and failure
Questions:
Does Atlassian Cloud perform edge-based permission propagation that can cause short-lived inconsistency?
Is there a known issue with partial permission caching on distributed nodes?
Should we implement exponential backoff beyond 2 retries? (a sketch of what we are considering is below)
Is static IP allowlisting via a marketplace app recommended in this scenario?
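For the backoff question, this is the approach we are weighing, not something we have in production. The maximum attempt count, base delay, cap, and the choice to treat 403 as retryable are all our assumptions, not guidance from Atlassian:

```js
// Sketch: retry 403/429/5xx with exponential delay plus jitter.
async function fetchWithBackoff(doRequest, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await doRequest();
    if (res.ok) return res;

    const retryable = res.status === 403 || res.status === 429 || res.status >= 500;
    if (!retryable || attempt === maxAttempts) return res;

    // 200ms, 400ms, 800ms, ... capped at 5s, plus up to 250ms of jitter.
    const delay = Math.min(200 * 2 ** (attempt - 1), 5000) + Math.random() * 250;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```

We would call it as fetchWithBackoff(() => fetchIssue(key, token)); the open question is whether retrying 403s like this is safe or just papers over a permission-propagation issue.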
We are only fetching billing-related case IDs and notes for customer support purposes (no write operations).
Would really appreciate any insight. This problem is extremely rare but affects trust in our internal audit workflow.
Thanks!