Building Around Forge Limitations: Rate Limits (Part 2: Scaling Patterns)

Something about me is that I tend to be very straightforward. If your outfit looks terrible, I’ll probably tell you it looks terrible. Not because I’m trying to be rude, but because I believe honest feedback helps people improve. I also expect the same treatment in return. If I’m missing something or doing something wrong, I would rather hear the truth than have someone sugarcoat it to spare my feelings.

The same mindset applies when dealing with technical problems. Once a problem has already happened, spending too much time discussing how we got there doesn’t change the outcome. At that point, we’ve moved past prevention and into resolution. My preference is simple: solve the problem first, then talk about mitigation.

That same logic applies when building apps on Forge. If you spend all your time complaining about limitations that are already there, you'll never get around to scaling your apps.

Forge Is a Constrained Environment (By Design)

Forge often feels like a locked container. That's not a flaw; it's intentional.

The platform enforces strict rules around:

  • Function execution time
  • Invocation limits
  • API request limits
  • Storage limits
  • Queue payload sizes

If you attempt to push beyond those boundaries, the platform simply restricts you. The operations you expected to run either slow down, fail, or get throttled.

So the real question becomes:

How do you build around Forge’s limitations?

One of the most discussed topics in the developer community is rate limiting.

And today, that’s exactly what we’re going to talk about.

Rate Limits: The Gatekeeper of System Scaling

Rate limiting is one of the most important mechanisms in any distributed system: it prevents abuse, protects infrastructure, and ensures fair usage across tenants.

Forge is no different. If your app triggers too many:

  • function invocations
  • Atlassian REST API requests
  • external requests

you will eventually encounter platform limits. This isn’t unique to Forge. It exists in AWS, GCP, and most large cloud systems.

The difference is that Forge is multi-tenant and heavily controlled, so those limits can surface quickly if your architecture isn’t designed properly.

Can You Avoid Forge Rate Limits?

Short answer: No.

Rate limits are part of the platform’s protection model.

Forge runs thousands of apps across many customers on shared infrastructure. Without rate limiting, a single misbehaving app could degrade the platform for everyone.

So avoiding rate limits entirely is not realistic.

The real goal is designing your app so that rate limits rarely become a problem.

The Real Risk Developers Should Think About

The bigger concern isn’t simply hitting rate limits during normal use.

The concern is uncontrolled request amplification.

For example:

  • A poorly designed UI repeatedly triggering backend functions
  • Bulk operations that fire too many API calls
  • Users unintentionally triggering expensive operations
  • Automated scripts interacting with your app

In extreme cases, this could resemble a DDoS pattern against your own app. So the responsibility ultimately falls on the app architecture itself.

Practical Patterns to Reduce Rate Limit Pressure

There is no single silver bullet, but several patterns help significantly.

Debouncing

Prevent repeated actions from triggering backend logic unnecessarily.

For example, if a user clicks a button multiple times within a short period, your app should ignore duplicate requests.

This avoids unnecessary function invocations.
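One way to sketch this duplicate-click guard in plain JavaScript (a minimal illustration, not a Forge API; `createRequestGuard` and the 500ms window are illustrative choices):

```javascript
// Duplicate-request guard: ignores repeated calls with the same key
// inside a cooldown window, so rapid double-clicks trigger only one
// backend invocation. `now` is injectable to make the guard testable.
function createRequestGuard(cooldownMs, now = Date.now) {
  const lastAccepted = new Map(); // key -> timestamp of last accepted call

  return function shouldProcess(key) {
    const t = now();
    const prev = lastAccepted.get(key);
    if (prev !== undefined && t - prev < cooldownMs) {
      return false; // duplicate within the window: drop it
    }
    lastAccepted.set(key, t);
    return true; // first call (or window expired): let it through
  };
}
```

In a Custom UI frontend, you would call `shouldProcess('save')` before each `invoke(...)` and simply skip the call when it returns `false`.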

Caching

Avoid requesting the same data repeatedly.

Cache results when possible, especially when interacting with Jira or Confluence APIs.

Examples include:

  • issue metadata
  • configuration values
  • frequently requested project data

Reducing repeated API calls dramatically lowers the chance of hitting limits.
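A minimal TTL-cache sketch of this idea (names like `createTtlCache` are illustrative; a real Forge app might back this with Forge Storage rather than an in-memory Map, since invocations don't share memory):

```javascript
// Tiny TTL cache: returns the cached value on a hit, and only calls
// the (expensive) fetch function on a miss or after expiry.
function createTtlCache(ttlMs, now = Date.now) {
  const entries = new Map(); // key -> { value, expiresAt }

  return {
    async getOrFetch(key, fetchFn) {
      const hit = entries.get(key);
      if (hit && hit.expiresAt > now()) {
        return hit.value; // cache hit: no API call made
      }
      const value = await fetchFn(); // cache miss: one real request
      entries.set(key, { value, expiresAt: now() + ttlMs });
      return value;
    },
  };
}
```

Wrapping a Jira or Confluence read in `getOrFetch` means ten UI renders in a minute cost one REST call instead of ten.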

Internal Rate Limiting

You can implement your own rate control within the app itself.

For example:

  • queue operations instead of executing immediately
  • throttle requests from specific workflows
  • limit bulk operations per user action

This prevents your app from overwhelming the platform.
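One common way to implement internal throttling is a token bucket (a sketch under illustrative parameters; `createTokenBucket` is not a Forge API, and callers that get `false` would queue or retry rather than fire the request):

```javascript
// Token bucket: allows bursts up to `capacity` operations, then refills
// at `refillPerMs` tokens per millisecond. Callers check tryAcquire()
// before making an API call; on false, they back off or enqueue the work.
function createTokenBucket(capacity, refillPerMs, now = Date.now) {
  let tokens = capacity;
  let last = now();

  return function tryAcquire() {
    const t = now();
    tokens = Math.min(capacity, tokens + (t - last) * refillPerMs);
    last = t;
    if (tokens >= 1) {
      tokens -= 1;
      return true; // allowed to proceed
    }
    return false; // caller should queue or retry later
  };
}
```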

The Truth: No Two Apps Are the Same

The patterns above help reduce pressure on the system, but they won’t solve every situation. Different apps have very different workloads.

An app that analyzes Jira data across thousands of issues will behave very differently from an app that adds a UI panel. That’s why architecture matters.

In many cases, the long-term improvement that vendors hope for is more granular rate-limit isolation per installation or tenant, which would further reduce cross-impact between heavy and light usage scenarios.

The real challenge is learning how to build within those boundaries without constantly fighting them.

When you do that well, Forge becomes surprisingly powerful. When you ignore those constraints, scaling problems appear quickly.

In my next article, I’ll continue with Part 3 of scaling patterns, where we’ll look at more architectural strategies for building production-grade Forge apps.

2 comments

Founder - NewSysRS
Atlassian Partner
April 13, 2026

Thanks for this insightful article. The strategies mentioned here regarding rate limits and scaling are crucial for anyone building complex Forge apps.

I recently ran into these exact hurdles while developing a plugin that needs to import Excel files as structured specifications. Initially, I wanted to process the imports as a single bulk operation, but the combination of Forge’s execution time limits and the API rate limits made this extremely risky for larger datasets.

To ensure stability and prevent silent failures during large imports, I ended up implementing a "row-by-row" processing logic. While this overhead feels counterintuitive to efficiency, it was the only way to reliably track the state of each requirement and ensure that the import didn't crash mid-way.

It’s a classic trade-off: moving slower to ensure the system doesn't break under the weight of its own limitations. Looking forward to seeing the next article for scaling patterns.

Prince Nyeche - ELFAPP
Atlassian Partner
April 13, 2026

@Founder - NewSysRS For your use case, I recommend enforcing strict limits on imported files. For example, define a fixed maximum file size such as 2MB or 5MB and restrict uploads to specific attachment extensions. Avoid leaving file uploads open-ended for users.

This validation should start at the frontend, where you enforce file size and allowed file types as the first gate. Then enforce the same rules again on the backend to ensure the limits cannot be bypassed, rejecting any request that exceeds them.

Since you’ve already encountered this limitation, another practical approach is to batch the data before processing and handle the workload through Forge queues. Keep in mind that each payload processed through the queue should remain under 200KB to stay within platform constraints.
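The batching idea above can be sketched roughly like this (a minimal illustration; `batchRows` is a hypothetical helper, and the byte budget mirrors the ~200KB figure mentioned in the comment):

```javascript
// Split imported rows into queue payloads that each stay under a byte
// budget when serialized as JSON, so every queue push fits the limit.
function batchRows(rows, maxBytes = 200 * 1024) {
  const batches = [];
  let current = [];
  for (const row of rows) {
    const candidate = [...current, row];
    if (
      Buffer.byteLength(JSON.stringify(candidate)) > maxBytes &&
      current.length > 0
    ) {
      batches.push(current); // current batch is full: start a new one
      current = [row];
    } else {
      current = candidate; // row still fits in the current batch
    }
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each resulting batch would then be pushed to a Forge queue as its own event, keeping individual payloads within the platform constraint.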
