Forge Async Compute: What Are We Actually Being Charged For?

Last week, Atlassian quietly rolled out Async Compute usage metrics inside the Forge developer console. If you’re building on Forge, you probably noticed a new metric appearing in your usage dashboard without much explanation.

Now you might be asking:

What exactly is Forge function compute, and what changed?

Previously, most developers focused on invocation costs and synchronous compute usage. I’ve written about invocation mechanics before, but this new change introduces something more subtle: Forge now tracks compute usage coming from asynchronous modules separately.

These modules include things like:

  • Scheduled triggers
  • Async events (queue consumers)
  • Event triggers

All of these appear in the Forge platform as background execution models.

The challenge right now is not the existence of the metric; it’s the lack of clarity around how it’s calculated.

The Core Question: What Counts as Async Compute?

Technically speaking, Forge functions run inside the same execution environment regardless of whether they are triggered synchronously or asynchronously.

So the question becomes:

What exactly differentiates async compute from sync compute?

Is async compute counted when:

  1. A Forge module like scheduledTrigger, async events, or queues invokes a function?

OR

  2. Any function execution that happens outside a user request lifecycle?

Because depending on the interpretation, the billing implications are very different.

Example: Scheduled Trigger Usage

Let’s assume a very simple scenario. 

A scheduled trigger runs once every hour.

const triggersPerHour = 1;   // the trigger fires once per hour
const daysInMonth = 30;
const hoursInDay = 24;

const totalExecutions = triggersPerHour * hoursInDay * daysInMonth;
// total executions = 720 per month

If each execution runs for the full hour (hypothetical worst case):

executionSeconds = 720 * 3600
= 2,592,000 seconds

Forge compute is measured using GB-seconds:

GB-seconds = Memory (GB) × Execution time (seconds)

Let's assume an example app whose Forge functions run with the typical 256 MB of memory.

memory = 0.25 GB
compute = 0.25 × 2,592,000
= 648,000 GB-seconds
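The arithmetic above can be wrapped in a small helper for estimating compute. This is only a sketch: the 720 executions, 256 MB memory, and full-hour runtime are the hypothetical figures from this scenario, not real measurements.

```javascript
// Estimate Forge compute usage:
// GB-seconds = memory (GB) × total execution time (seconds)
function gbSeconds(memoryGb, executions, secondsPerExecution) {
  return memoryGb * executions * secondsPerExecution;
}

// Hourly scheduled trigger, 30 days, worst-case full-hour runtime, 256 MB:
const executions = 1 * 24 * 30; // 720 executions per month
const compute = gbSeconds(0.25, executions, 3600);
console.log(compute); // 648000
```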

From a compute perspective, this looks identical to synchronous compute calculations.

So the question remains:

Why is async compute showing up separately and often appearing higher?

This is the new part: the async compute metric was just released to surface the cost of these invocations. But that brings us back to the question: why is it higher?

Possible Explanation #1 - Separate Metering for Background Work

One interpretation is that Atlassian is simply metering background execution independently. Meaning:

User-driven requests → Sync compute
Platform-driven execution → Async compute

Under this model:

Execution Type    Example
Sync compute      UI actions, resolvers, web triggers
Async compute     Scheduled triggers, queue consumers
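Under this interpretation, classification reduces to a simple lookup by module type. The mapping below is my reading of the model, not confirmed Atlassian billing rules:

```javascript
// Hypothetical sync/async classification under Explanation #1.
const computeClass = {
  resolver: 'sync',          // user-driven: UI actions, resolvers
  webTrigger: 'sync',
  scheduledTrigger: 'async', // platform-driven background work
  queueConsumer: 'async',
  asyncEvent: 'async',
};

console.log(computeClass.scheduledTrigger); // async
```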

This would explain why async metrics grow faster in apps that rely heavily on automation pipelines.

Possible Explanation #2 - Multi-Stage Async Pipelines

A common Forge pattern looks like this:

Scheduler → Queue → Consumer

If compute is counted per execution stage, the pipeline becomes:

1 invocation → multiple compute executions

That means one scheduled task could generate:

1 scheduled trigger execution
+ 1 queue enqueue
+ 1 queue consumer execution

Which would explain why async usage can appear to double or triple compared to synchronous usage.
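If per-stage metering is the right interpretation, a plain-JavaScript simulation shows how one logical task pays compute more than once. No Forge APIs are used here, and the stage runtimes and memory figure are assumptions:

```javascript
// Simulation of a Scheduler → Queue → Consumer pipeline. Per-stage
// metering is the hypothesis above, not a documented Atlassian billing rule.
const MEMORY_GB = 0.25;      // typical 256 MB Forge function
const SECONDS_PER_STAGE = 5; // assumed runtime per stage

let meteredGbSeconds = 0;
const queue = [];

function meter() {
  meteredGbSeconds += MEMORY_GB * SECONDS_PER_STAGE;
}

function scheduledTrigger() { // stage 1: the scheduler fires and enqueues work
  meter();
  queue.push({ task: 'sync' });
}

function consumer() {         // stage 2: the consumer drains the queue
  while (queue.length > 0) {
    queue.shift();
    meter();
  }
}

scheduledTrigger();
consumer();
console.log(meteredGbSeconds); // 2.5 GB-seconds for one logical task
```

Metered as a single execution, the same task would cost 1.25 GB-seconds; add more stages and the multiplier grows accordingly.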

However, Atlassian documentation does not clearly confirm this, and it would be good if the docs went further. I don't know who writes them, but please, pretty please, add more context. Don't make vendors guess.

Another Important Context: The December 2025 Compute Fix

On 17 Dec 2025, Atlassian updated Forge compute metering to track resource consumption more accurately.

According to the developer changelog:

As a result, affected apps may now report higher compute usage than before. This reflects more accurate tracking of the resources consumed; there is no change to the actual behavior or performance of your apps.

In other words:

Usage didn't suddenly increase; measurement became more precise.

That alone could explain why some developers are seeing unexpected spikes. However, compute charges are waived until 1 July 2026, after which they become chargeable.

The Real Concern: Predictability

The issue isn’t that Forge charges for compute. That’s expected in any serverless platform. The issue is predictability.

If developers cannot easily estimate usage for architectures like:

Scheduler → Queue → Consumer

then it becomes very difficult to design scalable Forge apps with cost awareness.

Forge works best when developers lean into asynchronous architecture: queues, scheduled processing, background automation.

Charging heavily for those patterns risks discouraging the very architecture the platform encourages.

A More Balanced Alternative

One possible middle ground would be: discounted compute for async workloads.

Example:

Async compute billed at 50% rate

Using the earlier example:

648,000 GB-seconds
÷ 2
= 324,000 GB-seconds

This would:

  • Encourage scalable architecture
  • Align pricing with platform design
  • Avoid penalizing background workloads
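Modeled directly, the proposal looks like this. The 50% rate is purely a suggestion for the sake of the example, not an existing Forge pricing rule:

```javascript
// Hypothetical discounted async rate (a proposal, not actual Forge pricing).
const ASYNC_DISCOUNT = 0.5;

function discountedAsyncCompute(gbSeconds) {
  return gbSeconds * ASYNC_DISCOUNT;
}

// Using the earlier scenario's total:
console.log(discountedAsyncCompute(648000)); // 324000
```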

Because in Forge today, async execution isn't optional; it is the backbone that drives scalability. And if pricing isn't predictable, well, I'll reserve my comments on that for a later time.

The future of Forge

I strongly believe Forge is the future of building inside the Atlassian ecosystem.

It solves many problems that existed in the old Atlassian Connect model:

  • security boundaries
  • hosting overhead
  • data residency
  • compliance

But platforms succeed when pricing is clear, predictable, and developer-friendly.

Right now, async compute is missing that clarity.

A Simple Ask for the Forge Team

It would help the ecosystem if Atlassian clarified:

  1. Exactly what events count as async compute
  2. Whether queue pipelines multiply compute usage
  3. How developers should estimate async workloads
  4. Whether background compute should be priced differently

Forge has massive potential. But the success of the platform will depend on aligning platform architecture with platform economics.

 
