Last week, Atlassian quietly rolled out Async Compute usage metrics inside the Forge developer console. If you’re building on Forge, you probably noticed a new metric appearing in your usage dashboard without much explanation.
Now you might be asking:
What exactly is Forge function compute, and what changed?
Previously, most developers focused on invocation costs and synchronous compute usage. I’ve written about invocation mechanics before, but this new change introduces something more subtle: Forge now tracks compute usage coming from asynchronous modules separately.
These modules include things like:

- scheduled triggers (`scheduledTrigger`)
- async events and queue consumers (`consumer`)
- other platform-driven background executions

All of these appear in the Forge platform as background execution models.
The challenge right now is not the existence of the metric; it’s the lack of clarity around how it’s calculated.
Technically speaking, Forge functions run inside the same execution environment regardless of whether they are triggered synchronously or asynchronously.
So the question becomes:
What exactly differentiates async compute from sync compute?
Is async compute counted:

- when a `scheduledTrigger`, async event, or queue invokes a function? OR
- per stage of a background pipeline, with every hop metered separately?

Depending on the interpretation, the billing implications are very different.
Let’s assume a very simple scenario.
A scheduled trigger runs once every hour.
```javascript
const scheduledTriggersPerHour = 1;
const hoursInDay = 24;
const daysInMonth = 30;

const totalExecutions = scheduledTriggersPerHour * hoursInDay * daysInMonth;
// total executions = 720 per month
```
If each execution runs for the full hour (hypothetical worst case):
```javascript
const executionSeconds = 720 * 3600;
// = 2,592,000 seconds
```
Forge compute is measured using GB-seconds:
GB-seconds = Memory (GB) × Execution time (seconds)
Suppose the app's Forge functions run with 256 MB of memory:

```javascript
const memoryGb = 0.25; // 256 MB
const computeGbSeconds = memoryGb * 2592000;
// = 648,000 GB-seconds
```
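Putting the two steps together, the worst-case estimate can be expressed as a small runnable sketch. The helper name and its shape are mine, not a Forge API:

```javascript
// Hypothetical helper: estimate Forge compute in GB-seconds.
// Formula: GB-seconds = memory (GB) × total execution time (seconds).
function estimateGbSeconds({ memoryGb, executionsPerMonth, secondsPerExecution }) {
  return memoryGb * executionsPerMonth * secondsPerExecution;
}

// Hourly scheduled trigger over a 30-day month, worst case:
// every run uses the full hour (3600 s) at 256 MB (0.25 GB).
const worstCase = estimateGbSeconds({
  memoryGb: 0.25,
  executionsPerMonth: 720,
  secondsPerExecution: 3600,
});

console.log(worstCase); // 648000 GB-seconds
```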
From a compute perspective, this looks identical to synchronous compute calculations.
So the question remains:
Why is async compute showing up separately and often appearing higher?
This is the part that's new: the Async Compute metric was only just released to surface this cost on its own. Which brings us back to the question: why is it higher?
One interpretation is that Atlassian is simply metering background execution independently. Meaning:
User-driven requests → Sync compute
Platform-driven execution → Async compute
Under this model:
| Execution Type | Example |
|---|---|
| Sync compute | UI actions, resolvers, web triggers |
| Async compute | scheduled triggers, queue consumers |
This would explain why async metrics grow faster in apps that rely heavily on automation pipelines.
A common Forge pattern looks like this:
Scheduler → Queue → Consumer
If compute is counted per execution stage, the pipeline becomes:
1 invocation → multiple compute executions
That means one scheduled task could generate:
1 scheduled trigger execution
+ 1 queue enqueue
+ 1 queue consumer execution
Which would explain why async usage can appear to double or triple compared to synchronous usage.
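Under the per-stage interpretation, the multiplier is easy to model. Note this is speculation about the metering model, not documented Atlassian behavior:

```javascript
// Speculative comparison of two metering interpretations for a
// Scheduler → Queue → Consumer pipeline (neither confirmed by Atlassian).
const stages = ['scheduledTrigger', 'queueEnqueue', 'queueConsumer'];
const tasksPerMonth = 720; // one scheduled task per hour, 30 days

// Interpretation A: one metered execution per scheduled task.
const executionsA = tasksPerMonth;

// Interpretation B: every pipeline stage metered as its own execution.
const executionsB = tasksPerMonth * stages.length;

console.log(executionsA, executionsB); // 720 vs 2160 metered executions
```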
However, Atlassian's documentation does not clearly confirm this. It would help if the docs went further. I don't know who writes them, but please, pretty please, add more context. Don't make vendors guess.
On 17 Dec 2025, Atlassian updated Forge compute metering. According to the developer changelog:
“As a result, affected apps may now report higher compute usage than before. This reflects more accurate tracking of the resources consumed; there is no change to the actual behavior or performance of your apps.”
In other words:
usage didn't suddenly increase; measurement became more precise.
That alone could explain why some developers are seeing unexpected spikes. Note that compute charges are waived until 1 July 2026, after which they become chargeable.
The issue isn’t that Forge charges for compute. That’s expected in any serverless platform. The issue is predictability.
If developers cannot easily estimate usage for architectures like:
Scheduler → Queue → Consumer
then it becomes very difficult to design scalable Forge apps with cost awareness.
Forge works best when developers lean into asynchronous architecture: queues, scheduled processing, background automation.
Charging heavily for those patterns risks discouraging the very architecture the platform encourages.
One possible middle ground would be discounted compute for async workloads.
Example:
Async compute billed at 50% rate
Using the earlier example:
648,000 GB-seconds
÷ 2
= 324,000 GB-seconds
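In code, the hypothetical discount is a one-line adjustment. The 50% rate is my suggestion, not an Atlassian policy:

```javascript
// Hypothetical 50% rate for async workloads; illustrative only.
const ASYNC_DISCOUNT = 0.5;
const asyncGbSeconds = 648000; // worst-case figure from earlier
const billedGbSeconds = asyncGbSeconds * ASYNC_DISCOUNT;

console.log(billedGbSeconds); // 324000 GB-seconds
```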
This would keep async-heavy architectures affordable while still metering real usage.

Because in Forge today, async execution isn't optional. It is the backbone that drives scalability on the platform, and if pricing isn't predictable... well, I'll reserve my comments on that for a later time.
I strongly believe Forge is the future of building inside the Atlassian ecosystem.
It solves many problems that existed in the old Atlassian Connect model, such as having to host, secure, and scale your own infrastructure and manage authentication yourself.
But platforms succeed when pricing is clear, predictable, and developer-friendly.
Right now, async compute is missing that clarity.
It would help the ecosystem if Atlassian clarified:

- exactly how async compute is calculated
- whether each stage of a pipeline (scheduler, queue, consumer) is metered separately
- how developers can estimate async usage before charges begin
Forge has massive potential. But the success of the platform will depend on aligning platform architecture with platform economics.
Prince Nyeche - ELFAPP