Intermittent 409 Conflict on self-hosted macOS runner

Shane_Yoo
I'm New Here
January 26, 2026

Hi,

I'm getting intermittent 409 Conflict errors ("Simultaneous state updates") on my self-hosted runners.

My Setup:

  • Single Mac machine running multiple runner processes.

  • Custom run.sh script: I launch the runner by passing all configurations (UUID, OAuth, WorkingDir) directly via command-line arguments.

Verification:

  • Unique Configs: I manually verified that every script passes a unique UUID, Client ID, and working directory.

  • Process Check: ps -ef confirms no duplicate processes are running for the same UUID.

Since I am strictly using unique CLI arguments for each process, I don't understand why the server detects a conflict. Has anyone faced this issue?
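For reference, this is roughly how I scripted the duplicate check (the helper function and its name are my own):

```shell
#!/usr/bin/env bash
# find_duplicate_runner_uuids: read process command lines on stdin and
# print any --runnerUuid value that appears more than once.
find_duplicate_runner_uuids() {
  grep -o '\--runnerUuid [^ ]*' | awk '{ print $2 }' | sort | uniq -c | awk '$1 > 1 { print $2 }'
}

# Usage against live processes on macOS:
#   ps -axo command | find_duplicate_runner_uuids
```

It prints nothing, so no two processes share a runner UUID.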

[Error Log]

[2026-01-26 16:52:36,488] ...
com.atlassian.pipelines.stargate.client.core.exceptions.StargateConflictException: ...
"message":"Simultaneous state updates were attempted for runner with id: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}"

Thanks.

1 answer

1 vote
Taliah15
Contributor
January 26, 2026

Hello, runner-wrangler!

That 409 "Simultaneous state updates" in a multi-runner setup on one Mac is a classic gotcha: runner processes can collide while racing to send their state updates (heartbeats) to Bitbucket's Stargate service, even when every process has its own UUID and CLI arguments.

Fast fixes:

  • Run pkill -f runner, then restart each runner one at a time, watching the tail of each log for duplicates.

  • Serialize the starts: add staggered delays between launches, or try the --once flag in run.sh.

  • Check for zombie processes with lsof | grep runner, or add a shared lock (e.g. a state file under /tmp) so a second copy of the same runner can't start.

This has worked for others on macOS; if your runner version is out of date, update that as well. Still stuck? Share the entire run.sh!
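For the shared-lock idea, here is a minimal sketch using an atomic mkdir (the /tmp path and function name are just examples; adapt them to your setup):

```shell
#!/usr/bin/env bash
# acquire_runner_lock: take an exclusive lock for one runner UUID using
# mkdir, which is atomic, so a second copy of the same runner refuses to
# start instead of racing the first one to update state.
# Note: a SIGKILL'd runner can leave a stale lock dir behind; remove it by hand.
acquire_runner_lock() {  # usage: acquire_runner_lock <runner-uuid>
  local lock="/tmp/bbp-runner-$1.lock"
  if mkdir "$lock" 2>/dev/null; then
    trap "rmdir '$lock'" EXIT   # release the lock when this shell exits
    return 0
  fi
  echo "runner $1 appears to be running already" >&2
  return 1
}

# In run.sh, before launching start.sh:
#   acquire_runner_lock "<your-runner-uuid>" || exit 0
```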

Good luck squashing those bugs! @Shane_Yoo, if this works, kindly let me know.

Shane_Yoo
I'm New Here
January 26, 2026

I'll try your solution and let you know if this works or not.

Thank you for the quick response!

Taliah15
Contributor
January 26, 2026

@Shane_Yoo  I will wait for your response. 

Shane_Yoo
I'm New Here
January 27, 2026

@Taliah15 

 

I killed all processes using the command you provided and added a 5-second delay between starting each runner, but the issue still persists.
The directory structure is as follows:
```
~/Desktop
└── runners
    ├── login-item.command
    ├── mobile-workspace-01
    │   ├── atlassian-bitbucket-pipelines-runner
    │   ├── atlassian-bitbucket-pipelines-runner.tar.gz
    │   ├── log.log
    │   └── run.sh
    └── mobile-workspace-02
        ├── atlassian-bitbucket-pipelines-runner
        ├── atlassian-bitbucket-pipelines-runner.tar.gz
        ├── log.log
        └── run.sh
```
The run.sh script that starts each runner is as follows:
```
#!/usr/bin/env bash
# run.sh (the shebang must be the first line of the file)
cd atlassian-bitbucket-pipelines-runner/bin
nohup ./start.sh --accountUuid {REDACTED_ACNT_ID} --repositoryUuid {REDACTED_REPO_ID} --runnerUuid {REDACTED_RNNR_ID} --OAuthClientId {REDACTED_CLIENT_ID} --OAuthClientSecret {REDACTED_CLIENT_SECRET} --runtime macos-bash --workingDirectory ../temp > ../../log.log 2>&1 &
```
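To rule out double-starts of the same workspace, I'm thinking of adding a guard before the nohup line. This is only a sketch; the helper and its pgrep pattern are my own placeholders, matching on the runner UUID in the command line:

```shell
#!/usr/bin/env bash
# runner_already_running: return success if a live process already carries
# the given runner UUID on its command line. pgrep -f matches against the
# full command line, not just the executable name.
runner_already_running() {  # usage: runner_already_running <runner-uuid>
  pgrep -f "runnerUuid.*$1" >/dev/null 2>&1
}

# In run.sh, before the nohup line:
#   runner_already_running "<your-runner-uuid>" && { echo "already up" >&2; exit 0; }
```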
This run.sh exists in each runner directory and is executed by the following script:
```
#!/bin/bash
# login-item.command
# This script is executed when the build machine boots.

# Run the mobile workspace runners.
sleep 5
cd ~/Desktop/runners/mobile-workspace-01
./run.sh
sleep 5
cd ~/Desktop/runners/mobile-workspace-02
./run.sh

# Close the Terminal window.
osascript -e 'tell application "Terminal" to close first window' & exit
```
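One thing I'm also considering, in case double-starts are the cause: folding the two cd/run.sh/sleep stanzas into a single loop that skips any workspace whose runner already appears in the process list and staggers the launches. Just a sketch; the pgrep marker (the directory path) is a placeholder I'd replace with something genuinely unique per workspace:

```shell
#!/bin/bash
# start_runners: launch ./run.sh in each given workspace directory,
# skipping any whose path already appears on a live process's command
# line, and sleeping between launches so runners register one at a time.
start_runners() {  # usage: start_runners <delay-seconds> <dir>...
  local delay="$1"; shift
  local dir
  for dir in "$@"; do
    if pgrep -f "$dir" >/dev/null 2>&1; then
      echo "skip $dir: a matching process is already running"
      continue
    fi
    (cd "$dir" && ./run.sh)
    sleep "$delay"
  done
}

# e.g. in login-item.command:
#   start_runners 30 ~/Desktop/runners/mobile-workspace-01 ~/Desktop/runners/mobile-workspace-02
```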
The errors logged in log.log include 401, 403, 409, and Connection reset, occurring intermittently.
```
# 403
[2026-01-27 16:43:23,147] An error occurred whilst completing step.
com.atlassian.pipelines.stargate.client.core.exceptions.StargateForbiddenException: Response Summary: HttpResponseSummary{httpStatusCode=403, httpStatusMessage=Forbidden, bodyAsString={"error":{"message":"Forbidden","detail":"Runner currently has no pipeline scheduled.","data":{"key":"rest-service.rest-service.forbidden","arguments":{}}}}}
at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(Unknown Source)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
Original Stack Trace:
```
```
# 401
[2026-01-27 16:42:41,902] An error occurred whilst updating runner state to "ONLINE".
com.atlassian.pipelines.stargate.client.core.exceptions.StargateUnauthorizedException: Response Summary: HttpResponseSummary{httpStatusCode=401, httpStatusMessage=Unauthorized, bodyAsString={"code":401,"message":"Unauthorized"}}
at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(Unknown Source)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
Original Stack Trace:
```
```
# 409
[2026-01-27 16:32:42,500] Uncaught error from RxJava
com.atlassian.pipelines.stargate.client.core.exceptions.StargateConflictException: Response Summary: HttpResponseSummary{httpStatusCode=409, httpStatusMessage=Conflict, bodyAsString={"key":"agent-service.runner.conflict","message":"Simultaneous state updates were attempted for runner with id: {...}","arguments":{}}}
at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(Unknown Source)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
Original Stack Trace:
```
```
# connection reset
[2026-01-28 07:35:12,008] [REDACTED_ID, L:/REDACTED_IP:56827 - R:api.atlassian.com/13.227.180.4:443] The connection observed an error
java.net.SocketException: Connection reset
at java.base/sun.nio.ch.SocketChannelImpl.throwConnectionReset(Unknown Source)
at java.base/sun.nio.ch.SocketChannelImpl.read(Unknown Source)
at io.netty.buffer.PooledByteBuf.setBytes(PooledByteBuf.java:255)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1132)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:356)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:151)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:796)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:732)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:658)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Unknown Source)
```
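To triage this mix of failures, I count how often each status code shows up with a small helper (my own convenience function, nothing official):

```shell
#!/usr/bin/env bash
# count_status_codes: read a runner log on stdin and count each
# httpStatusCode value reported by the Stargate exceptions,
# most frequent first.
count_status_codes() {
  grep -o 'httpStatusCode=[0-9]*' | sort | uniq -c | sort -rn
}

# usage: count_status_codes < log.log
```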

 

DEPLOYMENT TYPE: CLOUD
PRODUCT PLAN: PREMIUM