Bitbucket self-hosted runner failing: Bind mount failed

Jacky March 29, 2022

Hi,

I'm having issues running a self-hosted pipeline: it can't get past the runner initialising, and the pipeline fails with the following error:

Status 500: {"message":"Bind mount failed: '/tmp/b3cc0c7e-8970-5dbc-8163-cb6cb3591fca/tmp' does not exists"}

 

I notice that the runner does initially generate all the necessary folders within `/tmp/b3cc0c7e-8970-5dbc-8163-cb6cb3591fca` for Docker to bind to, but at some point they are all deleted, leaving only a `docker` folder and a `runner.log` file.
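
In case it helps with reproducing this, the deletion can be watched from the host while the pipeline runs. This is only a diagnostic sketch; it assumes the inotify-tools package is installed on the runner host:

```
# Diagnostic sketch (assumes inotify-tools is installed on the runner host):
# recursively watch the runner's build directory and print create/delete
# events, to pin down exactly when the folders disappear.
inotifywait -m -r -e create -e delete -e delete_self \
    /tmp/b3cc0c7e-8970-5dbc-8163-cb6cb3591fca
```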

Any help would be appreciated!

Kindest regards,

Jacky

 

Pipeline Outputs:

Runner
Runner matching labels:
- linux
- self.hosted
Runner name: Local
Runner labels: self.hosted, linux
Runner version:
current: 1.313
latest: 1.313
Build teardown <1s
Skipping cache upload for failed step
Skipping artifact upload for errored step
Searching for test report files in directories named [test-reports, TestResults, test-results, surefire-reports, failsafe-reports] down to a depth of 4
Finished scanning for test reports. Found 0 test report files.
Merged test suites, total number tests is 0, with 0 failures and 0 errors.

Docker

No logs to display


Artifacts

No artifacts uploaded

 

Pipeline:

```

image: node:16

pipelines:
    custom:
        test:
            - step:
                  runs-on: self.hosted
                  deployment: test
                  caches:
                      - node
                  name: Build and Deploy
                  script:
                      - npm install
                      - npm run build:testing
                      - pipe: atlassian/scp-deploy:0.3.3
                        variables:
                            USER: $DEV_USER
                            SERVER: $DEV_SERVER
                            REMOTE_PATH: '/test'
                            LOCAL_PATH: 'www/*'
```

 

Docker log:

```

[2022-03-29 16:31:50,293] Getting oauth token for step.
[2022-03-29 16:31:50,325] Getting environment variables for step.
[2022-03-29 16:31:51,851] Getting all artifacts for step.
[2022-03-29 16:31:51,873] Getting SSH private key.
[2022-03-29 16:31:51,891] Getting known hosts.
[2022-03-29 16:31:52,376] Setting up directories.
[2022-03-29 16:31:52,405] Starting log uploader.
[2022-03-29 16:31:52,457] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_clone
[2022-03-29 16:31:52,506] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_build
[2022-03-29 16:31:52,539] Setting up step timeout: PT2H
[2022-03-29 16:31:52,542] Starting websocket listening to STEP_COMPLETED events.
[2022-03-29 16:31:52,550] Checking for step completion every PT30S seconds.
[2022-03-29 16:31:52,820] Updating step progress to PULLING_IMAGES.
[2022-03-29 16:31:53,080] Pulling image docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-dvcs-tools:prod-stable.
[2022-03-29 16:31:53,478] Appending log line to main log.
[2022-03-29 16:32:13,590] Updating runner state to "ONLINE".
[2022-03-29 16:32:26,733] Pulling image node:16-alpine.
[2022-03-29 16:32:43,589] Updating runner state to "ONLINE".
[2022-03-29 16:32:45,643] Pulling image docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-auth-proxy:prod-stable.
[2022-03-29 16:33:13,588] Updating runner state to "ONLINE".
[2022-03-29 16:33:19,032] Pulling image docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-docker-daemon:v20.10.6-prod-stable.
[2022-03-29 16:33:43,589] Updating runner state to "ONLINE".
[2022-03-29 16:33:52,343] Pulling image docker-hub.packages.atlassian.com/google/pause:latest.
[2022-03-29 16:34:13,588] Updating runner state to "ONLINE".
[2022-03-29 16:34:31,501] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_pause
[2022-03-29 16:34:31,509] Updating step progress to CLONING.
[2022-03-29 16:34:31,510] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_pause.
[2022-03-29 16:34:31,742] Generating clone script.
[2022-03-29 16:34:31,883] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_clone.
[2022-03-29 16:34:31,884] Executing clone script in clone container.
[2022-03-29 16:34:32,430] Starting container.
[2022-03-29 16:34:32,817] Starting container.
[2022-03-29 16:34:35,764] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_system_auth-proxy
[2022-03-29 16:34:35,771] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_system_auth-proxy.
[2022-03-29 16:34:35,772] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_system_docker
[2022-03-29 16:34:35,790] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_system_docker.
[2022-03-29 16:34:36,925] An error occurred whilst starting container.
com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"message":"Bind mount failed: '/tmp/b3cc0c7e-8970-5dbc-8163-cb6cb3591fca/tmp' does not exists"}

at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:103)
at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:32)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
at io.netty.channel.epoll.EpollDomainSocketChannel$EpollDomainUnsafe.epollInReady(EpollDomainSocketChannel.java:138)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Unknown Source)
[2022-03-29 16:34:36,967] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_build
[2022-03-29 16:34:36,978] Not uploading caches. (numberOfCaches: 1, resultOrError: ERROR)
[2022-03-29 16:34:37,119] Not uploading artifacts. (numberOfArtifacts: 1, resultOrError: ERROR)
[2022-03-29 16:34:37,121] Updating step progress to PARSING_TEST_RESULTS.
[2022-03-29 16:34:37,425] Test report processing complete.
[2022-03-29 16:34:37,426] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_clone
[2022-03-29 16:34:37,455] Appending log line to main log.
[2022-03-29 16:34:38,947] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_clone
[2022-03-29 16:34:38,958] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_build
[2022-03-29 16:34:38,965] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_system_auth-proxy
[2022-03-29 16:34:39,409] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_system_docker
[2022-03-29 16:34:40,943] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_bf4f5934-524e-4600-a78d-fe63b4406a43_pause
[2022-03-29 16:34:40,948] LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.UnpooledByteBufAllocator.newDirectBuffer(UnpooledByteBufAllocator.java:96)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
io.netty.buffer.AbstractByteBufAllocator.buffer(AbstractByteBufAllocator.java:124)
io.netty.buffer.AbstractByteBuf.readBytes(AbstractByteBuf.java:871)
com.github.dockerjava.netty.handler.HttpResponseHandler.getBodyAsMessage(HttpResponseHandler.java:117)
com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:97)
com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:32)
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795)
io.netty.channel.epoll.EpollDomainSocketChannel$EpollDomainUnsafe.epollInReady(EpollDomainSocketChannel.java:138)
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base/java.lang.Thread.run(Unknown Source)
[2022-03-29 16:34:43,336] Updating step progress to COMPLETING_LOGS.
[2022-03-29 16:34:43,566] Shutting down log uploader.
[2022-03-29 16:34:43,585] Tearing down directories.
[2022-03-29 16:34:43,589] Updating runner state to "ONLINE".
[2022-03-29 16:34:43,595] Cancelling timeout
[2022-03-29 16:34:43,606] Completing step with result Result{status=ERROR, error=Some(Error{key='runner.bitbucket-pipelines.clone-container-failure', message='Status 500: {"message":"Bind mount failed: '/tmp/b3cc0c7e-8970-5dbc-8163-cb6cb3591fca/tmp' does not exists"}
', arguments={}})}.
[2022-03-29 16:34:44,035] Setting runner state to not executing step.
[2022-03-29 16:34:44,035] Waiting for next step.
[2022-03-29 16:35:13,589] Updating runner state to "ONLINE".

```

Answer accepted
Patrik S
Atlassian Team
March 30, 2022

Hello @Jacky ,

Welcome to Atlassian Community!

I've tried to reproduce the error you reported on my side, but the build ran successfully. Since the issue seems to be related to the volume mount, I would like to ask you to try creating a new runner and modifying the command we provide so that it uses a different mount path.

By default, when you create a runner, you'll be given a command like the one below:

```
docker container run -it -v /tmp:/tmp <rest of the parameters> docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
```

This mounts the /tmp directory of your local machine as a directory inside the container that is also called /tmp.

That said, could you please try creating another directory on your local machine and using it as the tmp volume when creating a new runner container? The command would look like the following:

```
docker container run -it -v /home/user/runner:/tmp <rest of the parameters> docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
```

The above example will use the local directory /home/user/runner as the tmp folder of the container. Please make sure the /home/user/runner directory exists on the machine where you are running the container.
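
For instance, a minimal sketch to create it beforehand (the path is just the example one from the command above):

```
# Create the host directory (and any missing parents) before starting the
# runner container, so the bind source exists when Docker mounts it as /tmp.
mkdir -p /home/user/runner
```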

You can also try changing the working directory of the container, so that it starts in a folder other than /tmp, as described in the runners documentation.
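
As a sketch of that approach (assuming the WORKING_DIRECTORY environment variable from the runner setup command is the setting the documentation refers to):

```
# Sketch only: point both the bind mount and the runner's WORKING_DIRECTORY
# environment variable (an assumption about the setup command) at a directory
# other than /tmp. <rest of the parameters> is unchanged from the setup page.
docker container run -it \
    -v /home/user/runner:/home/user/runner \
    -e WORKING_DIRECTORY=/home/user/runner \
    <rest of the parameters> \
    docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
```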

Hope that helps. Let us know how it goes and if you still face the issue.

Thank you, @Jacky .

Kind regards,
Patrik S

Jacky April 9, 2022

Hi Patrik,

Terribly sorry for the late reply. As suggested, I have tried several changes, including binding the volume to another folder. After some more testing, I have noticed that after the pipeline starts, it generates the necessary folders, which also shows that the volume is mounted correctly.

Directory on pipeline start:

```
tmp
├───b3cc0c7e-8970-5dbc-8163-cb6cb3591fca
│   ├───artifact
│   ├───build
│   ├───cache
│   │   └───node
│   ├───ssh
│   └───tmp
└───hsperfdata_root
```

 

However, once it reaches the cloning step (Updating step progress to CLONING.) and the container it attempts to start fails (An error occurred whilst starting container.), the teardown happens and deletes all the folders, causing the Bind mount failed error.

Directory after clone container fails:

```
tmp
├───b3cc0c7e-8970-5dbc-8163-cb6cb3591fca
└───hsperfdata_root
```

Relevant log section:

```
[2022-04-09 15:07:24,147] Updating step progress to CLONING.
[2022-04-09 15:07:24,147] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_41047f8c-53e4-4c55-b50f-c6ad7cf13f86_pause.
[2022-04-09 15:07:24,384] Generating clone script.
[2022-04-09 15:07:24,583] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_41047f8c-53e4-4c55-b50f-c6ad7cf13f86_clone.
[2022-04-09 15:07:24,588] Executing clone script in clone container.
[2022-04-09 15:07:25,608] Starting container.
[2022-04-09 15:07:25,754] Starting container.
[2022-04-09 15:07:28,609] Updating runner state to "ONLINE".
[2022-04-09 15:07:30,570] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_41047f8c-53e4-4c55-b50f-c6ad7cf13f86_system_auth-proxy
[2022-04-09 15:07:30,584] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_41047f8c-53e4-4c55-b50f-c6ad7cf13f86_system_auth-proxy.
[2022-04-09 15:07:30,587] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_41047f8c-53e4-4c55-b50f-c6ad7cf13f86_system_docker
[2022-04-09 15:07:30,626] Creating container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_41047f8c-53e4-4c55-b50f-c6ad7cf13f86_system_docker.
[2022-04-09 15:07:30,636] An error occurred whilst starting container.
com.github.dockerjava.api.exception.InternalServerErrorException: Status 500: {"message":"Bind mount failed: '/tmp/b3cc0c7e-8970-5dbc-8163-cb6cb3591fca/build' does not exists"}
at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:103)
at com.github.dockerjava.netty.handler.HttpResponseHandler.channelRead0(HttpResponseHandler.java:32)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
at io.netty.channel.epoll.EpollDomainSocketChannel$EpollDomainUnsafe.epollInReady(EpollDomainSocketChannel.java:138)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Unknown Source)
[2022-04-09 15:07:30,678] Removing container b3cc0c7e-8970-5dbc-8163-cb6cb3591fca_41047f8c-53e4-4c55-b50f-c6ad7cf13f86_build
[2022-04-09 15:07:30,700] Not uploading caches. (numberOfCaches: 1, resultOrError: ERROR)
[2022-04-09 15:07:30,713] Not uploading artifacts. (numberOfArtifacts: 1, resultOrError: ERROR)
```

Could it be related to this issue: BCLOUD-21574 - Pipeline build fails at step git clone while using runners?

I have attempted the workaround but it did not work. Any other suggestions to fix this issue would be greatly appreciated!

 

Kindest regards,

Jacky

Patrik S
Atlassian Team
April 14, 2022

Hello @Jacky ,

Reading through the details you have provided, we'll likely need access to your build logs, and the help of our engineering team, to understand what might be the issue here.

So, in order to further investigate this case, I have created an internal ticket for you using the email address of your community account, so you don't have to share this information here.
You should have received an email with a link to the support ticket. In case you haven't received it, please feel free to let me know and I can post the ticket URL here. The ticket will be visible only to you and Atlassian staff; no one else can view its contents, even if they have the URL.

Thank you @Jacky .

Kind regards,

Patrik S

Jacky April 14, 2022

Hi Patrik,

Thank you for creating the ticket, I will continue the discussion within the ticket.

Thanks!

Kindest regards,

Jacky
