Hi, I'm trying to put together a Bitbucket pipeline that does a docker build. I was surprised when what takes 5 minutes locally was getting stuck for 27 minutes in the Bitbucket pipeline.
When searching for a possible cause, I came across this article: https://confluence.atlassian.com/bbkb/bitbucket-pipeline-execution-hangs-on-docker-build-step-1189503836.html
Sure enough, I increased the docker memory limit in my bitbucket-pipelines.yml file, and it now runs as expected.
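For anyone who hits the same thing, the change looked roughly like this (a sketch, not my exact file; the 3072 value and `my-image` name are illustrative, adjust to your own build):

```yaml
# bitbucket-pipelines.yml (sketch): raise the docker service's
# memory above the 1024 MB default so docker build doesn't starve.
definitions:
  services:
    docker:
      memory: 3072  # MB allocated to the docker-in-docker service

pipelines:
  default:
    - step:
        size: 2x            # doubles the step's total memory to 8192 MB
        services:
          - docker
        script:
          - docker build -t my-image .  # 'my-image' is a placeholder tag
```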
I consider this a serious bug in Bitbucket Pipelines: running out of memory during docker build causes the pipeline to hang rather than fail immediately. In fact, I will be looking to move away from Bitbucket Pipelines unless this is resolved; I can't afford pipelines hanging indefinitely because Bitbucket fails to detect that Docker has run out of memory.
Anyone else facing this or know if Atlassian is planning to fix this?
Hello @josh_hayden ,
and thank you for reaching out to the Community!
I see you opened a support ticket with us, but I would just like to clarify here in community as well, in case other users come across the same issue.
Bitbucket Pipelines does try to detect when the build container or any service container (such as the docker service) exceeds its memory limit, immediately stopping the build in such cases and presenting a message similar to the one below:
Container 'docker/build' exceeded memory limit
These scenarios are discussed in the Troubleshooting Pipelines article.
However, detecting these scenarios depends very much on the commands you are running. In some cases, instead of aborting when it hits the memory limit, the command hangs and keeps trying to allocate more memory without ever returning an exit code, so Pipelines has nothing to act on and the build appears to "hang."
The occurrence of this behavior is usually related to memory issues, so after the memory is increased, you shouldn't experience "stuck" pipelines going forward.
However, to avoid burning too many build minutes when this does happen, you can also set a custom timeout for your steps using the max-time option in your step definition. Once a step reaches its max-time, it is shut down with a timeout.
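As a sketch (the 15-minute value is just an example, pick whatever gives your build comfortable headroom; `my-image` is a placeholder):

```yaml
# bitbucket-pipelines.yml (sketch): cap a step's runtime with max-time.
pipelines:
  default:
    - step:
        max-time: 15  # minutes; the step is terminated with a timeout after this
        services:
          - docker
        script:
          - docker build -t my-image .
```

You can also set max-time globally for all steps under the top-level `options` section instead of per step.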
I hope that helps! Should you have any questions, feel free to ask.
Thank you, @josh_hayden!
Patrik S
Hello @josh_hayden. Welcome to the Bitbucket community. Have you tried increasing the memory at the step level and the step size? If you can post a redacted version of the pipelines file, I can help probe this issue further.
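For example, a minimal sketch of bumping the step size (values and the `my-image` tag are illustrative):

```yaml
# bitbucket-pipelines.yml (sketch): a larger step size gives the whole
# step more memory to share between the build and service containers.
pipelines:
  default:
    - step:
        size: 2x  # 8192 MB total for the step instead of the default 4096 MB
        services:
          - docker
        script:
          - docker build -t my-image .
```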