We use Pipelines. We sometimes have jobs that fail with the message 'Build memory limit exceeded.'
We also run the same Docker image internally as we run in Pipelines (that is, locally we run the same image we declare under `image:` in bitbucket-pipelines.yml).
We want a way to run our Docker image locally with the same limitations that Pipelines enforces (4 GB): https://confluence.atlassian.com/bitbucket/limitations-of-bitbucket-pipelines-827106051.html#LimitationsofBitbucketPipelines-Buildlimits
We want to do this to make sure we stay within 90% of the Pipelines limit (so 3.6 GB).
Is this a way to achieve that?
docker run --rm --memory=3600M --memory-swap=3600M docker-image-same-as-we-run-in-pipelines
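That command should behave as intended: setting --memory-swap equal to --memory gives the container zero additional swap. One nuance worth checking is the arithmetic behind the 3600M figure; a sketch (assuming 1 GB = 1024 MB, so the limit is 4096 MB):

```shell
# 90% of the 4096 MB Pipelines limit, in whole megabytes.
limit_mb=4096
threshold_mb=$((limit_mb * 90 / 100))
echo "$threshold_mb"   # prints 3686
```

So --memory=3600M is actually a little more conservative than a strict 90%, which is probably what you want for headroom.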
We have the same issue with our Java project: during the build we get a "Build exceeded memory limit" error, but if we run the build locally (as described here: https://confluence.atlassian.com/bitbucket/debug-your-pipelines-locally-with-docker-838273569.html) everything works without any problem.
I also tried to limit the memory allocated by Maven with the MAVEN_OPTS variable, but the build is still stopped (seemingly at random).
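For anyone trying the same thing, a minimal sketch of capping the Maven JVM via MAVEN_OPTS (the heap and metaspace values here are illustrative, not the poster's actual settings):

```shell
# Cap the Maven JVM heap so the build stays under the container limit.
# -Xmx and -XX:MaxMetaspaceSize values below are placeholders to tune.
export MAVEN_OPTS="-Xmx2g -XX:MaxMetaspaceSize=512m"
echo "$MAVEN_OPTS"
# Then run the build as usual, e.g.:  mvn clean install
```

Note that MAVEN_OPTS only bounds the Maven JVM itself; forked test JVMs (e.g. Surefire) take their own heap settings and can push total usage past the container limit.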
Are you using any service containers, or the Docker build-and-push functionality? These *also* use part of your 4 GB of build memory (1 GB per service container).
EDIT: This was a bug. It's currently being worked on. Follow it here: https://bitbucket.org/site/master/issues/14666/enabling-docker-for-pipelines-wrongfully
Sorry for the delayed response.
Can you try a slight variation of your command for your local debugging?
docker run -it --volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo --workdir="/localDebugRepo" --memory=4g --memory-swap=4097m --entrypoint=/bin/bash python:2.7
The difference is I've changed memory-swap from 4g to 4097m (purposefully 1 MB larger than 4GB).
I need to do a little more investigation, but if the values are the same then docker may revert to default swap behaviour, giving you 8GB of swap instead of 0GB.
I'll update the docs once I confirm this. :) See if that change helps with reproducing the issue.
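To make the swap headroom explicit: under Docker's documented semantics, the swap allowance is --memory-swap minus --memory. A sketch of the arithmetic for the two variants (assuming 1g = 1024m):

```shell
mem_mb=4096      # --memory=4g
memswap_mb=4097  # --memory-swap=4097m
echo $((memswap_mb - mem_mb))   # swap headroom in MB: prints 1
# With --memory-swap=4g the difference is 0; some Docker versions treat
# equal values as "unset" and fall back to the default swap behaviour,
# which is the failure mode described above.
```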
Same issue for us. We're doing a Scala (sbt plus ScalaTest) build using a PostgreSQL database as a service. When I run this locally, it works fine using about 1 GB of memory. If I understand the documentation correctly, Postgres will use 1 GB of the 4 GB for a pipeline, so it should not run out of memory?
I also ran the build in Docker as suggested in the Pipelines documentation for debugging locally, and the build is fine there as well, with the container using about 1.6 GB of memory. For some branches the memory limit is exceeded only occasionally (an immediate rerun mostly works), while one other branch does not work at all at the moment. This suggests to me that the build is very close to some kind of memory threshold, but I do not know what the limiting factor is. Any tips on how to debug this?
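One thing worth double-checking in this setup: with a service container attached, the build step itself does not get the full 4 GB. A sketch of the budget, using the 1 GB-per-service figure from the reply above (the sbt flag at the end is an illustrative suggestion, not something from this thread):

```shell
total_mb=4096     # Pipelines build limit
service_mb=1024   # one service container (PostgreSQL)
build_mb=$((total_mb - service_mb))
echo "$build_mb"  # MB left for the build step itself: prints 3072
# While reproducing locally, "docker stats <container>" shows live usage.
# An sbt JVM that sizes its heap from the (larger) local host can peak
# higher on Pipelines; pinning it explicitly, e.g. "sbt -J-Xmx2g test",
# removes that variable.
```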