We use Pipelines.
We sometimes have jobs that fail with the message 'Build memory limit exceeded.'
We also run the same Docker image internally as we run in Pipelines (i.e. the image we run locally is the one declared in the image: of bitbucket-pipelines.yml).
We want a way to run our Docker image locally with the same limitation that Pipelines enforces (4 GB): https://confluence.atlassian.com/bitbucket/limitations-of-bitbucket-pipelines-827106051.html#LimitationsofBitbucketPipelines-Buildlimits
We want to do this to make sure we stay within 90% of the Pipelines limit (so 3.6 GB).
Is this a way to achieve that?
docker run --rm --memory=3600M --memory-swap=3600M docker-image-same-as-we-run-in-pipelines
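To sanity-check that the limit is actually applied, a minimal sketch (assuming a cgroup v1 host and an image whose entrypoint lets you override the command) is to print the cgroup limit from inside the container:

docker run --rm --memory=3600M --memory-swap=3600M docker-image-same-as-we-run-in-pipelines cat /sys/fs/cgroup/memory/memory.limit_in_bytes

With a 3600 MB limit this should print roughly 3774873600; on a cgroup v2 host the file to read is /sys/fs/cgroup/memory.max instead.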
We have the same issue with our Java project; during the build we get a "Build exceeded memory limit" error, but if we run the build locally (as described here: https://confluence.atlassian.com/bitbucket/debug-your-pipelines-locally-with-docker-838273569.html) everything works without any problem.
I also tried to limit the memory allocated to Maven with the MAVEN_OPTS variable, but the build still gets stopped (seemingly at random).
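For reference, a minimal sketch of capping the Maven JVM itself via MAVEN_OPTS (the sizes here are illustrative assumptions, not recommended values):

export MAVEN_OPTS="-Xmx1024m -XX:MaxMetaspaceSize=256m"
mvn clean verify

Note that MAVEN_OPTS only governs the Maven process; test JVMs forked by Surefire/Failsafe are sized separately, which could explain why the build still dies even with a cap here.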
We have the same issue with our Maven build.
Has anybody found a way to hard-limit Maven to less than 4 GB?
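One hedged option, if the memory is going to forked test JVMs rather than to Maven itself, is to pass a heap cap to Surefire's forked JVM via the argLine property (the value is illustrative, and this only works if the POM doesn't already hard-code argLine):

mvn test -DargLine="-Xmx512m"

Combined with a MAVEN_OPTS cap on the Maven process, that gives a rough upper bound on the heap the whole build can request.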
Are you using any service containers or the Docker build and push functionality? These will *also* use part of your 4 GB of memory for the build (1 GB per service container).
(See here: https://confluence.atlassian.com/bitbucket/use-services-and-databases-in-bitbucket-pipelines-874786688.html )
EDIT: This was a bug. It's currently being worked on. Follow it here: https://bitbucket.org/site/master/issues/14666/enabling-docker-for-pipelines-wrongfully
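As a rough worked example of the arithmetic: one service container (say Postgres) takes 1 GB of the 4 GB, leaving about 3 GB for the build container itself, so a local reproduction would then be along these lines (the image name is a placeholder):

docker run --rm --memory=3g --memory-swap=3g your-build-image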
Sorry for the delayed response.
Can you try a slight variation of your command for your local debugging?
docker run -it --volume=/Users/myUserName/code/localDebugRepo:/localDebugRepo --workdir="/localDebugRepo" --memory=4g --memory-swap=4097m --entrypoint=/bin/bash python:2.7
Akin to these docs: https://confluence.atlassian.com/bitbucket/debug-your-pipelines-locally-with-docker-838273569.html
The difference is I've changed memory-swap from 4g to 4097m (purposefully 1 MB larger than 4GB).
I need to do a little more investigation, but if the values are the same then docker may revert to default swap behaviour, giving you 8GB of swap instead of 0GB.
I'll update the docs once I confirm this. :) See if that change helps with reproducing the issue.
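One quick way to confirm what Docker actually applied for both limits is docker inspect on the running container (use docker ps to find the ID, or start the container with --name; the name here is a made-up example):

docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' localDebugContainer

Both values are reported in bytes, and MemorySwap is memory plus swap, so a MemorySwap larger than Memory means the container really does have swap available.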
Same issue for us. We're doing a Scala (sbt plus ScalaTest) build using a PostgreSQL database as a service. When I run this locally, it works fine using about 1 GB of memory. If I understand the documentation correctly, Postgres will use 1 GB of the 4 GB for a pipeline, so it should not run out of memory?
I also ran the build in Docker as suggested in the Pipelines documentation for debugging locally, and the build is fine there as well, with the container using about 1.6 GB of memory. For some branches the memory limit is only exceeded occasionally (an immediate rerun mostly works), while one other branch does not work at all at the moment. This suggests to me that the build is very close to some kind of memory threshold, but I do not know what the limiting factor is. Any tips on how to debug this?
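In case it helps anyone comparing local and Pipelines behaviour, a hedged sketch of pinning the sbt JVM heap via SBT_OPTS (sizes are illustrative; tests forked by sbt would need their own javaOptions in build.sbt and are not covered by this):

export SBT_OPTS="-Xms512m -Xmx1024m"
sbt test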
Hi! Actually, we're still having the same issue - a simple Maven build with tests fails unless we use `size: 2x`. We've tried limiting memory with MAVEN_OPTS, as well as setting the
`-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap`
parameters (we're using Java 8, so technically without these flags the JVM is not aware it's running in a container), but still no luck.
Did anyone happen to find a solution for that?
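One diagnostic sketch, assuming a stock maven:3-jdk-8 image as a stand-in for your build image: run the JVM under the same 4 GB limit and ask it what heap it would actually pick, to confirm whether the cgroup flags are taking effect:

docker run --rm --memory=4g --memory-swap=4g maven:3-jdk-8 java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XshowSettings:vm -version

The "Max. Heap Size (Estimated)" line should come out to roughly a quarter of 4 GB if the flags work; even then, metaspace, thread stacks and any Surefire forks sit on top of the heap, which can still push the container over the limit.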