
Bitbucket Pipeline exceeded memory limit

Walter Kopacz July 16, 2021

Hello,

My group has a CI pipeline set up based on this bitbucket-pipelines.yml:

image: gcc:10.2

pipelines:
  default:
    - parallel:
        - step:
            name: Test
            size: 2x
            script:
              - git submodule update --init --recursive
              - apt update -y
              # juce dependencies for linux
              - apt install -y libasound2-dev libjack-jackd2-dev libcurl4-openssl-dev libfreetype6-dev libx11-dev libxcomposite-dev libxcursor-dev libxcursor-dev libxext-dev libxinerama-dev libxrandr-dev libxrender-dev libwebkit2gtk-4.0-dev libglu1-mesa-dev mesa-common-dev
              # need to pull cmake and install
              - wget https://github.com/Kitware/CMake/releases/download/v3.19.8/cmake-3.19.8-Linux-x86_64.sh -O cmake.sh
              - sh cmake.sh --prefix=/usr/local/ --exclude-subdir
              - mkdir build && cd build
              - cmake .. -DSayso_BuildTests=ON
              - cmake --build . --config Release --parallel
              - ctest -C Release -V
        - step:
            name: lint
            script:
              - git submodule update --init --recursive
              - apt update -y
              # juce dependencies for linux
              - apt install -y libasound2-dev libjack-jackd2-dev libcurl4-openssl-dev libfreetype6-dev libx11-dev libxcomposite-dev libxcursor-dev libxcursor-dev libxext-dev libxinerama-dev libxrandr-dev libxrender-dev libwebkit2gtk-4.0-dev libglu1-mesa-dev mesa-common-dev
              # need to pull cmake and install
              - wget https://github.com/Kitware/CMake/releases/download/v3.19.8/cmake-3.19.8-Linux-x86_64.sh -O cmake.sh
              - sh cmake.sh --prefix=/usr/local/ --exclude-subdir
              - apt install -y clang-tidy
              - mkdir build && cd build
              - cmake ..
              - cmake --build . --target tidy
# definitions:
#   services:
#     docker:
#       memory: 32768

 

We started getting the error "Container 'Build' exceeded memory limit" when we compile with the command "cmake --build . --config Release --parallel":

[ 84%] Building CXX object foo.cpp
fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [foo/build.make:82: foo.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....

So we increased the memory the container uses, and at a very high value we got it to work, but it has suddenly stopped working again, even on a commit that has the same code as a previous commit that built successfully. Any idea what could be causing this?
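For what it's worth, "cmake --build . --parallel" with no job count defers to the native build tool's default, and with GNU Make that can mean an effectively unbounded number of jobs, so several cc1plus processes compiling heavy translation units can run at once and peak memory scales with that count. A minimal mitigation sketch (the job count of 2 is an illustrative assumption, not something from this thread):

cmake --build . --config Release --parallel 2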

UPDATE: I removed the docker memory definition, since it seemed that could be causing problems, and now the pipeline sometimes runs successfully but not every time, even with the same configuration and code. I cannot find any pattern to when it does or does not build.
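For reference, in Bitbucket Pipelines the memory given to service containers such as docker is carved out of the step's total allocation (4096 MB for a regular step, 8192 MB with size: 2x), so a very large docker memory value leaves correspondingly less for the build container where the compiler runs. A minimal sketch of a definitions block that keeps most of a 2x step's memory for the build (the 1024 figure is an illustrative assumption, not from this post):

definitions:
  services:
    docker:
      memory: 1024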

2 answers

0 votes
Brian Soe November 28, 2021

I also run into a Bitbucket Pipelines "exceeded memory limit" error when running colcon build or make. My guess is that g++/gcc memory usage during the C++ build process exceeds the Bitbucket Pipelines limit, even when using:

- step:
    size: 2x

definitions:
  services:
    docker:
      memory: 7128

Here is a post on how to print memory usage during the build.
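A minimal way to do that from inside the step (a sketch, not taken from the linked post) is to sample memory in the background while the build runs:

# print overall memory usage every 15 seconds in the background
( while true; do free -m; sleep 15; done ) &
MONITOR_PID=$!
# run the command suspected of exceeding the limit
cmake --build . --config Release --parallel
# stop the sampler once the build finishes
kill "$MONITOR_PID"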

We probably need to reduce gcc's memory usage, either indirectly by building on one core:

cmake --build . --config Release -- -j 1
make -j1
colcon build --parallel-workers 1

or directly by setting a memory limit (note that ulimit -v takes a value in kilobytes, so a 4 GB cap looks like):

ulimit -v 4194304
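In bitbucket-pipelines.yml terms the two approaches can be combined at the top of the build commands, since the script lines of a step share one shell session. A sketch (both values are illustrative assumptions, not from this answer):

- step:
    size: 2x
    script:
      - ulimit -v 4194304                        # cap each process at ~4 GB (value in KB)
      - cmake --build . --config Release -- -j 1 # one compile job at a time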
0 votes
dave August 4, 2021

I've had similar problems with Bitbucket Pipelines running out of memory intermittently. (Our pipeline consists of two steps: first running tests, then installing a set of npm dependencies, building, and uploading a set of static pages to AWS S3 and refreshing AWS CloudFront.) While this initially worked consistently, we've begun hitting memory limits as well. I've upped my size to 2x, and I have to rerun pipelines multiple times before one passes. This isn't tenable long term, and we may need to move to some other solution if builds are going to be consistently flaky due to memory limits.
