
Pipelines: Increase the Build Container Memory above 1024mb

Jan Hadl January 31, 2019

I am constantly running into the memory limit of the build container, which is, according to the documentation, 1024mb. It is nice that I have 4gb in total (that is, including all the used services) at my disposal, but in my case, I do not need any external services and would rather use the entire 4gb for the build container.

Is there any configuration option that I can use to do so? Even after extensive searching and trial&error, I can't seem to make the build container use more than 1024mb, which is unfortunate.

3 answers

5 votes
Philip Hodder
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
February 3, 2019

Hi Jan,

By default, the build container has 4GB of memory.

If you add service containers, each one takes 1GB of that 4GB total by default.

For example, with 2 service containers: each service container will have 1GB, and the build container will be left with 2GB of memory.
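The allocation rule above can be sketched as simple arithmetic. This is only an illustration; the 4096MB/8192MB totals and the 1024MB per-service default are taken from the figures in this thread:

```python
# Sketch of the memory allocation rule described above.
# A 1x step has 4096 MB in total (8192 MB with size: 2x), and each
# service container takes 1024 MB unless a custom "memory" is set.

def build_container_memory(size_2x=False, service_memories=()):
    """Return the MB left for the build container after services."""
    total = 8192 if size_2x else 4096
    return total - sum(service_memories)

# Two default services on a regular step: 4096 - 1024 - 1024 = 2048 MB.
print(build_container_memory(service_memories=(1024, 1024)))  # 2048

# One 512 MB service on a 2x step: 8192 - 512 = 7680 MB (7.5 GB).
print(build_container_memory(size_2x=True, service_memories=(512,)))  # 7680
```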

If you'd like to alter the memory usage of your containers, you have two options:

  • Increase the step size with size: 2x, which doubles the total memory.

  • Set a custom memory value (in MB) on a service definition.

An example using both features:

pipelines:
  default:
    - step:
        size: 2x # Total memory is 8GB
        services:
          - my-service-container # Will consume 512MB
        script:
          # The build container will have 7.5GB remaining.
          - echo "Build container memory usage in megabytes:" && echo $(($(cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes)/1048576))

definitions:
  services:
    my-service-container:
      image: a-docker-image:tag
      memory: 512

Thanks,

Phil

josh_sutterfield January 13, 2021

Docs here may be misleading then. They suggest:

  • Regular steps have 4096 MB of memory in total, large build steps (which you can define using size: 2x) have 8192 MB in total.

  • The build container is given 1024 MB of the total memory, which covers your build process and some Pipelines overheads (agent container, logging, etc).

In other words, they suggest that the build container is NOT given 4GB, but 1GB. It is not clear how size: 2x affects this.

In my case no service container was involved, yet a memory limit was reached on the build container (or on Container 'Build'?). Setting size: 2x did seem to solve the problem, although it's hard to tell whether it was necessary.

Jelmen Guhlke May 10, 2022

@josh_sutterfield could you solve the issue? I am facing the same problem.

Christian Koschmieder de Juan July 1, 2022

A good option is to:

options:
  docker: true # enable the Docker daemon
  size: 2x # double the memory size of the entire pipeline
definitions:
  services:
    docker:
      memory: 2048 # extra memory so the container doesn't hang

Example step:

- step:
    name: 'Build and push new version of the frontend'
    size: 2x # doubles the amount of memory for this step; the docker service defaults to 1024MB, and we set it to 2048MB above
    script:
      - docker login -u XXXXXX -p $DOCKER_HUB_PASSWORD
      - docker build -t XXXXX/frontend .
      - docker push XXXXX/frontend
    services:
      - docker
    caches:
      - docker

I hope this clarifies things.
Patrik S
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
November 30, 2023

Hello all,

The total memory allocated to the build step is distributed to the build container and any service containers you have defined in the step.

The amount of memory allocated to the build container and to each service depends on the configuration in your YAML file.

For details on how memory is allocated in a pipeline step, please refer to the Service memory limits section of the following documentation:

That article also provides example scenarios of different memory allocations.
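To verify what a given configuration actually yields at runtime, you can print the build container's memory limit from inside the step's script. This is only a sketch; the path assumes the build runs under cgroup v1, the same cgroup layout used by the usage command earlier in this thread:

```yaml
- step:
    name: Show memory limit
    script:
      # Print the build container's memory limit in MB (cgroup v1 path).
      - echo $(($(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)/1048576))
```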

Thank you!

Patrik S

0 votes
simon March 18, 2024

Hello,

If I set the step to use 2x the memory and only use half of it, how much of my billed time/minutes will it use? Am I better off increasing the limits of Sonar to use all the available memory?

- step: &sonarcloud
    name: Analyze on SonarCloud
    size: 2x
    caches:
      - node
      - sonar
    script:
      - pipe: sonarsource/sonarcloud-scan:2.0.0
        variables:
          SONAR_TOKEN: ${SONAR_TOKEN}
          SONAR_SCANNER_OPTS: -Xmx4096m
          EXTRA_ARGS: "-Dsonar.projectKey=${SONAR_PROJECT_KEY} -Dsonar.organization=${SONAR_ORGANISATION} -Dsonar.javascript.node.maxspace=4096"
