Pipelines: Increase the Build Container Memory above 1024mb

I am constantly running into the memory limit of the build container, which is, according to the documentation, 1024 MB. It is nice that I have 4 GB in total (that is, including all the used service containers) at my disposal, but in my case I do not need any external services and would rather use the entire 4 GB for the build container.

Is there any configuration option that I can use to do so? Even after extensive searching and trial and error, I can't seem to make the build container use more than 1024 MB, which is unfortunate.

1 answer

5 votes

Hi Jan,

By default, the build container has 4GB of memory.

If you add a service container, each will take 1GB of the total 4GB memory.

For example, with 2 service containers, each service container will have 1GB and the build container will have 2GB of memory.
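The allocation arithmetic above can be sketched as a small helper (purely illustrative; the actual allocation is done by Pipelines itself, not by any user code):

```python
def build_container_memory_mb(step_size=1, service_memories_mb=()):
    """Memory (MB) left for the build container.

    step_size: 1 for a regular step (4096 MB total),
               2 for a size: 2x step (8192 MB total).
    service_memories_mb: MB reserved by each service container
                         (1024 MB each by default).
    """
    total_mb = 4096 * step_size
    return total_mb - sum(service_memories_mb)

# Two default service containers on a regular step: 4096 - 2 * 1024 = 2048 MB.
print(build_container_memory_mb(1, (1024, 1024)))  # -> 2048
# One 512 MB service on a size: 2x step: 8192 - 512 = 7680 MB (7.5 GB).
print(build_container_memory_mb(2, (512,)))  # -> 7680
```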

If you'd like to alter the memory usage of your containers, you have two options:

  • Use size: 2x on a step to double the step's total memory from 4GB to 8GB.

  • Set the memory keyword on a service definition to change how much of the total that service reserves (instead of the default 1024 MB).

An example using both features:

pipelines:
  default:
    - step:
        size: 2x # Total memory is 8GB
        services:
          - my-service-container # Will consume 512MB
        script:
          # The build container will have 7.5GB remaining.
          - echo "Build container memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576))

definitions:
  services:
    my-service-container:
      image: a-docker-image:tag
      memory: 512

Thanks,

Phil

Docs here may be misleading then. They suggest:

  • Regular steps have 4096 MB of memory in total, large build steps (which you can define using size: 2x) have 8192 MB in total.

  • The build container is given 1024 MB of the total memory, which covers your build process and some Pipelines overheads (agent container, logging, etc).

In other words, they suggest that the build container is NOT given 4GB, but 1GB. It is not clear how size: 2x affects this.

In my case no service container was involved, yet a memory limit was reached on the build container (or on Container 'Build' ?). Setting size: 2x did seem to solve the problem although it's hard to tell if this was necessary.
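For the no-services case described above, a minimal sketch of a bitbucket-pipelines.yml (assuming Phil's numbers hold: with no service containers, size: 2x should leave roughly the full 8GB for the build container):

```yaml
pipelines:
  default:
    - step:
        size: 2x  # 8192 MB total; with no service containers, nearly all of it goes to the build container
        script:
          - ./build.sh  # hypothetical memory-hungry build command
```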


@josh_sutterfield could you solve the issue? I am facing the same problem.


A good option is to:

options:
  docker: true # enable the Docker daemon
  size: 2x # double the memory size of the entire pipeline
definitions:
  services:
    docker:
      memory: 2048 # extra memory so the container doesn't hang

Example step:

    - step:
        name: 'Build and push new version of the frontend'
        size: 2x # doubles the memory available to this step from 4096 MB to 8192 MB
        script:
          - docker login -u XXXXXX -p $DOCKER_HUB_PASSWORD
          - docker build -t XXXXX/frontend .
          - docker push XXXXX/frontend
        services:
          - docker
        caches:
          - docker

Hopefully this clarifies things.
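To verify which limit actually applied inside a step, a sketch that reads the container's cgroup memory limit (an assumption: the exact file depends on the runtime, so this checks both the cgroup v1 and v2 locations and falls back gracefully):

```shell
# Print the container's memory limit in MB, checking cgroup v1 then v2.
mem_limit_mb() {
  if [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
    limit=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)   # cgroup v1
  elif [ -f /sys/fs/cgroup/memory.max ]; then
    limit=$(cat /sys/fs/cgroup/memory.max)                     # cgroup v2
  else
    limit=""
  fi
  if [ -z "$limit" ] || [ "$limit" = "max" ]; then
    echo "no explicit memory limit found"
  else
    echo "$((limit / 1048576)) MB"
  fi
}

mem_limit_mb
```

Running this in a step's script section shows the build container's limit; note it measures the limit, unlike the earlier echo example, which reads current usage.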
