
Specify docker memory limit for a specific step

I would like to configure the Docker memory limit differently depending on the step I run.

In fact, I have one step that performs tests with docker-compose and uses a lot of memory, but other steps that just build the Docker image.

For instance, one step that uses Docker with 1 GB and another with 6 GB to hold my several services.

Is it possible to override the Docker memory limit for a specific step?

7 answers

We have the exact same requirement as @Victor Fleurant. We run our tests in docker-compose and sometimes get "Container 'docker' exceeded memory limit." It's really annoying and breaks the whole CI process.

We just need the "Docker" service in the test step to have more memory, but there's no way to define that other than adding a memory limit in "definitions", which affects every step that uses the Docker-in-Docker service.

Any chance of making a feature request for this?
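For reference, the definitions-level override mentioned above, which raises the Docker service memory for every step at once, looks roughly like this (a sketch; the memory value is illustrative):

```yaml
definitions:
  services:
    docker:
      memory: 3072   # applies to every step that uses the docker service
```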

Agree, this is really annoying and not clear.
As a compromise workaround, I found that you can redefine (extend) the docker service and use that in your step instead. That way your definition does not affect the basic docker service, and you don't have to adjust the size of the other steps.

The drawback is that you cannot simply use `caches: - docker` anymore. Maybe there is a workaround for that as well...

As always, Atlassian's documentation is just not detailed enough.

definitions:
  services:
    docker-6g:
      type: docker
      memory: 6144

pipelines:
  default:
    - step:
        size: 2x
        name: build docker image
        services:
          - docker-6g
        max-time: 20
    - step:
        name: update image on test server
        script:
          - pipe: atlassian/ssh-run:0.4.0


Would love this too, instead of having to use size 2x on all my steps that use Docker just because I need over 3 GiB for Docker in one step.
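For what it's worth, size can at least be set per step, so only the memory-hungry step has to pay for 2x (a sketch; step names and commands are illustrative):

```yaml
pipelines:
  default:
    - step:
        name: build            # runs with the standard 4 GB
        services:
          - docker
        script:
          - docker build -t myapp .
    - step:
        name: test
        size: 2x               # only this step gets 8 GB
        services:
          - docker
        script:
          - docker-compose up --abort-on-container-exit
```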

Same here, I would like to specify docker memory for a specific step, not for all steps

Same question as this.

From what I remember, you can control the memory limit first of all by choosing between two options: the standard one (4 GB) and one that doubles the resources (8 GB).

If you want to bring the memory limit down from either of those, you have to add services. Each service carves memory out of the overall limit (1 GB each by default). E.g. having three services brings the limit for the pipeline script itself down from 4 GB to 1 GB.

Going from 8 GB to 6 GB does not look to be possible this way, only from 8 GB to 4 GB (one service), but then you could just run with 4 GB as the default.
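To illustrate that arithmetic (a sketch; the service names are made up, and each service reserves 1 GB by default):

```yaml
pipelines:
  default:
    - step:            # standard size: 4 GB total
        services:      # three services reserve about 3 GB in total
          - docker
          - database
          - cache
        script:        # roughly 1 GB is left for the step script itself
          - docker-compose up -d
```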

Running Docker rootless inside a pipeline would not add more options here, as it is itself limited when it comes to limiting resources:

Currently, rootless mode ignores cgroup-related docker run flags such as --cpus and --memory.

However, traditional ulimit and cpulimit can still be used, though they work at process granularity rather than container granularity.

E.g. see ulimit and sysctl, and the administration and configuration guide of your choice.

