Specify docker memory limit for a specific step

Victor Fleurant March 3, 2020

I would like to configure the Docker memory limit differently depending on the step I run.

I have a step that performs tests and uses a lot of memory through docker-compose, but other steps just build the Docker image.

For instance, one step that uses Docker with 1GB and another one with 6GB to hold my several services.

Is it possible to override the Docker memory limit for a specific step?

7 answers

1 vote
Viraj Dayarathne July 2, 2020

We have the exact same requirement as @Victor Fleurant. We run our tests in docker-compose and sometimes get "Container 'docker' exceeded memory limit." It's really annoying and breaks the whole CI process.

We just need the "docker" service in the test step to have more memory, but there is no way to define that other than adding a memory limit under "definitions", which affects every other step that uses the Docker-in-Docker service.
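
For reference, the only knob available today is the global one, roughly like this (a sketch; 3072 is just an example value, and it applies to every step that lists the service):

definitions:
  services:
    docker:
      memory: 3072

pipelines:
  default:
    - step:
        size: 2x
        name: run tests
        services:
          - docker          # the Docker-in-Docker service gets the same 3072 MB here...
        script:
          - docker-compose up --build --abort-on-container-exit
    - step:
        name: build image only
        services:
          - docker          # ...and also here, where we don't need that much
        script:
          - docker build -t my-app .   # hypothetical image name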

Any chance of making a feature request for this?

0 votes
Emptyfruit December 29, 2021

Agree, this is really annoying and not clearly documented.
As a compromise workaround I found that you can redefine (extend) the docker service under another name and use that in your step instead. That way your definition does not affect the basic docker service and you don't have to adjust the size of the other steps.

The drawback is that you cannot simply use `caches: - docker` anymore. Maybe there is a workaround for this as well...

As always, Atlassian's documentation is just not detailed enough.

definitions:
  services:
    docker-6g:
      type: docker
      memory: 6144

pipelines:
  branches:
    cd:
      - step:
          size: 2x
          name: build docker image
          services:
            - docker-6g
          max-time: 20
          script:
            ...

      - step:
          name: update image on test server
          script:
            - pipe: atlassian/ssh-run:0.4.0
              variables:
                ....

 

0 votes
Mathieu Lemay May 13, 2021

Would love this too, instead of having to use size 2x on all my steps that use Docker just because I need over 3GiB for Docker in one step.

0 votes
etiennecaldichourypys May 5, 2021

Same here, I would like to specify the Docker memory for a specific step, not for all steps.

0 votes
Wei Wei March 26, 2021

Same question as this.

0 votes
Joris Vleminckx October 5, 2020

+1 please

0 votes
ktomk March 6, 2020

From what I remember, you can control the memory limit first of all by choosing between two options: the standard size (4GB) and one that doubles the resources (8GB).

If you want to bring the memory limit down from either of those two, you have to add services. Each service takes a share of the overall limit, e.g. having three services brings the limit for the pipeline script itself down from 4GB to 1GB.

Going from 8GB to 6GB does not look mathematically possible this way, only from 8GB to 4GB (one service), but then you could just run with the default 4GB instead.
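
To make the first point concrete, the doubling is the step-level size option; a rough sketch (the script line is just a placeholder):

pipelines:
  default:
    - step:
        size: 2x            # 8GB for the step instead of the standard 4GB
        services:
          - docker          # each declared service takes its share out of that total
        script:
          - docker info     # placeholder for the actual build/test commands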

Running rootless Docker inside a pipeline would not add more options here, as it is itself limited when it comes to limiting resources:

Currently, rootless mode ignores cgroup-related docker run flags such as --cpus and --memory.

However, traditional ulimit and cpulimit can be still used, though they work in process-granularity rather than in container-granularity.

E.g. see ulimit and sysctl (www.LinuxHowtos.org) and the administration and configuration guide of your choice.
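
A rough sketch of what that could look like in a step script (ulimit caps memory per process, not per container, as the quote above notes; the test command is hypothetical):

- step:
    name: tests with a per-process memory cap
    script:
      - ulimit -v 1048576   # limit virtual memory per process to ~1 GiB (value in KiB)
      - ./run-tests.sh      # hypothetical test command, now subject to that limit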

 
