Specify docker memory limit for a specific step

I would like to configure the docker memory limit differently depending on the step I run.

I have a step that runs tests with docker-compose and uses a lot of memory, while other steps just build the docker image.

For instance, one step that uses docker with 1GB and another with 6GB to hold my several services.

Is it possible to override the docker memory limit for a specific step?
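For context, the only place this limit can currently be set is the global service definition in bitbucket-pipelines.yml, which then applies to every step that uses the docker service. A minimal sketch (the 6144 value is just an assumption for illustration):

```yaml
# bitbucket-pipelines.yml (fragment)
# This memory setting applies to the docker service in EVERY step that
# uses it -- there is no per-step override at this level.
definitions:
  services:
    docker:
      memory: 6144   # MB; assumed value for illustration
```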

7 answers

We have the exact same requirement as @Victor Fleurant. We run our tests in docker-compose and sometimes get "Container 'docker' exceeded memory limit." It's really annoying and breaks the whole CI process.

We just need the "Docker" service in the test step to have more memory, but there's no way to define that other than adding a memory limit in "definitions", which affects every other step that uses the "Docker-in-Docker" service.

Any chance of making a feature request for this?

Agreed, this is really annoying and not clearly documented.
As a workaround, I found that you can redefine (extend) the docker service and use that in your step instead. That way its definition does not affect the basic docker service, and you don't have to adjust the size of other steps.

The drawback is that you can no longer simply use `caches: - docker`. Maybe there is a workaround for that as well...

As always, Atlassian's documentation is just not detailed enough.

```yaml
definitions:
  services:
    docker-6g:
      type: docker
      memory: 6144

pipelines:
  branches:
    cd:
      - step:
          size: 2x
          name: build docker image
          services:
            - docker-6g
          max-time: 20
          script:
            ...
      - step:
          name: update image on test server
          script:
            - pipe: atlassian/ssh-run:0.4.0
              variables:
                ....
```

 

Would love this too, instead of having to use size 2x on all my steps that use docker because one step needs over 3 GiB for docker.

Same here, I would like to specify docker memory for a specific step, not for all steps

Same question as this.


From what I remember, you can control the overall memory limit first of all by choosing between two step sizes: the standard one (4GB) and one that doubles the resources (8GB, via size: 2x).

If you want to bring the memory available to the script down from either of those, you have to add services. Each service takes its share of the overall limit: for example, three services bring the limit for the pipeline script itself down from 4GB to 1GB.

Going from 8GB down to 6GB does not look to be possible this way, only from 8GB to 4GB (one service), but then you could just run with the default 4GB.
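The division described above can be sketched in a step definition (service names other than docker are hypothetical, and the per-service share of 1024 MB matches the three-services example):

```yaml
# Sketch only: a standard-size step has 4096 MB in total.
pipelines:
  default:
    - step:
        services:        # each service takes its 1024 MB share:
          - docker       # 4096 - 1024 = 3072 MB left
          - postgres     # 3072 - 1024 = 2048 MB left (hypothetical service)
          - redis        # 2048 - 1024 = 1024 MB left (hypothetical service)
        script:
          - ./run-tests.sh   # the script itself runs with the remaining 1024 MB
```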

Running docker rootless inside a pipeline would not add more options here, as its own support for resource limits is limited. From the rootless mode documentation:

Currently, rootless mode ignores cgroup-related docker run flags such as --cpus and --memory.

However, traditional ulimit and cpulimit can still be used, though they work at process granularity rather than container granularity.

E.g. see ulimit and sysctl (www.LinuxHowtos.org) and the administration and configuration guide of your choice.
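As a hedged illustration of the process-granularity point: ulimit applies to processes started in the same shell, so it can cap memory from within a step's script even where container-level limits are unavailable. The 2 GiB value is an arbitrary assumption:

```shell
#!/bin/sh
# Cap the virtual memory of processes spawned by this shell.
# ulimit -v takes a value in KiB: 2097152 KiB = 2 GiB (assumed value).
ulimit -v 2097152
# Confirm the limit took effect; subsequent commands in this script
# (e.g. a test run) inherit it.
ulimit -v
```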

 
