Hello all,
as one (sometimes failing) step of our pipeline we use the docker service, configured in bitbucket-pipelines.yml with the following definitions:
definitions:
  services:
    docker:
      memory: 1024
The documentation's explanation of memory usage is not clear for this case.
Could somebody please explain what's going on with the memory allocation to the containers, the docker host, and the step environment?
Interestingly, running the same tests with the same containers multiple times usually gives different results. It's understandable that the real load on the physical server changes over time and run times can differ - but memory?
Puzzled...
The "memory: 1024" setting you refer to is the total memory made available to the docker daemon running as part of your step. All containers started by docker as part of your step will share that memory, so if you're running 3 containers you could easily hit that limit.
As the documentation that you linked to says, you can increase that up to a maximum of 3GB (leaving 1GB for the rest of your step), or up to 7GB if you're using 2x step size.
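For example, a minimal bitbucket-pipelines.yml sketch that raises the docker service to the 7GB maximum on a 2x step might look like this (the step name and script are placeholders; 7168 MB assumes the 8GB 2x total minus 1GB reserved for the rest of the step):

definitions:
  services:
    docker:
      memory: 7168    # docker daemon memory in MB, shared by all containers it starts

pipelines:
  default:
    - step:
        name: Build with more Docker memory   # placeholder step name
        size: 2x                              # doubles the step's total memory to 8GB
        services:
          - docker
        script:
          - docker build .                    # placeholder command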
Thank you, Kenny, for confirming the behaviour!