How can I configure different docker memory limits per pipeline step

Shane McNamara July 27, 2021

I have a React app that has multiple steps in the pipeline. Two of these steps seem to require conflicting memory limits for the docker service.

The two steps are:

  1. Static analysis with Sonarcloud
  2. Building via react-scripts build

 

The first step runs static analysis using Sonarcloud:

- step: &StaticAnalysis
    name: Static Analysis with SonarCloud
    image: atlassian/default-image:2  # quickest image
    size: 2x
    script:
      - pipe: sonarsource/sonarcloud-scan:1.2.1
      - pipe: sonarsource/sonarcloud-quality-gate:0.1.4

This step fails with the error "Container 'docker' exceeded memory limit." unless I increase the docker service memory limit, like so:

definitions:
  services:
    docker:
      memory: 2048

 

However, when I increase the docker memory limit, the next step fails with the error "Container 'Build' exceeded memory limit." If I remove the service definition above, the build step passes fine.

I tried putting:

services:
  docker:
    memory: 2048

under the step, but this produced a YAML error from the pipeline saying that services is expected to be a list, not a map.
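As far as I can tell, the step-level services key only accepts a plain list of service names, with nowhere to hang a memory value (a minimal example, not my real step):

- step:
    name: Example
    services:
      - docker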

Is there a way for me to configure the docker memory limit per step?

1 answer

Theodora Boudale
Atlassian Team
July 28, 2021

Hi @Shane McNamara,

I'm afraid that it is not possible to configure memory for services on a step level. The memory that you configure for a service will be the same in all steps that use this specific service (in this case docker).
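To illustrate (a minimal sketch, not your actual configuration): with the definition below, Step A and Step B both get the exact same 2048 MB docker service, and there is no step-level field to override it.

definitions:
  services:
    docker:
      memory: 2048

pipelines:
  default:
    - step:
        name: Step A
        services:
          - docker  # uses the shared 2048 MB docker service
    - step:
        name: Step B
        services:
          - docker  # same service, same 2048 MB; it cannot differ per step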

We have a feature request for allowing configuration of service memory on a step level:

Since you'd be interested in that, I would suggest adding your vote to the feature request (by selecting the Vote for this issue link), as the number of votes helps the development team and product managers better understand the demand for new features. You are more than welcome to leave any feedback, and you can also add yourself as a watcher (by selecting the Start watching this issue link) if you'd like to be notified by email of updates.

Implementation of new features is done as per our policy here and any updates will be posted in the feature request.

You included in your question the definition of the first step, and I see that it includes size: 2x so it has 8GB of memory in total. If I understand correctly, the docker service needs 2048 MB of memory for this step to work, and if you allocate less than 2048 MB the step fails, is this the case?

Regarding the second step, I assume it also uses the docker service? Does this second step succeed when you allocate to the service less than 2048 MB of memory, and fail when you allocate 2048 MB? Are you using size: 2x for this step as well? I just want to make sure I understand what happens, so we can see if there is a way around this.

Kind regards,
Theodora

Shane McNamara July 28, 2021

Hi Theodora,


Both steps are using `size: 2x`. 

Step 1 requires 2048 MB of memory and fails with less.

Step 2 requires less than 2048 MB of memory and fails with more.

 

Here's what the YAML looks like, in part:

definitions:
  services:
    docker:
      memory: 2048
  caches:
    sonar: ~/.sonar/cache  # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &StaticAnalysis
        name: Static Analysis with SonarCloud
        image: atlassian/default-image:2  # quickest image
        size: 2x
        script:
          - pipe: sonarsource/sonarcloud-scan:1.2.1
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
        caches:
          - docker
        services:
          - docker
    - step: &BuildDeploy
        name: Build and Deploy
        script:
          - npm run build

I'm not explicitly setting the docker service in the second step (BuildDeploy), but the outcome of that step changed when I modified the docker service memory definition.

I was able to get the BuildDeploy step to work by disabling Source Maps in my build process, but I would really prefer to have them.

Theodora Boudale
Atlassian Team
July 30, 2021

Hi @Shane McNamara,

Thank you for the info.

If the docker service is not used in the step "Build and Deploy", it shouldn't affect the memory for that step.

Right now the docker service has a memory of 2048 MB, and you have disabled Source Maps in your build process. Does the build step still have size: 2x? (Just checking, because I don't see it in the part of the yaml you copy-pasted here.)

My suggestion for troubleshooting this would be to include the following commands at the beginning of the script of the build step:

- while true; do ps -aux && sleep 30; done &
- while true; do echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && sleep 0.1; done &

Enable source maps as well. The commands above will print memory usage in the Pipelines log while the step executes, giving us some insight into which processes are consuming a lot of memory during this step and causing it to fail.
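In case it's easier to read, here is the same pair of commands written out as plain shell with comments (the cgroup path is the cgroup v1 location exposed inside Pipelines build containers):

# Print the full process table every 30 seconds, in the background,
# so the Pipelines log shows which processes are holding memory
while true; do ps -aux && sleep 30; done &

# Every 100 ms, read the cgroup's memory+swap usage counter (in bytes)
# and print it in megabytes
while true; do
  echo "Memory usage in megabytes:"
  echo $(( $(cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes) / 1048576 ))
  sleep 0.1
done &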

Please feel free to attach here the Pipelines log (or part of it, sanitizing any sensitive info) so we can check the memory usage.

Kind regards,
Theodora

Shane McNamara August 4, 2021

Hi @Theodora Boudale

Sorry for the delay. I've run the pipeline with the script steps you suggested. Some observations:

  • Removing the docker service from the Build step does indeed fix the issue here. Not sure what I was seeing before.
  • The memory issue is caused by react-scripts (more specifically, Babel); you can read about it here. From the logs, "react-scripts build" consumes 6+ GB of memory, which is more than is available (when the docker service is using 2 GB).
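For anyone else who lands here, the two standard knobs for taming a Create React App build's memory look like this in the step script. GENERATE_SOURCEMAP is a stock Create React App setting and --max-old-space-size is a stock Node.js flag (6144 is just an illustrative value), so this is a sketch rather than my exact configuration:

- step: &BuildDeploy
    name: Build and Deploy
    size: 2x
    script:
      # Skip source map generation (this is what fixed the step for me)
      - GENERATE_SOURCEMAP=false npm run build
      # Alternative: raise Node's heap ceiling so the build can use most of
      # the container's memory; it will still OOM if the build needs more
      # - NODE_OPTIONS=--max-old-space-size=6144 npm run build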

Thanks for all the help

Shane McNamara August 4, 2021

@Theodora Boudale

Something interesting has happened regarding the docker service. When I'm running both steps, only the steps that explicitly set:

services:
  - docker

receive the new docker memory limit. However, if I'm only running a single step, that step receives the docker limit even if I have not set the service.

With this YAML:

definitions:
  services:
    docker:
      memory: 4096
  caches:
    sonar: ~/.sonar/cache  # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &StaticAnalysis
        name: Static Analysis with SonarCloud
        image: atlassian/default-image:2  # quickest image
        size: 2x
        script:
          - pipe: sonarsource/sonarcloud-scan:1.2.1
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4
        caches:
          - docker
        services:
          - docker
    - step: &BuildDeploy
        name: Build and Deploy
        size: 2x
        script:
          - npm run build

The pipeline passes when both steps are run sequentially (StaticAnalysis followed by BuildDeploy), even when "BuildDeploy" is showing 6+ GB of memory usage.

When I run just the "BuildDeploy" step, the build fails with the "Container 'Build' exceeded memory limit." error once the step hits 4 GB of memory usage, which implies to me that the docker service's 4096 MB is being carved out of the step's total (8192 MB for a 2x step, minus 4096 MB for docker, leaves 4096 MB for the build container, which is exactly where it died).

Shane McNamara August 4, 2021

Third update:

I have two branches running identical code/pipelines; one fails consistently and the other passes consistently. Any idea what's going on here? Is there some branch/commit-specific caching happening?

Caroline R
Atlassian Team
August 6, 2021

Hi, @Shane McNamara

Theoretically, since you are not using a docker service in &BuildDeploy, it should give you the full 7 GB for the build container. In this case, we will need additional details to investigate this issue further:

  • Could you please confirm if the yml file you shared is the full yml that you are using?
  • Also, could you go to the top right of your build log and hover your mouse pointer there, like this:

[Screenshot: Screen Shot 2021-08-06 at 9.59.51 AM.png]

That would give us a better understanding of this case. Thank you! 

Kind regards,
Caroline 

Shane McNamara August 6, 2021

Hi @Caroline R

 

  1. It is not the full YAML; there are some minor things I've left out (e.g. export variables). I also left out the subsequent `Deploy` pipe, which uses `atlassian/aws-s3-deploy:0.4.4`, but the pipeline crashes before that step.
  2. Here's a screenshot from a failed pipeline that just runs the "BuildDeploy" step:
     [Screenshot: Screen Shot 2021-08-06 at 8.30.01 AM.png]

     One from a pipeline that runs both "StaticAnalysis" and "BuildDeploy" and passed:
     [Screenshot: Screen Shot 2021-08-06 at 8.38.26 AM.png]

     And one from the exact same code that runs both "StaticAnalysis" and "BuildDeploy" and failed:
     [Screenshot: Screen Shot 2021-08-06 at 8.39.24 AM.png]

So it appears as if the size parameter is not being applied? The only difference between the two bottom screenshots is that they were run on different branches.

Caroline R
Atlassian Team
Atlassian Team members are employees working across the company in a wide variety of roles.
August 11, 2021

Hi, @Shane McNamara

Thanks for getting back to us and for providing the additional details. In order to investigate this issue further, we'll need to analyze this YAML, so I've created a ticket on your behalf with our support team. You should receive an email with this info, and we'll contact you to work on this case.

Please let me know if you have any questions. 

Kind regards,
Caroline 
