I am currently facing an issue with a pipeline in one of my repositories. The pipeline consistently fails with an out-of-memory error.
To address this, I have tried increasing the memory allocation, including doubling it with the 2x option in the pipeline configuration file. Despite these attempts, the problem persists.
Hi @Software Team and welcome to the community!
Does the Pipelines step that fails with this error use any services? If you are unsure what services are in Pipelines, I mean something like this:
pipelines:
  default:
    - step:
        services:
          - redis
        script:
          - echo "test"
definitions:
  services:
    redis:
      image: redis:3.2
Using something like the following in your yml file will also add a docker service to each step:
options:
  docker: true
The reason I am asking is that each service gets 1024 MB of memory by default (unless you specify otherwise), so the build container then has less memory available. In that case, we also need to understand whether it is the build container or the service that needs more memory.
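If a service turns out to be the one that needs more (or less) memory, you can override its allocation with the memory attribute in the service definition. A minimal sketch, assuming a Redis service like the one above; the 512 value is only illustrative:

```yaml
definitions:
  services:
    redis:
      image: redis:3.2
      memory: 512   # MB reserved for this service (default is 1024)
```

Lowering a service's allocation leaves more of the step's total memory for the build container itself.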
You can add the following commands in your yml file, at the beginning of the script of the step that fails with this error:
- while true; do date && ps -aux && sleep 5 && echo ""; done &
- while true; do date && echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && echo "" && sleep 5; done &
These commands will print memory usage throughout the step in the Build log and they can help you figure out which processes consume a lot of memory.
The maximum memory that can be configured for a Pipelines build in our own infrastructure is 8GB with the size: 2x option. We have a feature request for the ability to increase that limit:
If the build cannot be configured to use less than 8GB of memory, you can look into using Runners in one of your servers and run this step on a runner:
Runners allow you to run builds in Pipelines on your own infrastructure, and you won’t be charged for the build minutes used by your self-hosted runners. With a runner, it is possible to configure up to 32GB (8x) of memory to run your builds (if the host machine has that memory available).
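As an illustration, a step can be routed to a self-hosted runner by matching the labels you assigned when registering the runner. A minimal sketch; the self.hosted and linux labels are the defaults for a Linux runner, and size: 8x assumes the host machine has that memory available:

```yaml
pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux
        size: 8x
        script:
          - npm run build
```

Steps without a runs-on section continue to run on Atlassian's infrastructure as usual.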
Kind regards,
Theodora
I just have a React repository, and the step that is failing is [npm run build]. I already used the size: 2x option, but with no result.
Hi,
You can add at the beginning of the script in your yml file the commands I provided in my previous reply to see memory usage per process and figure out what is consuming memory.
If you would like help with how to configure your build to use less memory you can reach out to the support team or a forum specific to the tools you are using. There may also be a bug with one of these tools causing memory consumption issues. For example, I've seen customers reporting memory issues with their builds and they found out that the culprit was this bug: https://github.com/jestjs/jest/issues/11956. A forum specific to the tools you are using should be able to provide you with better help.
Builds running on Atlassian's infrastructure can use up to 8 GB memory with the size: 2x option. If there is no bug and if your build cannot be configured to use less memory, you can look into using one of our runners (I provided details in my previous reply).
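As one example of configuring a build to use less memory: for Node-based builds such as npm run build, the Node heap size is a common knob. This is an assumption about your setup, not something confirmed in this thread; the 4096 value is only illustrative:

```yaml
- step:
    size: 2x
    script:
      # Cap Node's old-space heap at 4 GB so the build stays within
      # the container's limit (adjust the value for your project).
      - export NODE_OPTIONS="--max-old-space-size=4096"
      - npm run build
```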
Kind regards,
Theodora