System.OutOfMemoryException error on build step

Eric Alexis Pap
I'm New Here
January 10, 2024

Hello, I need some help here. I've been running a pipeline for my project for over a month and recently started getting random failures in the build step. I managed to work around it temporarily by adding the size: 2x option, but now not even that helps.

What are my options here? I really need help.

Eric

This is the error I'm getting:

[Screenshot attachment: Bitbucket Error.png]

And this is my pipeline:

definitions:
  steps:
    - step: &build-branch
        name: Build Feature Branch
        image: mcr.microsoft.com/dotnet/sdk:6.0
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet build --no-restore

    - step: &build-netcore
        name: Build Docker Image
        image: mcr.microsoft.com/dotnet/sdk:6.0
        services:
          - docker
        caches:
          - pip
        script:
          - apt-get update && apt-get install -y libgdiplus python3-pip nodejs npm wget
          - wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
          - dpkg -i packages-microsoft-prod.deb
          - apt-get update && apt-get install -y apt-transport-https
          - apt-get update && apt-get install -y dotnet-sdk-6.0
          - pip3 --version
          - dotnet --info
          - export CI=true
          - pip3 install awscli
          - IMAGE=$AWS_ECR_REPOSITORY_DOMAIN/$AWS_ECR_IMAGE_NAME
          - PROJECT_VERSION=`node -p "require('./project.json').version"`
          - TAG=$PROJECT_VERSION
          - aws configure set aws_access_key_id "${AWS_ACCESS_KEY_ID}"
          - aws configure set aws_secret_access_key "${AWS_SECRET_ACCESS_KEY}"
          - eval $(aws ecr get-login --no-include-email --region ${AWS_ACCESS_KEY_REGION} | sed 's;https://;;g')
          - dotnet restore
          - dotnet publish -c Release -o ./docker/publish
          - docker build -t $IMAGE:$TAG .
          - docker tag $IMAGE:$TAG $AWS_ECR_REPOSITORY_DOMAIN/$AWS_ECR_IMAGE_NAME:$TAG
          - docker tag $IMAGE:$TAG $AWS_ECR_REPOSITORY_DOMAIN/$AWS_ECR_IMAGE_NAME:latest
          - docker push $IMAGE

    - step: &deploy-netcore
        name: Deploy Docker Container
        image: mcr.microsoft.com/dotnet/aspnet:6.0
        services:
          - docker
        caches:
          - pip
        script:
          - apt-get update && apt-get install -y libgdiplus python3-pip wget
          - export CI=true
          - pip3 install awscli
          - aws configure set aws_access_key_id "${AWS_ACCESS_KEY_ID}"
          - aws configure set aws_secret_access_key "${AWS_SECRET_ACCESS_KEY}"
          - export AWS_DEFAULT_REGION="${AWS_ACCESS_KEY_REGION}"
          - eval $(aws ecr get-login --no-include-email --region "${AWS_ACCESS_KEY_REGION}" | sed 's;https://;;g')
          - aws ecs update-service --service $SERVICE_NAME --cluster $CLUSTER_NAME --force-new-deployment

    - step: &migrations-netcore
        name: Run Migrations
        image: mcr.microsoft.com/dotnet/sdk:6.0
        services:
          - docker
        caches:
          - pip
        script:
          - apt-get update && apt-get install -y libgdiplus python3-pip nodejs npm wget
          - wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
          - dpkg -i packages-microsoft-prod.deb
          - apt-get update && apt-get install -y apt-transport-https
          - pip3 --version
          - dotnet --info
          - export CI=true
          - pip3 install awscli
          - aws configure set aws_access_key_id "${AWS_ACCESS_KEY_ID}"
          - aws configure set aws_secret_access_key "${AWS_SECRET_ACCESS_KEY}"
          - eval $(aws ecr get-login --no-include-email --region ${AWS_ACCESS_KEY_REGION} | sed 's;https://;;g')
          - dotnet new tool-manifest
          - dotnet tool install --local dotnet-ef --version 7.0.14
          - dotnet ef migrations script --project OhmioData --startup-project OhmioWEBAPI --context EmpresaContext --idempotent --output migrations.sql
          - aws s3 cp ./migrations.sql "${MIGRATIONS_BUCKET_DEST}"
          - echo "Finalizado"

options:
  size: 2x

pipelines:
  default:
    - step: *build-branch

  branches:
    develop:
      - step:
          <<: *build-netcore
          name: Build Dev environment
          deployment: DevelopBuild

      - step:
          <<: *deploy-netcore
          name: Deploy Dev environment
          deployment: DevelopDeploy

      - step:
          <<: *migrations-netcore
          name: Migrations Dev environment
          deployment: DevelopMigrations

    master:
      - step:
          <<: *build-netcore
          name: Build Prod environment
          deployment: ProductionBuild

      - step:
          <<: *deploy-netcore
          name: Deploy Prod environment
          deployment: ProductionDeploy

      - step:
          <<: *migrations-netcore
          name: Migrations Prod environment
          deployment: ProductionMigrations

 

1 answer

Answer accepted
Patrik S
Atlassian Team
January 11, 2024

Hello @Eric Alexis Pap and welcome to the community!

From the YAML you shared, I can see that the step failing with the memory error uses a docker service.

For some context, the memory available to a step is split between the build container (where the commands in your step's script are executed) and any services defined in the step (such as the docker service). A size: 2x step has 8GB of memory available, and by default a service uses 1GB out of that total. This leaves 7GB for the build container and 1GB for the docker service (you can learn more about memory allocation in Databases and service containers).

It looks like the dotnet command you are executing (which is run inside the build container) needs more than 7GB of memory, which is causing the error you reported.

In this case, you can try reducing the memory allocated to the docker service so that more memory is left for the build container. In the following example, we reduce the docker service memory from the default 1GB to 512MB:

definitions:
  services:
    docker:
      memory: 512  # reduce memory for docker-in-docker from 1GB to 512MB
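For reference, merged into the definitions section you already have, this would look roughly like the sketch below (only the relevant keys are shown, the rest of your steps stay exactly as they are):

definitions:
  services:
    docker:
      memory: 512            # docker-in-docker service capped at 512MB
  steps:
    - step: &build-netcore
        name: Build Docker Image
        image: mcr.microsoft.com/dotnet/sdk:6.0
        services:
          - docker           # this step's docker service picks up the 512MB setting
        # ... rest of the step unchanged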

This would make the build container receive 7.5GB of memory instead of the previous 7GB, which should give the dotnet command more room to complete its execution. The memory allocated to a service can be set as low as 128MB. However, depending on the docker commands you are executing, the service container itself may also run out of memory, so you need to find a balance in the memory allocation based on your use case and the commands you run in your step.
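The same memory key also works in the other direction: if it turns out the docker service (rather than the build container) is the one running out of memory during docker build, you can raise its allocation instead. A sketch, with 3072 chosen purely for illustration (the exact upper limit depends on the step size):

definitions:
  services:
    docker:
      memory: 3072  # give docker-in-docker ~3GB, leaving ~5GB for the build container on a 2x step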

That being said, pipelines that run on Atlassian infrastructure are currently limited to size: 2x (8GB). In case you need more than 8GB of memory, you may also want to explore using Linux Docker self-hosted runners. Runners execute on your own infrastructure and can be configured up to size: 8x (32GB of memory) if the runner's host has that resource available.
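As a rough sketch (the runs-on labels below assume a default Linux Docker runner registration; adjust them to match the labels of your own runner), a step dispatched to a self-hosted runner with a larger size would look something like:

- step:
    <<: *build-netcore
    name: Build Dev environment
    deployment: DevelopBuild
    runs-on:
      - self.hosted
      - linux
    size: 4x  # 16GB of memory; up to 8x (32GB) if the runner host has it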

Thank you, @Eric Alexis Pap!

Patrik S
