
How is container memory allocated/shared for parallel steps?

Hi all,

I have a question regarding memory allocation for parallel steps.

Our setup looks like this:

definitions:
  steps:
    - step: &one
        name: "Step one"
        image: <some self hosted image>
        size: 2x
        script:
          - while true; do date && echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && echo "" && sleep 30; done >> memoryLogs.txt &
          # taken from https://support.atlassian.com/bitbucket-cloud/docs/generate-support-logs/#Container-memory-monitoring
          # some more commands
        caches: # caches being used
        artifacts:
          - memoryLogs.txt
          # some other artifacts

    - step: &two
        # similar to &one with some other scripts being executed

    - step: &three
        # ...

    # ...

pipelines:
  custom:
    Run-Numbered-Steps:
      - parallel:
          - step: *one
          - step: *two
          - step: *three
          # some more parallel steps, 12 in total
      - step:
          # some other stuff that will run sequentially

From my understanding, running steps in parallel will spin up an individual container for each step, meaning in our setup we would have 12 containers running at the same time. For 2 of these 12 steps we have defined `size: 2x` in the `definitions` section of our bitbucket-pipelines.yml. So with size 2x we would get up to 8 GB for the build container, and for the steps without `size: 2x` half of that, so 4 GB.
Now, if we inspect the memory logs that are being written for each of the 12 containers (including the ones with size 2x), we always get logging along the lines of:
Mon Jun 20 12:04:51 UTC 2022
Memory usage in megabytes:
3903

Mon Jun 20 12:05:21 UTC 2022
Memory usage in megabytes:
3896

Mon Jun 20 12:05:51 UTC 2022
Memory usage in megabytes:
3917

Mon Jun 20 12:06:21 UTC 2022
Memory usage in megabytes:
3754

Mon Jun 20 12:06:51 UTC 2022
Memory usage in megabytes:
3754

Mon Jun 20 12:07:21 UTC 2022
Memory usage in megabytes:
3735

Mon Jun 20 12:07:51 UTC 2022
Memory usage in megabytes:
3740

Now, to my actual question: are these containers simply not requiring the up to 7 GB of memory available to them for their script commands (hence the logs showing <= 4 GB), or are the containers somehow only getting up to 4 GB of memory, contrary to the size 2x definition? Or is there an entirely different issue at hand?

We initially tried to increase memory for some of these steps because the scripts running inside them (end-to-end tests) give weird results and/or fail randomly, essentially becoming flaky, while they run fine on local machines. On local machines, however, they are granted more than 3 GB of memory, which is why we also tried to increase memory for the test runs in the pipeline.

Sadly, the size property and memory allocation for containers in the context of parallel steps are not documented very well.

Thanks in advance. Any help, or pointers to the proper documentation, would be highly appreciated.

Best regards

Deniz

1 answer

Answer accepted
Patrik S
Atlassian Team
Jun 21, 2022

Hello @Cengiz Deniz,

Welcome to Atlassian Community!

Your understanding is correct. Each step, whether a regular or a parallel step, spins up its own container with its own set of resources: 8 GB for 2x steps and 4 GB for 1x steps.
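For illustration, here is a minimal sketch of what that means for a parallel block (the step names and scripts are placeholders, not taken from your configuration):

pipelines:
  custom:
    run-parallel:
      - parallel:
          - step:
              name: "Heavy step"
              size: 2x               # this step's container gets the 2x allocation (8 GB)
              script:
                - ./run-heavy-tests.sh
          - step:
              name: "Regular step"   # no size option, so the default 1x allocation (4 GB)
              script:
                - ./run-regular-tests.sh

Each of the two steps above runs in its own container, and the size option only affects the step it is defined on.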

The command you are currently using is printing the memory currently in use, not the memory available to the container.

So in this case, not seeing values greater than 4 GB in your log just means that your script does not currently require more than 4 GB of memory. It does get very close to that value at its peak (3903 MB in your logs), so if you limit the step to 4 GB (1x) you might eventually run into out-of-memory errors in Pipelines. When the build or a service container runs out of memory, the pipeline will fail and explicitly say that it is a memory-related issue.
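If you also want the log to show the ceiling the container was given, and not only the usage, one option (a sketch that assumes the same cgroup v1 layout your monitoring command already relies on; the memory.limit_in_bytes path is an assumption and may differ on other images) is to print the limit next to the usage:

- while true; do date && echo "Memory usage in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.memsw.usage_in_bytes | awk '{print $1}'`/1048576)) && echo "Memory limit in megabytes:" && echo $((`cat /sys/fs/cgroup/memory/memory.limit_in_bytes | awk '{print $1}'`/1048576)) && echo "" && sleep 30; done >> memoryLogs.txt &

With that in place, a 2x step should report a noticeably higher limit than a 1x step, which makes it easy to confirm the allocation directly from the memoryLogs.txt artifact.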

For more details about Pipelines memory allocation, you can refer to the following documentation:

Thank you, @Cengiz Deniz.

Kind regards,

Patrik S
