This question is in reference to Atlassian Documentation: Limitations of Bitbucket Pipelines (beta)
In pipelines, are there limits on how much disk space you can use during the build, and what is that limit?
We have one project (sort of a custom packet-based db) whose tests generate some pretty large files (deleted after each test), but they're all well under 5GB, perhaps 1GB of files at most. I'm investigating some test failures there and wanted to make sure we're not hitting a limit.
For reference, we added some diagnostics to our build steps, so I don't think disk space is the issue. We have noticed odd behavior of the Docker filesystem in other tests, though (we're fairly new to Docker): the filesystem doesn't sync promptly at all, even with explicit syncs in our code. So perhaps we're seeing something similar in these tests too.
+ df -h
Filesystem Size Used Avail Use% Mounted on
none 197G 65G 125G 35% /
tmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/xvda1 197G 65G 125G 35% /root/.m2
shm 64M 0 64M 0% /dev/shm
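In case it helps anyone adding similar diagnostics, here is a minimal sketch of a disk-space check you could drop into a build step. The 5 GiB threshold and the `/` mount point are assumptions for illustration; adjust them to your pipeline.

```shell
#!/bin/sh
# Fail the build early if free space on the build volume drops below a
# threshold, instead of letting tests fail in confusing ways later.
REQUIRED_KB=$((5 * 1024 * 1024))   # 5 GiB, expressed in 1K blocks (assumption)
AVAIL_KB=$(df -Pk / | awk 'NR==2 {print $4}')
if [ "$AVAIL_KB" -lt "$REQUIRED_KB" ]; then
  echo "Insufficient disk space: ${AVAIL_KB} KB available, ${REQUIRED_KB} KB required" >&2
  exit 1
fi
echo "Disk check OK: ${AVAIL_KB} KB available"
```

Running this before and after the large-file tests makes it easy to see whether the build is actually approaching a limit.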
I'm trying to recreate the failures in local Docker, but I get different results.
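On the sync issue: one thing worth trying in the test code is forcing the written data to disk explicitly before the next step reads it. A sketch, where the file path is a hypothetical stand-in for one of the large test files:

```shell
# Write a large temporary file and force it to disk before it is read back.
# 'conv=fsync' makes dd fsync the output file before exiting, and
# 'sync -f PATH' (GNU coreutils >= 8.24) flushes the filesystem containing
# PATH; a plain 'sync' flushes all filesystems.
TESTFILE=/tmp/big-test-file.bin   # hypothetical path for illustration
dd if=/dev/zero of="$TESTFILE" bs=1M count=16 conv=fsync
sync -f "$TESTFILE"
ls -lh "$TESTFILE"
rm -f "$TESTFILE"
```

If the local runs still behave differently from Pipelines after explicit flushes like this, the difference is more likely in the storage driver or image configuration than in the test code itself.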