This question is in reference to Atlassian Documentation: Limitations of Bitbucket Pipelines (beta)
In Pipelines, is there a limit on how much disk space a build can use, and if so, what is it?
We have one project (sort of a custom packet-based DB) whose tests generate some pretty large files (which get deleted after each test), but they're all much less than 5GB - perhaps 1GB of files at most. I'm investigating some test failures there and wanted to make sure we're not hitting some limit.
For reference, we added some diagnostics to our build steps, so I don't think disk space is the issue. We have noticed odd behavior of the Docker filesystem in other tests, though (we're pretty new to Docker): the filesystem doesn't sync quickly at all, even with explicit syncs in our code. So perhaps we're seeing something like that in these tests too.
+ df -h
Filesystem      Size  Used Avail Use% Mounted on
none            197G   65G  125G  35% /
tmpfs            16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /sys/fs/cgroup
/dev/xvda1      197G   65G  125G  35% /root/.m2
shm              64M     0   64M   0% /dev/shm
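In case it helps anyone check the same thing, here is a minimal sketch of the kind of diagnostics step we added to bitbucket-pipelines.yml. The image and the mvn command are placeholders for our actual setup, not what we literally run:

  # Minimal sketch of a diagnostics step in bitbucket-pipelines.yml
  # (image and test command are placeholders for our real setup)
  pipelines:
    default:
      - step:
          image: maven:3-jdk-8        # placeholder build image
          script:
            - df -h                   # disk usage before the tests run
            - mvn test                # the suite that generates the large files
            - df -h                   # disk usage after, to spot leaked files

Comparing the before/after df -h output is what convinced us the large test files really are being deleted.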
I'm trying to recreate the failures in local Docker, but I get different results.
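For the local reproduction, this is roughly what I'm doing; maven:3-jdk-8 here is a placeholder for whatever image the pipeline actually uses, since Pipelines runs the build inside a Docker container and running the same image locally seemed like the closest approximation:

  # Rough local approximation of the Pipelines build container
  # (the image name is a placeholder, not necessarily what Pipelines uses)
  docker run --rm -it -v "$PWD":/work -w /work maven:3-jdk-8 /bin/bash
  # ...then inside the container:
  df -h      # compare against the Pipelines df output above
  mvn test   # re-run the failing tests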