Is it possible to access the GPU by using an NVIDIA Docker image instead of the base Python image? Will the pipeline be able to access the GPU, given that we are running a container inside a container?
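For reference, here is a minimal sketch of what the pipeline config would look like with an NVIDIA CUDA image on a self-hosted Linux Docker runner. The image tag and the `nvidia-smi` check are illustrative; whether the GPU is actually visible inside the step container depends on how the runner container itself was started (e.g. with the NVIDIA container runtime), not just on the image used here:

```yaml
# Sketch (assumptions noted above): run a step on an NVIDIA CUDA image.
image: nvidia/cuda:12.2.0-runtime-ubuntu22.04

pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux
        script:
          # Fails if the GPU is not passed through to the step container.
          - nvidia-smi
```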
I have been using Bitbucket Pipelines runners successfully for close to a year now. Excellent work on this!
I am starting to run into issues with some larger pipelines, though. I have limited space in /tmp (which is used as the working directory by default), but the mkfifo command for the build_result file fails if I point the runner at my larger storage. Is there any way to configure the runner to use the larger storage for the working directory but an mkfifo-capable filesystem for just /tmp (or, ideally, just build_result)?
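One possible workaround, sketched below under assumptions: the Linux Docker runner launch command that Bitbucket generates bind-mounts the host's /tmp into the runner container, and mkfifo needs a filesystem that supports FIFO special files (local ext4/xfs do; many network and FUSE mounts do not). If the larger storage is a local disk with such a filesystem, remapping that one mount may give both space and FIFO support. The host path, `-e` values, and container name here are placeholders; keep the exact command from your runner's setup page and change only the /tmp mapping:

```shell
# Sketch (untested): point the runner's /tmp at a larger local ext4 volume.
# Everything except the first -v mapping should come verbatim from the
# launch command Bitbucket generated for your runner.
docker container run -d \
  -v /mnt/bigdisk/runner-tmp:/tmp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -e ACCOUNT_UUID=... -e RUNNER_UUID=... \
  --name runner \
  docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:1
```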