Hi everyone,
Short background: I am developing a Python project that requires GPU access (CUDA).
I have experimented with the Bitbucket Pipelines runner on a workstation with GPUs, but unfortunately I cannot get it to work correctly. I am able to build a container with all the required CUDA dependencies etc. inside the step script, but afterwards it cannot be run, because it is essential to use the nvidia-docker runtime (https://github.com/NVIDIA/nvidia-docker).
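For context, this is the kind of invocation that works fine directly on the host (outside Pipelines) once the NVIDIA container runtime is installed; the CUDA image tag here is just an example:

```shell
# Works on the host: the NVIDIA runtime exposes the GPUs to the container.
# Newer setups (nvidia-container-toolkit) use --gpus:
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# Older nvidia-docker2 setups select the runtime explicitly:
docker run --rm --runtime=nvidia nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

It is this behaviour that I cannot reproduce from inside a pipeline step.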
I am passing `-v /var/run/docker.sock:/var/run/docker.sock` from the host to the runner, but the socket does not appear to be passed through to the container running a given step.
The docker service that can be added to each step does not help either: it seems to use the `docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-docker-daemon:v19-prod-stable` image, which does not include the nvidia-docker runtime.
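One idea I have not been able to verify: self-hosted runners reportedly allow overriding the Docker-in-Docker service image, so a sketch along these lines might be a starting point (the image name is a placeholder; I don't know of a dind image that actually ships with the NVIDIA runtime preinstalled):

```yaml
# Hypothetical sketch only: override the docker service image for a
# self-hosted runner step. The service image would need the NVIDIA
# container runtime baked in for GPU steps to work.
definitions:
  services:
    docker:
      image: docker:dind   # placeholder; would need nvidia-container-toolkit inside

pipelines:
  default:
    - step:
        runs-on:
          - self.hosted
          - linux
        services:
          - docker
        script:
          - docker info   # check which daemon/runtime the step actually sees
```

Whether such an override is honoured for GPU workloads is exactly the part I cannot confirm.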
My question is whether and how this can be done. I have already spent several hours trying to resolve it (in both simpler and more complex ways), and I am running out of feasible ideas.
Thanks in advance for any advice.