How does source code mounting for Docker containers work?

jandyman February 19, 2021

I'm just getting started implementing CI/CD for embedded ARM code using Bitbucket. I've learned about Docker containers, found a suitable public image, and successfully done a headless build on my local machine with Docker Desktop; the embedded toolset is Eclipse for ARM (STM32CubeIDE).

In order to do that, I had to create a bind mount for the source code. I'm assuming that when using Bitbucket Pipelines, something similar is done with the branch(es) being processed. Can someone explain how this works with Bitbucket Pipelines? Without that understanding, I fear I'll fall into a pothole trying to get the CI/CD stuff working.

1 answer

0 votes
Theodora Boudale
Atlassian Team
February 22, 2021

Hi @jandyman,

By source code, I assume that you are referring to the source code of the repository where Pipelines is running? (If not, please feel free to correct me/clarify).

For every step of a Pipelines build, a Docker container starts with the image you have specified in your bitbucket-pipelines.yml file. If you haven't specified one, the image atlassian/default-image:latest will be used.

If you have builds that run on a branch, that branch is cloned into the Docker container with a default depth of 50 (you can change the clone depth in your yml file). The commands in the script section of your yml file are then executed in the container, in the directory where the repository was cloned.
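To make that concrete, here is a minimal bitbucket-pipelines.yml sketch; the image, step name, and build command are placeholders, not actual STM32CubeIDE instructions:

    image: atlassian/default-image:latest   # replace with your ARM toolchain image

    clone:
      depth: 50            # the default; can be a larger number or "full"

    pipelines:
      default:
        - step:
            name: Build firmware
            script:
              # These commands run inside the container, in the directory
              # where Pipelines cloned the repository (no bind mount needed).
              - ls
              - ./build.sh   # placeholder for your headless build command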

Builds for pull requests run a bit differently: the source branch is cloned into the Docker container and then merged with the destination branch, and the build runs on the merged code. Please note that no merge actually happens in the original Bitbucket repository; the code is merged only in the clone inside the Docker container where the build runs.
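Conceptually, and only as a rough sketch of the behaviour described above (the branch names and repository URL are made up, and these are not the exact commands Pipelines runs):

    git clone --depth 50 --branch my-feature git@bitbucket.org:myteam/myrepo.git
    cd myrepo
    git fetch origin main            # destination branch of the pull request
    git merge origin/main            # merge happens only in this local clone
    # the step's script then runs against this merged working tree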

If you run a custom pipeline (triggered manually) or manually trigger a pipeline for a specific commit, the repository is cloned and then a git checkout of that commit is done, so the clone is in a 'detached HEAD' state.
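Again only as a rough illustration (the commit hash and repository are placeholders):

    git clone git@bitbucket.org:myteam/myrepo.git
    cd myrepo
    git checkout 1a2b3c4d      # check out a specific commit rather than a branch
    git status                 # typically reports "HEAD detached at 1a2b3c4"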

I'm not sure how you've run your tests locally, but you can also check our guide on debugging Pipelines builds locally with Docker, which covers options for simulating the memory restrictions that apply in Pipelines.
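The general idea in that guide is to run your build image with Docker's memory limits so it behaves more like the Pipelines environment; note that locally you still bind-mount your source with -v, whereas in Pipelines the code is cloned into the container as described above. A rough sketch (the image name, paths, and limit values are just examples; please follow the guide for the exact options):

    docker run -it \
      --memory=4g --memory-swap=4g \
      -v /path/to/your/repo:/localDebugRepo \
      -w /localDebugRepo \
      --entrypoint=/bin/bash \
      your-arm-toolchain-image:latest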

If you have any questions, please feel free to let me know.

Kind regards,
Theodora
