
Build Docker image with git submodule

I have an ASP.NET Core project that references my library via a git submodule.

Project structure:

-Project Folder:
---Dockerfile
---MY_APP:
------MY_APP.csproj
---MY_SUBMODULE
------MY_SUBMODULE.csproj
---MY_APP.sln

My Dockerfile:

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app

COPY . ./
RUN dotnet publish MY_APP -c Release -o publish

FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/publish .

ENTRYPOINT ["dotnet", "MY_APP.dll"]

My pipelines file:

image: mcr.microsoft.com/dotnet/sdk:5.0

pipelines:
  default:
    - step:
        name: Update submodules
        script:
          - apt-get update
          - apt-get install --yes openssh-client
          - git submodule update --init --recursive
    - step:
        name: Build and push
        caches:
          - dotnetcore
          - docker
        script:
          - docker build -f Dockerfile -t IMAGENAME .
          ....
        services:
          - docker
    - step:
        name: Deploy to server branch develop
        .....

On my local machine everything works, but the Pipelines build throws an error:
Skipping project "/app/MY_SUBMODULE/MY_SUBMODULE.csproj" because it was not found. I guess something is wrong with the submodule files.

The submodule update step completes successfully and logs Cloning into '/opt/atlassian/pipelines/agent/build/MY_SUBMODULE'..., but the Docker build doesn't seem to pick up the files from that path.

So, what is the proper way to build a Docker image inside a Bitbucket Pipeline for a project with submodules?


1 answer

Answer accepted

Hi @rchernigovskikh and welcome to the community!

I believe the culprit might be that the submodule is updated in a different step than the one that builds the Docker image.

Just to give you some context: for every step in a Pipelines build, a Docker container starts. The repo is cloned in that container, the commands of that step's script run, and then the container gets destroyed. Then, another container starts for the second step etc.

If files are generated or downloaded in a certain step, they won't be available to the next steps unless they are defined as artifacts. In your specific example, the submodule directory is empty until you run the command git submodule update --init --recursive, so the data is fetched during this first step. After this command, the build container for that step gets destroyed. A new container will then start for the second step, with a fresh clone, so the submodule data that was fetched in the previous step won't be available.
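For completeness, if you did want to keep two separate steps, the submodule directory could be declared as an artifact of the first step so that later steps can see its files. A minimal sketch, assuming the submodule is checked out at MY_SUBMODULE relative to the clone directory (artifact paths are glob patterns relative to that directory):

```yaml
- step:
    name: Update submodules
    script:
      - git submodule update --init --recursive
    artifacts:
      - MY_SUBMODULE/**
```

Combining the steps, as described below, is usually simpler than passing the submodule around as an artifact.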

If you combine the first and second steps into one, the submodule files should be available in the build's clone directory when you build your Docker image.
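Applied to your file, the merged step could look roughly like this (a sketch based on your pipeline, with IMAGENAME kept as your placeholder):

```yaml
image: mcr.microsoft.com/dotnet/sdk:5.0

pipelines:
  default:
    - step:
        name: Update submodules, build and push
        caches:
          - dotnetcore
          - docker
        script:
          - apt-get update
          - apt-get install --yes openssh-client
          - git submodule update --init --recursive
          - docker build -f Dockerfile -t IMAGENAME .
        services:
          - docker
```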

Kind regards,
Theodora

Thanks for the help!
