Pipelines best practices for multi-repo projects?

Daniel
Contributor
July 10, 2019

We're in the process of automating our test and integration procedures, but we've hit a roadblock figuring out the “proper” or “best” way to handle integration tests and browser testing for a product that spans multiple repositories and projects within Bitbucket.

(My apologies for the long-winded question)

We are primarily developing in Java, as a webapp and a REST layer. We also have several Node.js projects, each providing a REST service, plus a couple of database services. Each service runs in its own Docker container. Here’s a partial overview of our repo layout:

Project 1:

  • Repo_1_1: This is the “main” product. It is a large Maven project containing multiple modules.
    • Contains unit tests

Project 2:

  • Repo_2_1: This is a standalone Maven project. It provides a service which at runtime is a dependency of Repo_1_1
  • Repo_2_2: This is also a standalone Maven project. It provides a service which at runtime is a dependency of Repo_2_1

Project 3 (builder):

  • Repo_3_1: This is the “builder” project that creates the final “releasable” bundle. It builds the release with artifacts that were built from the other repos and a bunch of support files.
    • Also contains our integration tests

Currently, our workflow involves:

  1. Each Maven repo is compiled and packaged (artifact created)
  2. Artifacts are then uploaded to our maven store
  3. In the builder project (Repo_3_1), the artifacts are pulled down from our maven store and packaged up with support files
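For reference, the per-repo side of this is basically a single-step pipeline. Here's a rough sketch of what I mean (the build image is a placeholder, and it assumes `distributionManagement` for our maven store is already configured in the POM):

```yaml
# bitbucket-pipelines.yml in each Maven repo (sketch)
image: maven:3.6-jdk-8        # placeholder build image

pipelines:
  default:
    - step:
        caches:
          - maven             # reuse the local Maven repo between runs
        script:
          - mvn -B clean deploy   # package and upload the artifact to our maven store
```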

It’s straightforward to replicate the manual work we’re doing for each repo in Pipelines. The part I’m wrestling with is figuring out the most appropriate way to automate the work done in Repo_3_1.

In terms of release building, my initial thought was to use a branch name for a given release, for example “Release_19Q3”, as the common name across the repos when building artifacts and packaging the release. So the flow could be:

  1. When a release cycle starts, create a release branch “Release_19Q3” in all repos
  2. In the Pipelines for Repo_1_1, Repo_2_1, and Repo_2_2, push the artifacts to the “Release_19Q3” S3 bucket
  3. In Repo_3_1, a Pipelines step would pull the “Release_19Q3” artifacts from S3 and run through Maven as usual
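In Pipelines terms, I'm picturing something roughly like the following (the bucket name is made up, and it assumes the AWS CLI and credentials are available via repository variables; `BITBUCKET_BRANCH` resolves to e.g. “Release_19Q3”):

```yaml
# In Repo_1_1 / Repo_2_1 / Repo_2_2 (sketch)
pipelines:
  branches:
    'Release_*':
      - step:
          caches:
            - maven
          script:
            - mvn -B clean package
            # "my-release-artifacts" is a placeholder bucket name
            - aws s3 cp target/*.jar "s3://my-release-artifacts/${BITBUCKET_BRANCH}/"

# --- and in Repo_3_1's bitbucket-pipelines.yml ---
#   branches:
#     'Release_*':
#       - step:
#           script:
#             - aws s3 cp "s3://my-release-artifacts/${BITBUCKET_BRANCH}/" artifacts/ --recursive
#             - mvn -B clean package   # assemble the release bundle as usual
```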

For release testing, we have integration tests and browser tests (Selenium). For browser testing, currently a developer runs all of the product services (Repo_1_1, Repo_2_1, Repo_2_2, etc.) locally and separately kicks off the Selenium tests. This is easy because the services just run standalone, and the Selenium tests run separately from the build/run process of the services.

The issue I’m running into with Pipelines is that since steps are completely independent of each other, I can’t keep the services running in one step while executing the Selenium tests in a different step. Running everything in one step is also a bit tricky, as I haven’t been able to get Docker in Docker to work correctly (still debugging this). I’ve also toyed with deploying the release to EC2 in one step, and then running the Selenium tests in another step.
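For concreteness, the single-step variant I've been attempting looks roughly like this (the image contents, compose file, and Maven profile are placeholders; it assumes docker-compose is installed in the build image and enables the step's Docker service):

```yaml
# Sketch of the one-step approach (still not working for me)
pipelines:
  custom:
    browser-tests:
      - step:
          services:
            - docker                       # enables a Docker daemon for this step
          script:
            - docker-compose up -d         # start the Repo_1_1/2_1/2_2 containers
            - mvn -B verify -Pselenium-tests   # placeholder profile that runs Selenium
            - docker-compose down

definitions:
  services:
    docker:
      memory: 3072                         # give the Docker service extra memory
```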

I guess my two main questions are:

  • Regarding release building, is the workflow that I mentioned an appropriate way to handle this? Does anyone have experience with a project layout similar to what we have and using Pipelines or other CI/CD tool?
  • For browser testing, is there decent support for Docker in Docker in Pipelines? I didn’t find much usage of it in Pipelines, and other resources have made a point of staying away from Docker in Docker. Given that we have multiple services and Docker containers, what would be a good solution for kicking off browser testing in Pipelines?

If you’ve read this far, thank you for your time! Any help or advice would be greatly appreciated!
