Here is what I am trying to do as part of my pipeline:
- Run a Maven build, which also runs the acceptance test cases
- Build the Docker image
- Push the image to a repository
The challenge I face is the variation in time across these steps.
The Maven build for the same code base takes 5 min in the best case, but this can shoot up to 20 min. The Docker build varies moderately, between 5 and 8 min.
The "Build setup" phase shows the most variance, ranging from 39 s in the best case to 10 min in the worst.
I am caching node modules and the Maven repository as part of the pipeline setup.
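As a sanity check that those caches are actually restored at the start of a run, the first script line can print the cache directory sizes (a sketch; the paths match the Maven repo and node_modules locations used in the YAML below, so adjust if yours differ):

```shell
# Print the size of each cached directory at the start of the build step.
# An empty or missing directory on a given run means the cache was cold,
# which by itself can explain several extra minutes of build time.
du -sh /root/.m2/repository application/node_modules 2>/dev/null \
  || echo "cache directories missing (cold cache)"
```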
YAML file for reference:

options:
  docker: true

image:
  name: $server/$product/services:latest
  username: $DTR_USERNAME
  password: $DTR_PASSWORD
  email: $DTR_EMAIL

pipelines:
  branches:
    '**':
      - step:
          caches:
            - maven
            - nodemodules
          script:
            - export MAVEN_OPTS=-Xmx1G && export M2_HOME=/opt/apache-maven-3.3.9 && export PATH=$PATH:/opt/apache-maven-3.3.9/bin
            - mvn -B -T 1C -q verify -DskipITs=true -Dmaven.repo.local=/root/.m2/repository -Djsse.enableSNIExtension=false
            - export GITNUMBER=`git rev-parse HEAD`
            - docker login $server --username $DTR_PUSH_USERNAME --password $DTR_PUSH_PASSWORD
            - docker build --build-arg GIT_VERSION_NUMBER=${GITNUMBER} --force-rm=true .
            - docker push <docker image>

definitions:
  caches:
    nodemodules: application/node_modules
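To get per-command metrics beyond the step totals that Pipelines reports, each script line could be wrapped in a small timing helper like this (a sketch; the `timed` function and the label names are my own, not a Pipelines feature):

```shell
#!/bin/sh
# timed: run a command, append its wall-clock duration to build-timings.log,
# and preserve the command's exit status. Comparing the log across pipeline
# runs shows which command is responsible for the variance.
timed() {
  label="$1"; shift
  start=$(date +%s)
  "$@"
  status=$?
  end=$(date +%s)
  echo "$label took $((end - start))s" >> build-timings.log
  return $status
}

# Illustrative usage; in the pipeline this would wrap the real commands,
# e.g. timed "maven-verify" mvn -B -T 1C -q verify ...
timed "demo-step" sleep 1
cat build-timings.log
```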
Could the pipeline performance fluctuate so sharply due to resource constraints, for example when many developers commit at the same time? Is there a way to obtain metrics on this front?
Thanks
Hi Lenin,
Regarding your pipelines/Maven script:
We've been tracking the work on providing more consistent CPU in this open ticket, which you can follow: https://bitbucket.org/site/master/issues/13079/provide-a-fairly-constant-cpu-and-network
We've already fixed several performance problems (such as IO and networking); CPU is the next thing we're looking into.
To summarise why CPU variance exists: we run pipelines on shared infrastructure, which involves some resource sharing with other pipelines executing at the same time. However, when we experimented with restricting CPU on a per-pipeline basis, it made CPU performance worse:
We've looked at a range of solutions for the variance in build time. Unfortunately, the solution isn't as simple as just restricting the CPU available to each build. While this makes build times more consistent, since every pipeline is allocated the same amount of resources regardless of when it runs, it also means that builds could be slower overall. A build that previously had no limit on the CPUs it could use is now limited in this resource. We will continue to investigate this issue and experiment with different solutions in the new year. I'll keep you updated on our progress via this issue.
Both 'Build setup' and the Docker pull may vary because of the CPU issue above, but possibly also due to server-side networking, depending on the Docker registry you use.
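On the user side, one way to separate registry/network latency from CPU contention is to probe the registry endpoint directly and record the timing breakdown across runs (a sketch; the hostname is illustrative, and curl prints the `-w` timings even when the request fails, with zeroed values):

```shell
# Measure DNS, TCP connect, TLS handshake and total time against the
# registry host; large swings here across runs point at networking
# rather than CPU contention. Replace the URL with your registry's /v2/ endpoint.
curl -s -o /dev/null \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' \
  https://registry.example.com/v2/ || true
```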
We'll be posting more updates on the ticket above, when we have them.
Thanks,
Phil