So Bamboo has this cool feature where a job can have a set of requirements and be run on a remote agent that satisfies those requirements. I'd like to use this to compile source code for different targets. One downside I can see is that if there is a testing stage, you must pass the artifacts (binaries/outputs) from one job to the next. Bamboo has features for this, but my question is: what happens when you have lots of jobs building different targets and need to pass all their artifacts to the next stage? Wouldn't transferring all those artifacts take a lot of time? Secondly, the source code must be pulled into each individual job, which creates a lot of overhead.
The alternative, of course, is to have one job do all the work on a single machine so that all files are in one place and can be tested there. There are downsides to this too, like not being able to utilize the job/stage workflow (so the build output would be ugly), or targets requiring different environments, etc.
Does anyone have any insight on this?
I've thought about it, but in my experience latency has been an issue when compiling on NFS.
But you can build on the local hard drive and, as the last task, copy the output to the shared NFS.
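Something like this as the final script task, roughly sketched in Python (the paths are just examples and assume the NFS share is already mounted on the agent):

```python
# Last build task: copy the locally built output to the shared NFS mount.
# Paths are hypothetical; adjust them to your agent's layout.
import shutil
from pathlib import Path

local_build_dir = Path("build/output")                        # built on the agent's local disk
nfs_target_dir = Path("/mnt/shared-builds") / "my-app" / "latest"

nfs_target_dir.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(local_build_dir, nfs_target_dir, dirs_exist_ok=True)
print(f"Copied {local_build_dir} -> {nfs_target_dir}")
```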
Wouldn't that essentially be imitating what artifacts already do? The files (artifacts) need to be copied to the shared NFS, and the next job has to retrieve them.
Yup, but only a little. Instead of downloading all these artifacts from the Bamboo server, through the Bamboo application itself over HTTPS, you'd be using the much more efficient NFS protocol. It all depends on the hardware you're using, of course.
I don't know how "much more efficient" it would be. Thanks for the suggestion, though. I'd had trouble using NFS before and was closed off to the idea, but you've inspired me to do a bit more research on the topic. Overall, transferring files around between stages just seems like a sure way to increase your build time.
It's almost like I need to build a single super server that can handle multiple jobs, so all the work for a plan can be done on the same drive.
But consider this: you have to build 5 modules for an app, and with tests let's say each module takes 5 minutes to build. Built sequentially, you wait 25 minutes. If you use parallel jobs and have 5 free agents, all the modules are built in only 5 minutes, so even if the artifacts take another 5 minutes to download, you still have your app 15 minutes earlier. What you can gain depends strongly on what you build and what resources you have. If the build takes 5 minutes and the artifact is only about 10 MB, it downloads fast. But if it's, say, 500 MB, and the disk for the Bamboo server is slow and the network is slow, then, as you say, it's not worth it.
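To make the math concrete, here's a quick back-of-the-envelope sketch using the same example numbers as above (5 modules, 5 minutes each, 5 minutes of artifact transfer, 5 free agents); plug in your own values:

```python
# Rough comparison of sequential vs. parallel builds with artifact transfer.
modules = 5
build_minutes_per_module = 5        # build + tests per module
artifact_transfer_minutes = 5       # time to download all artifacts afterwards

sequential_total = modules * build_minutes_per_module
parallel_total = build_minutes_per_module + artifact_transfer_minutes  # assumes one free agent per module

print(f"sequential:          {sequential_total} min")                      # 25 min
print(f"parallel + transfer: {parallel_total} min")                        # 10 min
print(f"saved:               {sequential_total - parallel_total} min")     # 15 min
```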
You make good points, and I've thought about the same things. The files I'm dealing with are quite large, which is why this is such a big issue for me. If my files were 10 MB like you said, I wouldn't have any issues.
Taking your example a bit further: say a transfer takes time n. The first transfer costs n, and for simplicity every additional transfer adds another n. Every additional stage in the build (packaging, testing, deployment, etc.) adds another transfer.
So the total transfer time I'm dealing with is (# of stages after compilation) * n + n.
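Roughly like this, with n as a placeholder value (5 minutes per transfer here, purely hypothetical):

```python
# How total artifact-transfer time grows with the number of stages after
# compilation, per the formula above.
def total_transfer_time(stages_after_compilation: int, n_minutes: float) -> float:
    """One upload after compilation (n) plus one download per later stage (n each)."""
    return stages_after_compilation * n_minutes + n_minutes

for stages in (1, 2, 3, 4):  # e.g. packaging, testing, deployment, ...
    print(f"{stages} stage(s) after compilation -> "
          f"{total_transfer_time(stages, n_minutes=5)} min spent on transfers")
```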
Yes, you're right. In our environment we rarely use stages, because the standard development process is:
So for us there is no use in having steps 3-6 as part of the CI build plan, because we like to separate deployment from CI (you can use Deployment projects for that), and because we have a microservice environment with small packages (though some of them are 300 MB+). All of those microservices are separate apps, so they don't rely on the build state of any other service.
One other thing I think we both forgot: package deployment loses time twice, once when you upload the artifact and again when you download it :).
There was a time we used parallel builds, though. It was when we worked on a monolith app whose build and tests took a lot of time. We ran parallel jobs for unit tests, Sonar analysis and a couple of other security tests, because each of those took 20+ minutes, so running them in parallel took a lot less time. Only when all those jobs succeeded was the package built.