I have a question regarding the implications of using Docker images that might be too large.
I was looking at the build summary and I could see it being broken down into the following categories:
The puzzling part for me is that none of these categories shows the process of pulling the image and starting it. For instance, if the Pipeline is based on `image: atlassian/pipelines-awscli`, I was expecting to see that image being pulled and processed.
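For context, a minimal `bitbucket-pipelines.yml` using that image looks something like this (the step name and script are just placeholders, not our real configuration):

```yaml
image: atlassian/pipelines-awscli

pipelines:
  default:
    - step:
        name: Deploy                # placeholder step name
        script:
          - aws --version           # runs inside the atlassian/pipelines-awscli container
```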
The reason for my question is that I am trying to optimize the heck out of our deployment process to make it blazing fast. I want to avoid repeatedly running commands like `apt-get update` or `apt-get install ...` or installing PHP extensions on every build.
I usually work with PHP and the AWS CLI. So I figured that if I bundle those, the needed PHP extensions, and `composer` in the same Docker image, the Pipeline can cache it, because the only thing that changes often is our code base.
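As a sketch of what I mean (the base image, PHP version, and extension list are just examples, not our real setup):

```dockerfile
# Hypothetical custom image: PHP + extensions + Composer + AWS CLI baked in,
# so the pipeline no longer runs apt-get on every build
FROM php:8.2-cli

RUN apt-get update \
    && apt-get install -y --no-install-recommends awscli \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-install pdo_mysql

# Composer copied from the official Composer image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
```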
The problem is that I have noticed that a vanilla image with only PHP 3.7.15 and the AWS CLI is 500 MB, which is too large for my taste.
If I could see this data for the images used in the Pipelines logs, it would give me a line in the sand (a reference) to use during the optimization I am trying to do.
Is the time required for pulling and running the Docker image before the "Build setup" accounted for in the build minutes?
AFAIK downloading the image happens before the first build step, and the build itself is already done "in" a container running that image. So while the build steps run, the image is running.
Since you specify which images are in use, you can pull them yourself and check their sizes. That way you don't need to wait for a log which, as far as I'm aware, doesn't exist.
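For example (this assumes a local Docker installation; the image name is just the one mentioned above):

```shell
# Pull the image the pipeline uses and inspect its size locally
docker pull atlassian/pipelines-awscli
docker images atlassian/pipelines-awscli --format '{{.Repository}}:{{.Tag}} {{.Size}}'
```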
Speaking about speeding things up, especially the build: run it locally. Honestly. If you're looking for speed, don't do things remotely. With a local build you have all images at hand after the first download (if you don't build them locally in the first place), so starting the build container takes just seconds, if not a fraction of a second.
And the more important parts of the build should run on bare metal anyway. That keeps the distance to development short, giving you much more direct feedback cycles.
For small PHP containers, I normally start from Alpine. Some features may be missing in detail due to the different libc, but I have had no show-stoppers so far. Your mileage may vary.
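For illustration, a minimal Alpine-based PHP image might look like this (the package and extension names are examples; check what your application actually needs):

```dockerfile
# Alpine base: apk instead of apt-get, musl libc instead of glibc
FROM php:8.2-cli-alpine

RUN apk add --no-cache aws-cli \
    && docker-php-ext-install pdo_mysql
```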
Thanks for your answer @ktomk. I am not entirely sure that we are talking about the same things, but the key takeaway for me is that the time Bitbucket Pipelines needs to prepare a running container from the selected Docker image "IS NOT counted as build minutes". If that assumption is wrong, could someone point us in the right direction?
As for your suggestion to build locally, that's not an option for my use case. I can't start committing a gigantic `vendor` directory generated by the build process locally to `git ?` to be deployed somehow later. The whole idea of using CI/CD solutions is to ease the automation process. When you need to add new servers to your infrastructure, it should be trivial to have the installation handled by AWS CodeDeploy, for instance, which will deploy a known stable version from source. My scenario is not like an application that can generate a single binary file with few or no extra accompanying files.
If, on the other hand, I needed to generate CSS from LESS or SASS, or to minify some code, I would run that locally because it's just pre-processing.
Even with all that said, I will bear in mind that some things may be best processed locally while others will be kept in the pipeline.
Hi @Abdoulaye Siby, I must admit I don't know whether the download time of the images counts toward the pipeline minutes; I have not double-checked that. Someone from Atlassian should know. Also, comparing the overall minutes of a pipeline with the minutes per step plus build startup/teardown might provide a good baseline.
For AWS CodeDeploy: AFAIK it has those releases, and you can run any number of deployments of each of them to as many or as few systems as you want. That, by the way, is the "single binary file" you think you don't have: the release package.
Nevertheless, hopefully someone from Atlassian might share more insight about the build minutes so you get that part of your question answered.
@Abdoulaye Siby Thanks for the heads-up. Just a few days ago I noticed that Bitbucket Pipelines' build-minute counting is much more precise now. It looks like it has second precision, and while I still don't know for sure whether the image pull time counts toward build minutes, I also noticed that pull times are much faster. It feels like someone at Atlassian has been doing some useful tweaking.