Yes, this is not a question, but more feedback from a 'fresh' pair of eyes.
Background:
- Small team, ~10 people.
- Years-old code base. Non-trivial build, but not big-enterprise scale either.
- Migrating away from on-prem.
- We are not using self-hosted runners, just the provided built-in machines.
The positives:
- Built into Bitbucket, so one less system to wrangle.
- You get isolated environments.
However, here are the negatives, a.k.a. the things that cost me a lot of time:
Size Limitations, Oh the damn Size Limitations
- Oh, why, why the 1 GB cache/artifact limit? This is a huge pain point and has resulted in countless workarounds:
- I've had to come up with various schemes for splitting up caches, including:
- Extra cleanup and restore steps for cache directories.
- Splitting a cache into sub-caches, which creates an explosion of caches, and you have to find a split strategy (see the sketch after this list).
- Packing and unpacking directories myself for better control.
- Finding the right 'subset' to cache, dropping some parts.
- The Docker layer cache has the same limit (afaik?), and that is trivially over 1 GB.
- Same story for artifacts: mostly an issue if you want to pass intermediate compiled things on to later steps.
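To illustrate the splitting workaround: in bitbucket-pipelines.yml, one oversized cache turns into several custom cache definitions, each hopefully staying under the limit on its own. The Gradle paths below are only an example of a split strategy, not a recommendation:

```yaml
definitions:
  caches:
    # Hypothetical split of one big ~/.gradle cache into smaller pieces.
    gradle-wrapper: ~/.gradle/wrapper
    gradle-modules: ~/.gradle/caches/modules-2
    gradle-build-cache: ~/.gradle/caches/build-cache-1

pipelines:
  default:
    - step:
        name: Build
        caches:
          # Each sub-cache is restored and saved separately,
          # which is exactly the bookkeeping that adds up over time.
          - gradle-wrapper
          - gradle-modules
          - gradle-build-cache
        script:
          - ./gradlew build
```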
More cache control
Cache control is under-powered: you can specify a key, and then the 'first' build step to populate it wins.
However, what I often want:
- Populate the cache after a few steps have run, accumulating the data.
Currently, I have to jam things together or, again, fall back to manual cache-management workarounds (see the sketch after this list).
- Allow for a more dynamic cache key, e.g. one based on file content, branch name, and the 'timestamp' of data in a remote repository.
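What the manual workaround boils down to, roughly: compute your own composite key in the script and pack/unpack a tarball around the build. Where the tarball actually lives is up to you; download-cache.sh / upload-cache.sh below are hypothetical helpers for that storage, and package-lock.json / .build-cache are stand-ins for whatever drives your key and cache:

```yaml
- step:
    name: Build with a hand-rolled cache
    script:
      # Composite key from file content plus branch name, the kind of
      # dynamic key the built-in cache does not offer.
      - export CACHE_KEY="$(sha256sum package-lock.json | cut -c1-16)-${BITBUCKET_BRANCH}"
      # Restore, tolerating a cache miss.
      - ./ci/download-cache.sh "cache-${CACHE_KEY}.tar.gz" && tar -xzf "cache-${CACHE_KEY}.tar.gz" || true
      - ./build.sh
      # Save after the build has run, so the cache accumulates data from
      # the whole step instead of relying on 'first step wins' behaviour.
      - tar -czf "cache-${CACHE_KEY}.tar.gz" .build-cache
      - ./ci/upload-cache.sh "cache-${CACHE_KEY}.tar.gz"
```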
Logs are missing time stamps
That is just plain annoying. Yes, I can add time stamps to my own part (see the sketch below), but that still doesn't give you an overview of the other operations, especially the ones Pipelines does for you.
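For the part of the log you do control, a simple prefix works; this assumes 'ts' from moreutils is available in the build image (GNU awk's strftime would do the same job):

```yaml
script:
  # Make the step fail when the build fails, not just the last pipe stage.
  - set -o pipefail
  # Prefix every line of our own build output with a wall-clock timestamp.
  - ./build.sh 2>&1 | ts '[%Y-%m-%d %H:%M:%S]'
```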
No Diagnostics Help
Sooner or later a build or a test hangs or is oddly slow, and of course only on the CI system.
Then it gets very adventurous with Pipelines, because there is no SSH/diagnostics/etc. story.
What I basically ended up with is a lot of bash scripting that runs the build while collecting extra info to dump somewhere.
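Those scripts amount to something like the step below: a background loop snapshots process, memory and disk state while the build runs, and an after-script dumps the tail of it so there is at least something in the log when things go wrong. diag.log, build.sh and the 30-second interval are just placeholders:

```yaml
- step:
    name: Build with extra diagnostics
    script:
      # Background loop: snapshot processes, memory and disk every 30s.
      - |
        (while true; do
           date >> diag.log
           ps aux --sort=-%mem | head -n 20 >> diag.log
           free -m >> diag.log
           df -h >> diag.log
           sleep 30
         done) &
      - ./build.sh
    after-script:
      # after-script also runs when the build command fails,
      # so the collected snapshots end up in the step log.
      - tail -n 200 diag.log
    artifacts:
      - diag.log
```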
Better Docker support would help a lot
- Allow mounting from outside the build directory, at least the $HOME directory.
So many tools put their caches there, and we often want to invoke a Docker container that shares already-cached data. If the cache is in $HOME, you cannot mount it in Pipelines, and then you are off doing cache management again: copying things back and forth, or messing with env variables to change the tools' default cache locations (sketched below).
- Support larger build caches (see above).
- Support --network host. This is helpful for test setups: for example, in your main build you run the app you are building, and then a testing harness, for example one with a browser in it, can talk back to it.
Yes, as a workaround we can move the app itself into an extra container just for testing (also sketched below), but that means extra work/steps.
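To make the $HOME point concrete, the env-variable workaround looks roughly like this: relocate the tools' caches into the clone directory so that Pipelines can cache them and docker run is allowed to mount them. PIP_CACHE_DIR, npm_config_cache and GRADLE_USER_HOME are the real knobs for those particular tools; my-build-image and build.sh are placeholders:

```yaml
definitions:
  caches:
    # A custom cache that lives inside the build directory instead of $HOME.
    toolcache: .ci-cache

pipelines:
  default:
    - step:
        name: Build
        services:
          - docker
        caches:
          - toolcache
        script:
          # Redirect the tools' default $HOME caches into the clone dir.
          - export PIP_CACHE_DIR="$BITBUCKET_CLONE_DIR/.ci-cache/pip"
          - export npm_config_cache="$BITBUCKET_CLONE_DIR/.ci-cache/npm"
          - export GRADLE_USER_HOME="$BITBUCKET_CLONE_DIR/.ci-cache/gradle"
          # The same directory can now be mounted into a container,
          # which Pipelines does not allow for paths under $HOME.
          - docker run --rm -v "$BITBUCKET_CLONE_DIR/.ci-cache:/cache" my-build-image ./build.sh
```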
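And the extra-container workaround for the missing --network host, sketched, assuming user-defined Docker networks are permitted with the Pipelines Docker service (app-image and tests-image are placeholders): put the app and the test harness on the same bridge network so they can reach each other by container name.

```yaml
script:
  # A user-defined bridge network as a stand-in for --network host.
  - docker network create testnet
  # The app under test, reachable as 'app' from within the same network.
  - docker run -d --name app --network testnet app-image
  # The test harness (e.g. one driving a browser) talks back to the app by
  # name; the port is whatever your app happens to listen on.
  - docker run --rm --network testnet -e APP_URL=http://app:8080 tests-image
```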